The SC conference in Austin made me read up on recent compiler developments concerning CUDA. Two related things have gained traction in the last couple of weeks. One is compiling CUDA code with LLVM while still using the NVIDIA CUDA driver and runtime as a backend; the other is a full Open-Source CUDA compiler.
CUDA with LLVM
For a few weeks now, you have been able to use LLVM / Clang to compile CUDA code. How it's done is described in a document in the LLVM code repository (introduced with this commit). I haven't tried it yet, but it looks quite straightforward. More optimizations are still going into LLVM to better support CUDA.
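To give an idea of what this looks like in practice, here is a minimal CUDA program of my own (not from the LLVM document) that should compile with Clang just as with nvcc:

```cuda
// axpy.cu — a minimal CUDA example: y = a*x + y on the GPU.
#include <cstdio>

__global__ void axpy(float a, const float* x, float* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 4;
    float hx[n] = {1, 2, 3, 4}, hy[n] = {10, 20, 30, 40};
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);
    axpy<<<1, n>>>(2.0f, dx, dy, n);
    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i) printf("%g ", hy[i]);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The invocation is roughly `clang++ axpy.cu -o axpy --cuda-gpu-arch=sm_35 -lcudart` — but check the LLVM document for the exact flags and library paths on your system; the NVIDIA driver and runtime are still needed to actually run the result.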
Apparently the same people from Google sewing CUDA into LLVM are also developing gpucc, an Open-Source CUDA compiler.
The compiler is, of course, LLVM-based, and the only in-depth information on gpucc so far comes from the last LLVM developers' meeting: a talk by Jingyue Wu (video, slides). I like the optimizations done by the compiler, some of which are already included in the public LLVM part mentioned above (the whitepapers for reference: »Straight-line Scalar Optimizations« and »Memory Space Inference for NVPTX Backend«, both by Wu)!
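To illustrate the kind of code the straight-line scalar optimizations target, here is a toy example of my own (not from the talk or the whitepapers):

```cuda
// Hypothetical illustration of what straight-line strength reduction acts on.
__global__ void sum3(const float* x, float* out, int i) {
    // The address computations for x[i], x[i+1], and x[i+2] differ only by a
    // constant offset. Instead of recomputing (i+k)*sizeof(float) for each
    // access, the optimization can rewrite the later addresses as cheap
    // constant additions to the first one.
    *out = x[i] + x[i + 1] + x[i + 2];
}
```

This pattern shows up a lot in unrolled GPU loops, which is presumably why such seemingly small optimizations matter for CUDA performance.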
It looks quite interesting. Their timeline foresees a release next year (»Q1 2016«).
(Sidenote: AMD is working on a tool that converts CUDA code to a common C++ programming model; the result can then be compiled either back for NVIDIA GPUs or for AMD GPUs with AMD's HCC compiler. It's like CUDA support for AMD through a back door.)