Using R's GPU packages on Amazon
Walkthrough:
Step 1: Look up the AMI ID ami-87377cee (the one Erik Hazzard built at http://vasir.net/blog/opencl/installing-cuda-opencl-pyopencl-on-aws-ec2) under Community AMIs in AWS and launch a cg1.4xlarge instance.
Step 2: From the command line, run: sudo apt-get update
then sudo apt-get install r-base-core
** this will install R 2.14.1. If you want to use the latest R version, I would use the guide here: How to install R version 3
Step 3: Start R, then run install.packages('OpenCL')
to install the OpenCL package
Step 4: Have fun learning OpenCL!!
It really is that easy to get it working. Writing code in a form OpenCL can use is a bit tricky, but once you get the hang of it, utilizing the GPU can be a very powerful tool.
See http://cran.r-project.org/web/packages/OpenCL/OpenCL.pdf for some code snippets to get you started.
With this machine you can also easily use Python with OpenCL; if you want to go that route, I would recommend http://enja.org/category/tutorial/advcl/.
My solution may apply to your case. I installed the package successfully after resolving two error messages. The first comes from the source file rpudist.cu (in the src folder); as the message suggests, the problem is on line 159. Open the file in a text editor and find the code (dev = 1.):
rpudist.cu(159): warning: use of "=" where "==" may have been intended
So I changed it to (dev == 1.), and that message was gone.
The second error message, as you have found, is about -Wl. I think this one may be more critical. It seems to conflict with another linker option, -Xlinker, which is used in the file Makefile.in in the src folder of the rpud folder (after you extract the tarball rpud_0.0.2.tar.gz).
LD_PARAMS := -Xlinker "@R_LIB@ @RPATHFLAG@"
As the gcc documentation explains (and I replicate here), both options "Pass option as an option to the linker". So the options that follow them are passed to ld to link the files nvcc has compiled. In the following command, nvcc uses both -Xlinker and -Wl:
/usr/local/cuda/bin/nvcc -shared -Xlinker "-Wl,--export-dynamic-fopenmp -L/usr/lib/R/lib -lR -lpcre -llzma -lbz2 -lrt -ldl -lm -Wl,-rpath,/usr/local/cuda/lib64" -L/usr/local/cuda/lib64 -lcublas -lcuda rpud.o rpudist.o -o rpud.so
Thus, the not-very-elegant workaround is to make nvcc use only -Xlinker. To sum up, besides changing the (maybe not critical) file rpudist.cu, the solution is to alter (1) Makefile.in (in the src folder) and (2) configure (in the top-level folder).
Change line 10 in the original Makefile.in from
LD_PARAMS := -Xlinker "@R_LIB@ @RPATHFLAG@"
to:
LD_PARAMS := -Xlinker @R_LIB@ -Xlinker @RPATHFLAG@
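If you prefer to script the edit rather than open an editor, a sed one-liner can apply the same change. The sketch below works on a scratch copy of the line so you can see the effect; in the real package, point sed at src/Makefile.in instead.

```shell
# Scratch copy of the original LD_PARAMS line.
mkdir -p /tmp/rpud-demo/src
printf 'LD_PARAMS := -Xlinker "@R_LIB@ @RPATHFLAG@"\n' > /tmp/rpud-demo/src/Makefile.in

# Split the single quoted -Xlinker argument into two separate -Xlinker options.
sed -i 's|-Xlinker "@R_LIB@ @RPATHFLAG@"|-Xlinker @R_LIB@ -Xlinker @RPATHFLAG@|' /tmp/rpud-demo/src/Makefile.in

cat /tmp/rpud-demo/src/Makefile.in
```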
Then change line 1786 in the original configure from
R_LIB=`"${R_HOME}/bin/R" CMD config --ldflags`
to
R_LIB="-E -fopenmp -L/usr/lib/R/lib -lR -lpcre -llzma -lbz2 -lz -lrt -ldl -lm"
and line 1797 from
RPATHFLAG="-Wl,-rpath,${CUDA_HOME}${CUDA_LIB_DIR}"
to
RPATHFLAG="-rpath=${CUDA_HOME}${CUDA_LIB_DIR}"
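The two configure edits can be scripted the same way. Again this sketch runs on a scratch copy of the two lines; for the real package, run the sed commands on the top-level configure script.

```shell
# Scratch copy of the two original configure lines.
cat > /tmp/rpud-demo-configure <<'EOF'
R_LIB=`"${R_HOME}/bin/R" CMD config --ldflags`
RPATHFLAG="-Wl,-rpath,${CUDA_HOME}${CUDA_LIB_DIR}"
EOF

# Hard-code R_LIB so nothing arrives wrapped in -Wl, and switch RPATHFLAG to
# the plain ld syntax that -Xlinker can forward unchanged.
sed -i 's|^R_LIB=.*|R_LIB="-E -fopenmp -L/usr/lib/R/lib -lR -lpcre -llzma -lbz2 -lz -lrt -ldl -lm"|' /tmp/rpud-demo-configure
sed -i 's|^RPATHFLAG=.*|RPATHFLAG="-rpath=${CUDA_HOME}${CUDA_LIB_DIR}"|' /tmp/rpud-demo-configure

cat /tmp/rpud-demo-configure
```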
Finally, just follow Chi Yau's installation instructions:
3) Expand the package in a temporary folder:
tar xf rpud_<version>.tar.gz
4) Run configure in rpud:
cd rpud
./configure
cd ..
5) Then enter the following:
R CMD INSTALL rpud
HTH