chainer

How to use CUDA pinned “zero-copy” memory for a memory mapped file?

Submitted by 前提是你 on 2019-12-03 16:27:22
Objective/Problem: In Python, I am looking for a fast way to read/write data between a memory-mapped file and a GPU. A previous SO post [ Cupy OutOfMemoryError when trying to cupy.load larger dimension .npy files in memory map mode, but np.load works fine ] mentions that this is possible using CUDA pinned "zero-copy" memory. Furthermore, it seems this method was worked out in [ cuda - Zero-copy memory, memory-mapped file ], though that person was working in C++. My previous attempts have been with Cupy, but I am open to any CUDA methods. What I have tried so far I
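One way to approximate this in CuPy is pinned-memory staging: memory-map the file with numpy.memmap, copy it into page-locked host memory obtained from cupy.cuda.alloc_pinned_memory, and transfer to the device from the pinned buffer (transfers from pinned memory can run asynchronously). The sketch below assumes a placeholder file name, shape, and dtype; it is not the asker's actual code and is not true zero-copy mapping, just the common staging pattern.

    import numpy as np
    import cupy as cp

    # Placeholder file description; replace with the real file, shape, and dtype.
    path = "big_array.dat"
    shape, dtype = (1_000_000,), np.float32

    # Memory-map the file on the host (pages are faulted in lazily).
    mm = np.memmap(path, dtype=dtype, mode="r", shape=shape)

    # Allocate page-locked (pinned) host memory and view it as a NumPy array.
    pinned = cp.cuda.alloc_pinned_memory(mm.nbytes)
    host = np.frombuffer(pinned, dtype=dtype, count=mm.size).reshape(shape)
    host[...] = mm  # file pages are actually read here

    # Copy from pinned host memory to the GPU on a non-blocking stream.
    gpu = cp.empty(shape, dtype=dtype)
    stream = cp.cuda.Stream(non_blocking=True)
    gpu.set(host, stream=stream)
    stream.synchronize()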

Is it possible to install cupy on google colab?

Submitted by 社会主义新天地 on 2019-12-01 06:39:48
I am trying to run chainer with GPU on google colab. This requires cupy to be installed, but I fail to install it properly because it cannot find the CUDA environment in my colab VM. Error message as follows...

    Collecting cupy
      Downloading cupy-2.4.0.tar.gz (1.7MB)
        100% |████████████████████████████████| 1.7MB 740kB/s
      Complete output from command python setup.py egg_info:
        cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
        /tmp/tmpds3ikncy/a.cpp:1:10: fatal error: cublas_v2.h: No such file or directory
         #include <cublas_v2.h>
                  ^~~~~~~~~~~~~
        compilation terminated.
    Options:
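The build failure happens because pip tries to compile cupy-2.4.0 from source against CUDA headers it cannot find. The usual workaround on Colab is to install a prebuilt cupy wheel matching the runtime's CUDA version instead; the exact wheel name below is an assumption, so check the CUDA version first. A minimal sketch of a Colab cell:

    # Check which CUDA toolkit the Colab runtime ships.
    !nvcc --version
    # Install the matching prebuilt wheel (example shown for a CUDA 10.0 runtime).
    !pip install cupy-cuda100

    import cupy as cp
    # Should report at least one device when a GPU runtime is selected.
    print(cp.cuda.runtime.getDeviceCount())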
