cudnn

Installing CUDA 9.1 + cuDNN 7.1 + TensorFlow 1.7 + Keras with Python 3.6 on Windows 10

Submitted by 允我心安 on 2019-12-03 08:24:29
Environment: Windows 10 64-bit, Python 3.6. Installation proceeds in this order: update the GPU driver -> install CUDA -> install cuDNN -> install TensorFlow -> install Keras.
1. Update the GPU driver. First check your machine's GPU model and whether it supports CUDA, then download and install the latest matching driver from the Nvidia website. This step is straightforward, so I won't go into detail.
2. Install CUDA. TensorFlow is now at version 1.7, and the official site says it supports the latest CUDA 9.x and cuDNN 7.x (this turned out to be a trap, more on that later). Download the latest CUDA and cuDNN from the Nvidia website.
CUDA 9.1 download: https://developer.nvidia.com/cuda-downloads
cuDNN 7.1.2 download: https://developer.nvidia.com/rdp/cudnn-download
Note: downloading cuDNN requires registering an account, and the cuDNN version must match the CUDA version you downloaded, otherwise programs will fail at runtime. Here I chose the v7.1.2 for CUDA 9.1, Windows 10 build.
Once the installer has downloaded, run the CUDA installer (administrator rights required) and follow it step by step. When it finishes, run nvcc -V in cmd to check that CUDA installed successfully.
3. Install cuDNN. Unzip the downloaded cuDNN package to get the following three folders, and copy them into C:
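The post is cut off above. As a sanity check once all of the pieces are installed (this snippet is not part of the original write-up, just a common TF 1.x verification), you can list the devices TensorFlow sees from Python:

    # List the devices TensorFlow can use; a working CUDA 9.1 + cuDNN 7.1 setup
    # should show a "/device:GPU:0" entry alongside the CPU.
    import tensorflow as tf
    from tensorflow.python.client import device_lib

    print(tf.__version__)                   # expect 1.7.x here
    print(device_lib.list_local_devices())  # look for a GPU entry in this list

If the GPU entry is missing, the usual culprits are a cuDNN/CUDA version mismatch or the cuDNN files not being on the PATH.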

nvcc fatal : Unsupported gpu architecture 'compute_20' while cuda 9.1+caffe+openCV 3.4.0 is installed

Submitted by 巧了我就是萌 on 2019-12-03 08:24:18
I have installed CUDA 9.1 + cudnn-9.1 + opencv 3.4.0 + caffe. When I tried to run make all -j8 in the caffe directory, this error occurred:
nvcc fatal : Unsupported gpu architecture 'compute_20'
I have tried to run "cmake -D CMAKE_BUILD_TYPE=RELEASE -D CUDA_GENERATION=Kepler .." but it didn't work.
Answer (Shai): Try manually editing Makefile.config to remove the compute_2* architectures from these lines (the comments explain why):
# CUDA architecture setting: going with all of them.
# For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
# For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility

First tf.session.run() performs dramatically differently from later runs. Why?

Submitted by 北战南征 on 2019-12-03 07:44:15
Here's an example to clarify what I mean:
First session.run(): first run of a TensorFlow session
Later session.run(): later runs of a TensorFlow session
I understand TensorFlow is doing some initialization here, but I'd like to know where in the source this manifests. This occurs on CPU as well as GPU, but the effect is more prominent on GPU. For example, in the case of an explicit Conv2D operation, the first run has a much larger quantity of Conv2D operations in the GPU stream. In fact, if I change the input size of the Conv2D, it can go from tens to hundreds of stream Conv2D operations. In
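For reference, a minimal timing sketch (TF 1.x API; not from the original question) that reproduces the effect with a single explicit Conv2D. The first sess.run() pays for one-time work such as graph initialization and convolution algorithm selection, which is why it issues many more stream operations and takes much longer than the steady-state runs:

    import time
    import tensorflow as tf

    # One explicit Conv2D, as in the question.
    x = tf.random_normal([8, 224, 224, 3])
    w = tf.random_normal([3, 3, 3, 64])
    y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")

    with tf.Session() as sess:
        for i in range(5):
            t0 = time.time()
            sess.run(y)
            print("run %d took %.3f s" % (i, time.time() - t0))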

How to enable Keras with Theano to utilize multiple GPUs

Submitted by 爷,独闯天下 on 2019-12-03 06:18:01
Setup: an Amazon Linux system with an Nvidia GPU; Keras 1.0.1; Theano v0.8.2 backend; using CUDA and CuDNN; THEANO_FLAGS="device=gpu,floatX=float32,lib.cnmem=1". Everything works fine, but I run out of video memory on large models when I increase the batch size to speed up training. I figure moving to a 4-GPU system would in theory either improve total memory available or allow smaller batches to build faster, but observing the nvidia stats, I can see only one GPU is used by default:
+------------------------------------------------------+
| NVIDIA-SMI 361.42 Driver
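A hedged sketch (not from the original post): Theano 0.8 binds an entire process to the single device named in THEANO_FLAGS, so the usual workaround for multiple GPUs is to run one process per card, each pinned to its own device before theano is imported. The gpu_id command-line argument below is hypothetical:

    import os
    import sys

    # Pin this process to one GPU; this must happen before "import theano",
    # because Theano reads THEANO_FLAGS at import time.
    gpu_id = int(sys.argv[1]) if len(sys.argv) > 1 else 0
    os.environ["THEANO_FLAGS"] = "device=gpu%d,floatX=float32,lib.cnmem=1" % gpu_id

    import theano
    print(theano.config.device)  # e.g. "gpu0", "gpu1", ...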

TensorFlow: how to log GPU memory (VRAM) utilization?

Submitted by 感情迁移 on 2019-12-03 05:55:00
Question: TensorFlow always (pre-)allocates all free memory (VRAM) on my graphics card, which is OK since I want my simulations to run as fast as possible on my workstation. However, I would like to log how much memory (in sum) TensorFlow really uses. Additionally, it would be really nice if I could also log how much memory individual tensors use. This information is important to measure and compare the memory size that different ML/AI architectures need. Any tips?
Answer 1: Update, can use TensorFlow ops to
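The truncated answer refers to TensorFlow ops that report allocator usage. A minimal sketch of that approach (TF 1.x, assuming tf.contrib.memory_stats is available in your build):

    import tensorflow as tf

    a = tf.random_normal([2048, 2048])
    b = tf.matmul(a, a)

    # Ops that report current and peak bytes allocated on the device they run on.
    with tf.device("/gpu:0"):
        bytes_in_use = tf.contrib.memory_stats.BytesInUse()
        max_bytes = tf.contrib.memory_stats.MaxBytesInUse()

    with tf.Session() as sess:
        sess.run(b)
        current, peak = sess.run([bytes_in_use, max_bytes])
        print("current bytes in use:", current)
        print("peak bytes in use:", peak)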

Tensorflow not running on GPU

Submitted by 南楼画角 on 2019-12-03 05:40:28
Question: I have already spent a considerable amount of time digging around on Stack Overflow and elsewhere looking for the answer, but couldn't find anything. Hi all, I am running TensorFlow with Keras on top. I am 90% sure I installed TensorFlow GPU; is there any way to check which install I did? I was trying to run some CNN models from a Jupyter notebook and I noticed that Keras was running the model on the CPU (checked Task Manager, CPU was at 100%). I tried running this code from the tensorflow website: #
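For reference, a device-placement check along the lines of the TensorFlow docs example (a reconstruction, since the question's snippet is cut off): with log_device_placement=True the session logs which device every op was assigned to, which tells you whether the GPU build is actually being used:

    import tensorflow as tf

    a = tf.constant([1.0, 2.0, 3.0], name="a")
    b = tf.constant([4.0, 5.0, 6.0], name="b")
    c = a + b

    # log_device_placement prints the device chosen for every op;
    # look for "/device:GPU:0" lines in the console output.
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))

A separate quick way to see which package is installed is to compare the output of pip show tensorflow-gpu and pip show tensorflow.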

Undefined symbols for architecture x86_64: for caffe build

Submitted anonymously (unverified) on 2019-12-03 02:49:01
Question: I got this error for a caffe build. How can I fix it? I'm using Mac OS X Yosemite 10.10.1.
CONSOLE LOG
Machida-no-MacBook-Air:caffe machidahiroaki$ /usr/bin/clang++ -shared -o .build_release/lib/libcaffe.so .build_release/src/caffe/proto/caffe.pb.o .build_release/src/caffe/proto/caffe_pretty_print.pb.o .build_release/src/caffe/blob.o .build_release/src/caffe/common.o .build_release/src/caffe/data_transformer.o .build_release/src/caffe/dataset_factory.o .build_release/src/caffe/internal_thread.o .build_release/src/caffe/layer_factory.o .build

How to set up cuDNN with Theano on Windows 7 64-bit

Submitted anonymously (unverified) on 2019-12-03 01:48:02
Question: I have installed the Theano framework and enabled CUDA on my machine, but when I "import theano" in my Python console, I get the following message:
>>> import theano
Using gpu device 0: GeForce GTX 950 (CNMeM is disabled, CuDNN not available)
Since it says "CuDNN not available", I downloaded cuDNN from the Nvidia website. I also updated 'path' in the environment and added 'optimizer_including=cudnn' to the '.theanorc.txt' config file. Then I tried again, but it failed, with:
>>> import theano
Using gpu device 0: GeForce GTX 950 (CNMeM is disabled, CuDNN not
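One way to see why Theano rejects cuDNN (a sketch for the old theano.sandbox.cuda backend used by Theano 0.8/0.9; not from the original post) is to ask the backend directly:

    import theano
    from theano.sandbox.cuda import dnn

    # dnn_available() returns False when cuDNN cannot be loaded;
    # dnn_available.msg then holds the reason (missing DLL, version mismatch, ...).
    print("cuDNN available:", dnn.dnn_available())
    print("reason:", getattr(dnn.dnn_available, "msg", None))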

no cudnn 6.0 for cuda toolkit 9.0

Submitted anonymously (unverified) on 2019-12-03 01:40:02
Question: I have been trying to install tensorflow gpu on Windows 10 for three days. https://www.tensorflow.org/install/install_windows#requirements_to_run_tensorflow_with_gpu_support says:
If you are installing TensorFlow with GPU support using one of the mechanisms described in this guide, then the following NVIDIA software must be installed on your system: The NVIDIA drivers associated with CUDA Toolkit 9.0. cuDNN v6.0. For details, see NVIDIA's documentation. Note that cuDNN is typically installed in a different location from the other CUDA DLLs. Ensure
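A quick, hedged check of which cuDNN DLL Windows can actually find on PATH (the exact DLL name a given TensorFlow release loads depends on the cuDNN version it was built against; cudnn64_6.dll and cudnn64_7.dll below are simply the two candidates relevant here):

    import ctypes

    # Try to load each candidate cuDNN DLL from PATH (Windows only).
    for name in ("cudnn64_6.dll", "cudnn64_7.dll"):
        try:
            ctypes.WinDLL(name)
            print(name, "can be loaded")
        except OSError:
            print(name, "NOT found on PATH")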

TensorFlow installation (Ubuntu 18.04 + Anaconda3 + CUDA 9.0 + cuDNN 7.1 + TensorFlow 1.8.0 + PyCharm)

Submitted anonymously (unverified) on 2019-12-03 00:43:02
1. Install pip
(1) Install: sudo apt-get install python3-pip python3-dev
(2) Check that pip installed successfully: pip3 -V
(3) Switch to a domestic mirror: on Linux, edit ~/.pip/pip.conf (create it if it does not exist) and point index-url at the TUNA mirror:
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
2. Install Anaconda
(1) Download the installer. Inside China it is recommended to download from https://mirrors.tuna.tsinghua.edu.cn
(2) In the download directory, run: bash Anaconda3-5.2.0-Linux-x86_64.sh
(3) Accept everything; the default install path is /home/rock/anaconda3
(4) Check the installation: conda --version (shows the current Anaconda version)
(5) Switch to the Tsinghua mirror:
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add
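The post is cut off here. Once the remaining steps are done, a small verification script (not part of the original; TF 1.x API) confirms that the installed TensorFlow is the GPU build and can see the card:

    import tensorflow as tf

    print(tf.__version__)                # expect 1.8.0 here
    print(tf.test.is_built_with_cuda())  # True for the GPU build
    print(tf.test.is_gpu_available())    # True if CUDA 9.0 + cuDNN 7.1 are found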