nvidia-docker

How can I specify the container runtime to use in docker-compose version 3?

断了今生、忘了曾经 · Submitted 2021-01-27 06:06:25
Question: I'm working on a container that requires the nvidia runtime. I can specify this runtime in a v2.3 docker-compose file like so:

```yaml
version: "2.3"
services:
  my-service:
    image: "my-image"
    runtime: "nvidia"
```

Running `docker-compose up my-service` works just fine: I get the nvidia runtime and everything works. But if I change the "2.3" to "3", `docker-compose up my-service` fails with:

ERROR: The Compose file './docker-compose.yml' is invalid because:
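One answer, sketched under the assumption that a reasonably recent docker-compose (1.27+, which implements the Compose specification) is available: the v3 schema dropped the `runtime` key, but GPU access can instead be requested through a device reservation. The `count` and `capabilities` values below are illustrative:

```yaml
version: "3.8"
services:
  my-service:
    image: "my-image"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1            # or count: all
              capabilities: [gpu]
```

Newer Compose implementations that follow the unified Compose spec also accept `runtime: nvidia` again at the service level, so upgrading docker-compose may be an alternative to rewriting the file.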

Add nvidia runtime to docker runtimes

浪子不回头ぞ · Submitted 2021-01-26 04:39:55
Question: I'm running a virtual machine on GCP with a Tesla GPU, and I'm trying to deploy a PyTorch-based app accelerated by that GPU. I want Docker to use this GPU and to have access to it from containers. I managed to install all the drivers on the host machine, and the app runs fine there, but when I try to run it in Docker (based on the nvidia/cuda image), PyTorch fails:

```
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 82, in _check_driver
http://www.nvidia.com/Download/index.aspx""")
```
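A sketch of the usual fix, assuming the NVIDIA Container Toolkit (or the older nvidia-container-runtime package) is installed on the host: register the nvidia runtime in Docker's daemon configuration, then restart the daemon. This is the documented registration shape; the `path` must resolve to the installed `nvidia-container-runtime` binary:

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

Save this as `/etc/docker/daemon.json` (merging with any existing keys), run `sudo systemctl restart docker`, and the container can then be started with `docker run --runtime=nvidia ...`, or with `docker run --gpus all ...` on Docker 19.03 and later.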

docker build with nvidia runtime

吃可爱长大的小学妹 · Submitted 2020-12-03 04:23:03
Question: I have a GPU application that runs unit tests during the image build stage. With Docker 19.03 one can select the nvidia runtime with `docker run --gpus all`, but I also need access to the GPUs during `docker build`, because that is when the unit tests run. How can I achieve this? With older Docker versions that used nvidia-docker2, it was not possible to specify a runtime during the build stage, BUT you could set the default runtime to nvidia, and `docker build` worked fine that way. Can I do that in Docker 19.03?
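One hedged sketch: `docker build` still ignores `--gpus`, so the same workaround as with nvidia-docker2 generally applies. Assuming the nvidia runtime is installed, set it as the daemon-wide default in `/etc/docker/daemon.json`:

```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
```

After `sudo systemctl restart docker`, every container, including the intermediate containers that `docker build` launches for each `RUN` step, uses the nvidia runtime, so build-time unit tests can see the GPUs. The trade-off is that this changes the default for all containers on the host, not just the build.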

TensorRT multiple Threads

跟風遠走 · Submitted 2020-08-10 19:30:08
Question: I am trying to use TensorRT through the Python API, in multiple threads that share one CUDA context (everything works fine in a single thread). I am using Docker with the tensorrt:20.06-py3 image, an ONNX model, and an Nvidia 1070 GPU. The multi-threaded approach should be allowed, as mentioned in the TensorRT Best Practices. I created the context in the main thread:

```python
cuda.init()
device = cuda.Device(0)
ctx = device.make_context()
```

I tried two methods,
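A common fix, sketched here under the assumption of a PyCUDA-style API: a CUDA context created in the main thread is only current on that thread, so each worker must push the shared context onto its own context stack before any CUDA/TensorRT call and pop it afterwards (in real code, `ctx.push()` / `ctx.pop()` on the object returned by `device.make_context()`). The `Ctx` class below is a hypothetical stand-in that only models the per-thread push/pop discipline, so the sketch runs without a GPU:

```python
import threading

class Ctx:
    """Stand-in for a PyCUDA context: tracks a per-thread context stack."""
    _local = threading.local()

    def push(self):
        stack = getattr(self._local, "items", [])
        stack.append(self)
        self._local.items = stack

    def pop(self):
        self._local.items.pop()

    def is_current(self):
        # Current iff this context is on top of the calling thread's stack.
        return getattr(self._local, "items", [])[-1:] == [self]

ctx = Ctx()          # in real code: device.make_context() in the main thread
results = []

def worker(i):
    ctx.push()       # make the shared context current on THIS thread
    try:
        # ... TensorRT engine/execution-context calls would go here ...
        results.append((i, ctx.is_current()))
    finally:
        ctx.pop()    # always pop, even if inference raises

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(ok for _, ok in results))  # each thread saw the context as current
```

With real PyCUDA, the main thread should also pop (or detach) the context it created before the process exits; wrapping each worker's push/pop in `try`/`finally`, as above, keeps the stack balanced if inference raises.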