TensorRT multiple Threads

Posted by 跟風遠走 on 2020-08-10 19:30:08

Question


I am trying to use TensorRT through the Python API, in multiple threads that share a single CUDA context (everything works fine in a single thread). I am using Docker with the tensorrt:20.06-py3 image, an ONNX model, and an Nvidia 1070 GPU.

The multi-threaded approach should be allowed, as mentioned in the TensorRT Best Practices guide.

I created the context in the main thread:

import pycuda.driver as cuda

cuda.init()                  # initialize the CUDA driver API
device = cuda.Device(0)      # select GPU 0
ctx = device.make_context()  # create a context and make it current on this thread

I tried two approaches. First, building the engine in the main thread and using it in the execution thread; that case gives this error:

[TensorRT] ERROR: ../rtSafe/cuda/caskConvolutionRunner.cpp (373) - Cask Error in checkCaskExecError<false>: 10 (Cask Convolution execution)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception

Second, I tried to build the engine inside the thread; that gives me this error:

pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?

The error appears when I call cuda.Stream().

I am sure that multiple CUDA streams can run in parallel under the same CUDA context, but I don't know how to do it.


Answer 1:


I found a solution. The idea is to create a normal global ctx = device.make_context(). Then, in each execution thread, do:

ctx.push()   # make the shared global context current on this thread
# ... execute the inference code here ...
ctx.pop()    # detach the context from this thread when done

The link to the source and a full sample is here.



Source: https://stackoverflow.com/questions/62719277/tensorrt-multiple-threads
