Forking a Python process after loading TensorFlow

Submitted by 爱⌒轻易说出口 on 2020-01-14 01:46:41

Question


tf.Session() is not fork-safe, which means that the behavior of a process that forks after TensorFlow has been loaded into memory is undefined.

Is there any workaround for sharing multiple devices (on a single machine) between multiple processes?


Answer 1:


The standard way to share a TensorFlow runtime between multiple processes is to use the distributed TensorFlow support, which also works on a single machine.

In one process, you can start a server by running the following code:

import tensorflow as tf

# Start an in-process gRPC server that owns the local devices.
server = tf.train.Server.create_local_server()
print(server.target)  # e.g. "grpc://localhost:PORT"; other processes connect to this.
server.join()  # Block forever, serving requests from remote sessions.

This process will own all of the devices on the machine, by default.
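If you want to limit what this server process claims (for example, how GPU memory is allocated), one option is to pass a tf.ConfigProto when creating it. A minimal sketch, assuming the TF 1.x API; the gpu_options setting shown here is purely illustrative:

import tensorflow as tf

# Illustrative config: let GPU memory grow on demand instead of being
# allocated up front by the server process (assumption: TF 1.x ConfigProto).
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

server = tf.train.Server.create_local_server(config=config)
print(server.target)
server.join()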

In the other processes, you can create tf.Session objects that connect to the server:

sess = tf.Session("grpc://localhost:...")  # Use value of `server.target`.

These sessions can be used just like in-process sessions.
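For example, a client process might connect to the server and run a small graph remotely. A minimal sketch, assuming the server from the first snippet is already running and its printed target is passed on the command line (the argument name is a placeholder, not part of the original answer):

import sys
import tensorflow as tf

# The real address is whatever the server process printed as `server.target`;
# hypothetical usage: python client.py grpc://localhost:PORT
target = sys.argv[1]

with tf.Session(target) as sess:
    # Ops defined in this process execute on the devices owned by the server process.
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    print(sess.run(a + b))  # 5.0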



Source: https://stackoverflow.com/questions/37874838/forking-a-python-process-after-loading-tensorflow
