Parallel processes in distributed TensorFlow
I have a neural network in TensorFlow with trained parameters; it is the "policy" for an agent. The network is updated in a training loop in the main TensorFlow session in the core program. At the end of each training cycle I need to pass this network to a few parallel processes ("workers"), which will use it to collect samples from the agent's policy interacting with the environment. This has to happen in parallel, because simulating the environment takes most of the time and runs only on a single core, so a few parallel sampling processes are needed. I am struggling with how to structure this in distributed TensorFlow. What is the right way to set this up?
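For reference, here is a minimal sketch of one way such a setup is commonly laid out with the TF 1.x distributed API (`tf.train.ClusterSpec` / `tf.train.Server`): a "ps" job holds the shared policy variables, and each sampling worker builds a local copy of the policy and syncs it from the shared copy before every rollout. The `build_policy` helper, the port numbers, and the layer sizes below are placeholders standing in for the actual network and environment code.

```python
# Minimal sketch, TF 1.x distributed API. `build_policy` and the rollout loop
# are placeholders for the real policy network and environment simulation.
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps":     ["localhost:2222"],                    # holds the shared policy variables
    "worker": ["localhost:2223", "localhost:2224"],  # parallel sampling processes
})

def build_policy(scope):
    # Placeholder for the actual policy network; returns inputs, outputs, variables.
    with tf.variable_scope(scope):
        obs = tf.placeholder(tf.float32, [None, 4], name="obs")
        logits = tf.layers.dense(obs, 2, name="logits")
    return obs, logits, tf.trainable_variables(scope)

def run_ps(task_index):
    server = tf.train.Server(cluster, job_name="ps", task_index=task_index)
    server.join()  # the parameter server just serves the shared variables

def run_worker(task_index):
    server = tf.train.Server(cluster, job_name="worker", task_index=task_index)

    # Shared ("global") copy of the policy: variables are placed on the ps job,
    # and every process that builds this scope resolves to the same variables.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index, cluster=cluster)):
        _, _, global_vars = build_policy("global")

    # Local copy used for fast rollouts on this worker.
    with tf.device("/job:worker/task:%d" % task_index):
        obs, logits, local_vars = build_policy("local_%d" % task_index)
        sync_op = tf.group(*[l.assign(g) for l, g in zip(local_vars, global_vars)])

    with tf.Session(server.target) as sess:
        # In a real setup only one process (e.g. the chief/trainer) should run
        # the initializer for the shared variables; shown here for brevity.
        sess.run(tf.global_variables_initializer())
        while True:
            sess.run(sync_op)  # pull the latest trained parameters from the ps
            # ... roll out the local policy against the environment here ...
```

Each process is launched separately with its job name and task index; the core training program joins the same cluster (for example as one of the workers) and updates the shared "global" variables in place, which the sampling workers then pick up on their next `sync_op`.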