It seems that tf.train.replica_device_setter
doesn't let me specify which GPU to work with.
What I want to do is something like below:
with tf.devi
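Concretely, I am after something in the spirit of the sketch below (the task and GPU indices are placeholders, not my actual job config): pin this worker's compute ops to a chosen GPU, while the variables it creates would still live on the parameter server.

import tensorflow as tf

# Placeholder device string; in the real job this would come from flags.
with tf.device("/job:worker/task:1/gpu:2"):
    my_op = tf.ones(())  # compute op pinned to worker 1, GPU 2
    # (what I want: variables created in this scope would still go to /job:ps)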
If your parameters are not sharded, you could do it with a simplified version of replica_device_setter, like the one below:
import tensorflow as tf

def assign_to_device(worker=0, gpu=0, ps_device="/job:ps/task:0/cpu:0"):
    """Device function: variables go to ps_device, everything else
    goes to the given worker's GPU."""
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        if node_def.op == "Variable":
            return ps_device
        else:
            return "/job:worker/task:%d/gpu:%d" % (worker, gpu)
    return _assign

with tf.device(assign_to_device(1, 2)):
    # this op goes on worker 1, gpu 2
    my_op = tf.ones(())
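One caveat with the snippet above: it matches only the literal op type "Variable". If your graph creates variables as "VariableV2" (newer TF 1.x graphs) or as resource variables ("VarHandleOp"), those would fall through to the worker device. A slightly more defensive sketch, under that assumption, checks against a small whitelist of op types:

import tensorflow as tf

# Op types that should stay on the parameter server. "Variable" covers
# older graphs; "VariableV2" and "VarHandleOp" cover newer / resource
# variables. Adjust the list to whatever your graph actually emits.
PS_OPS = ("Variable", "VariableV2", "VarHandleOp")

def assign_to_device(worker=0, gpu=0, ps_device="/job:ps/task:0/cpu:0"):
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        if node_def.op in PS_OPS:
            return ps_device
        return "/job:worker/task:%d/gpu:%d" % (worker, gpu)
    return _assign

with tf.device(assign_to_device(1, 2)):
    my_op = tf.ones(())  # placed on /job:worker/task:1/gpu:2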