Distributed TensorFlow: CreateSession still waiting

Submitted by 断了今生、忘了曾经 on 2019-11-30 16:20:35

By default, a distributed TensorFlow session will attempt to connect to all servers named in the tf.train.ClusterSpec, and will block until they respond. This provides a useful barrier that ensures that all workers have become ready to receive computation requests before returning control to the user. This barrier happens before the MonitoredTrainingSession code that waits for the chief to initialize variables.
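For concreteness, here is a minimal sketch of where that blocking happens, assuming TF 1.x and a hypothetical two-job cluster with placeholder localhost addresses. The `tf.Session` constructor will not return (and will log "CreateSession still waiting for response from worker...") until every task named in the `ClusterSpec` is reachable:

```python
import tensorflow as tf

# Hypothetical cluster layout; the addresses are placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

server = tf.train.Server(cluster, job_name="worker", task_index=0)

# By default this call blocks until every task in the ClusterSpec
# (both ps tasks and all worker tasks) is up and reachable.
sess = tf.Session(target=server.target)
```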

If you don't want your session to wait on all servers (e.g. just wait on tasks in "/job:ps" and not the other tasks in "/job:worker", which is a common between-graph deployment strategy), the easiest option is to specify a "device filter" when you create your session. The device filter is a whitelist of (partial) device specifications that determines which tasks a tf.Session will contact at startup. For example, the mnist_replica.py test specifies a device filter as part of the tf.ConfigProto that is used to configure the session.
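As a rough illustration (not the exact mnist_replica.py code), a between-graph worker might whitelist only the parameter servers plus its own task, assuming TF 1.x and a hypothetical `task_index` for this worker:

```python
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps": ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

task_index = 0  # this worker's index; an assumption for the sketch
server = tf.train.Server(cluster, job_name="worker", task_index=task_index)

# Only contact the ps tasks and this worker's own task at startup,
# so session creation does not block on the other workers.
config = tf.ConfigProto(
    device_filters=["/job:ps", "/job:worker/task:%d" % task_index])

sess = tf.Session(target=server.target, config=config)
```

With this filter in place, the worker's session starts as soon as the ps tasks and its own server are up, regardless of whether the other workers have started yet.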
