What is the reason to use a parameter server in distributed TensorFlow training?


Question


Short version: can't we store the variables on one of the workers and not use parameter servers?

Long version: I want to implement synchronous distributed training of a neural network in TensorFlow. I want each worker to have a full copy of the model during training.

I've read the distributed TensorFlow tutorial and the code for distributed ImageNet training, and I didn't get why we need parameter servers.

I see that they are used for storing the values of variables, and that replica_device_setter takes care of distributing variables evenly across the parameter servers (it probably does more; I wasn't able to fully understand the code).

The question is: why don't we use one of the workers to store variables? Will I achieve that if I use

with tf.device('/job:worker/task:0/cpu:0'):

instead of

with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):

for the variables? If that works, are there any downsides compared to the solution with parameter servers?
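For reference, here is roughly what I have in mind, as a minimal sketch; the cluster addresses, variable names, and shapes below are placeholders I made up, not taken from any real setup:

import tensorflow as tf  # TF 1.x, matching the APIs used in this question

# Hypothetical 2-worker cluster with no ps job, since the idea is to keep
# all variables on worker 0 (addresses are placeholders).
cluster_spec = tf.train.ClusterSpec({
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})
server = tf.train.Server(cluster_spec, job_name="worker", task_index=0)

# Pin every variable to worker 0's CPU instead of spreading them over parameter servers.
with tf.device('/job:worker/task:0/cpu:0'):
    weights = tf.get_variable("weights", shape=[784, 10])
    bias = tf.get_variable("bias", shape=[10])

# The rest of the graph (forward/backward ops) would still be placed on the local worker.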


Answer 1:


Using a parameter server can give you better network utilization, and it lets you scale your models to more machines.

A concrete example: suppose you have 250M parameters (about 1 GB as float32), it takes 1 second to compute the gradient on each worker, and there are 10 workers. Without a parameter server, each worker has to send its 1 GB gradient to, and receive gradients from, the 9 other workers every second, which requires about 72 Gbps of full-duplex network capacity on each worker; that is not practical.
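A quick back-of-the-envelope check of those numbers (float32, i.e. 4 bytes per parameter, is my assumption here):

params = 250e6                       # 250M parameters
gradient_gb = params * 4 / 1e9       # ~1 GB per gradient at 4 bytes each
workers = 10
peers = workers - 1                  # each worker exchanges gradients with 9 peers
# One gradient step per second, so per-worker traffic in each direction is:
gbps_each_way = gradient_gb * peers * 8   # GB/s -> Gbit/s
print(gbps_each_way)                 # ~72 Gbps sent and ~72 Gbps received (full duplex)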

More realistically, you might have 10 Gbps of network capacity per worker. You prevent this network bottleneck by splitting the parameter server job over, say, 8 machines: each worker then exchanges only 1/8th of the parameters with each parameter server machine, so no single link is saturated.
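As a sketch of what that could look like (assuming a TF 1.x cluster with an 8-task ps job and 10 workers; all addresses below are placeholders), replica_device_setter assigns each variable to one of the ps tasks in round-robin fashion:

import tensorflow as tf  # TF 1.x

# Hypothetical cluster: 8 parameter servers and 10 workers (addresses are placeholders).
cluster_spec = tf.train.ClusterSpec({
    "ps":     ["ps%d.example.com:2222" % i for i in range(8)],
    "worker": ["worker%d.example.com:2222" % i for i in range(10)],
})

# Variables are placed round-robin across the 8 ps tasks;
# non-variable ops stay on the worker that builds the graph.
with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
    w = tf.get_variable("w", shape=[1000, 1000])
    b = tf.get_variable("b", shape=[1000])
    # ... model and optimizer ops go here ...

# With this layout, each worker exchanges ~1/8 of its 1 GB gradient with each ps
# machine per step (~8 Gbps per worker), and each ps machine serves
# 10 workers * 1/8 GB ~= 1.25 GB/s (~10 Gbps), which fits a 10 Gbps network.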




Answer 2:


Another possibility is to use a distributed version of TensorFlow, which automatically handles the data distribution and execution on multiple nodes by using MPI in the backend.

We have recently developed one such version at MaTEx: https://github.com/matex-org/matex, with a paper describing it at https://arxiv.org/abs/1704.04560.

It does synchronous training and provides several parallel dataset reader formats.

We will be happy to help if you have more questions!



Source: https://stackoverflow.com/questions/39559183/what-is-the-reason-to-use-parameter-server-in-distributed-tensorflow-learning
