distributed-training

train on multiple devices

北战南征 posted on 2021-01-28 12:04:10
Question: I know that TensorFlow offers a Distributed Training API that can train on multiple devices such as multiple GPUs, CPUs, TPUs, or multiple computers (workers), following this doc: https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras. But my question is: is there any possible way to split the training using Data Parallelism across multiple machines (including mobile devices and computers)? I would be really grateful for any tutorial/instruction.

Answer 1: As
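For reference, a minimal sketch of the multi-worker data-parallel setup described in the linked tutorial, using tf.distribute.MultiWorkerMirroredStrategy; the host:port addresses and the toy in-memory dataset are placeholders, not part of the original question:

```python
# Minimal multi-worker data-parallelism sketch (assumes two worker machines;
# addresses below are hypothetical). Each machine runs this same script with
# its own "index" in TF_CONFIG, set BEFORE the strategy is created.
import json
import os

import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:12345", "host2:12345"]},  # placeholder addresses
    "task": {"type": "worker", "index": 0},  # 0 on the first machine, 1 on the second
})

strategy = tf.distribute.MultiWorkerMirroredStrategy()

# Variables must be created inside the strategy scope so each worker holds a
# synchronized replica of the model (same model, different shards of data).
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy dataset as a stand-in; Keras shards the batches across the workers.
x = tf.random.normal((1024, 10))
y = tf.random.normal((1024, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(64)

model.fit(dataset, epochs=2)
```

Note that tf.distribute targets CPUs, GPUs, TPUs, and clusters of computers; it does not itself run on mobile devices, which is the part of the question the answer addresses.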