I have a standard TensorFlow Estimator with some model and want to run it on multiple GPUs instead of just one. How can this be done using data parallelism? I searched but could not find a clear example.
You can do this with tf.distribute.MirroredStrategy: pass the strategy to tf.estimator.RunConfig via its train_distribute argument, then train as usual (for example with tf.estimator.train_and_evaluate). MirroredStrategy implements data parallelism by replicating the model on each GPU and splitting every batch across the replicas.