Fix documentation for multi-worker training.

PiperOrigin-RevId: 217381397
Ayush Dubey 2018-10-16 14:01:38 -07:00 committed by TensorFlower Gardener
parent f2d88e5ad4
commit d4eb6ab275


@@ -190,7 +190,7 @@ in the input function gives a solid boost in performance. When using
For multi-worker training, no code change is required to the `Estimator` code.
You can run the same model code for all tasks in your cluster including
parameter servers and the evaluator. But you need to use
-`tf.estimator.train_and_evaluator`, explicitly specify `num_gpus_per_workers`
+`tf.estimator.train_and_evaluate`, explicitly specify `num_gpus_per_workers`
for your strategy object, and set "TF\_CONFIG" environment variables for each
binary running in your cluster. We'll provide a Kubernetes template in the
[tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets
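
For illustration, here is a minimal sketch of how the pieces described in the changed passage might fit together on one task. The `CollectiveAllReduceStrategy` class, its `num_gpus_per_worker` keyword, the cluster addresses, and the `model_fn`/`input_fn` names are assumptions about the contrib-era 1.x API and a hypothetical cluster, not part of this commit; adapt them to your TensorFlow version and deployment.

```python
import json
import os

import tensorflow as tf

# Each binary in the cluster gets its own TF_CONFIG describing the full
# cluster and this task's role. Hostnames and the task assignment below
# are placeholders for illustration only.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "chief": ["host0:2222"],
        "worker": ["host1:2222", "host2:2222"],
        "ps": ["host3:2222"],
        "evaluator": ["host4:2222"],
    },
    "task": {"type": "worker", "index": 0},
})

# Strategy object with the number of GPUs per worker set explicitly.
# The class name and keyword argument are assumed from the 1.x contrib API.
distribution = tf.contrib.distribute.CollectiveAllReduceStrategy(
    num_gpus_per_worker=2)

# Pass the strategy to the Estimator through RunConfig.
config = tf.estimator.RunConfig(train_distribute=distribution)
estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)

# The same call runs unchanged on every task in the cluster, including
# parameter servers and the evaluator.
tf.estimator.train_and_evaluate(
    estimator,
    train_spec=tf.estimator.TrainSpec(input_fn=train_input_fn),
    eval_spec=tf.estimator.EvalSpec(input_fn=eval_input_fn))
```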