Merge pull request #2469 from vinhngx/amp-doc

adding amp doc
lissyx 2019-10-28 08:05:50 +01:00 committed by GitHub
commit 3a2eb28983

@@ -144,6 +144,19 @@ Each dataset has a corresponding importer script in ``bin/`` that can be used to
If you've run the old importers (in ``util/importers/``), they may have removed source files that the new importers need. In that case, simply remove the extracted folders and let the importer extract and process the dataset from scratch again, and things should work.
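
For example, assuming a Common Voice download was extracted under ``/data/CV/en`` (an illustrative path; substitute your own) and ``bin/import_cv2.py`` is the matching importer, a minimal sketch of starting over:

.. code-block:: bash

   # Remove the stale extracted folder so the importer starts from scratch
   # (the path below is illustrative; use your actual extraction directory)
   rm -rf /data/CV/en/extracted
   # Re-run the matching importer from bin/; exact arguments vary per
   # importer, so check its --help output for the precise usage
   python3 bin/import_cv2.py /data/CV/en
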
Training with automatic mixed precision
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Automatic Mixed Precision (AMP) training on GPU for TensorFlow has recently been `introduced <https://medium.com/tensorflow/automatic-mixed-precision-in-tensorflow-for-faster-ai-training-on-nvidia-gpus-6033234b2540>`_.
Mixed precision training makes use of both FP32 and FP16 precisions where appropriate. FP16 operations can leverage the Tensor Cores on NVIDIA GPUs (Volta, Turing or newer architectures) for improved throughput, and mixed precision training often allows larger batch sizes as well. Automatic mixed precision training on GPU can be enabled in DeepSpeech via the flag ``--automatic_mixed_precision=True``:

.. code-block:: bash

   python3 DeepSpeech.py --train_files ./train.csv --dev_files ./dev.csv --test_files ./test.csv --automatic_mixed_precision=True
On a Volta generation V100 GPU, automatic mixed precision speeds up DeepSpeech training and evaluation by ~30%-40%.
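
The article linked above also describes switching the same graph rewrite on globally through an environment variable when running inside NVIDIA's TensorFlow (NGC) containers; a sketch of that alternative invocation, assuming such a container:

.. code-block:: bash

   # Inside an NVIDIA TensorFlow (NGC) container, AMP can also be enabled
   # globally via an environment variable instead of the DeepSpeech flag
   TF_ENABLE_AUTO_MIXED_PRECISION=1 python3 DeepSpeech.py --train_files ./train.csv --dev_files ./dev.csv --test_files ./test.csv
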
Checkpointing
^^^^^^^^^^^^^