Add note on model input data considerations and reference training/scorer docs

Reuben Morais 2020-07-07 19:59:41 +02:00
parent 18ea7391f3
commit daf28086e5
2 changed files with 13 additions and 0 deletions

View File

@@ -1,3 +1,5 @@
.. _training-docs:
Training Your Own Model
=======================
@@ -232,6 +234,8 @@ If your own data uses the *exact* same alphabet as the English release model (i
N.B. - If you have access to a pre-trained model which uses UTF-8 bytes at the output layer you can always fine-tune, because any alphabet should be encodable as UTF-8.
.. _training-fine-tuning:
Fine-Tuning (same alphabet)
^^^^^^^^^^^^^^^^^^^^^^^^^^^

View File

@@ -54,6 +54,15 @@ There are several pre-trained model files available in official releases. Files
Finally, the pre-trained model files also include files ending in ``.scorer``. These are external scorers (language models) that are used at inference time in conjunction with an acoustic model (``.pbmm`` or ``.tflite`` file) to produce transcriptions. We also provide further documentation on :ref:`the decoding process <decoder-docs>` and :ref:`how language models are generated <scorer-scripts>`.
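For concreteness, a minimal invocation sketch follows. The model and scorer filenames are assumptions based on the 0.7.x release naming, and ``audio.wav`` is a placeholder for a 16 kHz mono WAV file:

.. code-block:: bash

   # Transcribe with the acoustic model alone (no external scorer)
   deepspeech --model deepspeech-0.7.4-models.pbmm --audio audio.wav

   # Transcribe with an external scorer to improve accuracy
   deepspeech --model deepspeech-0.7.4-models.pbmm \
              --scorer deepspeech-0.7.4-models.scorer \
              --audio audio.wav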
Important considerations on model inputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The release notes include detailed information on how the released models were trained and constructed. Important considerations for users include the characteristics of the training data and whether they match your intended use case. For acoustic models, an important characteristic is the demographic distribution of the speakers; for language models, it is the sources of the text used in their construction. If the data used to train the models does not align with your intended use case, it may be necessary to adapt existing models or train new ones in order to get good accuracy in your transcription results.
The process for training an acoustic model is described in :ref:`training-docs`. In particular, fine-tuning a release model with your own data can be a good way to leverage relatively small amounts of data that would not be sufficient for training a new model from scratch. See the :ref:`fine-tuning and transfer learning sections <training-fine-tuning>` for more information. :ref:`Data augmentation <training-data-augmentation>` can also be a good way to increase the value of smaller training sets.
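As a rough sketch of what fine-tuning from a release checkpoint can look like (the checkpoint directory, CSV filenames, and hyperparameter values below are illustrative placeholders, not recommendations; see :ref:`training-docs` for the authoritative flags):

.. code-block:: bash

   # Continue training from a downloaded release checkpoint with a low
   # learning rate; all paths and values here are placeholders.
   python3 DeepSpeech.py --n_hidden 2048 \
                         --checkpoint_dir path/to/release/checkpoint \
                         --epochs 3 \
                         --train_files my-train.csv \
                         --dev_files my-dev.csv \
                         --test_files my-test.csv \
                         --learning_rate 0.0001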
Creating your own external scorer from text data is another way to adapt the models to your specific needs. The process and tools used to generate an external scorer package are described in :ref:`scorer-scripts`, and an overview of how DeepSpeech uses the external scorer at inference time is available in :ref:`decoder-docs`. Generating a smaller scorer from a single-purpose text dataset is a quick process and can bring significant accuracy improvements, especially for more constrained, limited-vocabulary applications.
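As a sketch of that process only (the exact flags and recommended alpha/beta values are documented in :ref:`scorer-scripts`; the corpus file, vocabulary size, and parameter values below are placeholders):

.. code-block:: bash

   # Build a pruned KenLM language model from a text corpus
   python3 generate_lm.py --input_txt my_corpus.txt.gz --output_dir . \
       --top_k 50000 --kenlm_bins path/to/kenlm/build/bin \
       --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" \
       --binary_a_bits 255 --binary_q_bits 8 --binary_type trie

   # Package the language model into a .scorer file for inference
   ./generate_scorer_package --alphabet path/to/alphabet.txt --lm lm.binary \
       --vocab vocab-50000.txt --package my_model.scorer \
       --default_alpha 0.93 --default_beta 1.18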
Model compatibility
^^^^^^^^^^^^^^^^^^^