
Project DeepSpeech
==================


.. image:: https://readthedocs.org/projects/deepspeech/badge/?version=latest
   :target: http://deepspeech.readthedocs.io/?badge=latest
   :alt: Documentation


.. image:: https://community-tc.services.mozilla.com/api/github/v1/repository/mozilla/DeepSpeech/master/badge.svg
   :target: https://community-tc.services.mozilla.com/api/github/v1/repository/mozilla/DeepSpeech/master/latest
   :alt: Task Status


DeepSpeech is an open-source Speech-To-Text engine whose model is trained with machine learning techniques, based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.

**NOTE:** This documentation applies to the **master branch** of DeepSpeech only. If you're using a stable release, please switch to the documentation for the corresponding version with GitHub's branch switcher button above.

To install and use ``deepspeech``, all you have to do is:

.. code-block:: bash

   # Create and activate a virtualenv
   virtualenv -p python3 $HOME/tmp/deepspeech-venv/
   source $HOME/tmp/deepspeech-venv/bin/activate

   # Install DeepSpeech
   pip3 install deepspeech

   # Download pre-trained English model and extract
   curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.6.0/deepspeech-0.6.0-models.tar.gz
   tar xvf deepspeech-0.6.0-models.tar.gz

   # Download example audio files
   curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.6.0/audio-0.6.0.tar.gz
   tar xvf audio-0.6.0.tar.gz

   # Transcribe an audio file
   deepspeech --model deepspeech-0.6.0-models/output_graph.pbmm --lm deepspeech-0.6.0-models/lm.binary --trie deepspeech-0.6.0-models/trie --audio audio/2830-3980-0043.wav

A pre-trained English model is available for use and can be downloaded using `the instructions below <USING.rst#using-a-pre-trained-model>`_. Currently, only 16-bit, 16 kHz, mono-channel WAVE audio files are supported in the Python client. A package with some example audio files is available for download in our `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_.
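
If your audio does not match this format, convert it first. Below is a minimal sketch using SoX (assumptions: SoX is installed, and ``my_audio.mp3`` is a placeholder for your own file):

.. code-block:: bash

   # Resample to 16 kHz, 16-bit samples, single channel, WAVE container
   sox my_audio.mp3 -r 16000 -b 16 -c 1 my_audio_16k.wav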

Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``deepspeech`` on a GPU, install the GPU-specific package:

.. code-block:: bash

   # Create and activate a virtualenv
   virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/
   source $HOME/tmp/deepspeech-gpu-venv/bin/activate

   # Install DeepSpeech CUDA enabled package
   pip3 install deepspeech-gpu

   # Transcribe an audio file.
   deepspeech --model deepspeech-0.6.0-models/output_graph.pbmm --lm deepspeech-0.6.0-models/lm.binary --trie deepspeech-0.6.0-models/trie --audio audio/2830-3980-0043.wav

Please ensure you have the required `CUDA dependencies <USING.rst#cuda-dependency>`_.
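
A quick way to sanity-check the GPU setup before running inference (an assumption: a Linux system where the NVIDIA driver and the CUDA libraries are already installed):

.. code-block:: bash

   # Confirm the driver can see the GPU
   nvidia-smi

   # Confirm the CUDA runtime and cuDNN libraries are on the loader path
   ldconfig -p | grep -E 'libcudart|libcudnn'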

See the output of ``deepspeech -h`` for more information on the use of ``deepspeech``. (If you experience problems running ``deepspeech``, please check `required runtime dependencies <native_client/README.rst#required-dependencies>`_.)
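
If ``deepspeech`` fails with a missing-library error, the sketch below lists the shared-library dependencies of the native code bundled with the Python package, so anything marked "not found" can be identified (assumptions: a Linux system, and that the package keeps its ``.so`` files inside the installed ``deepspeech`` directory):

.. code-block:: bash

   # Locate the installed package directory
   DS_DIR="$(python3 -c 'import deepspeech, os; print(os.path.dirname(deepspeech.__file__))')"

   # Check every bundled shared object for unresolved dependencies
   find "$DS_DIR" -name '*.so*' -exec ldd {} \;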

----

**Table of Contents**
  
* `Using a Pre-trained Model <USING.rst#using-a-pre-trained-model>`_

  * `CUDA dependency <USING.rst#cuda-dependency>`_
  * `Getting the pre-trained model <USING.rst#getting-the-pre-trained-model>`_
  * `Model compatibility <USING.rst#model-compatibility>`_
  * `Using the Python package <USING.rst#using-the-python-package>`_
  * `Using the Node.JS package <USING.rst#using-the-nodejs-package>`_
  * `Using the Command Line client <USING.rst#using-the-command-line-client>`_
  * `Installing bindings from source <USING.rst#installing-bindings-from-source>`_
  * `Third party bindings <USING.rst#third-party-bindings>`_


* `Trying out DeepSpeech with examples <examples/README.rst>`_

* `Training your own Model <TRAINING.rst#training-your-own-model>`_

  * `Prerequisites for training a model <TRAINING.rst#prerequisites-for-training-a-model>`_
  * `Getting the training code <TRAINING.rst#getting-the-training-code>`_
  * `Installing Python dependencies <TRAINING.rst#installing-python-dependencies>`_
  * `Recommendations <TRAINING.rst#recommendations>`_
  * `Common Voice training data <TRAINING.rst#common-voice-training-data>`_
  * `Training a model <TRAINING.rst#training-a-model>`_
  * `Checkpointing <TRAINING.rst#checkpointing>`_
  * `Exporting a model for inference <TRAINING.rst#exporting-a-model-for-inference>`_
  * `Exporting a model for TFLite <TRAINING.rst#exporting-a-model-for-tflite>`_
  * `Making a mmap-able model for inference <TRAINING.rst#making-a-mmap-able-model-for-inference>`_
  * `Continuing training from a release model <TRAINING.rst#continuing-training-from-a-release-model>`_
  * `Training with Augmentation <TRAINING.rst#training-with-augmentation>`_

* `Contribution guidelines <CONTRIBUTING.rst>`_
* `Contact/Getting Help <SUPPORT.rst>`_