Merge pull request #2395 from lissyx/md-to-rst
Move from Markdown to reStructuredText
commit 30e0da9029
@ -1,2 +1,3 @@
linters:
- pylint:
    filefilter: ['+ *.py', '+ bin/*.py']
@ -0,0 +1,53 @@
|
|||
Contribution guidelines
|
||||
=======================
|
||||
|
||||
This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the `Mozilla Community Participation Guidelines <https://www.mozilla.org/about/governance/policies/participation/>`_.
|
||||
|
||||
Before making a Pull Request, check your changes for basic mistakes and style problems by using a linter. We have cardboardlinter set up in this repository, so for example, if you've made some changes and would like to run the linter on just the changed code, you can use the following command:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install pylint cardboardlint
|
||||
cardboardlinter --refspec master
|
||||
|
||||
This will compare the code against master and run the linter on all the changes. We plan to introduce more linter checks (e.g. for C++) in the future. To run it automatically as a git pre-commit hook, do the following:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cat <<\EOF > .git/hooks/pre-commit
|
||||
#!/bin/bash
|
||||
if [ ! -x "$(command -v cardboardlinter)" ]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# First, stash index and work dir, keeping only the
|
||||
# to-be-committed changes in the working directory.
|
||||
echo "Stashing working tree changes..." 1>&2
|
||||
old_stash=$(git rev-parse -q --verify refs/stash)
|
||||
git stash save -q --keep-index
|
||||
new_stash=$(git rev-parse -q --verify refs/stash)
|
||||
|
||||
# If there were no changes (e.g., `--amend` or `--allow-empty`)
|
||||
# then nothing was stashed, and we should skip everything,
|
||||
# including the tests themselves. (Presumably the tests passed
|
||||
# on the previous commit, so there is no need to re-run them.)
|
||||
if [ "$old_stash" = "$new_stash" ]; then
|
||||
echo "No changes, skipping lint." 1>&2
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Run tests
|
||||
cardboardlinter --refspec HEAD -n auto
|
||||
status=$?
|
||||
|
||||
# Restore changes
|
||||
echo "Restoring working tree changes..." 1>&2
|
||||
git reset --hard -q && git stash apply --index -q && git stash drop -q
|
||||
|
||||
# Exit with status from test-run: nonzero prevents commit
|
||||
exit $status
|
||||
EOF
|
||||
chmod +x .git/hooks/pre-commit
|
||||
|
||||
This will run the linters on just the changes made in your commit.
|
||||
|
README.md
@ -1,514 +0,0 @@
|
|||
# Project DeepSpeech
|
||||
|
||||
[![Documentation](https://readthedocs.org/projects/deepspeech/badge/?version=latest)](http://deepspeech.readthedocs.io/?badge=latest)
|
||||
[![Task Status](https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/badge.svg)](https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/latest)
|
||||
|
||||
DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on [Baidu's Deep Speech research paper](https://arxiv.org/abs/1412.5567). Project DeepSpeech uses Google's [TensorFlow](https://www.tensorflow.org/) to make the implementation easier.
|
||||
|
||||
To install and use deepspeech all you have to do is:
|
||||
|
||||
```bash
|
||||
# Create and activate a virtualenv
|
||||
virtualenv -p python3 $HOME/tmp/deepspeech-venv/
|
||||
source $HOME/tmp/deepspeech-venv/bin/activate
|
||||
|
||||
# Install DeepSpeech
|
||||
pip3 install deepspeech
|
||||
|
||||
# Download pre-trained English model and extract
|
||||
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
|
||||
tar xvf deepspeech-0.5.1-models.tar.gz
|
||||
|
||||
# Download example audio files
|
||||
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/audio-0.5.1.tar.gz
|
||||
tar xvf audio-0.5.1.tar.gz
|
||||
|
||||
# Transcribe an audio file
|
||||
deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm --alphabet deepspeech-0.5.1-models/alphabet.txt --lm deepspeech-0.5.1-models/lm.binary --trie deepspeech-0.5.1-models/trie --audio audio/2830-3980-0043.wav
|
||||
```
|
||||
|
||||
A pre-trained English model is available for use and can be downloaded using [the instructions below](#using-a-pre-trained-model). Currently, only 16-bit, 16 kHz, mono-channel WAVE audio files are supported in the Python client. A package with some example audio files is available for download in our [release notes](https://github.com/mozilla/DeepSpeech/releases/latest).
|
||||
|
||||
Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the [release notes](https://github.com/mozilla/DeepSpeech/releases/latest) to find which GPUs are supported. To run `deepspeech` on a GPU, install the GPU specific package:
|
||||
|
||||
```bash
|
||||
# Create and activate a virtualenv
|
||||
virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/
|
||||
source $HOME/tmp/deepspeech-gpu-venv/bin/activate
|
||||
|
||||
# Install DeepSpeech CUDA enabled package
|
||||
pip3 install deepspeech-gpu
|
||||
|
||||
# Transcribe an audio file.
|
||||
deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm --alphabet deepspeech-0.5.1-models/alphabet.txt --lm deepspeech-0.5.1-models/lm.binary --trie deepspeech-0.5.1-models/trie --audio audio/2830-3980-0043.wav
|
||||
```
|
||||
|
||||
Please ensure you have the required [CUDA dependencies](#cuda-dependency).
|
||||
|
||||
See the output of `deepspeech -h` for more information on the use of `deepspeech`. (If you experience problems running `deepspeech`, please check [required runtime dependencies](native_client/README.md#required-dependencies)).
|
||||
|
||||
---
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Using a Pre-trained Model](#using-a-pre-trained-model)
|
||||
- [CUDA dependency](#cuda-dependency)
|
||||
- [Getting the pre-trained model](#getting-the-pre-trained-model)
|
||||
- [Model compatibility](#model-compatibility)
|
||||
- [Using the Python package](#using-the-python-package)
|
||||
- [Using the Node.JS package](#using-the-nodejs-package)
|
||||
- [Using the Command Line client](#using-the-command-line-client)
|
||||
- [Installing bindings from source](#installing-bindings-from-source)
|
||||
- [Third party bindings](#third-party-bindings)
|
||||
- [Training your own Model](#training-your-own-model)
|
||||
- [Prerequisites for training a model](#prerequisites-for-training-a-model)
|
||||
- [Getting the training code](#getting-the-training-code)
|
||||
- [Installing Python dependencies](#installing-python-dependencies)
|
||||
- [Recommendations](#recommendations)
|
||||
- [Common Voice training data](#common-voice-training-data)
|
||||
- [Training a model](#training-a-model)
|
||||
- [Checkpointing](#checkpointing)
|
||||
- [Exporting a model for inference](#exporting-a-model-for-inference)
|
||||
- [Exporting a model for TFLite](#exporting-a-model-for-tflite)
|
||||
- [Making a mmap-able model for inference](#making-a-mmap-able-model-for-inference)
|
||||
- [Continuing training from a release model](#continuing-training-from-a-release-model)
|
||||
- [Training with Augmentation](#training-with-augmentation)
|
||||
- [Contribution guidelines](#contribution-guidelines)
|
||||
- [Contact/Getting Help](#contactgetting-help)
|
||||
|
||||
## Using a Pre-trained Model
|
||||
|
||||
Inference using a DeepSpeech pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed [further down in this README](#third-party-bindings).
|
||||
|
||||
- [The Python package/language binding](#using-the-python-package)
|
||||
- [The Node.JS package/language binding](#using-the-nodejs-package)
|
||||
- [The Command-Line client](#using-the-command-line-client)
|
||||
- [The .NET client/language binding](native_client/dotnet/README.md)
|
||||
|
||||
Running `deepspeech` might require some runtime dependencies to be already installed on your system (see below):
|
||||
|
||||
* sox - The Python and Node.JS clients use SoX to resample files to 16kHz.
|
||||
* libgomp1 - libsox (statically linked into the clients) depends on OpenMP. Some people have had to install this manually.
|
||||
* libstdc++ - Standard C++ Library implementation. Some people have had to install this manually.
|
||||
* libpthread - On Linux, some people have had to install libpthread manually.
|
||||
|
||||
Please refer to your system's documentation on how to install these dependencies.
|
||||
|
||||
|
||||
### CUDA dependency
|
||||
|
||||
The GPU-capable builds (Python, Node.JS, C++, etc.) depend on the same CUDA runtime as upstream TensorFlow. Currently, with TensorFlow 1.14, this means CUDA 10.0 and CuDNN v7.5. [See the TensorFlow documentation](https://www.tensorflow.org/install/gpu).
|
||||
|
||||
### Getting the pre-trained model
|
||||
|
||||
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech [releases page](https://github.com/mozilla/DeepSpeech/releases). Alternatively, you can run the following command to download and unzip the model files in your current directory:
|
||||
|
||||
```bash
|
||||
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
|
||||
tar xvfz deepspeech-0.5.1-models.tar.gz
|
||||
```
|
||||
|
||||
### Model compatibility
|
||||
|
||||
DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
|
||||
|
||||
### Using the Python package
|
||||
|
||||
Pre-built binaries which can be used for performing inference with a trained model can be installed with `pip3`. You can then use the `deepspeech` binary to do speech-to-text on an audio file:
|
||||
|
||||
For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in [this documentation](http://docs.python-guide.org/en/latest/dev/virtualenvs/).
|
||||
|
||||
We will continue under the assumption that you already have your system properly set up to create new virtual environments.
|
||||
|
||||
#### Create a DeepSpeech virtual environment
|
||||
|
||||
In creating a virtual environment you will create a directory containing a `python3` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on `$HOME/tmp/deepspeech-venv`. You can create it using this command:
|
||||
|
||||
```
|
||||
$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/
|
||||
```
|
||||
|
||||
Once this command completes successfully, the environment will be ready to be activated.
|
||||
|
||||
#### Activating the environment
|
||||
|
||||
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
|
||||
|
||||
```
|
||||
$ source $HOME/tmp/deepspeech-venv/bin/activate
|
||||
```
|
||||
|
||||
#### Installing DeepSpeech Python bindings
|
||||
|
||||
Once your environment has been set up and loaded, you can use `pip3` to manage packages locally. On a fresh setup of the `virtualenv`, you will have to install the DeepSpeech wheel. You can check whether `deepspeech` is already installed with `pip3 list`.
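For a quick check (a minimal sketch, assuming a POSIX shell with `grep` available):

```bash
# Show only the deepspeech entries from the list of installed packages
pip3 list | grep -i deepspeech
```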
|
||||
|
||||
To perform the installation, just use `pip3` as such:
|
||||
|
||||
```
|
||||
$ pip3 install deepspeech
|
||||
```
|
||||
|
||||
If `deepspeech` is already installed, you can update it as such:
|
||||
|
||||
```
|
||||
$ pip3 install --upgrade deepspeech
|
||||
```
|
||||
|
||||
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows:
|
||||
|
||||
```
|
||||
$ pip3 install deepspeech-gpu
|
||||
```
|
||||
|
||||
See the [release notes](https://github.com/mozilla/DeepSpeech/releases) to find which GPUs are supported. Please ensure you have the required [CUDA dependency](#cuda-dependency).
|
||||
|
||||
You can update `deepspeech-gpu` as follows:
|
||||
|
||||
```
|
||||
$ pip3 install --upgrade deepspeech-gpu
|
||||
```
|
||||
|
||||
In both cases, `pip3` should take care of installing all the required dependencies. After installation has finished, you should be able to call `deepspeech` from the command-line.
|
||||
|
||||
|
||||
Note: the following command assumes you [downloaded the pre-trained model](#getting-the-pre-trained-model).
|
||||
|
||||
```bash
|
||||
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
|
||||
```
|
||||
|
||||
The arguments `--lm` and `--trie` are optional, and represent a language model.
|
||||
|
||||
See [client.py](native_client/python/client.py) for an example of how to use the package programmatically.
|
||||
|
||||
### Using the Node.JS package
|
||||
|
||||
You can download the Node.JS bindings using `npm`:
|
||||
|
||||
```bash
|
||||
npm install deepspeech
|
||||
```
|
||||
|
||||
Please note that as of now, we only support Node.JS versions 4, 5 and 6. Once [SWIG has support](https://github.com/swig/swig/pull/968) we can build for newer versions.
|
||||
|
||||
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows:
|
||||
|
||||
```bash
|
||||
npm install deepspeech-gpu
|
||||
```
|
||||
|
||||
See the [release notes](https://github.com/mozilla/DeepSpeech/releases) to find which GPUs are supported. Please ensure you have the required [CUDA dependency](#cuda-dependency).
|
||||
|
||||
See [client.js](native_client/javascript/client.js) for an example of how to use the bindings. Or download the [wav example](examples/nodejs_wav).
|
||||
|
||||
|
||||
### Using the Command-Line client
|
||||
|
||||
To download the pre-built binaries for the `deepspeech` command-line (compiled C++) client, use `util/taskcluster.py`:
|
||||
|
||||
```bash
|
||||
python3 util/taskcluster.py --target .
|
||||
```
|
||||
|
||||
or if you're on macOS:
|
||||
|
||||
```bash
|
||||
python3 util/taskcluster.py --arch osx --target .
|
||||
```
|
||||
|
||||
Also, if you need binaries different from the current master, such as `v0.2.0-alpha.6`, you can use `--branch`:
|
||||
|
||||
```bash
|
||||
python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."
|
||||
```
|
||||
|
||||
The script `taskcluster.py` will download `native_client.tar.xz` (which includes the `deepspeech` binary, `generate_trie` and associated libraries) and extract it into the current folder. Also, `taskcluster.py` will download binaries for Linux/x86_64 by default, but you can override that behavior with the `--arch` parameter. See the help info with `python util/taskcluster.py -h` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
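For instance, to fetch binaries built for an ARMv7 device instead of the default Linux/x86_64 build, you could run something like this (a sketch; check `python3 util/taskcluster.py -h` for the architecture values supported by your checkout):

```bash
# Download native_client.tar.xz built for ARMv7 into the current directory
python3 util/taskcluster.py --arch arm --target .
```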
|
||||
|
||||
Note: the following command assumes you [downloaded the pre-trained model](#getting-the-pre-trained-model).
|
||||
|
||||
```bash
|
||||
./deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio audio_input.wav
|
||||
```
|
||||
|
||||
See the help output with `./deepspeech -h` and the [native client README](native_client/README.md) for more details.
|
||||
|
||||
### Installing bindings from source
|
||||
|
||||
If pre-built binaries aren't available for your system, you'll need to install them from scratch. Follow these [`native_client` installation instructions](native_client/README.md).
|
||||
|
||||
### Third party bindings
|
||||
|
||||
In addition to the bindings above, third party developers have started to provide bindings to other languages:
|
||||
|
||||
* [Asticode](https://github.com/asticode) provides [Golang](https://golang.org) bindings in its [go-astideepspeech](https://github.com/asticode/go-astideepspeech) repo.
|
||||
* [RustAudio](https://github.com/RustAudio) provide a [Rust](https://www.rust-lang.org) binding, the installation and use of which is described in their [deepspeech-rs](https://github.com/RustAudio/deepspeech-rs) repo.
|
||||
* [stes](https://github.com/stes) provides preliminary [PKGBUILDs](https://wiki.archlinux.org/index.php/PKGBUILD) to install the client and python bindings on [Arch Linux](https://www.archlinux.org/) in the [arch-deepspeech](https://github.com/stes/arch-deepspeech) repo.
|
||||
* [gst-deepspeech](https://github.com/Elleo/gst-deepspeech) provides a [GStreamer](https://gstreamer.freedesktop.org/) plugin which can be used from any language with GStreamer bindings.
|
||||
|
||||
## Training Your Own Model
|
||||
|
||||
### Prerequisites for training a model
|
||||
|
||||
* [Python 3.6](https://www.python.org/)
|
||||
* [Git Large File Storage](https://git-lfs.github.com/)
|
||||
* Mac or Linux environment
|
||||
|
||||
### Getting the training code
|
||||
|
||||
Install [Git Large File Storage](https://git-lfs.github.com/) either manually or through a package-manager if available on your system. Then clone the DeepSpeech repository normally:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/mozilla/DeepSpeech
|
||||
```
|
||||
|
||||
### Creating a virtual environment
|
||||
|
||||
In creating a virtual environment you will create a directory containing a `python3` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on `$HOME/tmp/deepspeech-train-venv`. You can create it using this command:
|
||||
|
||||
```
|
||||
$ virtualenv -p python3 $HOME/tmp/deepspeech-train-venv/
|
||||
```
|
||||
|
||||
Once this command completes successfully, the environment will be ready to be activated.
|
||||
|
||||
### Activating the environment
|
||||
|
||||
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
|
||||
|
||||
```
|
||||
$ source $HOME/tmp/deepspeech-train-venv/bin/activate
|
||||
```
|
||||
|
||||
### Installing Python dependencies
|
||||
|
||||
Install the required dependencies using `pip3`:
|
||||
|
||||
```bash
|
||||
cd DeepSpeech
|
||||
pip3 install -r requirements.txt
|
||||
```
|
||||
|
||||
You'll also need to install the `ds_ctcdecoder` Python package. `ds_ctcdecoder` is required for decoding the outputs of the `deepspeech` acoustic model into text. You can use `util/taskcluster.py` with the `--decoder` flag to get a URL to a binary of the decoder package appropriate for your platform and Python version:
|
||||
|
||||
```bash
|
||||
pip3 install $(python3 util/taskcluster.py --decoder)
|
||||
```
|
||||
|
||||
This command will download and install the `ds_ctcdecoder` package. You can override the platform with `--arch` if you want the package for ARM7 (`--arch arm`) or ARM64 (`--arch arm64`). If you prefer building the `ds_ctcdecoder` package from source, see the [native_client README file](native_client/README.md).
|
||||
|
||||
### Recommendations
|
||||
|
||||
If you have a capable (NVIDIA, at least 8GB of VRAM) GPU, it is highly recommended to install TensorFlow with GPU support. Training will be significantly faster than using the CPU. To enable GPU support, you can do:
|
||||
|
||||
```bash
|
||||
pip3 uninstall tensorflow
|
||||
pip3 install 'tensorflow-gpu==1.14.0'
|
||||
```
|
||||
|
||||
Please ensure you have the required [CUDA dependency](#cuda-dependency).
|
||||
|
||||
Some people have reported the following failure at training time:
|
||||
```
|
||||
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
|
||||
[[{{node tower_0/conv1d/Conv2D}}]]
|
||||
```
|
||||
|
||||
Setting the `TF_FORCE_GPU_ALLOW_GROWTH` environment variable to `true` seems to help in such cases. This could also be due to an incorrect version of libcudnn. Double check your versions with the [TensorFlow 1.14 documentation](#cuda-dependency).
|
||||
|
||||
### Common Voice training data
|
||||
|
||||
The Common Voice corpus consists of voice samples that were donated through Mozilla's [Common Voice](https://voice.mozilla.org/) Initiative.
|
||||
You can download individual CommonVoice v2.0 language data sets from [here](https://voice.mozilla.org/data).
|
||||
After extraction of such a data set, you'll find the following contents:
|
||||
- the `*.tsv` files output by CorporaCreator for the downloaded language
|
||||
- the mp3 audio files they reference in a `clips` sub-directory.
|
||||
|
||||
To bring this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (`bin/import_cv2.py`):
|
||||
|
||||
```bash
|
||||
bin/import_cv2.py --filter_alphabet path/to/some/alphabet.txt /path/to/extracted/language/archive
|
||||
```
|
||||
|
||||
Providing a filter alphabet is optional. It will exclude all samples whose transcripts contain characters not in the specified alphabet.
|
||||
Running the importer with `-h` will show you some additional options.
|
||||
|
||||
Once the import is done, the `clips` sub-directory will contain for each required `.mp3` an additional `.wav` file.
|
||||
It will also add the following `.csv` files:
|
||||
|
||||
- `clips/train.csv`
|
||||
- `clips/dev.csv`
|
||||
- `clips/test.csv`
|
||||
|
||||
All entries in these CSV files refer to their samples by absolute paths. So moving this sub-directory would require another import or tweaking the CSV files accordingly.
|
||||
|
||||
To use Common Voice data during training, validation and testing, you pass (comma separated combinations of) their filenames into `--train_files`, `--dev_files`, `--test_files` parameters of `DeepSpeech.py`.
|
||||
|
||||
If, for example, Common Voice language `en` was extracted to `../data/CV/en/`, `DeepSpeech.py` could be called like this:
|
||||
|
||||
```bash
|
||||
./DeepSpeech.py --train_files ../data/CV/en/clips/train.csv --dev_files ../data/CV/en/clips/dev.csv --test_files ../data/CV/en/clips/test.csv
|
||||
```
|
||||
|
||||
### Training a model
|
||||
|
||||
The central (Python) script is `DeepSpeech.py` in the project's root directory. For its list of command line options, you can call:
|
||||
|
||||
```bash
|
||||
./DeepSpeech.py --helpfull
|
||||
```
|
||||
|
||||
To get the output of this in a slightly better-formatted way, you can also look up the option definitions in [`util/flags.py`](util/flags.py).
|
||||
|
||||
For executing pre-configured training scenarios, there is a collection of convenience scripts in the `bin` folder. Most of them are named after the corpora they are configured for. Keep in mind that most speech corpora are *very large*, on the order of tens of gigabytes, and some aren't free. Downloading and preprocessing them can take a very long time, and training on them without a fast GPU (GTX 10 series or newer recommended) takes even longer.
|
||||
|
||||
**If you experience GPU OOM errors while training, try reducing the batch size with the `--train_batch_size`, `--dev_batch_size` and `--test_batch_size` parameters.**
|
||||
|
||||
As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout, activate the virtualenv created above, and run:
|
||||
|
||||
```bash
|
||||
./bin/run-ldc93s1.sh
|
||||
```
|
||||
|
||||
This script will train on a small sample dataset composed of just a single audio file, the sample file for the [TIMIT Acoustic-Phonetic Continuous Speech Corpus](https://catalog.ldc.upenn.edu/LDC93S1), which can be overfitted on a GPU in a few minutes for demonstration purposes. From here, you can alter any variables with regards to what dataset is used, how many training iterations are run and the default values of the network parameters.
|
||||
|
||||
Also feel free to pass additional (or overriding) `DeepSpeech.py` parameters to these scripts. Then, just run the script to train the modified network.
|
||||
|
||||
Each dataset has a corresponding importer script in `bin/` that can be used to download (if it's freely available) and preprocess the dataset. See `bin/import_librivox.py` for an example of how to import and preprocess a large dataset for training with DeepSpeech.
|
||||
|
||||
If you've run the old importers (in `util/importers/`), they could have removed source files that are needed for the new importers to run. In that case, simply remove the extracted folders and let the importer extract and process the dataset from scratch, and things should work.
|
||||
|
||||
### Checkpointing
|
||||
|
||||
During training of a model so-called checkpoints will get stored on disk. This takes place at a configurable time interval. The purpose of checkpoints is to allow interruption (also in the case of some unexpected failure) and later continuation of training without losing hours of training time. Resuming from checkpoints happens automatically by just (re)starting training with the same `--checkpoint_dir` of the former run.
|
||||
|
||||
Be aware however that checkpoints are only valid for the same model geometry they had been generated from. In other words: If there are error messages of certain `Tensors` having incompatible dimensions, this is most likely due to an incompatible model change. One usual way out would be to wipe all checkpoint files in the checkpoint directory or changing it before starting the training.
|
||||
|
||||
### Exporting a model for inference
|
||||
|
||||
If the `--export_dir` parameter is provided, a model will be exported to this directory during training.
|
||||
Refer to the corresponding [README.md](native_client/README.md) for information on building and running a client that can use the exported model.
|
||||
|
||||
### Exporting a model for TFLite
|
||||
|
||||
If you want to experiment with the TF Lite engine, you need to export a model that is compatible with it, then use the `--export_tflite` flags. If you already have a trained model, you can re-export it for TFLite by running `DeepSpeech.py` again and specifying the same `checkpoint_dir` that you used for training, as well as passing `--export_tflite --export_dir /model/export/destination`.
|
||||
|
||||
### Making a mmap-able model for inference
|
||||
|
||||
The `output_graph.pb` model file generated in the above step will be loaded into memory when running inference.
|
||||
This will result in extra loading time and memory consumption. One way to avoid this is to directly read data from the disk.
|
||||
|
||||
TensorFlow has tooling to achieve this. It requires building the target `//tensorflow/contrib/util:convert_graphdef_memmapped_format` (binaries are produced by our TaskCluster for some systems, including Linux/amd64 and macOS/amd64). Use the `util/taskcluster.py` tool to download it, specifying `tensorflow` as the source and `convert_graphdef_memmapped_format` as the artifact.
|
||||
|
||||
Producing a mmap-able model is as simple as:
|
||||
|
||||
```
|
||||
$ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm
|
||||
```
|
||||
|
||||
On a successful run, it should report the conversion of a non-zero number of nodes. If it reports converting `0` nodes, something is wrong: make sure your model is a frozen one, and that you have not applied any incompatible changes (this includes `quantize_weights`).
|
||||
|
||||
### Continuing training from a release model
|
||||
|
||||
If you'd like to use one of the pre-trained models released by Mozilla to bootstrap your training process (transfer learning, fine tuning), you can do so by using the `--checkpoint_dir` flag in `DeepSpeech.py`. Specify the path where you downloaded the checkpoint from the release, and training will resume from the pre-trained model.
|
||||
|
||||
For example, if you want to fine tune the entire graph using your own data in `my-train.csv`, `my-dev.csv` and `my-test.csv`, for three epochs, you can do something like the following, tuning the hyperparameters as needed:
|
||||
|
||||
```bash
|
||||
mkdir fine_tuning_checkpoints
|
||||
python3 DeepSpeech.py --n_hidden 2048 --checkpoint_dir path/to/checkpoint/folder --epochs 3 --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv --learning_rate 0.0001
|
||||
```
|
||||
|
||||
Note: the released models were trained with `--n_hidden 2048`, so you need to use that same value when initializing from the release models.
|
||||
|
||||
### Training with augmentation
|
||||
|
||||
Augmentation is a useful technique for better generalization of machine learning models. Thus, a pre-processing pipeline with various augmentation techniques on raw PCM audio and spectrograms has been implemented and can be used while training the model. The following augmentation techniques can be enabled at training time by using the corresponding command-line flags.
|
||||
|
||||
#### Audio Augmentation
|
||||
1. **Standard deviation for Gaussian additive noise:** ```--data_aug_features_additive```
|
||||
2. **Standard deviation for Normal distribution around 1 for multiplicative noise:** ```--data_aug_features_multiplicative```
|
||||
3. **Standard deviation for speeding-up tempo. If Standard deviation is 0, this augmentation is not performed:** ```--augmentation_speed_up_std```
|
||||
|
||||
#### Spectrogram Augmentation
|
||||
Inspired by the Google paper [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779).
|
||||
1. **Keep rate of dropout augmentation on a spectrogram (if 1, no dropout will be performed on the spectrogram)**:
|
||||
* Keep Rate : ```--augmentation_spec_dropout_keeprate value between range [0 - 1]```
|
||||
|
||||
2. **Whether to use frequency and time masking augmentation:**
|
||||
* Enable / Disable : ```--augmentation_freq_and_time_masking / --noaugmentation_freq_and_time_masking```
|
||||
* Max range of masks in the frequency domain when performing freqtime-mask augmentation: ```--augmentation_freq_and_time_masking_freq_mask_range eg: 5```
|
||||
* Number of masks in the frequency domain when performing freqtime-mask augmentation: ```--augmentation_freq_and_time_masking_number_freq_masks eg: 3```
|
||||
* Max range of masks in the time domain when performing freqtime-mask augmentation: ```--augmentation_freq_and_time_masking_time_mask_rangee eg: 2```
|
||||
* Number of masks in the time domain when performing freqtime-mask augmentation: ```--augmentation_freq_and_time_masking_number_time_masks eg: 3```
|
||||
|
||||
3. **Whether to use spectrogram speed and tempo scaling:**
|
||||
* Enable / Disable : ```--augmentation_pitch_and_tempo_scaling / --noaugmentation_pitch_and_tempo_scaling.```
|
||||
* Min value of pitch scaling: ```--augmentation_pitch_and_tempo_scaling_min_pitch eg:0.95 ```
|
||||
* Max value of pitch scaling: ```--augmentation_pitch_and_tempo_scaling_max_pitch eg:1.2```
|
||||
* Max value of tempo scaling: ```--augmentation_pitch_and_tempo_scaling_max_tempo eg:1.2```
|
||||
|
||||
|
||||
## Contribution guidelines
|
||||
|
||||
This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the [Mozilla Community Participation Guidelines](https://www.mozilla.org/about/governance/policies/participation/).
|
||||
|
||||
Before making a Pull Request, check your changes for basic mistakes and style problems by using a linter. We have cardboardlinter set up in this repository, so for example, if you've made some changes and would like to run the linter on just the changed code, you can use the following command:
|
||||
|
||||
```bash
|
||||
pip install pylint cardboardlint
|
||||
cardboardlinter --refspec master
|
||||
```
|
||||
|
||||
This will compare the code against master and run the linter on all the changes. We plan to introduce more linter checks (e.g. for C++) in the future. To run it automatically as a git pre-commit hook, do the following:
|
||||
|
||||
```bash
|
||||
cat <<\EOF > .git/hooks/pre-commit
|
||||
#!/bin/bash
|
||||
if [ ! -x "$(command -v cardboardlinter)" ]; then
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# First, stash index and work dir, keeping only the
|
||||
# to-be-committed changes in the working directory.
|
||||
echo "Stashing working tree changes..." 1>&2
|
||||
old_stash=$(git rev-parse -q --verify refs/stash)
|
||||
git stash save -q --keep-index
|
||||
new_stash=$(git rev-parse -q --verify refs/stash)
|
||||
|
||||
# If there were no changes (e.g., `--amend` or `--allow-empty`)
|
||||
# then nothing was stashed, and we should skip everything,
|
||||
# including the tests themselves. (Presumably the tests passed
|
||||
# on the previous commit, so there is no need to re-run them.)
|
||||
if [ "$old_stash" = "$new_stash" ]; then
|
||||
echo "No changes, skipping lint." 1>&2
|
||||
exit 0
|
||||
fi
|
||||
|
||||
# Run tests
|
||||
cardboardlinter --refspec HEAD -n auto
|
||||
status=$?
|
||||
|
||||
# Restore changes
|
||||
echo "Restoring working tree changes..." 1>&2
|
||||
git reset --hard -q && git stash apply --index -q && git stash drop -q
|
||||
|
||||
# Exit with status from test-run: nonzero prevents commit
|
||||
exit $status
|
||||
EOF
|
||||
chmod +x .git/hooks/pre-commit
|
||||
```
|
||||
|
||||
This will run the linters on just the changes made in your commit.
|
||||
|
||||
## Contact/Getting Help
|
||||
|
||||
There are several ways to contact us or to get help:
|
||||
|
||||
1. [**FAQ**](https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions) - We have a list of common questions, and their answers, in our [FAQ](https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions). When just getting started, it's best to first check the [FAQ](https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions) to see if your question is addressed.
|
||||
|
||||
2. [**Discourse Forums**](https://discourse.mozilla.org/c/deep-speech) - If your question is not addressed in the [FAQ](https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions), the [Discourse Forums](https://discourse.mozilla.org/c/deep-speech) is the next place to look. They contain conversations on [General Topics](https://discourse.mozilla.org/t/general-topics/21075), [Using Deep Speech](https://discourse.mozilla.org/t/using-deep-speech/21076/4), and [Deep Speech Development](https://discourse.mozilla.org/t/deep-speech-development/21077).
|
||||
|
||||
3. [**IRC**](https://wiki.mozilla.org/IRC) - If your question is not addressed by either the [FAQ](https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions) or [Discourse Forums](https://discourse.mozilla.org/c/deep-speech), you can contact us on the `#machinelearning` channel on [Mozilla IRC](https://wiki.mozilla.org/IRC); people there can try to answer your question or help.
|
||||
|
||||
4. [**Issues**](https://github.com/mozilla/deepspeech/issues) - Finally, if all else fails, you can open an issue in our repo.
|
@ -0,0 +1,91 @@
|
|||
Project DeepSpeech
|
||||
==================
|
||||
|
||||
|
||||
.. image:: https://readthedocs.org/projects/deepspeech/badge/?version=latest
|
||||
:target: http://deepspeech.readthedocs.io/?badge=latest
|
||||
:alt: Documentation
|
||||
|
||||
|
||||
.. image:: https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/badge.svg
|
||||
:target: https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/latest
|
||||
:alt: Task Status
|
||||
|
||||
|
||||
DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.
|
||||
|
||||
To install and use deepspeech all you have to do is:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Create and activate a virtualenv
|
||||
virtualenv -p python3 $HOME/tmp/deepspeech-venv/
|
||||
source $HOME/tmp/deepspeech-venv/bin/activate
|
||||
|
||||
# Install DeepSpeech
|
||||
pip3 install deepspeech
|
||||
|
||||
# Download pre-trained English model and extract
|
||||
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
|
||||
tar xvf deepspeech-0.5.1-models.tar.gz
|
||||
|
||||
# Download example audio files
|
||||
curl -LO https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/audio-0.5.1.tar.gz
|
||||
tar xvf audio-0.5.1.tar.gz
|
||||
|
||||
# Transcribe an audio file
|
||||
deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm --alphabet deepspeech-0.5.1-models/alphabet.txt --lm deepspeech-0.5.1-models/lm.binary --trie deepspeech-0.5.1-models/trie --audio audio/2830-3980-0043.wav
|
||||
|
||||
A pre-trained English model is available for use and can be downloaded using `the instructions below <#using-a-pre-trained-model>`_. Currently, only 16-bit, 16 kHz, mono-channel WAVE audio files are supported in the Python client. A package with some example audio files is available for download in our `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_.
|
||||
|
||||
Quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases/latest>`_ to find which GPUs are supported. To run ``deepspeech`` on a GPU, install the GPU specific package:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Create and activate a virtualenv
|
||||
virtualenv -p python3 $HOME/tmp/deepspeech-gpu-venv/
|
||||
source $HOME/tmp/deepspeech-gpu-venv/bin/activate
|
||||
|
||||
# Install DeepSpeech CUDA enabled package
|
||||
pip3 install deepspeech-gpu
|
||||
|
||||
# Transcribe an audio file.
|
||||
deepspeech --model deepspeech-0.5.1-models/output_graph.pbmm --alphabet deepspeech-0.5.1-models/alphabet.txt --lm deepspeech-0.5.1-models/lm.binary --trie deepspeech-0.5.1-models/trie --audio audio/2830-3980-0043.wav
|
||||
|
||||
Please ensure you have the required `CUDA dependencies <#cuda-dependency>`_.
|
||||
|
||||
See the output of ``deepspeech -h`` for more information on the use of ``deepspeech``. (If you experience problems running ``deepspeech``\ , please check `required runtime dependencies <native_client/README.md#required-dependencies>`_\ ).
|
||||
|
||||
----
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
|
||||
* `Using a Pre-trained Model <USING.rst#using-a-pre-trained-model>`_
|
||||
|
||||
* `CUDA dependency <USING.rst#cuda-dependency>`_
|
||||
* `Getting the pre-trained model <USING.rst#getting-the-pre-trained-model>`_
|
||||
* `Model compatibility <USING.rst#model-compatibility>`_
|
||||
* `Using the Python package <USING.rst#using-the-python-package>`_
|
||||
* `Using the Node.JS package <USING.rst#using-the-nodejs-package>`_
|
||||
* `Using the Command Line client <USING.rst#using-the-command-line-client>`_
|
||||
* `Installing bindings from source <USING.rst#installing-bindings-from-source>`_
|
||||
* `Third party bindings <USING.rst#third-party-bindings>`_
|
||||
|
||||
* `Training your own Model <TRAINING.rst#training-your-own-model>`_
|
||||
|
||||
* `Prerequisites for training a model <TRAINING.rst#prerequisites-for-training-a-model>`_
|
||||
* `Getting the training code <TRAINING.rst#getting-the-training-code>`_
|
||||
* `Installing Python dependencies <TRAINING.rst#installing-python-dependencies>`_
|
||||
* `Recommendations <TRAINING.rst#recommendations>`_
|
||||
* `Common Voice training data <TRAINING.rst#common-voice-training-data>`_
|
||||
* `Training a model <TRAINING.rst#training-a-model>`_
|
||||
* `Checkpointing <TRAINING.rst#checkpointing>`_
|
||||
* `Exporting a model for inference <TRAINING.rst#exporting-a-model-for-inference>`_
|
||||
* `Exporting a model for TFLite <TRAINING.rst#exporting-a-model-for-tflite>`_
|
||||
* `Making a mmap-able model for inference <TRAINING.rst#making-a-mmap-able-model-for-inference>`_
|
||||
* `Continuing training from a release model <TRAINING.rst#continuing-training-from-a-release-model>`_
|
||||
* `Training with Augmentation <TRAINING.rst#training-with-augmentation>`_
|
||||
|
||||
* `Contribution guidelines <CONTRIBUTING.rst>`_
|
||||
* `Contact/Getting Help <SUPPORT.rst>`_
@ -1,9 +0,0 @@
|
|||
Making a (new) release of the codebase
|
||||
======================================
|
||||
- Update version in VERSION file, commit
|
||||
- Open PR, ensure all tests are passing properly
|
||||
- Merge the PR
|
||||
- Fetch the new master, tag it with (hopefully) the same version as in VERSION
|
||||
- Push that to GitHub
|
||||
- New build should be triggered and new packages should be made
|
||||
- TaskCluster should schedule a merge build **including** a "DeepSpeech Packages" task
@ -0,0 +1,12 @@
|
|||
|
||||
Making a (new) release of the codebase
|
||||
======================================
|
||||
|
||||
|
||||
* Update version in VERSION file, commit
|
||||
* Open PR, ensure all tests are passing properly
|
||||
* Merge the PR
|
||||
* Fetch the new master, tag it with (hopefully) the same version as in VERSION
|
||||
* Push that to GitHub
|
||||
* New build should be triggered and new packages should be made
|
||||
* TaskCluster should schedule a merge build **including** a "DeepSpeech Packages" task
@ -0,0 +1,17 @@
|
|||
Contact/Getting Help
|
||||
====================
|
||||
|
||||
There are several ways to contact us or to get help:
|
||||
|
||||
|
||||
#.
|
||||
`\ **FAQ** <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_ - We have a list of common questions, and their answers, in our `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_. When just getting started, it's best to first check the `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_ to see if your question is addressed.
|
||||
|
||||
#.
|
||||
`\ **Discourse Forums** <https://discourse.mozilla.org/c/deep-speech>`_ - If your question is not addressed in the `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_\ , the `Discourse Forums <https://discourse.mozilla.org/c/deep-speech>`_ is the next place to look. They contain conversations on `General Topics <https://discourse.mozilla.org/t/general-topics/21075>`_\ , `Using Deep Speech <https://discourse.mozilla.org/t/using-deep-speech/21076/4>`_\ , and `Deep Speech Development <https://discourse.mozilla.org/t/deep-speech-development/21077>`_.
|
||||
|
||||
#.
|
||||
`\ **IRC** <https://wiki.mozilla.org/IRC>`_ - If your question is not addressed by either the `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_ or `Discourse Forums <https://discourse.mozilla.org/c/deep-speech>`_\ , you can contact us on the ``#machinelearning`` channel on `Mozilla IRC <https://wiki.mozilla.org/IRC>`_\ ; people there can try to answer your question or help.
|
||||
|
||||
#.
|
||||
`\ **Issues** <https://github.com/mozilla/deepspeech/issues>`_ - Finally, if all else fails, you can open an issue in our repo.
@ -0,0 +1,238 @@
|
|||
Training Your Own Model
|
||||
=======================
|
||||
|
||||
Prerequisites for training a model
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
|
||||
* `Python 3.6 <https://www.python.org/>`_
|
||||
* `Git Large File Storage <https://git-lfs.github.com/>`_
|
||||
* Mac or Linux environment
|
||||
|
||||
Getting the training code
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Install `Git Large File Storage <https://git-lfs.github.com/>`_ either manually or through a package-manager if available on your system. Then clone the DeepSpeech repository normally:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone https://github.com/mozilla/DeepSpeech
|
||||
|
||||
Creating a virtual environment
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-train-venv``. You can create it using this command:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ virtualenv -p python3 $HOME/tmp/deepspeech-train-venv/
|
||||
|
||||
Once this command completes successfully, the environment will be ready to be activated.
|
||||
|
||||
Activating the environment
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ source $HOME/tmp/deepspeech-train-venv/bin/activate
|
||||
|
||||
Installing Python dependencies
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Install the required dependencies using ``pip3``\ :
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cd DeepSpeech
|
||||
pip3 install -r requirements.txt
|
||||
|
||||
You'll also need to install the ``ds_ctcdecoder`` Python package. ``ds_ctcdecoder`` is required for decoding the outputs of the ``deepspeech`` acoustic model into text. You can use ``util/taskcluster.py`` with the ``--decoder`` flag to get a URL to a binary of the decoder package appropriate for your platform and Python version:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip3 install $(python3 util/taskcluster.py --decoder)
|
||||
|
||||
This command will download and install the ``ds_ctcdecoder`` package. You can override the platform with ``--arch`` if you want the package for ARM7 (\ ``--arch arm``\ ) or ARM64 (\ ``--arch arm64``\ ). If you prefer building the ``ds_ctcdecoder`` package from source, see the `native_client README file <native_client/README.md>`_.
|
||||
|
||||
Recommendations
|
||||
^^^^^^^^^^^^^^^
|
||||
|
||||
If you have a capable (NVIDIA, at least 8GB of VRAM) GPU, it is highly recommended to install TensorFlow with GPU support. Training will be significantly faster than using the CPU. To enable GPU support, you can do:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip3 uninstall tensorflow
|
||||
pip3 install 'tensorflow-gpu==1.14.0'
|
||||
|
||||
Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
Some people have reported the following failure at training time:
|
||||
|
||||
.. code-block::
|
||||
|
||||
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
|
||||
[[{{node tower_0/conv1d/Conv2D}}]]
|
||||
|
||||
Setting the ``TF_FORCE_GPU_ALLOW_GROWTH`` environment variable to ``true`` seems to help in such cases. This could also be due to an incorrect version of libcudnn. Double check your versions with the `TensorFlow 1.14 documentation <#cuda-dependency>`_.
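For example, the variable can be set just for a single training run (a minimal sketch, assuming a bash-compatible shell):

.. code-block:: bash

   # Ask TensorFlow to allocate GPU memory on demand instead of all up front
   export TF_FORCE_GPU_ALLOW_GROWTH=true
   ./bin/run-ldc93s1.sh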
|
||||
|
||||
Common Voice training data
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The Common Voice corpus consists of voice samples that were donated through Mozilla's `Common Voice <https://voice.mozilla.org/>`_ Initiative.
|
||||
You can download individual CommonVoice v2.0 language data sets from `here <https://voice.mozilla.org/data>`_.
|
||||
After extraction of such a data set, you'll find the following contents:
|
||||
|
||||
|
||||
* the ``*.tsv`` files output by CorporaCreator for the downloaded language
|
||||
* the mp3 audio files they reference in a ``clips`` sub-directory.
|
||||
|
||||
To bring this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (\ ``bin/import_cv2.py``\ ):
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
bin/import_cv2.py --filter_alphabet path/to/some/alphabet.txt /path/to/extracted/language/archive
|
||||
|
||||
Providing a filter alphabet is optional. It will exclude all samples whose transcripts contain characters not in the specified alphabet.
|
||||
Running the importer with ``-h`` will show you some additional options.
|
||||
|
||||
Once the import is done, the ``clips`` sub-directory will contain an additional ``.wav`` file for each required ``.mp3``.
|
||||
It will also add the following ``.csv`` files:
|
||||
|
||||
|
||||
* ``clips/train.csv``
|
||||
* ``clips/dev.csv``
|
||||
* ``clips/test.csv``
|
||||
|
||||
All entries in these CSV files refer to their samples by absolute paths. So moving this sub-directory would require another import or tweaking the CSV files accordingly.
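If you do move the data, one workaround is to rewrite the paths in the CSV files in place. The sketch below assumes GNU ``sed`` and uses hypothetical old/new locations that you would need to adjust:

.. code-block:: bash

   # Rewrite absolute sample paths after moving the extracted Common Voice data
   # (the /old and /new prefixes below are placeholders, not real paths)
   for f in clips/train.csv clips/dev.csv clips/test.csv; do
       sed -i 's|/old/location/CV/en|/new/location/CV/en|g' "$f"
   done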
|
||||
|
||||
To use Common Voice data during training, validation and testing, you pass (comma separated combinations of) their filenames into ``--train_files``\ , ``--dev_files``\ , ``--test_files`` parameters of ``DeepSpeech.py``.
|
||||
|
||||
If, for example, Common Voice language ``en`` was extracted to ``../data/CV/en/``\ , ``DeepSpeech.py`` could be called like this:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
./DeepSpeech.py --train_files ../data/CV/en/clips/train.csv --dev_files ../data/CV/en/clips/dev.csv --test_files ../data/CV/en/clips/test.csv
|
||||
|
||||
Training a model
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
The central (Python) script is ``DeepSpeech.py`` in the project's root directory. For its list of command line options, you can call:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
./DeepSpeech.py --helpfull
|
||||
|
||||
To get the output of this in a slightly better-formatted way, you can also look up the option definitions in `\ ``util/flags.py`` <util/flags.py>`_.
|
||||
|
||||
For executing pre-configured training scenarios, there is a collection of convenience scripts in the ``bin`` folder. Most of them are named after the corpora they are configured for. Keep in mind that most speech corpora are *very large*\ , on the order of tens of gigabytes, and some aren't free. Downloading and preprocessing them can take a very long time, and training on them without a fast GPU (GTX 10 series or newer recommended) takes even longer.
|
||||
|
||||
**If you experience GPU OOM errors while training, try reducing the batch size with the ``--train_batch_size``\ , ``--dev_batch_size`` and ``--test_batch_size`` parameters.**
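For example, a run with deliberately small batch sizes might look like this (the values are illustrative, not recommendations):

.. code-block:: bash

   # Reduce all three batch sizes to fit into limited GPU memory
   ./DeepSpeech.py --train_batch_size 8 --dev_batch_size 8 --test_batch_size 8 \
       --train_files ../data/CV/en/clips/train.csv \
       --dev_files ../data/CV/en/clips/dev.csv \
       --test_files ../data/CV/en/clips/test.csv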
|
||||
|
||||
As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout, activate the virtualenv created above, and run:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
./bin/run-ldc93s1.sh
|
||||
|
||||
This script will train on a small sample dataset composed of just a single audio file, the sample file for the `TIMIT Acoustic-Phonetic Continuous Speech Corpus <https://catalog.ldc.upenn.edu/LDC93S1>`_\ , which can be overfitted on a GPU in a few minutes for demonstration purposes. From here, you can alter any variables with regards to what dataset is used, how many training iterations are run and the default values of the network parameters.
|
||||
|
||||
Also feel free to pass additional (or overriding) ``DeepSpeech.py`` parameters to these scripts. Then, just run the script to train the modified network.
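For instance, assuming the convenience script forwards extra arguments to ``DeepSpeech.py`` (which is how these scripts are typically written), you could shorten the demo run:

.. code-block:: bash

   # Pass an overriding flag through the convenience script (assumes it forwards "$@")
   ./bin/run-ldc93s1.sh --epochs 5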
|
||||
|
||||
Each dataset has a corresponding importer script in ``bin/`` that can be used to download (if it's freely available) and preprocess the dataset. See ``bin/import_librivox.py`` for an example of how to import and preprocess a large dataset for training with DeepSpeech.
|
||||
|
||||
If you've run the old importers (in ``util/importers/``\ ), they could have removed source files that are needed for the new importers to run. In that case, simply remove the extracted folders and let the importer extract and process the dataset from scratch, and things should work.
|
||||
|
||||
Checkpointing
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
During training of a model, so-called checkpoints are stored on disk at a configurable interval. The purpose of checkpoints is to allow interruption (also in the case of some unexpected failure) and later continuation of training without losing hours of training time. Resuming from checkpoints happens automatically by simply (re)starting training with the same ``--checkpoint_dir`` as the former run.
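A minimal sketch of such a resumable run, assuming a dedicated checkpoint directory:

.. code-block:: bash

   # Both the first run and any later resumed run use the same --checkpoint_dir
   ./DeepSpeech.py --checkpoint_dir "$HOME/deepspeech-checkpoints" \
       --train_files ../data/CV/en/clips/train.csv \
       --dev_files ../data/CV/en/clips/dev.csv \
       --test_files ../data/CV/en/clips/test.csv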
|
||||
|
||||
Be aware, however, that checkpoints are only valid for the same model geometry they were generated from. In other words: if there are error messages about certain ``Tensors`` having incompatible dimensions, this is most likely due to an incompatible model change. The usual way out is to wipe all checkpoint files in the checkpoint directory, or to change the directory, before starting training.
|
||||
|
||||
Exporting a model for inference
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If the ``--export_dir`` parameter is provided, a model will be exported to this directory during training.
|
||||
Refer to the corresponding `README.md <native_client/README.md>`_ for information on building and running a client that can use the exported model.
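For example (the directories shown are only illustrations):

.. code-block:: bash

   # Train as usual and export an inference graph to the given directory
   ./DeepSpeech.py --checkpoint_dir "$HOME/deepspeech-checkpoints" \
       --train_files ../data/CV/en/clips/train.csv \
       --dev_files ../data/CV/en/clips/dev.csv \
       --test_files ../data/CV/en/clips/test.csv \
       --export_dir /model/export/destination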
|
||||
|
||||
Exporting a model for TFLite
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If you want to experiment with the TF Lite engine, you need to export a model that is compatible with it, then use the ``--export_tflite`` flags. If you already have a trained model, you can re-export it for TFLite by running ``DeepSpeech.py`` again and specifying the same ``checkpoint_dir`` that you used for training, as well as passing ``--export_tflite --export_dir /model/export/destination``.
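Put together, that re-export step looks roughly like this:

.. code-block:: bash

   # Re-export an already trained checkpoint as a TFLite-compatible model
   ./DeepSpeech.py --checkpoint_dir path/to/checkpoint/folder \
       --export_tflite --export_dir /model/export/destination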
|
||||
|
||||
Making a mmap-able model for inference
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The ``output_graph.pb`` model file generated in the above step will be loaded into memory when running inference.
|
||||
This will result in extra loading time and memory consumption. One way to avoid this is to directly read data from the disk.
|
||||
|
||||
TensorFlow has tooling to achieve this. It requires building the target ``//tensorflow/contrib/util:convert_graphdef_memmapped_format`` (binaries are produced by our TaskCluster for some systems, including Linux/amd64 and macOS/amd64). Use the ``util/taskcluster.py`` tool to download it, specifying ``tensorflow`` as the source and ``convert_graphdef_memmapped_format`` as the artifact.
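A sketch of that download step; the ``--source`` and ``--artifact`` flag names are assumptions based on the description above, so check ``python3 util/taskcluster.py -h`` for the exact options:

.. code-block:: bash

   # Fetch the prebuilt conversion tool from TaskCluster (flag names assumed)
   python3 util/taskcluster.py --source tensorflow \
       --artifact convert_graphdef_memmapped_format --target .
   chmod +x convert_graphdef_memmapped_format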
|
||||
|
||||
Producing a mmap-able model is as simple as:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm
|
||||
|
||||
On a successful run, it should report the conversion of a non-zero number of nodes. If it reports converting ``0`` nodes, something is wrong: make sure your model is a frozen one, and that you have not applied any incompatible changes (this includes ``quantize_weights``\ ).
|
||||
|
||||
Continuing training from a release model
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If you'd like to use one of the pre-trained models released by Mozilla to bootstrap your training process (transfer learning, fine tuning), you can do so by using the ``--checkpoint_dir`` flag in ``DeepSpeech.py``. Specify the path where you downloaded the checkpoint from the release, and training will resume from the pre-trained model.
|
||||
|
||||
For example, if you want to fine tune the entire graph using your own data in ``my-train.csv``\ , ``my-dev.csv`` and ``my-test.csv``\ , for three epochs, you can do something like the following, tuning the hyperparameters as needed:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mkdir fine_tuning_checkpoints
|
||||
python3 DeepSpeech.py --n_hidden 2048 --checkpoint_dir path/to/checkpoint/folder --epochs 3 --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv --learning_rate 0.0001
|
||||
|
||||
Note: the released models were trained with ``--n_hidden 2048``\ , so you need to use that same value when initializing from the release models.
|
||||
|
||||
Training with augmentation
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Augmentation is a useful technique for better generalization of machine learning models. Thus, a pre-processing pipeline with various augmentation techniques on raw PCM audio and spectrograms has been implemented and can be used while training the model. The following augmentation techniques can be enabled at training time by using the corresponding command-line flags; a combined example invocation is sketched at the end of this section.
|
||||
|
||||
Audio Augmentation
|
||||
~~~~~~~~~~~~~~~~~~
|
||||
|
||||
|
||||
#. **Standard deviation for Gaussian additive noise:** ``--data_aug_features_additive``
|
||||
#. **Standard deviation for Normal distribution around 1 for multiplicative noise:** ``--data_aug_features_multiplicative``
|
||||
#. **Standard deviation for speeding up tempo. If the standard deviation is 0, this augmentation is not performed:** ``--augmentation_speed_up_std``
|
||||
|
||||
Spectrogram Augmentation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Inspired by Google Paper on `SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition <https://arxiv.org/abs/1904.08779>`_
|
||||
|
||||
|
||||
#.
|
||||
**Keep rate of dropout augmentation on a spectrogram (if 1, no dropout will be performed on the spectrogram)**\ :
|
||||
|
||||
|
||||
* Keep Rate : ``--augmentation_spec_dropout_keeprate`` (a value in the range [0, 1])
|
||||
|
||||
#.
|
||||
**Whether to use frequency and time masking augmentation:**
|
||||
|
||||
|
||||
* Enable / Disable : ``--augmentation_freq_and_time_masking / --noaugmentation_freq_and_time_masking``
|
||||
* Max range of masks in the frequency domain when performing freqtime-mask augmentation: ``--augmentation_freq_and_time_masking_freq_mask_range eg: 5``
|
||||
* Number of masks in the frequency domain when performing freqtime-mask augmentation: ``--augmentation_freq_and_time_masking_number_freq_masks eg: 3``
|
||||
* Max range of masks in the time domain when performing freqtime-mask augmentation: ``--augmentation_freq_and_time_masking_time_mask_rangee eg: 2``
|
||||
* Number of masks in the time domain when performing freqtime-mask augmentation: ``--augmentation_freq_and_time_masking_number_time_masks eg: 3``
|
||||
|
||||
#.
|
||||
**Whether to use spectrogram speed and tempo scaling:**
|
||||
|
||||
|
||||
* Enable / Disable : ``--augmentation_pitch_and_tempo_scaling / --noaugmentation_pitch_and_tempo_scaling``
|
||||
* Min value of pitch scaling: ``--augmentation_pitch_and_tempo_scaling_min_pitch eg:0.95``
|
||||
* Max value of pitch scaling: ``--augmentation_pitch_and_tempo_scaling_max_pitch eg:1.2``
|
||||
* Max value of tempo scaling: ``--augmentation_pitch_and_tempo_scaling_max_tempo eg:1.2``
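As a combined sketch, several of the flags above can be added to a normal training run. The numeric values below are illustrative placeholders taken from or modeled on the examples above, not recommended settings:

.. code-block:: bash

   python3 DeepSpeech.py --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv \
       --data_aug_features_additive 0.2 \
       --augmentation_speed_up_std 0.1 \
       --augmentation_spec_dropout_keeprate 0.95 \
       --augmentation_freq_and_time_masking \
       --augmentation_pitch_and_tempo_scaling \
       --augmentation_pitch_and_tempo_scaling_min_pitch 0.95 \
       --augmentation_pitch_and_tempo_scaling_max_pitch 1.2 \
       --augmentation_pitch_and_tempo_scaling_max_tempo 1.2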
|
||||
|
|
@ -0,0 +1,181 @@
|
|||
Using a Pre-trained Model
|
||||
=========================
|
||||
|
||||
Inference using a DeepSpeech pre-trained model can be done with a client/language binding package. We have four clients/language bindings in this repository, listed below, and also a few community-maintained clients/language bindings in other repositories, listed `further down in this README <#third-party-bindings>`_.
|
||||
|
||||
|
||||
* `The Python package/language binding <#using-the-python-package>`_
|
||||
* `The Node.JS package/language binding <#using-the-nodejs-package>`_
|
||||
* `The Command-Line client <#using-the-command-line-client>`_
|
||||
* `The .NET client/language binding <native_client/dotnet/README.md>`_
|
||||
|
||||
Running ``deepspeech`` may require some runtime dependencies to already be installed on your system (see below):
|
||||
|
||||
|
||||
* sox - The Python and Node.JS clients use SoX to resample files to 16kHz.
|
||||
* libgomp1 - libsox (statically linked into the clients) depends on OpenMP. Some people have had to install this manually.
|
||||
* libstdc++ - Standard C++ Library implementation. Some people have had to install this manually.
|
||||
* libpthread - On Linux, some people have had to install libpthread manually.
|
||||
|
||||
Please refer to your system's documentation on how to install these dependencies.
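On Debian or Ubuntu, for example, the SoX and OpenMP dependencies can usually be installed with something like the following (the package names are assumptions and vary by distribution):

.. code-block:: bash

   sudo apt-get install sox libgomp1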
|
||||
|
||||
CUDA dependency
|
||||
^^^^^^^^^^^^^^^
|
||||
|
||||
The GPU-capable builds (Python, NodeJS, C++, etc.) depend on the same CUDA runtime as upstream TensorFlow. Currently, with TensorFlow 1.14, this means CUDA 10.0 and CuDNN v7.5. `See the TensorFlow documentation <https://www.tensorflow.org/install/gpu>`_.
|
||||
|
||||
Getting the pre-trained model
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech `releases page <https://github.com/mozilla/DeepSpeech/releases>`_. Alternatively, you can run the following command to download and unzip the model files in your current directory:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
|
||||
tar xvfz deepspeech-0.5.1-models.tar.gz
|
||||
|
||||
Model compatibility
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
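For example, re-exporting from an existing checkpoint with a newer version of the code might look like this sketch (the paths are placeholders; graph-related flags such as ``--n_hidden`` must match the original training run):

.. code-block:: bash

   python3 DeepSpeech.py --checkpoint_dir path/to/checkpoint/folder --export_dir path/to/export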
|
||||
|
||||
Using the Python package
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Pre-built binaries for performing inference with a trained model can be installed with ``pip3``. You can then use the ``deepspeech`` binary to do speech-to-text on an audio file.
|
||||
|
||||
For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in `this documentation <http://docs.python-guide.org/en/latest/dev/virtualenvs/>`_.
|
||||
|
||||
We will continue under the assumption that you already have your system properly set up to create new virtual environments.
|
||||
|
||||
Create a DeepSpeech virtual environment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In creating a virtual environment you will create a directory containing a ``python3`` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on ``$HOME/tmp/deepspeech-venv``. You can create it using this command:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/
|
||||
|
||||
Once this command completes successfully, the environment will be ready to be activated.
|
||||
|
||||
Activating the environment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ source $HOME/tmp/deepspeech-venv/bin/activate
|
||||
|
||||
Installing DeepSpeech Python bindings
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Once your environment has been set up and loaded, you can use ``pip3`` to manage packages locally. On a fresh setup of the ``virtualenv``\ , you will have to install the DeepSpeech wheel. You can check if ``deepspeech`` is already installed with ``pip3 list``.
|
||||
|
||||
To perform the installation, just use ``pip3`` as such:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ pip3 install deepspeech
|
||||
|
||||
If ``deepspeech`` is already installed, you can update it as such:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ pip3 install --upgrade deepspeech
|
||||
|
||||
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ pip3 install deepspeech-gpu
|
||||
|
||||
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
You can update ``deepspeech-gpu`` as follows:
|
||||
|
||||
.. code-block::
|
||||
|
||||
$ pip3 install --upgrade deepspeech-gpu
|
||||
|
||||
In both cases, ``pip3`` should take care of installing all the required dependencies. After installation has finished, you should be able to call ``deepspeech`` from the command-line.
|
||||
|
||||
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
|
||||
|
||||
The arguments ``--lm`` and ``--trie`` are optional, and represent a language model.
|
||||
|
||||
See `client.py <native_client/python/client.py>`_ for an example of how to use the package programmatically.
|
||||
|
||||
Using the Node.JS package
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
You can download the Node.JS bindings using ``npm``\ :
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
npm install deepspeech
|
||||
|
||||
Please note that as of now, we only support Node.JS versions 4, 5 and 6. Once `SWIG has support <https://github.com/swig/swig/pull/968>`_ we can build for newer versions.
|
||||
|
||||
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
npm install deepspeech-gpu
|
||||
|
||||
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
See `client.js <native_client/javascript/client.js>`_ for an example of how to use the bindings. Or download the `wav example <examples/nodejs_wav>`_.
|
||||
|
||||
Using the Command-Line client
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To download the pre-built binaries for the ``deepspeech`` command-line (compiled C++) client, use ``util/taskcluster.py``\ :
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 util/taskcluster.py --target .
|
||||
|
||||
or if you're on macOS:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 util/taskcluster.py --arch osx --target .
|
||||
|
||||
Also, if you need binaries different from the current master, such as ``v0.2.0-alpha.6``\ , you can use ``--branch``\ :
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."
|
||||
|
||||
The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary, ``generate_trie`` and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
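For example, to fetch binaries for a different architecture and a specific tag in one go, the two parameters can be combined. This is only a sketch: ``arm64`` is assumed here to be one of the supported ``--arch`` values, so check ``python3 util/taskcluster.py -h`` to confirm before relying on it.

.. code-block:: bash

   python3 util/taskcluster.py --arch arm64 --branch "v0.2.0-alpha.6" --target .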
|
||||
|
||||
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
./deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio audio_input.wav
|
||||
|
||||
See the help output with ``./deepspeech -h`` and the `native client README <native_client/README.md>`_ for more details.
|
||||
|
||||
Installing bindings from source
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
If pre-built binaries aren't available for your system, you'll need to build them from scratch. Follow these `native_client installation instructions <native_client/README.md>`_.
|
||||
|
||||
Third party bindings
|
||||
^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
In addition to the bindings above, third party developers have started to provide bindings to other languages:
|
||||
|
||||
|
||||
* `Asticode <https://github.com/asticode>`_ provides `Golang <https://golang.org>`_ bindings in its `go-astideepspeech <https://github.com/asticode/go-astideepspeech>`_ repo.
|
||||
* `RustAudio <https://github.com/RustAudio>`_ provides a `Rust <https://www.rust-lang.org>`_ binding, the installation and use of which is described in their `deepspeech-rs <https://github.com/RustAudio/deepspeech-rs>`_ repo.
|
||||
* `stes <https://github.com/stes>`_ provides preliminary `PKGBUILDs <https://wiki.archlinux.org/index.php/PKGBUILD>`_ to install the client and python bindings on `Arch Linux <https://www.archlinux.org/>`_ in the `arch-deepspeech <https://github.com/stes/arch-deepspeech>`_ repo.
|
||||
* `gst-deepspeech <https://github.com/Elleo/gst-deepspeech>`_ provides a `GStreamer <https://gstreamer.freedesktop.org/>`_ plugin which can be used from any language with GStreamer bindings.
|
||||
|
|
@ -1,3 +1,4 @@
|
|||
# Utility scripts
|
||||
Utility scripts
|
||||
===============
|
||||
|
||||
This folder contains scripts that can be used to do training on the various included importers from the command line. This is useful to be able to run training without a browser open, or unattended on a remote machine. They should be run from the base directory of the repository. Note that the default settings assume a very well-specified machine. In the situation that out-of-memory errors occur, you may find decreasing the values of `--train_batch_size`, `--dev_batch_size` and `--test_batch_size` will allow you to continue, at the expense of speed.
|
||||
This folder contains scripts that can be used to do training on the various included importers from the command line. This is useful to be able to run training without a browser open, or unattended on a remote machine. They should be run from the base directory of the repository. Note that the default settings assume a very well-specified machine. In the situation that out-of-memory errors occur, you may find decreasing the values of ``--train_batch_size``\ , ``--dev_batch_size`` and ``--test_batch_size`` will allow you to continue, at the expense of speed.
|
|
@ -1,9 +1,13 @@
|
|||
# Language-Specific Data
|
||||
Language-Specific Data
|
||||
======================
|
||||
|
||||
This directory contains language-specific data files. Most importantly, you will find here:
|
||||
|
||||
1. A list of unique characters for the target language (e.g. English) in `data/alphabet.txt`
|
||||
|
||||
2. A binary n-gram language model compiled by `kenlm` in `data/lm/lm.binary`
|
||||
3. A trie model compiled by [generate_trie](https://github.com/mozilla/DeepSpeech#using-the-command-line-client) in `data/lm/trie`
|
||||
|
||||
3. A trie model compiled by `generate_trie <https://github.com/mozilla/DeepSpeech#using-the-command-line-client>`_ in `data/lm/trie`
|
||||
|
||||
For more information on how to build these resources from scratch, see `data/lm/README.md`
|
||||
|
|
@ -1,45 +0,0 @@
|
|||
lm.binary was generated from the LibriSpeech normalized LM training text, available [here](http://www.openslr.org/11), following this recipe (Jupyter notebook code):
|
||||
|
||||
```python
|
||||
import gzip
|
||||
import io
|
||||
import os
|
||||
|
||||
from urllib import request
|
||||
|
||||
# Grab corpus.
|
||||
url = 'http://www.openslr.org/resources/11/librispeech-lm-norm.txt.gz'
|
||||
data_upper = '/tmp/upper.txt.gz'
|
||||
request.urlretrieve(url, data_upper)
|
||||
|
||||
# Convert to lowercase and cleanup.
|
||||
data_lower = '/tmp/lower.txt'
|
||||
with open(data_lower, 'w', encoding='utf-8') as lower:
|
||||
with io.TextIOWrapper(io.BufferedReader(gzip.open(data_upper)), encoding='utf8') as upper:
|
||||
for line in upper:
|
||||
lower.write(line.lower())
|
||||
|
||||
# Build pruned LM.
|
||||
lm_path = '/tmp/lm.arpa'
|
||||
!lmplz --order 5 \
|
||||
--temp_prefix /tmp/ \
|
||||
--memory 50% \
|
||||
--text {data_lower} \
|
||||
--arpa {lm_path} \
|
||||
--prune 0 0 0 1
|
||||
|
||||
# Quantize and produce trie binary.
|
||||
binary_path = '/tmp/lm.binary'
|
||||
!build_binary -a 255 \
|
||||
-q 8 \
|
||||
trie \
|
||||
{lm_path} \
|
||||
{binary_path}
|
||||
os.remove(lm_path)
|
||||
```
|
||||
|
||||
The trie was then generated from the vocabulary of the language model:
|
||||
|
||||
```bash
|
||||
./generate_trie ../data/alphabet.txt /tmp/lm.binary /tmp/trie
|
||||
```
|
|
@ -0,0 +1,46 @@
|
|||
|
||||
lm.binary was generated from the LibriSpeech normalized LM training text, available `here <http://www.openslr.org/11>`_\ , following this recipe (Jupyter notebook code):
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
import gzip
|
||||
import io
|
||||
import os
|
||||
|
||||
from urllib import request
|
||||
|
||||
# Grab corpus.
|
||||
url = 'http://www.openslr.org/resources/11/librispeech-lm-norm.txt.gz'
|
||||
data_upper = '/tmp/upper.txt.gz'
|
||||
request.urlretrieve(url, data_upper)
|
||||
|
||||
# Convert to lowercase and cleanup.
|
||||
data_lower = '/tmp/lower.txt'
|
||||
with open(data_lower, 'w', encoding='utf-8') as lower:
|
||||
with io.TextIOWrapper(io.BufferedReader(gzip.open(data_upper)), encoding='utf8') as upper:
|
||||
for line in upper:
|
||||
lower.write(line.lower())
|
||||
|
||||
# Build pruned LM.
|
||||
lm_path = '/tmp/lm.arpa'
|
||||
!lmplz --order 5 \
|
||||
--temp_prefix /tmp/ \
|
||||
--memory 50% \
|
||||
--text {data_lower} \
|
||||
--arpa {lm_path} \
|
||||
--prune 0 0 0 1
|
||||
|
||||
# Quantize and produce trie binary.
|
||||
binary_path = '/tmp/lm.binary'
|
||||
!build_binary -a 255 \
|
||||
-q 8 \
|
||||
trie \
|
||||
{lm_path} \
|
||||
{binary_path}
|
||||
os.remove(lm_path)
|
||||
|
||||
The trie was then generated from the vocabulary of the language model:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
./generate_trie ../data/alphabet.txt /tmp/lm.binary /tmp/trie
|
|
@ -0,0 +1,23 @@
|
|||
C API Usage example
|
||||
===================
|
||||
|
||||
Creating a model instance and loading model
|
||||
-------------------------------------------
|
||||
|
||||
.. literalinclude:: ../native_client/client.cc
|
||||
:language: c
|
||||
:linenos:
|
||||
:lines: 369-395
|
||||
|
||||
Performing inference
|
||||
--------------------
|
||||
|
||||
.. literalinclude:: ../native_client/client.cc
|
||||
:language: c
|
||||
:linenos:
|
||||
:lines: 55-106
|
||||
|
||||
Full source code
|
||||
----------------
|
||||
|
||||
See :download:`Full source code<../native_client/client.cc>`.
|
|
@ -0,0 +1,14 @@
|
|||
.Net API contributed examples
|
||||
=============================
|
||||
|
||||
DeepSpeechWPF
|
||||
-------------
|
||||
|
||||
This example demonstrates using the .Net Framework DeepSpeech NuGet package to build
|
||||
a graphical Windows application using DeepSpeech
|
||||
|
||||
.. literalinclude:: ../examples/net_framework/DeepSpeechWPF/MainWindow.xaml.cs
|
||||
:language: csharp
|
||||
:linenos:
|
||||
|
||||
Full source code available under `examples/net_framework/DeepSpeechWPF/`.
|
|
@ -0,0 +1,23 @@
|
|||
Java API Usage example
|
||||
======================
|
||||
|
||||
Creating a model instance and loading model
|
||||
-------------------------------------------
|
||||
|
||||
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java
|
||||
:language: java
|
||||
:linenos:
|
||||
:lines: 55
|
||||
|
||||
Performing inference
|
||||
--------------------
|
||||
|
||||
.. literalinclude:: ../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java
|
||||
:language: java
|
||||
:linenos:
|
||||
:lines: 103
|
||||
|
||||
Full source code
|
||||
----------------
|
||||
|
||||
See :download:`Full source code<../native_client/java/app/src/main/java/org/mozilla/deepspeech/DeepSpeechActivity.java>`.
|
|
@ -0,0 +1,23 @@
|
|||
JavaScript API Usage example
|
||||
=============================
|
||||
|
||||
Creating a model instance and loading model
|
||||
-------------------------------------------
|
||||
|
||||
.. literalinclude:: ../native_client/javascript/client.js
|
||||
:language: javascript
|
||||
:linenos:
|
||||
:lines: 102-112
|
||||
|
||||
Performing inference
|
||||
--------------------
|
||||
|
||||
.. literalinclude:: ../native_client/javascript/client.js
|
||||
:language: javascript
|
||||
:linenos:
|
||||
:lines: 120-124
|
||||
|
||||
Full source code
|
||||
----------------
|
||||
|
||||
See :download:`Full source code<../native_client/javascript/client.js>`.
|
|
@ -0,0 +1,25 @@
|
|||
JavaScript contributed examples
|
||||
===============================
|
||||
|
||||
NodeJS WAV
|
||||
----------
|
||||
|
||||
This example demonstrates a very basic usage of the NodeJS API
|
||||
|
||||
.. literalinclude:: ../examples/nodejs_wav/index.js
|
||||
:language: javascript
|
||||
:linenos:
|
||||
|
||||
Full source code available under `../examples/nodejs_wav/`.
|
||||
|
||||
FFMPEG VAD Streaming
|
||||
--------------------
|
||||
|
||||
This example demonstrates using the Streaming API with ffmpeg to perform voice activity detection (VAD).
|
||||
|
||||
.. literalinclude:: ../examples/ffmpeg_vad_streaming/index.js
|
||||
:language: javascript
|
||||
:linenos:
|
||||
|
||||
Full source code available under `../examples/ffmpeg_vad_streaming/`.
|
|
@ -0,0 +1,23 @@
|
|||
Python API Usage example
|
||||
========================
|
||||
|
||||
Creating a model instance and loading model
|
||||
-------------------------------------------
|
||||
|
||||
.. literalinclude:: ../native_client/python/client.py
|
||||
:language: python
|
||||
:linenos:
|
||||
:lines: 80, 87
|
||||
|
||||
Performing inference
|
||||
--------------------
|
||||
|
||||
.. literalinclude:: ../native_client/python/client.py
|
||||
:language: python
|
||||
:linenos:
|
||||
:lines: 104-107
|
||||
|
||||
Full source code
|
||||
----------------
|
||||
|
||||
See :download:`Full source code<../native_client/python/client.py>`.
|
|
@ -0,0 +1,26 @@
|
|||
Python contributed examples
|
||||
===========================
|
||||
|
||||
Mic VAD Streaming
|
||||
-----------------
|
||||
|
||||
This example demonstrates capturing audio from the microphone, running voice activity detection and then outputting the recognized text.
|
||||
|
||||
.. literalinclude:: ../examples/mic_vad_streaming/mic_vad_streaming.py
|
||||
:language: python
|
||||
:linenos:
|
||||
|
||||
Full source code available under `../examples/mic_vad_streaming/`.
|
||||
|
||||
VAD Transcriber
|
||||
---------------
|
||||
|
||||
This example demonstrates VAD-based transcription with both console and
|
||||
graphical interface.
|
||||
|
||||
.. literalinclude:: ../examples/vad_transcriber/wavTranscriber.py
|
||||
:language: python
|
||||
:linenos:
|
||||
|
||||
Full source code available under `../examples/vad_transcriber/wavTranscriber.py`.
|
|
@ -39,6 +39,28 @@ Welcome to DeepSpeech's documentation!
|
|||
|
||||
Python-API
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
:caption: Examples
|
||||
|
||||
C-Examples
|
||||
|
||||
NodeJS-Examples
|
||||
|
||||
Java-Examples
|
||||
|
||||
Python-Examples
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
:caption: Contributed examples
|
||||
|
||||
DotNet-contrib-examples.rst
|
||||
|
||||
NodeJS-contrib-Examples
|
||||
|
||||
Python-contrib-Examples
|
||||
|
||||
Indices and tables
|
||||
==================
|
||||
|
||||
|
|
|
@ -1,68 +0,0 @@
|
|||
# Microphone VAD Streaming
|
||||
|
||||
Stream from microphone to DeepSpeech, using VAD (voice activity detection). A fairly simple example demonstrating the DeepSpeech streaming API in Python. Also useful for quick, real-time testing of models and decoding parameters.
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
|
||||
Uses portaudio for microphone access, so on Linux, you may need to install its header files to compile the `pyaudio` package:
|
||||
|
||||
```bash
|
||||
sudo apt install portaudio19-dev
|
||||
```
|
||||
|
||||
Installation on MacOS may fail due to portaudio, use brew to install it:
|
||||
|
||||
```bash
|
||||
brew install portaudio
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
```
|
||||
usage: mic_vad_streaming.py [-h] [-v VAD_AGGRESSIVENESS] [--nospinner]
|
||||
[-w SAVEWAV] -m MODEL [-a ALPHABET] [-l LM]
|
||||
[-t TRIE] [-nf N_FEATURES] [-nc N_CONTEXT]
|
||||
[-la LM_ALPHA] [-lb LM_BETA]
|
||||
[-bw BEAM_WIDTH]
|
||||
|
||||
Stream from microphone to DeepSpeech using VAD
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
-v VAD_AGGRESSIVENESS, --vad_aggressiveness VAD_AGGRESSIVENESS
|
||||
Set aggressiveness of VAD: an integer between 0 and 3,
|
||||
0 being the least aggressive about filtering out non-
|
||||
speech, 3 the most aggressive. Default: 3
|
||||
--nospinner Disable spinner
|
||||
-w SAVEWAV, --savewav SAVEWAV
|
||||
Save .wav files of utterences to given directory
|
||||
-m MODEL, --model MODEL
|
||||
Path to the model (protocol buffer binary file, or
|
||||
entire directory containing all standard-named files
|
||||
for model)
|
||||
-a ALPHABET, --alphabet ALPHABET
|
||||
Path to the configuration file specifying the alphabet
|
||||
used by the network. Default: alphabet.txt
|
||||
-l LM, --lm LM Path to the language model binary file. Default:
|
||||
lm.binary
|
||||
-t TRIE, --trie TRIE Path to the language model trie file created with
|
||||
native_client/generate_trie. Default: trie
|
||||
-nf N_FEATURES, --n_features N_FEATURES
|
||||
Number of MFCC features to use. Default: 26
|
||||
-nc N_CONTEXT, --n_context N_CONTEXT
|
||||
Size of the context window used for producing
|
||||
timesteps in the input vector. Default: 9
|
||||
-la LM_ALPHA, --lm_alpha LM_ALPHA
|
||||
The alpha hyperparameter of the CTC decoder. Language
|
||||
Model weight. Default: 0.75
|
||||
-lb LM_BETA, --lm_beta LM_BETA
|
||||
The beta hyperparameter of the CTC decoder. Word insertion
|
||||
bonus. Default: 1.85
|
||||
-bw BEAM_WIDTH, --beam_width BEAM_WIDTH
|
||||
Beam width used in the CTC decoder when building
|
||||
candidate transcriptions. Default: 500
|
||||
```
|
|
@ -0,0 +1,72 @@
|
|||
|
||||
Microphone VAD Streaming
|
||||
========================
|
||||
|
||||
Stream from microphone to DeepSpeech, using VAD (voice activity detection). A fairly simple example demonstrating the DeepSpeech streaming API in Python. Also useful for quick, real-time testing of models and decoding parameters.
|
||||
|
||||
Installation
|
||||
------------
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pip install -r requirements.txt
|
||||
|
||||
Uses portaudio for microphone access, so on Linux, you may need to install its header files to compile the ``pyaudio`` package:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
sudo apt install portaudio19-dev
|
||||
|
||||
Installation on macOS may fail due to portaudio; use Homebrew to install it first:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
brew install portaudio
|
||||
|
||||
Usage
|
||||
-----
|
||||
|
||||
.. code-block::
|
||||
|
||||
usage: mic_vad_streaming.py [-h] [-v VAD_AGGRESSIVENESS] [--nospinner]
|
||||
[-w SAVEWAV] -m MODEL [-a ALPHABET] [-l LM]
|
||||
[-t TRIE] [-nf N_FEATURES] [-nc N_CONTEXT]
|
||||
[-la LM_ALPHA] [-lb LM_BETA]
|
||||
[-bw BEAM_WIDTH]
|
||||
|
||||
Stream from microphone to DeepSpeech using VAD
|
||||
|
||||
optional arguments:
|
||||
-h, --help show this help message and exit
|
||||
-v VAD_AGGRESSIVENESS, --vad_aggressiveness VAD_AGGRESSIVENESS
|
||||
Set aggressiveness of VAD: an integer between 0 and 3,
|
||||
0 being the least aggressive about filtering out non-
|
||||
speech, 3 the most aggressive. Default: 3
|
||||
--nospinner Disable spinner
|
||||
-w SAVEWAV, --savewav SAVEWAV
|
||||
Save .wav files of utterences to given directory
|
||||
-m MODEL, --model MODEL
|
||||
Path to the model (protocol buffer binary file, or
|
||||
entire directory containing all standard-named files
|
||||
for model)
|
||||
-a ALPHABET, --alphabet ALPHABET
|
||||
Path to the configuration file specifying the alphabet
|
||||
used by the network. Default: alphabet.txt
|
||||
-l LM, --lm LM Path to the language model binary file. Default:
|
||||
lm.binary
|
||||
-t TRIE, --trie TRIE Path to the language model trie file created with
|
||||
native_client/generate_trie. Default: trie
|
||||
-nf N_FEATURES, --n_features N_FEATURES
|
||||
Number of MFCC features to use. Default: 26
|
||||
-nc N_CONTEXT, --n_context N_CONTEXT
|
||||
Size of the context window used for producing
|
||||
timesteps in the input vector. Default: 9
|
||||
-la LM_ALPHA, --lm_alpha LM_ALPHA
|
||||
The alpha hyperparameter of the CTC decoder. Language
|
||||
Model weight. Default: 0.75
|
||||
-lb LM_BETA, --lm_beta LM_BETA
|
||||
The beta hyperparameter of the CTC decoder. Word insertion
|
||||
bonus. Default: 1.85
|
||||
-bw BEAM_WIDTH, --beam_width BEAM_WIDTH
|
||||
Beam width used in the CTC decoder when building
|
||||
candidate transcriptions. Default: 500
|
|
@ -1,181 +0,0 @@
|
|||
# Building DeepSpeech Binaries
|
||||
|
||||
If you'd like to build the DeepSpeech binaries yourself, you'll need the following pre-requisites downloaded and installed:
|
||||
|
||||
* [Mozilla's TensorFlow `r1.14` branch](https://github.com/mozilla/tensorflow/tree/r1.14)
|
||||
* [General TensorFlow requirements](https://www.tensorflow.org/install/install_sources)
|
||||
* [libsox](https://sourceforge.net/projects/sox/)
|
||||
|
||||
It is required to use our fork of TensorFlow since it includes fixes for common problems encountered when building the native client files.
|
||||
|
||||
If you'd like to build the language bindings or the decoder package, you'll also need:
|
||||
|
||||
* [SWIG >= 3.0.12](http://www.swig.org/)
|
||||
* [node-pre-gyp](https://github.com/mapbox/node-pre-gyp) (for Node.JS bindings only)
|
||||
|
||||
|
||||
## Dependencies
|
||||
|
||||
If you follow these instructions, you should compile your own binaries of DeepSpeech (built on TensorFlow using Bazel).
|
||||
|
||||
For more information on configuring TensorFlow, read the docs up to the end of ["Configure the Build"](https://www.tensorflow.org/install/source#configure_the_build).
|
||||
|
||||
### TensorFlow: Clone & Checkout
|
||||
|
||||
Clone our fork of TensorFlow and checkout the correct version:
|
||||
|
||||
```
|
||||
git clone https://github.com/mozilla/tensorflow.git
|
||||
git checkout origin/r1.14
|
||||
```
|
||||
|
||||
### Bazel: Download & Install
|
||||
|
||||
First, [find the version of Bazel](https://www.tensorflow.org/install/source#tested_build_configurations) you need for this TensorFlow release. Next, [download and install the correct version of Bazel](https://docs.bazel.build/versions/master/install.html).
|
||||
|
||||
### TensorFlow: Configure with Bazel
|
||||
|
||||
After you have installed the correct version of Bazel, configure TensorFlow:
|
||||
|
||||
```
|
||||
cd tensorflow
|
||||
./configure
|
||||
```
|
||||
|
||||
## Compile DeepSpeech
|
||||
|
||||
### Compile `libdeepspeech.so` & `generate_trie`
|
||||
|
||||
Within your TensorFlow checkout, create a symbolic link to the DeepSpeech `native_client` directory. Assuming DeepSpeech and TensorFlow checkouts are in the same directory, do:
|
||||
|
||||
```
|
||||
cd tensorflow
|
||||
ln -s ../DeepSpeech/native_client ./
|
||||
```
|
||||
|
||||
You can now use Bazel to build the main DeepSpeech library, `libdeepspeech.so`, as well as the `generate_trie` binary. Add `--config=cuda` if you want a CUDA build.
|
||||
|
||||
```
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
|
||||
```
|
||||
|
||||
The generated binaries will be saved to `bazel-bin/native_client/`.
|
||||
|
||||
### Compile Language Bindings
|
||||
|
||||
Now, `cd` into the `DeepSpeech/native_client` directory and use the `Makefile` to build all the language bindings (C++ client, Python package, Nodejs package, etc.). Set the environment variable `TFDIR` to point to your TensorFlow checkout.
|
||||
|
||||
```
|
||||
TFDIR=~/tensorflow
|
||||
cd ../DeepSpeech/native_client
|
||||
make deepspeech
|
||||
```
|
||||
|
||||
|
||||
## Installing your own Binaries
|
||||
|
||||
After building, the library files and binary can optionally be installed to a system path for ease of development. This is also a required step for bindings generation.
|
||||
|
||||
```
|
||||
PREFIX=/usr/local sudo make install
|
||||
```
|
||||
|
||||
It is assumed that `$PREFIX/lib` is a valid library path, otherwise you may need to alter your environment.
|
||||
|
||||
### Install Python bindings
|
||||
|
||||
Included are a set of generated Python bindings. After following the above build and installation instructions, these can be installed by executing the following commands (or equivalent on your system):
|
||||
|
||||
```
|
||||
cd native_client/python
|
||||
make bindings
|
||||
pip install dist/deepspeech*
|
||||
```
|
||||
|
||||
The API mirrors the C++ API and is demonstrated in [client.py](python/client.py). Refer to [deepspeech.h](deepspeech.h) for documentation.
|
||||
|
||||
### Install Node.JS bindings
|
||||
|
||||
After following the above build and installation instructions, the Node.JS bindings can be built:
|
||||
|
||||
```
|
||||
cd native_client/javascript
|
||||
make build
|
||||
make npm-pack
|
||||
```
|
||||
|
||||
This will create the package `deepspeech-VERSION.tgz` in `native_client/javascript`.
|
||||
|
||||
### Install the CTC decoder package
|
||||
|
||||
To build the `ds_ctcdecoder` package, you'll need the general requirements listed above (in particular SWIG). The command below builds the bindings using eight (8) processes for compilation. Adjust the parameter accordingly for more or less parallelism.
|
||||
|
||||
```
|
||||
cd native_client/ctcdecode
|
||||
make bindings NUM_PROCESSES=8
|
||||
pip install dist/*.whl
|
||||
```
|
||||
|
||||
## Cross-building
|
||||
|
||||
### RPi3 ARMv7 and LePotato ARM64
|
||||
|
||||
We do support cross-compilation. Please refer to our `mozilla/tensorflow` fork, where we define the following `--config` flags:
|
||||
|
||||
- `--config=rpi3` and `--config=rpi3_opt` for Raspbian / ARMv7
|
||||
- `--config=rpi3-armv8` and `--config=rpi3-armv8_opt` for ARMBian / ARM64
|
||||
|
||||
So your command line for `RPi3` and `ARMv7` should look like:
|
||||
|
||||
```
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
|
||||
```
|
||||
|
||||
And your command line for `LePotato` and `ARM64` should look like:
|
||||
|
||||
```
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
|
||||
```
|
||||
|
||||
While we test only on RPi3 Raspbian Buster and LePotato ARMBian Buster, anything compatible with `armv7-a cortex-a53` or `armv8-a cortex-a53` should be fine.
|
||||
|
||||
The `deepspeech` binary can also be cross-built, with `TARGET=rpi3` or `TARGET=rpi3-armv8`. This might require you to setup a system tree using the tool `multistrap` and the multitrap configuration files: `native_client/multistrap_armbian64_buster.conf` and `native_client/multistrap_raspbian_buster.conf`.
|
||||
The path of the system tree can be overridden from the default values defined in `definitions.mk` through the `RASPBIAN` `make` variable.
|
||||
|
||||
```
|
||||
cd ../DeepSpeech/native_client
|
||||
make TARGET=<system> deepspeech
|
||||
```
|
||||
|
||||
### Android devices
|
||||
|
||||
We have preliminary support for Android relying on TensorFlow Lite, with Java and JNI bindinds. For more details on how to experiment with those, please refer to `native_client/java/README.md`.
|
||||
|
||||
Please refer to TensorFlow documentation on how to setup the environment to build for Android (SDK and NDK required).
|
||||
|
||||
You can build the `libdeepspeech.so` using (ARMv7):
|
||||
|
||||
```
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++11 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
|
||||
```
|
||||
|
||||
Or (ARM64):
|
||||
|
||||
```
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++11 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
|
||||
```
|
||||
|
||||
Building the `deepspeech` binary will happen through `ndk-build` (ARMv7):
|
||||
|
||||
```
|
||||
cd ../DeepSpeech/native_client
|
||||
$ANDROID_NDK_HOME/ndk-build APP_PLATFORM=android-21 APP_BUILD_SCRIPT=$(pwd)/Android.mk NDK_PROJECT_PATH=$(pwd) APP_STL=c++_shared TFDIR=$(pwd)/../../tensorflow/ TARGET_ARCH_ABI=armeabi-v7a
|
||||
```
|
||||
|
||||
And (ARM64):
|
||||
|
||||
```
|
||||
cd ../DeepSpeech/native_client
|
||||
$ANDROID_NDK_HOME/ndk-build APP_PLATFORM=android-21 APP_BUILD_SCRIPT=$(pwd)/Android.mk NDK_PROJECT_PATH=$(pwd) APP_STL=c++_shared TFDIR=$(pwd)/../../tensorflowx/ TARGET_ARCH_ABI=arm64-v8a
|
||||
```
|
||||
|
|
@ -0,0 +1,197 @@
|
|||
|
||||
Building DeepSpeech Binaries
|
||||
============================
|
||||
|
||||
If you'd like to build the DeepSpeech binaries yourself, you'll need the following pre-requisites downloaded and installed:
|
||||
|
||||
|
||||
* `Mozilla's TensorFlow r1.14 branch <https://github.com/mozilla/tensorflow/tree/r1.14>`_
|
||||
* `General TensorFlow requirements <https://www.tensorflow.org/install/install_sources>`_
|
||||
* `libsox <https://sourceforge.net/projects/sox/>`_
|
||||
|
||||
It is required to use our fork of TensorFlow since it includes fixes for common problems encountered when building the native client files.
|
||||
|
||||
If you'd like to build the language bindings or the decoder package, you'll also need:
|
||||
|
||||
|
||||
* `SWIG >= 3.0.12 <http://www.swig.org/>`_
|
||||
* `node-pre-gyp <https://github.com/mapbox/node-pre-gyp>`_ (for Node.JS bindings only)
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
If you follow these instructions, you should compile your own binaries of DeepSpeech (built on TensorFlow using Bazel).
|
||||
|
||||
For more information on configuring TensorFlow, read the docs up to the end of `"Configure the Build" <https://www.tensorflow.org/install/source#configure_the_build>`_.
|
||||
|
||||
TensorFlow: Clone & Checkout
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Clone our fork of TensorFlow and checkout the correct version:
|
||||
|
||||
.. code-block::
|
||||
|
||||
git clone https://github.com/mozilla/tensorflow.git
cd tensorflow
|
||||
git checkout origin/r1.14
|
||||
|
||||
Bazel: Download & Install
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
First, `find the version of Bazel <https://www.tensorflow.org/install/source#tested_build_configurations>`_ you need for this TensorFlow release. Next, `download and install the correct version of Bazel <https://docs.bazel.build/versions/master/install.html>`_.
|
||||
|
||||
TensorFlow: Configure with Bazel
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
After you have installed the correct version of Bazel, configure TensorFlow:
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd tensorflow
|
||||
./configure
|
||||
|
||||
Compile DeepSpeech
|
||||
------------------
|
||||
|
||||
Compile ``libdeepspeech.so`` & ``generate_trie``
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Within your TensorFlow checkout, create a symbolic link to the DeepSpeech ``native_client`` directory. Assuming DeepSpeech and TensorFlow checkouts are in the same directory, do:
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd tensorflow
|
||||
ln -s ../DeepSpeech/native_client ./
|
||||
|
||||
You can now use Bazel to build the main DeepSpeech library, ``libdeepspeech.so``\ , as well as the ``generate_trie`` binary. Add ``--config=cuda`` if you want a CUDA build.
|
||||
|
||||
.. code-block::
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
|
||||
|
||||
The generated binaries will be saved to ``bazel-bin/native_client/``.
|
||||
|
||||
Compile Language Bindings
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Now, ``cd`` into the ``DeepSpeech/native_client`` directory and use the ``Makefile`` to build all the language bindings (C++ client, Python package, Nodejs package, etc.). Set the environment variable ``TFDIR`` to point to your TensorFlow checkout.
|
||||
|
||||
.. code-block::
|
||||
|
||||
TFDIR=~/tensorflow
|
||||
cd ../DeepSpeech/native_client
|
||||
make deepspeech
|
||||
|
||||
Installing your own Binaries
|
||||
----------------------------
|
||||
|
||||
After building, the library files and binary can optionally be installed to a system path for ease of development. This is also a required step for bindings generation.
|
||||
|
||||
.. code-block::
|
||||
|
||||
PREFIX=/usr/local sudo make install
|
||||
|
||||
It is assumed that ``$PREFIX/lib`` is a valid library path; otherwise, you may need to alter your environment.
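For example, on Linux one way to make ``$PREFIX/lib`` visible to the dynamic linker is to extend ``LD_LIBRARY_PATH`` (a sketch; adjust for your shell and platform):

.. code-block:: bash

   export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH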
|
||||
|
||||
Install Python bindings
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Included are a set of generated Python bindings. After following the above build and installation instructions, these can be installed by executing the following commands (or equivalent on your system):
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd native_client/python
|
||||
make bindings
|
||||
pip install dist/deepspeech*
|
||||
|
||||
The API mirrors the C++ API and is demonstrated in `client.py <python/client.py>`_. Refer to `deepspeech.h <deepspeech.h>`_ for documentation.
|
||||
|
||||
Install Node.JS bindings
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
After following the above build and installation instructions, the Node.JS bindings can be built:
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd native_client/javascript
|
||||
make build
|
||||
make npm-pack
|
||||
|
||||
This will create the package ``deepspeech-VERSION.tgz`` in ``native_client/javascript``.
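If you want to test the package locally, it can be installed directly from the generated tarball (a sketch; substitute the actual version in the filename):

.. code-block:: bash

   npm install native_client/javascript/deepspeech-VERSION.tgz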
|
||||
|
||||
Install the CTC decoder package
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To build the ``ds_ctcdecoder`` package, you'll need the general requirements listed above (in particular SWIG). The command below builds the bindings using eight (8) processes for compilation. Adjust the parameter accordingly for more or less parallelism.
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd native_client/ctcdecode
|
||||
make bindings NUM_PROCESSES=8
|
||||
pip install dist/*.whl
|
||||
|
||||
Cross-building
|
||||
--------------
|
||||
|
||||
RPi3 ARMv7 and LePotato ARM64
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
We do support cross-compilation. Please refer to our ``mozilla/tensorflow`` fork, where we define the following ``--config`` flags:
|
||||
|
||||
|
||||
* ``--config=rpi3`` and ``--config=rpi3_opt`` for Raspbian / ARMv7
|
||||
* ``--config=rpi3-armv8`` and ``--config=rpi3-armv8_opt`` for ARMBian / ARM64
|
||||
|
||||
So your command line for ``RPi3`` and ``ARMv7`` should look like:
|
||||
|
||||
.. code-block::
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
|
||||
|
||||
And your command line for ``LePotato`` and ``ARM64`` should look like:
|
||||
|
||||
.. code-block::
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
|
||||
|
||||
While we test only on RPi3 Raspbian Buster and LePotato ARMBian Buster, anything compatible with ``armv7-a cortex-a53`` or ``armv8-a cortex-a53`` should be fine.
|
||||
|
||||
The ``deepspeech`` binary can also be cross-built, with ``TARGET=rpi3`` or ``TARGET=rpi3-armv8``. This might require you to set up a system tree using the ``multistrap`` tool and the multistrap configuration files: ``native_client/multistrap_armbian64_buster.conf`` and ``native_client/multistrap_raspbian_buster.conf``.
|
||||
The path of the system tree can be overridden from the default values defined in ``definitions.mk`` through the ``RASPBIAN`` ``make`` variable.
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd ../DeepSpeech/native_client
|
||||
make TARGET=<system> deepspeech
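If you still need to create the system tree, a minimal ``multistrap`` invocation might look like the following sketch (the target directory name is an arbitrary choice; point the ``RASPBIAN`` variable at it if it differs from the default in ``definitions.mk``):

.. code-block:: bash

   multistrap -d multistrap-raspbian-buster -f native_client/multistrap_raspbian_buster.conf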
|
||||
|
||||
Android devices
|
||||
^^^^^^^^^^^^^^^
|
||||
|
||||
We have preliminary support for Android relying on TensorFlow Lite, with Java and JNI bindings. For more details on how to experiment with those, please refer to ``native_client/java/README.md``.
|
||||
|
||||
Please refer to TensorFlow documentation on how to setup the environment to build for Android (SDK and NDK required).
|
||||
|
||||
You can build the ``libdeepspeech.so`` using (ARMv7):
|
||||
|
||||
.. code-block::
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++11 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
|
||||
|
||||
Or (ARM64):
|
||||
|
||||
.. code-block::
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=android --config=android_arm64 --define=runtime=tflite --action_env ANDROID_NDK_API_LEVEL=21 --cxxopt=-std=c++11 --copt=-D_GLIBCXX_USE_C99 //native_client:libdeepspeech.so
|
||||
|
||||
Building the ``deepspeech`` binary will happen through ``ndk-build`` (ARMv7):
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd ../DeepSpeech/native_client
|
||||
$ANDROID_NDK_HOME/ndk-build APP_PLATFORM=android-21 APP_BUILD_SCRIPT=$(pwd)/Android.mk NDK_PROJECT_PATH=$(pwd) APP_STL=c++_shared TFDIR=$(pwd)/../../tensorflow/ TARGET_ARCH_ABI=armeabi-v7a
|
||||
|
||||
And (ARM64):
|
||||
|
||||
.. code-block::
|
||||
|
||||
cd ../DeepSpeech/native_client
|
||||
$ANDROID_NDK_HOME/ndk-build APP_PLATFORM=android-21 APP_BUILD_SCRIPT=$(pwd)/Android.mk NDK_PROJECT_PATH=$(pwd) APP_STL=c++_shared TFDIR=$(pwd)/../../tensorflow/ TARGET_ARCH_ABI=arm64-v8a
|
|
@ -1,125 +0,0 @@
|
|||
# Building DeepSpeech native client for Windows
|
||||
|
||||
Now we can build the native client of DeepSpeech and run inference on Windows using the C# client, to do that we need to compile the `native_client`.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- [Prerequisites](#prerequisites)
|
||||
- [Getting the code](#getting-the-code)
|
||||
- [Configuring the paths](#configuring-the-paths)
|
||||
- [Adding environment variables](#adding-environment-variables)
|
||||
- [MSYS2 paths](#msys2-paths)
|
||||
- [BAZEL path](#bazel-path)
|
||||
- [Python path](#python-path)
|
||||
- [CUDA paths](#cuda-paths)
|
||||
- [Building the native_client](#building-the-native_client)
|
||||
- [Build for CPU](#cpu)
|
||||
- [Build with CUDA support](#gpu-with-cuda)
|
||||
- [Using the generated library](#using-the-generated-library)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
* Windows 10
|
||||
* [Windows 10 SDK](https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk)
|
||||
* [Visual Studio 2017 Community](https://visualstudio.microsoft.com/vs/community/)
|
||||
* [Git Large File Storage](https://git-lfs.github.com/)
|
||||
* [TensorFlow Windows pre-requisites](https://www.tensorflow.org/install/source_windows)
|
||||
|
||||
Inside the Visual Studio Installer enable `MS Build Tools` and `VC++ 2015.3 v14.00 (v140) toolset for desktop`.
|
||||
|
||||
If you want to enable CUDA support you need to follow the steps in [the TensorFlow docs for building on Windows with CUDA](https://www.tensorflow.org/install/gpu#windows_setup).
|
||||
|
||||
We highly recommend sticking to the recommended versions of CUDA/cuDNN in order to avoid compilation errors caused by incompatible versions. We only test with the versions recommended by TensorFlow.
|
||||
|
||||
## Getting the code
|
||||
|
||||
We need to clone `mozilla/DeepSpeech` and `mozilla/tensorflow`.
|
||||
|
||||
```bash
|
||||
git clone https://github.com/mozilla/DeepSpeech
|
||||
```
|
||||
|
||||
```bash
|
||||
git clone --branch r1.14 https://github.com/mozilla/tensorflow
|
||||
```
|
||||
|
||||
## Configuring the paths
|
||||
|
||||
We need to create a symbolic link, for this example let's suppose that we cloned into `D:\cloned` and now the structure looks like:
|
||||
|
||||
.
|
||||
├── D:\
|
||||
│ ├── cloned # Contains DeepSpeech and tensorflow side by side
|
||||
│ │ ├── DeepSpeech # Root of the cloned DeepSpeech
|
||||
│ │ ├── tensorflow # Root of the cloned Mozilla's tensorflow
|
||||
└── ...
|
||||
|
||||
Change your path accordingly to your path structure, for the structure above we are going to use the following command:
|
||||
|
||||
```bash
|
||||
mklink /d "D:\cloned\tensorflow\native_client" "D:\cloned\DeepSpeech\native_client"
|
||||
```
|
||||
|
||||
## Adding environment variables
|
||||
|
||||
After you have installed the requirements there are few environment variables that we need to add to our `PATH` variable of the system variables.
|
||||
|
||||
#### MSYS2 paths
|
||||
|
||||
For MSYS2 we need to add `bin` directory, if you installed in the default route the path that we need to add should looks like `C:\msys64\usr\bin`. Now we can run `pacman`:
|
||||
|
||||
```bash
|
||||
pacman -Syu
|
||||
pacman -Su
|
||||
pacman -S patch unzip
|
||||
```
|
||||
|
||||
#### BAZEL path
|
||||
|
||||
For BAZEL we need to add the path to the executable, make sure you rename the executable to `bazel`.
|
||||
|
||||
To check the version installed you can run:
|
||||
|
||||
```bash
|
||||
bazel version
|
||||
```
|
||||
|
||||
#### PYTHON path
|
||||
|
||||
Add your `python.exe` path to the `PATH` variable.
|
||||
|
||||
|
||||
#### CUDA paths
|
||||
|
||||
If you run CUDA enabled `native_client` we need to add the following to the `PATH` variable.
|
||||
|
||||
```
|
||||
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin
|
||||
```
|
||||
|
||||
### Building the native_client
|
||||
|
||||
There's one last command to run before building, you need to run the [configure.py](https://github.com/mozilla/tensorflow/blob/master/configure.py) inside `tensorflow` cloned directory.
|
||||
|
||||
At this point we are ready to start building the `native_client`, go to `tensorflow` directory that you cloned, following our examples should be `D:\cloned\tensorflow`.
|
||||
|
||||
#### CPU
|
||||
We will add AVX/AVX2 support in the command, please make sure that your CPU supports these instructions before adding the flags, if not you can remove them.
|
||||
|
||||
```bash
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
|
||||
```
|
||||
|
||||
#### GPU with CUDA
|
||||
If you enabled CUDA in [configure.py](https://github.com/mozilla/tensorflow/blob/master/configure.py) configuration command now you can add `--config=cuda` to compile with CUDA support.
|
||||
|
||||
```bash
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
|
||||
```
|
||||
|
||||
Be patient, if you enabled AVX/AVX2 and CUDA it will take a long time. Finally you should see it stops and shows the path to the generated `libdeepspeech.so`.
|
||||
|
||||
|
||||
## Using the generated library
|
||||
|
||||
As for now we can only use the generated `libdeepspeech.so` with the C# clients, go to [native_client/dotnet/](https://github.com/mozilla/DeepSpeech/tree/master/native_client/dotnet) in your DeepSpeech directory and open the Visual Studio solution, then we need to build in debug or release mode, finally we just need to copy `libdeepspeech.so` to the generated `x64/Debug` or `x64/Release` directory.
|
|
@ -0,0 +1,148 @@
|
|||
|
||||
Building DeepSpeech native client for Windows
|
||||
=============================================
|
||||
|
||||
Now we can build the native client of DeepSpeech and run inference on Windows using the C# client. To do that, we need to compile the ``native_client``.
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
|
||||
* `Prerequisites <#prerequisites>`_
|
||||
* `Getting the code <#getting-the-code>`_
|
||||
* `Configuring the paths <#configuring-the-paths>`_
|
||||
* `Adding environment variables <#adding-environment-variables>`_
|
||||
|
||||
* `MSYS2 paths <#msys2-paths>`_
|
||||
* `BAZEL path <#bazel-path>`_
|
||||
* `Python path <#python-path>`_
|
||||
* `CUDA paths <#cuda-paths>`_
|
||||
|
||||
* `Building the native_client <#building-the-native_client>`_
|
||||
|
||||
* `Build for CPU <#cpu>`_
|
||||
* `Build with CUDA support <#gpu-with-cuda>`_
|
||||
|
||||
* `Using the generated library <#using-the-generated-library>`_
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
|
||||
* Windows 10
|
||||
* `Windows 10 SDK <https://developer.microsoft.com/en-us/windows/downloads/windows-10-sdk>`_
|
||||
* `Visual Studio 2017 Community <https://visualstudio.microsoft.com/vs/community/>`_
|
||||
* `Git Large File Storage <https://git-lfs.github.com/>`_
|
||||
* `TensorFlow Windows pre-requisites <https://www.tensorflow.org/install/source_windows>`_
|
||||
|
||||
Inside the Visual Studio Installer enable ``MS Build Tools`` and ``VC++ 2015.3 v14.00 (v140) toolset for desktop``.
|
||||
|
||||
If you want to enable CUDA support you need to follow the steps in `the TensorFlow docs for building on Windows with CUDA <https://www.tensorflow.org/install/gpu#windows_setup>`_.
|
||||
|
||||
We highly recommend sticking to the recommended versions of CUDA/cuDNN in order to avoid compilation errors caused by incompatible versions. We only test with the versions recommended by TensorFlow.
|
||||
|
||||
Getting the code
|
||||
----------------
|
||||
|
||||
We need to clone ``mozilla/DeepSpeech`` and ``mozilla/tensorflow``.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone https://github.com/mozilla/DeepSpeech
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
git clone --branch r1.14 https://github.com/mozilla/tensorflow
|
||||
|
||||
Configuring the paths
|
||||
---------------------
|
||||
|
||||
We need to create a symbolic link. For this example, let's suppose that we cloned into ``D:\cloned`` and now the structure looks like:
|
||||
|
||||
.. code-block::
|
||||
|
||||
.
|
||||
├── D:\
|
||||
│ ├── cloned # Contains DeepSpeech and tensorflow side by side
|
||||
│ │ ├── DeepSpeech # Root of the cloned DeepSpeech
|
||||
│ │ ├── tensorflow # Root of the cloned Mozilla's tensorflow
|
||||
└── ...
|
||||
|
||||
|
||||
Change the path according to your directory structure; for the structure above we are going to use the following command:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
mklink /d "D:\cloned\tensorflow\native_client" "D:\cloned\DeepSpeech\native_client"
|
||||
|
||||
Adding environment variables
|
||||
----------------------------
|
||||
|
||||
After you have installed the requirements, there are a few paths that need to be added to the system ``PATH`` environment variable.
|
||||
|
||||
MSYS2 paths
|
||||
~~~~~~~~~~~
|
||||
|
||||
For MSYS2 we need to add its ``bin`` directory. If you installed to the default location, the path to add should look like ``C:\msys64\usr\bin``. Now we can run ``pacman``\ :
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
pacman -Syu
|
||||
pacman -Su
|
||||
pacman -S patch unzip
|
||||
|
||||
BAZEL path
|
||||
~~~~~~~~~~
|
||||
|
||||
For BAZEL we need to add the path to the executable; make sure you rename the executable to ``bazel``.
|
||||
|
||||
To check the version installed you can run:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
bazel version
|
||||
|
||||
PYTHON path
|
||||
~~~~~~~~~~~
|
||||
|
||||
Add your ``python.exe`` path to the ``PATH`` variable.
|
||||
|
||||
CUDA paths
|
||||
~~~~~~~~~~
|
||||
|
||||
If you want to run the CUDA-enabled ``native_client``, you need to add the following to the ``PATH`` variable.
|
||||
|
||||
.. code-block::
|
||||
|
||||
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin
|
||||
|
||||
Building the native_client
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
There's one last command to run before building: you need to run `configure.py <https://github.com/mozilla/tensorflow/blob/master/configure.py>`_ inside the cloned ``tensorflow`` directory.
|
||||
|
||||
At this point we are ready to start building the ``native_client``. Go to the ``tensorflow`` directory that you cloned; following our example, it should be ``D:\cloned\tensorflow``.
|
||||
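For example, a minimal sketch (assuming you cloned into ``D:\cloned`` as above and ``python`` is on your ``PATH``):

.. code-block:: bash

   cd D:\cloned\tensorflow
   python configure.py

Answer the configuration questions according to your setup; CUDA support is enabled here if you want a GPU build.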
|
||||
CPU
|
||||
~~~
|
||||
|
||||
We will add AVX/AVX2 support in the command. Please make sure that your CPU supports these instructions before adding the flags; if not, you can remove them.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
|
||||
|
||||
GPU with CUDA
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
If you enabled CUDA in the `configure.py <https://github.com/mozilla/tensorflow/blob/master/configure.py>`_ configuration step, you can now add ``--config=cuda`` to compile with CUDA support.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libdeepspeech.so
|
||||
|
||||
Be patient; if you enabled AVX/AVX2 and CUDA it will take a long time. Finally, the build should stop and show the path to the generated ``libdeepspeech.so``.
|
||||
|
||||
Using the generated library
|
||||
---------------------------
|
||||
|
||||
For now we can only use the generated ``libdeepspeech.so`` with the C# clients. Go to `native_client/dotnet/ <https://github.com/mozilla/DeepSpeech/tree/master/native_client/dotnet>`_ in your DeepSpeech directory and open the Visual Studio solution, then build it in Debug or Release mode. Finally, copy ``libdeepspeech.so`` to the generated ``x64/Debug`` or ``x64/Release`` directory.
|
|
@ -1,64 +0,0 @@
|
|||
DeepSpeech Java / Android bindings
|
||||
==================================
|
||||
|
||||
This is still preliminary work. Please refer to `native_client/README.md` for
|
||||
building `libdeepspeech.so` and `deepspeech` binary for Android on ARMv7 and
|
||||
ARM64 arch.
|
||||
|
||||
Android Java / JNI bindings: `libdeepspeech`
|
||||
===========================================
|
||||
Java / JNI bindings are available under the `libdeepspeech` subdirectory.
|
||||
Building depends on prebuilt shared object. Please ensure to place
|
||||
`libdeepspeech.so` into the `libdeepspeech/libs/{arm64-v8a,armeabi-v7a}/`
|
||||
matching subdirectories.
|
||||
|
||||
Building the bindings is managed by `gradle` and should be limited to issuing
|
||||
`./gradlew libdeepspeech:build`, producing an `AAR` package in
|
||||
`./libdeepspeech/build/outputs/aar/`. This can later be used by other
|
||||
Gradle-based build with the following configuration:
|
||||
```
|
||||
implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar'
|
||||
```
|
||||
|
||||
Please note that you might have to copy the file to a local Maven repository
|
||||
and adapt file naming (when missing, the error message should states what
|
||||
filename it expects and where).
|
||||
|
||||
Android demo APK
|
||||
================
|
||||
Provided is a very simple Android demo app that allows you to test the library.
|
||||
You can build it with `make apk` and install the resulting APK file. Please
|
||||
refer to Gradle documentation for more details.
|
||||
|
||||
The `APK` should be produced in `/app/build/outputs/apk/`. This demo app might
|
||||
require external storage permissions. You can then push models files to your
|
||||
device, set the path to the file in the UI and try to run on an audio file.
|
||||
When running, it should first play the audio file and then run the decoding. At
|
||||
the end of the decoding, you should be presented with the decoded text as well
|
||||
as time elapsed to decode in miliseconds.
|
||||
|
||||
Running `deepspeech` via adb
|
||||
============================
|
||||
You should use `adb push` to send data to device, please refer to Android
|
||||
documentation on how to use that.
|
||||
|
||||
Please push DeepSpeech data to `/sdcard/deepspeech/`, including:
|
||||
- `output_graph.tflite` which is the TF Lite model
|
||||
- `alphabet.txt`
|
||||
- `lm.binary` and `trie` files, if you want to use the language model ; please
|
||||
be aware that too big language model will make the device run out of memory
|
||||
|
||||
Then, push binaries from `native_client.tar.xz` to `/data/local/tmp/ds`:
|
||||
- `deepspeech`
|
||||
- `libdeepspeech.so`
|
||||
- `libc++_shared.so`
|
||||
|
||||
You should then be able to run as usual, using a shell from `adb shell`:
|
||||
```
|
||||
user@device$ cd /data/local/tmp/ds/
|
||||
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...]
|
||||
```
|
||||
|
||||
Please note that Android linker does not support `rpath` so you have to set
|
||||
`LD_LIBRARY_PATH`. Properly wrapped / packaged bindings does embed the library
|
||||
at a place the linker knows where to search, so Android apps will be fine.
|
|
@ -0,0 +1,74 @@
|
|||
|
||||
DeepSpeech Java / Android bindings
|
||||
==================================
|
||||
|
||||
This is still preliminary work. Please refer to ``native_client/README.md`` for
|
||||
building ``libdeepspeech.so`` and ``deepspeech`` binary for Android on ARMv7 and
|
||||
ARM64 arch.
|
||||
|
||||
Android Java / JNI bindings: ``libdeepspeech``
|
||||
==================================================
|
||||
|
||||
Java / JNI bindings are available under the ``libdeepspeech`` subdirectory.
|
||||
Building depends on prebuilt shared object. Please ensure to place
|
||||
``libdeepspeech.so`` into the ``libdeepspeech/libs/{arm64-v8a,armeabi-v7a}/``
|
||||
matching subdirectories.
|
||||
|
||||
Building the bindings is managed by ``gradle`` and should be limited to issuing
|
||||
``./gradlew libdeepspeech:build``\ , producing an ``AAR`` package in
|
||||
``./libdeepspeech/build/outputs/aar/``. This can later be used by other
|
||||
Gradle-based build with the following configuration:
|
||||
|
||||
.. code-block::
|
||||
|
||||
implementation 'deepspeech.mozilla.org:libdeepspeech:VERSION@aar'
|
||||
|
||||
Please note that you might have to copy the file to a local Maven repository
and adapt the file naming (when missing, the error message should state what
filename it expects and where).
|
||||
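If you do need to install the ``AAR`` into a local Maven repository, a minimal sketch could look like the following (the coordinates mirror the Gradle configuration above; the exact file name is illustrative):

.. code-block::

   mvn install:install-file -Dfile=libdeepspeech-VERSION.aar \
     -DgroupId=deepspeech.mozilla.org -DartifactId=libdeepspeech \
     -Dversion=VERSION -Dpackaging=aar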
|
||||
Android demo APK
|
||||
================
|
||||
|
||||
Provided is a very simple Android demo app that allows you to test the library.
|
||||
You can build it with ``make apk`` and install the resulting APK file. Please
|
||||
refer to Gradle documentation for more details.
|
||||
|
||||
The ``APK`` should be produced in ``/app/build/outputs/apk/``. This demo app might
|
||||
require external storage permissions. You can then push models files to your
|
||||
device, set the path to the file in the UI and try to run on an audio file.
|
||||
When running, it should first play the audio file and then run the decoding. At
|
||||
the end of the decoding, you should be presented with the decoded text as well
|
||||
as the time elapsed to decode, in milliseconds.
|
||||
|
||||
Running ``deepspeech`` via adb
|
||||
==================================
|
||||
|
||||
You should use ``adb push`` to send data to device, please refer to Android
|
||||
documentation on how to use that.
|
||||
|
||||
Please push DeepSpeech data to ``/sdcard/deepspeech/``\ , including:
|
||||
|
||||
|
||||
* ``output_graph.tflite`` which is the TF Lite model
|
||||
* ``alphabet.txt``
|
||||
* ``lm.binary`` and ``trie`` files, if you want to use the language model; please
  be aware that a language model that is too big will make the device run out of memory
|
||||
|
||||
Then, push binaries from ``native_client.tar.xz`` to ``/data/local/tmp/ds``\ :
|
||||
|
||||
|
||||
* ``deepspeech``
|
||||
* ``libdeepspeech.so``
|
||||
* ``libc++_shared.so``
|
||||
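A minimal sketch of these pushes, assuming the files listed above sit in your current working directory:

.. code-block::

   adb shell mkdir -p /sdcard/deepspeech /data/local/tmp/ds
   adb push output_graph.tflite /sdcard/deepspeech/
   adb push alphabet.txt /sdcard/deepspeech/
   adb push lm.binary /sdcard/deepspeech/
   adb push trie /sdcard/deepspeech/
   adb push deepspeech /data/local/tmp/ds/
   adb push libdeepspeech.so /data/local/tmp/ds/
   adb push libc++_shared.so /data/local/tmp/ds/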
|
||||
You should then be able to run as usual, using a shell from ``adb shell``\ :
|
||||
|
||||
.. code-block::
|
||||
|
||||
user@device$ cd /data/local/tmp/ds/
|
||||
user@device$ LD_LIBRARY_PATH=$(pwd)/ ./deepspeech [...]
|
||||
|
||||
Please note that the Android linker does not support ``rpath``, so you have to set
``LD_LIBRARY_PATH``. Properly wrapped / packaged bindings embed the library
in a place the linker knows to search, so Android apps will be fine.
|
|
@ -1,9 +0,0 @@
|
|||
Javadoc for Sphinx
|
||||
==================
|
||||
|
||||
This code is only here for reference for documentation generation.
|
||||
|
||||
To update, please build SWIG (4.0 at least) and then run from native_client/java:
|
||||
```
|
||||
swig -c++ -java -doxygen -package org.mozilla.deepspeech.libdeepspeech -outdir libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech_doc -o jni/deepspeech_wrap.cpp jni/deepspeech.i
|
||||
```
|
|
@ -0,0 +1,11 @@
|
|||
|
||||
Javadoc for Sphinx
|
||||
==================
|
||||
|
||||
This code is only here for reference for documentation generation.
|
||||
|
||||
To update, please build SWIG (4.0 at least) and then run from native_client/java:
|
||||
|
||||
.. code-block::
|
||||
|
||||
swig -c++ -java -doxygen -package org.mozilla.deepspeech.libdeepspeech -outdir libdeepspeech/src/main/java/org/mozilla/deepspeech/libdeepspeech_doc -o jni/deepspeech_wrap.cpp jni/deepspeech.i
|
|
@ -13,7 +13,7 @@ endif
|
|||
default: build
|
||||
|
||||
clean:
|
||||
rm -f deepspeech_wrap.cxx package.json README.md
|
||||
rm -f deepspeech_wrap.cxx package.json README.rst
|
||||
rm -rf ./build/
|
||||
|
||||
clean-npm-pack:
|
||||
|
@ -23,8 +23,8 @@ clean-npm-pack:
|
|||
really-clean: clean clean-npm-pack
|
||||
rm -fr ./lib/
|
||||
|
||||
README.md:
|
||||
cp ../../README.md README.md
|
||||
README.rst:
|
||||
cp ../../README.rst README.rst
|
||||
|
||||
package.json: package.json.in
|
||||
sed \
|
||||
|
@ -44,7 +44,7 @@ copy-deps: build
|
|||
node-wrapper: copy-deps build
|
||||
$(NODE_BUILD_TOOL) $(NODE_PLATFORM_TARGET) $(NODE_RUNTIME) $(NODE_ABI_TARGET) $(NODE_DIST_URL) package $(NODE_BUILD_VERBOSE)
|
||||
|
||||
npm-pack: clean package.json README.md index.js
|
||||
npm-pack: clean package.json README.rst index.js
|
||||
npm install node-pre-gyp@0.13.x
|
||||
npm pack $(NODE_BUILD_VERBOSE)
|
||||
|
||||
|
|
|
@ -0,0 +1,601 @@
|
|||
Project DeepSpeech
|
||||
==================
|
||||
|
||||
.. image:: https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/badge.svg
   :target: https://github.taskcluster.net/v1/repository/mozilla/DeepSpeech/master/latest
   :alt: Task Status

DeepSpeech is an open source Speech-To-Text engine, using a model trained by machine learning techniques based on `Baidu's Deep Speech research paper <https://arxiv.org/abs/1412.5567>`_. Project DeepSpeech uses Google's `TensorFlow <https://www.tensorflow.org/>`_ to make the implementation easier.

.. image:: images/usage.gif
   :alt: Usage
|
||||
|
||||
Pre-built binaries for performing inference with a trained model can be installed with `pip3`. Proper setup using a virtual environment is recommended, and you can find that documentation `below <#using-the-python-package>`_.
|
||||
|
||||
A pre-trained English model is available for use and can be downloaded using `the instructions below <#getting-the-pre-trained-model>`_. Currently, only 16-bit, 16 kHz, mono-channel WAVE audio files are supported in the Python client.
|
||||
|
||||
Once everything is installed, you can then use the `deepspeech` binary to do speech-to-text on short (approximately 5-second long) audio files as such:
|
||||
|
||||
```bash
|
||||
|
||||
pip3 install deepspeech
|
||||
|
||||
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
|
||||
|
||||
```
|
||||
|
||||
Alternatively, quicker inference can be performed using a supported NVIDIA GPU on Linux. See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. To run `deepspeech` on a GPU, install the GPU specific package:
|
||||
|
||||
```bash
|
||||
|
||||
pip3 install deepspeech-gpu
|
||||
|
||||
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
|
||||
|
||||
```
|
||||
|
||||
Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
See the output of `deepspeech -h` for more information on the use of `deepspeech`. (If you experience problems running `deepspeech`, please check `required runtime dependencies <native_client/README.rst#required-dependencies>`_.)
|
||||
|
||||
**Table of Contents**
|
||||
|
||||
- `Prerequisites <#prerequisites>`_
|
||||
- `Getting the code <#getting-the-code>`_
|
||||
- `Using a Pre-trained Model <#using-a-pre-trained-model>`_
|
||||
- `CUDA dependency <#cuda-dependency>`_
|
||||
|
||||
- `Getting the pre-trained model <#getting-the-pre-trained-model>`_
|
||||
|
||||
- `Model compatibility <#model-compatibility>`_
|
||||
|
||||
- `Using the Python package <#using-the-python-package>`_
|
||||
|
||||
- `Using the Node.JS package <#using-the-nodejs-package>`_
|
||||
|
||||
- `Using the Command Line client <#using-the-command-line-client>`_
|
||||
|
||||
- `Installing bindings from source <#installing-bindings-from-source>`_
|
||||
|
||||
- `Third party bindings <#third-party-bindings>`_
|
||||
- `Training your own Model <#training-your-own-model>`_
|
||||
- `Installing training prerequisites <#installing-training-prerequisites>`_
|
||||
|
||||
- `Recommendations <#recommendations>`_
|
||||
|
||||
- `Common Voice training data <#common-voice-training-data>`_
|
||||
|
||||
- `Training a model <#training-a-model>`_
|
||||
|
||||
- `Checkpointing <#checkpointing>`_
|
||||
|
||||
- `Exporting a model for inference <#exporting-a-model-for-inference>`_
|
||||
|
||||
- `Exporting a model for TFLite <#exporting-a-model-for-tflite>`_
|
||||
|
||||
- `Making a mmap-able model for inference <#making-a-mmap-able-model-for-inference>`_
|
||||
|
||||
- `Continuing training from a release model <#continuing-training-from-a-release-model>`_
|
||||
- `Contribution guidelines <#contribution-guidelines>`_
|
||||
- `Contact/Getting Help <#contactgetting-help>`_
|
||||
|
||||
Prerequisites
=============
|
||||
|
||||
* `Python 3.6 <https://www.python.org/>`_
|
||||
|
||||
* `Git Large File Storage <https://git-lfs.github.com/>`_
|
||||
|
||||
* Mac or Linux environment
|
||||
|
||||
* Go to the `build README <examples/net_framework/README.rst>`_ to start building DeepSpeech for Windows from source.
|
||||
|
||||
Getting the code
================
|
||||
|
||||
Install `Git Large File Storage <https://git-lfs.github.com/>`_ either manually or through a package-manager if available on your system. Then clone the DeepSpeech repository normally:
|
||||
|
||||
```bash
|
||||
|
||||
git clone https://github.com/mozilla/DeepSpeech
|
||||
|
||||
```
|
||||
|
||||
|
||||
Using a Pre-trained Model
=========================
|
||||
|
||||
There are three ways to use DeepSpeech inference:
|
||||
|
||||
- `The Python package <#using-the-python-package>`_
|
||||
- `The Node.JS package <#using-the-nodejs-package>`_
|
||||
- `The Command-Line client <#using-the-command-line-client>`_
|
||||
|
||||
Running `deepspeech` might require some runtime dependencies to be already installed on your system. Regardless of which bindings you are using, you will need the following:
|
||||
|
||||
* libsox2
|
||||
|
||||
* libstdc++6
|
||||
|
||||
* libgomp1
|
||||
|
||||
* libpthread
|
||||
|
||||
Please refer to your system's documentation on how to install these dependencies.
|
||||
|
||||
|
||||
CUDA dependency
---------------
|
||||
|
||||
The GPU capable builds (Python, NodeJS, C++, etc) depend on the same CUDA runtime as upstream TensorFlow. Currently with TensorFlow 1.13 it depends on CUDA 10.0 and CuDNN v7.5.
|
||||
|
||||
Getting the pre-trained model
-----------------------------
|
||||
|
||||
If you want to use the pre-trained English model for performing speech-to-text, you can download it (along with other important inference material) from the DeepSpeech `releases page <https://github.com/mozilla/DeepSpeech/releases>`_. Alternatively, you can run the following command to download and unzip the model files in your current directory:
|
||||
|
||||
```bash
|
||||
|
||||
wget https://github.com/mozilla/DeepSpeech/releases/download/v0.5.1/deepspeech-0.5.1-models.tar.gz
|
||||
|
||||
tar xvfz deepspeech-0.5.1-models.tar.gz
|
||||
|
||||
```
|
||||
|
||||
Model compatibility
-------------------
|
||||
|
||||
DeepSpeech models are versioned to keep you from trying to use an incompatible graph with a newer client after a breaking change was made to the code. If you get an error saying your model file version is too old for the client, you should either upgrade to a newer model release, re-export your model from the checkpoint using a newer version of the code, or downgrade your client if you need to use the old model and can't re-export it.
|
||||
|
||||
Using the Python package
------------------------
|
||||
|
||||
Pre-built binaries which can be used for performing inference with a trained model can be installed with `pip3`. You can then use the `deepspeech` binary to do speech-to-text on an audio file:
|
||||
|
||||
For the Python bindings, it is highly recommended that you perform the installation within a Python 3.5 or later virtual environment. You can find more information about those in `this documentation <http://docs.python-guide.org/en/latest/dev/virtualenvs/>`_.
|
||||
|
||||
We will continue under the assumption that you already have your system properly setup to create new virtual environments.
|
||||
|
||||
Create a DeepSpeech virtual environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In creating a virtual environment you will create a directory containing a `python3` binary and everything needed to run deepspeech. You can use whatever directory you want. For the purpose of the documentation, we will rely on `$HOME/tmp/deepspeech-venv`. You can create it using this command:
|
||||
|
||||
```
|
||||
|
||||
$ virtualenv -p python3 $HOME/tmp/deepspeech-venv/
|
||||
|
||||
```
|
||||
|
||||
Once this command completes successfully, the environment will be ready to be activated.
|
||||
|
||||
Activating the environment
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Each time you need to work with DeepSpeech, you have to *activate* this virtual environment. This is done with this simple command:
|
||||
|
||||
```
|
||||
|
||||
$ source $HOME/tmp/deepspeech-venv/bin/activate
|
||||
|
||||
```
|
||||
|
||||
Installing DeepSpeech Python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Once your environment has been set-up and loaded, you can use `pip3` to manage packages locally. On a fresh setup of the `virtualenv`, you will have to install the DeepSpeech wheel. You can check if `deepspeech` is already installed with `pip3 list`.
|
||||
|
||||
To perform the installation, just use `pip3` as such:
|
||||
|
||||
```
|
||||
|
||||
$ pip3 install deepspeech
|
||||
|
||||
```
|
||||
|
||||
If `deepspeech` is already installed, you can update it as such:
|
||||
|
||||
```
|
||||
|
||||
$ pip3 install --upgrade deepspeech
|
||||
|
||||
```
|
||||
|
||||
Alternatively, if you have a supported NVIDIA GPU on Linux, you can install the GPU specific package as follows:
|
||||
|
||||
```
|
||||
|
||||
$ pip3 install deepspeech-gpu
|
||||
|
||||
```
|
||||
|
||||
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
You can update `deepspeech-gpu` as follows:
|
||||
|
||||
```
|
||||
|
||||
$ pip3 install --upgrade deepspeech-gpu
|
||||
|
||||
```
|
||||
|
||||
In both cases, `pip3` should take care of installing all the required dependencies. After installation has finished, you should be able to call `deepspeech` from the command-line.
|
||||
|
||||
|
||||
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
|
||||
|
||||
```bash
|
||||
|
||||
deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio my_audio_file.wav
|
||||
|
||||
```
|
||||
|
||||
The arguments `--lm` and `--trie` are optional, and represent a language model.
|
||||
|
||||
See `client.py <native_client/python/client.py>`_ for an example of how to use the package programmatically.
|
||||
|
||||
Using the Node.JS package
-------------------------
|
||||
|
||||
You can download the Node.JS bindings using `npm`:
|
||||
|
||||
```bash
|
||||
|
||||
npm install deepspeech
|
||||
|
||||
```
|
||||
|
||||
Please note that as of now, we only support Node.JS versions 4, 5 and 6. Once `SWIG has support <https://github.com/swig/swig/pull/968>`_ we can build for newer versions.
|
||||
|
||||
Alternatively, if you're using Linux and have a supported NVIDIA GPU, you can install the GPU specific package as follows:
|
||||
|
||||
```bash
|
||||
|
||||
npm install deepspeech-gpu
|
||||
|
||||
```
|
||||
|
||||
See the `release notes <https://github.com/mozilla/DeepSpeech/releases>`_ to find which GPUs are supported. Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
See `client.js <native_client/javascript/client.js>`_ for an example of how to use the bindings. Or download the `wav example <examples/nodejs_wav>`_.
|
||||
|
||||
|
||||
Using the Command-Line client
-----------------------------
|
||||
|
||||
To download the pre-built binaries for the `deepspeech` command-line (compiled C++) client, use `util/taskcluster.py`:
|
||||
|
||||
```bash
|
||||
|
||||
python3 util/taskcluster.py --target .
|
||||
|
||||
```
|
||||
|
||||
or if you're on macOS:
|
||||
|
||||
```bash
|
||||
|
||||
python3 util/taskcluster.py --arch osx --target .
|
||||
|
||||
```
|
||||
|
||||
Also, if you need binaries different from the current master, like `v0.2.0-alpha.6`, you can use `--branch`:
|
||||
|
||||
```bash
|
||||
|
||||
python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."
|
||||
|
||||
```
|
||||
|
||||
The script `taskcluster.py` will download `native_client.tar.xz` (which includes the `deepspeech` binary and associated libraries) and extract it into the current folder. Also, `taskcluster.py` will download binaries for Linux/x86_64 by default, but you can override that behavior with the `--arch` parameter. See the help info with `python util/taskcluster.py -h` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
|
||||
|
||||
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.
|
||||
|
||||
```bash
|
||||
|
||||
./deepspeech --model models/output_graph.pbmm --alphabet models/alphabet.txt --lm models/lm.binary --trie models/trie --audio audio_input.wav
|
||||
|
||||
```
|
||||
|
||||
See the help output with `./deepspeech -h` and the `native client README <native_client/README.rst>`_ for more details.
|
||||
|
||||
Installing bindings from source
-------------------------------
|
||||
|
||||
If pre-built binaries aren't available for your system, you'll need to install them from scratch. Follow these `native_client installation instructions <native_client/README.rst>`_.
|
||||
|
||||
Third party bindings
--------------------
|
||||
|
||||
In addition to the bindings above, third party developers have started to provide bindings to other languages:
|
||||
|
||||
* `Asticode <https://github.com/asticode>`_ provides `Golang <https://golang.org>`_ bindings in its `go-astideepspeech <https://github.com/asticode/go-astideepspeech>`_ repo.

* `RustAudio <https://github.com/RustAudio>`_ provide a `Rust <https://www.rust-lang.org>`_ binding, the installation and use of which is described in their `deepspeech-rs <https://github.com/RustAudio/deepspeech-rs>`_ repo.

* `stes <https://github.com/stes>`_ provides preliminary `PKGBUILDs <https://wiki.archlinux.org/index.php/PKGBUILD>`_ to install the client and python bindings on `Arch Linux <https://www.archlinux.org/>`_ in the `arch-deepspeech <https://github.com/stes/arch-deepspeech>`_ repo.

* `gst-deepspeech <https://github.com/Elleo/gst-deepspeech>`_ provides a `GStreamer <https://gstreamer.freedesktop.org/>`_ plugin which can be used from any language with GStreamer bindings.
|
||||
|
||||
Training Your Own Model
=======================

Installing Training Prerequisites
---------------------------------
|
||||
|
||||
Install the required dependencies using `pip3`:
|
||||
|
||||
```bash
|
||||
|
||||
cd DeepSpeech
|
||||
|
||||
pip3 install -r requirements.txt
|
||||
|
||||
```
|
||||
|
||||
You'll also need to install the `ds_ctcdecoder` Python package. `ds_ctcdecoder` is required for decoding the outputs of the `deepspeech` acoustic model into text. You can use `util/taskcluster.py` with the `--decoder` flag to get a URL to a binary of the decoder package appropriate for your platform and Python version:
|
||||
|
||||
```bash
|
||||
|
||||
pip3 install $(python3 util/taskcluster.py --decoder)
|
||||
|
||||
```
|
||||
|
||||
This command will download and install the `ds_ctcdecoder` package. If you prefer building the binaries from source, see the `native_client README file <native_client/README.rst>`_. You can override the platform with `--arch` if you want the package for ARM7 (`--arch arm`) or ARM64 (`--arch arm64`).
|
||||
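For example, to fetch the decoder package for ARM64 instead of your host platform (a sketch using the `--arch` override described above):

.. code-block:: bash

   pip3 install $(python3 util/taskcluster.py --decoder --arch arm64)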
|
||||
Recommendations
---------------
|
||||
|
||||
If you have a capable (NVIDIA, at least 8GB of VRAM) GPU, it is highly recommended to install TensorFlow with GPU support. Training will be significantly faster than using the CPU. To enable GPU support, you can do:
|
||||
|
||||
```bash
|
||||
|
||||
pip3 uninstall tensorflow
|
||||
|
||||
pip3 install 'tensorflow-gpu==1.13.1'
|
||||
|
||||
```
|
||||
|
||||
Please ensure you have the required `CUDA dependency <#cuda-dependency>`_.
|
||||
|
||||
Some users have reported failures during training such as the following:
|
||||
|
||||
```
|
||||
|
||||
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
|
||||
|
||||
[[{{node tower_0/conv1d/Conv2D}}]]
|
||||
|
||||
```
|
||||
|
||||
Setting the `TF_FORCE_GPU_ALLOW_GROWTH` environment variable to `true` seems to help in such cases.
|
||||
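For example, in a bash shell, before starting training:

.. code-block:: bash

   # Ask TensorFlow to grow GPU memory allocations on demand
   export TF_FORCE_GPU_ALLOW_GROWTH=true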
|
||||
Common Voice training data
--------------------------
|
||||
|
||||
The Common Voice corpus consists of voice samples that were donated through Mozilla's `Common Voice <https://voice.mozilla.org/>`_ Initiative.
|
||||
|
||||
You can download individual CommonVoice v2.0 language data sets from `here <https://voice.mozilla.org/data>`_.
|
||||
|
||||
After extraction of such a data set, you'll find the following contents:
|
||||
|
||||
- the `*.tsv` files output by CorporaCreator for the downloaded language
|
||||
|
||||
- the mp3 audio files they reference in a `clips` sub-directory.
|
||||
|
||||
For bringing this data into a form that DeepSpeech understands, you have to run the CommonVoice v2.0 importer (`bin/import_cv2.py`):
|
||||
|
||||
```bash
|
||||
|
||||
bin/import_cv2.py --filter_alphabet path/to/some/alphabet.txt /path/to/extracted/language/archive
|
||||
|
||||
```
|
||||
|
||||
Providing a filter alphabet is optional. It will exclude all samples whose transcripts contain characters not in the specified alphabet.
|
||||
|
||||
Running the importer with `-h` will show you some additional options.
|
||||
|
||||
Once the import is done, the `clips` sub-directory will contain for each required `.mp3` an additional `.wav` file.
|
||||
|
||||
It will also add the following `.csv` files:
|
||||
|
||||
- `clips/train.csv`
|
||||
- `clips/dev.csv`
|
||||
- `clips/test.csv`
|
||||
|
||||
All entries in these CSV files refer to their samples by absolute paths. So moving this sub-directory would require another import or tweaking the CSV files accordingly.
|
||||
|
||||
To use Common Voice data during training, validation and testing, you pass (comma separated combinations of) their filenames into the `--train_files`, `--dev_files`, `--test_files` parameters of `DeepSpeech.py`.
|
||||
|
||||
If, for example, Common Voice language `en` was extracted to `../data/CV/en/`, `DeepSpeech.py` could be called like this:
|
||||
|
||||
```bash
|
||||
|
||||
./DeepSpeech.py --train_files ../data/CV/en/clips/train.csv --dev_files ../data/CV/en/clips/dev.csv --test_files ../data/CV/en/clips/test.csv
|
||||
|
||||
```
|
||||
|
||||
Training a model
----------------
|
||||
|
||||
The central (Python) script is `DeepSpeech.py` in the project's root directory. For its list of command line options, you can call:
|
||||
|
||||
```bash
|
||||
|
||||
./DeepSpeech.py --helpfull
|
||||
|
||||
```
|
||||
|
||||
To get the output of this in a slightly better-formatted way, you can also look up the option definitions at the top of `DeepSpeech.py`.
|
||||
|
||||
For executing pre-configured training scenarios, there is a collection of convenience scripts in the `bin` folder. Most of them are named after the corpora they are configured for. Keep in mind that the other speech corpora are *very large*, on the order of tens of gigabytes, and some aren't free. Downloading and preprocessing them can take a very long time, and training on them without a fast GPU (GTX 10 series recommended) takes even longer.
|
||||
|
||||
**If you experience GPU OOM errors while training, try reducing the batch size with the `--train_batch_size`, `--dev_batch_size` and `--test_batch_size` parameters.**
|
||||
|
||||
As a simple first example you can open a terminal, change to the directory of the DeepSpeech checkout and run:
|
||||
|
||||
```bash
|
||||
|
||||
./bin/run-ldc93s1.sh
|
||||
|
||||
```
|
||||
|
||||
This script will train on a small sample dataset called LDC93S1, which can be overfitted on a GPU in a few minutes for demonstration purposes. From here, you can alter any variables with regards to what dataset is used, how many training iterations are run and the default values of the network parameters.
|
||||
|
||||
Feel also free to pass additional (or overriding) `DeepSpeech.py` parameters to these scripts. Then, just run the script to train the modified network.
|
||||
|
||||
Each dataset has a corresponding importer script in `bin/` that can be used to download (if it's freely available) and preprocess the dataset. See `bin/import_librivox.py` for an example of how to import and preprocess a large dataset for training with DeepSpeech.
|
||||
|
||||
If you've run the old importers (in `util/importers/`), they could have removed source files that are needed for the new importers to run. In that case, simply remove the extracted folders and let the importer extract and process the dataset from scratch, and things should work.
|
||||
|
||||
Checkpointing
-------------
|
||||
|
||||
During training of a model so-called checkpoints will get stored on disk. This takes place at a configurable time interval. The purpose of checkpoints is to allow interruption (also in the case of some unexpected failure) and later continuation of training without losing hours of training time. Resuming from checkpoints happens automatically by just (re)starting training with the same `--checkpoint_dir` of the former run.
|
||||
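A minimal sketch (the paths and data files are illustrative):

.. code-block:: bash

   # Re-running with the same --checkpoint_dir resumes from the latest checkpoint
   python3 DeepSpeech.py --checkpoint_dir /path/to/checkpoints --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv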
|
||||
Be aware however that checkpoints are only valid for the same model geometry they had been generated from. In other words: If there are error messages of certain `Tensors` having incompatible dimensions, this is most likely due to an incompatible model change. One usual way out would be to wipe all checkpoint files in the checkpoint directory or changing it before starting the training.
|
||||
|
||||
Exporting a model for inference
-------------------------------
|
||||
|
||||
If the `--export_dir` parameter is provided, a model will have been exported to this directory during training.
|
||||
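For example, a sketch with illustrative paths:

.. code-block:: bash

   python3 DeepSpeech.py --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv --export_dir /model/export/destination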
|
||||
Refer to the corresponding `README.rst <native_client/README.rst>`_ for information on building and running a client that can use the exported model.
|
||||
|
||||
Exporting a model for TFLite
----------------------------
|
||||
|
||||
If you want to experiment with the TF Lite engine, you need to export a model that is compatible with it, then use the `--export_tflite` flags. If you already have a trained model, you can re-export it for TFLite by running `DeepSpeech.py` again and specifying the same `checkpoint_dir` that you used for training, as well as passing `--export_tflite --export_dir /model/export/destination`.
|
||||
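A sketch of such a re-export, using the flags mentioned above (paths are illustrative):

.. code-block:: bash

   python3 DeepSpeech.py --checkpoint_dir path/to/checkpoint/folder --export_tflite --export_dir /model/export/destination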
|
||||
Making a mmap-able model for inference
--------------------------------------
|
||||
|
||||
The `output_graph.pb` model file generated in the above step will be loaded in memory to be dealt with when running inference.
|
||||
|
||||
This will result in extra loading time and memory consumption. One way to avoid this is to directly read data from the disk.
|
||||
|
||||
TensorFlow has tooling to achieve this: it requires building the target `//tensorflow/contrib/util:convert_graphdef_memmapped_format` (binaries are produced by our TaskCluster for some systems including Linux/amd64 and macOS/amd64). Use the `util/taskcluster.py` tool to download it, specifying `tensorflow` as a source and `convert_graphdef_memmapped_format` as artifact.
|
||||
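A sketch of downloading the converter this way, assuming `util/taskcluster.py` accepts `--source` and `--artifact` flags as described:

.. code-block:: bash

   python3 util/taskcluster.py --source tensorflow --artifact convert_graphdef_memmapped_format --target .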
|
||||
Producing a mmap-able model is as simple as:
|
||||
|
||||
```
|
||||
|
||||
$ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm
|
||||
|
||||
```
|
||||
|
||||
Upon a successful run, it should report the conversion of a non-zero number of nodes. If it reports converting `0` nodes, something is wrong: make sure your model is a frozen one, and that you have not applied any incompatible changes (this includes `quantize_weights`).
|
||||
|
||||
Continuing training from a release model
----------------------------------------
|
||||
|
||||
If you'd like to use one of the pre-trained models released by Mozilla to bootstrap your training process (transfer learning, fine tuning), you can do so by using the `--checkpoint_dir` flag in `DeepSpeech.py`. Specify the path where you downloaded the checkpoint from the release, and training will resume from the pre-trained model.
|
||||
|
||||
For example, if you want to fine tune the entire graph using your own data in `my-train.csv`, `my-dev.csv` and `my-test.csv`, for three epochs, you can do something like the following, tuning the hyperparameters as needed:
|
||||
|
||||
```bash
|
||||
|
||||
mkdir fine_tuning_checkpoints

python3 DeepSpeech.py --n_hidden 2048 --checkpoint_dir path/to/checkpoint/folder --epochs 3 --train_files my-train.csv --dev_files my-dev.csv --test_files my-test.csv --learning_rate 0.0001
|
||||
|
||||
```
|
||||
|
||||
Note: the released models were trained with `--n_hidden 2048`, so you need to use that same value when initializing from the release models.
|
||||
|
||||
Contribution guidelines
=======================
|
||||
|
||||
This repository is governed by Mozilla's code of conduct and etiquette guidelines. For more details, please read the `Mozilla Community Participation Guidelines <https://www.mozilla.org/about/governance/policies/participation/>`_.
|
||||
|
||||
Before making a Pull Request, check your changes for basic mistakes and style problems by using a linter. We have cardboardlinter setup in this repository, so for example, if you've made some changes and would like to run the linter on just the changed code, you can use the follow command:
|
||||
|
||||
```bash
|
||||
|
||||
pip install pylint cardboardlint
|
||||
|
||||
cardboardlinter --refspec master
|
||||
|
||||
```
|
||||
|
||||
This will compare the code against master and run the linter on all the changes. We plan to introduce more linter checks (e.g. for C++) in the future. To run it automatically as a git pre-commit hook, do the following:
|
||||
|
||||
```bash
|
||||
|
||||
cat <<\EOF > .git/hooks/pre-commit
#!/bin/bash
if [ ! -x "$(command -v cardboardlinter)" ]; then
    exit 0
fi

# First, stash index and work dir, keeping only the
# to-be-committed changes in the working directory.
echo "Stashing working tree changes..." 1>&2
old_stash=$(git rev-parse -q --verify refs/stash)
git stash save -q --keep-index
new_stash=$(git rev-parse -q --verify refs/stash)

# If there were no changes (e.g., `--amend` or `--allow-empty`)
# then nothing was stashed, and we should skip everything,
# including the tests themselves. (Presumably the tests passed
# on the previous commit, so there is no need to re-run them.)
if [ "$old_stash" = "$new_stash" ]; then
    echo "No changes, skipping lint." 1>&2
    exit 0
fi

# Run tests
cardboardlinter --refspec HEAD -n auto
status=$?

# Restore changes
echo "Restoring working tree changes..." 1>&2
git reset --hard -q && git stash apply --index -q && git stash drop -q

# Exit with status from test-run: nonzero prevents commit
exit $status
EOF
chmod +x .git/hooks/pre-commit
|
||||
|
||||
```
|
||||
|
||||
This will run the linters on just the changes made in your commit.
|
||||
|
||||
Contact/Getting Help
====================
|
||||
|
||||
There are several ways to contact us or to get help:
|
||||
|
||||
1. `**FAQ** <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_ - We have a list of common questions, and their answers, in our `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_. When just getting started, it's best to first check the `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_ to see if your question is addressed.

2. `**Discourse Forums** <https://discourse.mozilla.org/c/deep-speech>`_ - If your question is not addressed in the `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_, the `Discourse Forums <https://discourse.mozilla.org/c/deep-speech>`_ is the next place to look. They contain conversations on `General Topics <https://discourse.mozilla.org/t/general-topics/21075>`_, `Using Deep Speech <https://discourse.mozilla.org/t/using-deep-speech/21076/4>`_, and `Deep Speech Development <https://discourse.mozilla.org/t/deep-speech-development/21077>`_.

3. `**IRC** <https://wiki.mozilla.org/IRC>`_ - If your question is not addressed by either the `FAQ <https://github.com/mozilla/DeepSpeech/wiki#frequently-asked-questions>`_ or `Discourse Forums <https://discourse.mozilla.org/c/deep-speech>`_, you can contact us on the ``#machinelearning`` channel on `Mozilla IRC <https://wiki.mozilla.org/IRC>`_; people there can try to answer/help.
|
||||
|
||||
4. `**Issues** <https://github.com/mozilla/deepspeech/issues>`_ - Finally, if all else fails, you can open an issue in our repo.
|
||||
|
|
@ -10,7 +10,7 @@
|
|||
"license": "MPL-2.0",
|
||||
"homepage": "https://github.com/mozilla/DeepSpeech/tree/v$(PROJECT_VERSION)#project-deepspeech",
|
||||
"files": [
|
||||
"README.md",
|
||||
"README.rst",
|
||||
"client.js",
|
||||
"index.js",
|
||||
"lib/*"
|
||||
|
|
|
@ -68,8 +68,8 @@ def main():
|
|||
|
||||
setup(name=project_name,
|
||||
description='A library for running inference on a DeepSpeech model',
|
||||
long_description=read('../../README.md'),
|
||||
long_description_content_type='text/markdown; charset=UTF-8',
|
||||
long_description=read('../../README.rst'),
|
||||
long_description_content_type='text/x-rst; charset=UTF-8',
|
||||
author='Mozilla',
|
||||
version=project_version,
|
||||
package_dir={'deepspeech': '.'},
|
||||
|
|
|
@ -1,5 +1,7 @@
|
|||
# Taskcluster
|
||||
|
||||
Taskcluster
|
||||
===========
|
||||
|
||||
This directory contains files associated with Taskcluster -- a task execution framework for Mozilla's Continuous Integration system.
|
||||
|
||||
Please consult the [existing Taskcluster documentation](https://docs.taskcluster.net/docs).
|
||||
Please consult the `existing Taskcluster documentation <https://docs.taskcluster.net/docs>`_.
|
|
@ -3,3 +3,4 @@ semver==2.8.1
|
|||
sphinx==2.2.0
|
||||
sphinx-js==2.8
|
||||
sphinx-rtd-theme==0.4.3
|
||||
pygments==2.4.2
|
||||
|
|