Merge pull request #2762 from reuben/remove-generate-trie-docs

Remove references to generate_trie from docs (Fixes #2761)
Reuben Morais 2020-02-16 18:09:55 +01:00 committed by GitHub
commit f27457cbbe
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
2 changed files with 6 additions and 6 deletions

@@ -156,7 +156,7 @@ also, if you need some binaries different than current master, like ``v0.2.0-alp
python3 util/taskcluster.py --branch "v0.2.0-alpha.6" --target "."
-The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary, ``generate_trie`` and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
+The script ``taskcluster.py`` will download ``native_client.tar.xz`` (which includes the ``deepspeech`` binary and associated libraries) and extract it into the current folder. Also, ``taskcluster.py`` will download binaries for Linux/x86_64 by default, but you can override that behavior with the ``--arch`` parameter. See the help info with ``python util/taskcluster.py -h`` for more details. Specific branches of DeepSpeech or TensorFlow can be specified as well.
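As a hedged illustration of the ``--arch`` override described above (the value shown is an example; run ``python util/taskcluster.py -h`` to see which architectures your checkout actually supports):

.. code-block::

   # Fetch ARM64 binaries instead of the default Linux/x86_64 build
   python3 util/taskcluster.py --arch arm64 --target "."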
Note: the following command assumes you `downloaded the pre-trained model <#getting-the-pre-trained-model>`_.

@@ -52,7 +52,7 @@ After you have installed the correct version of Bazel, configure TensorFlow:
Compile DeepSpeech
------------------
-Compile ``libdeepspeech.so`` & ``generate_trie``
+Compile ``libdeepspeech.so``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Within your TensorFlow checkout, create a symbolic link to the DeepSpeech ``native_client`` directory. Assuming DeepSpeech and TensorFlow checkouts are in the same directory, do:
@@ -62,11 +62,11 @@ Within your TensorFlow checkout, create a symbolic link to the DeepSpeech ``nati
cd tensorflow
ln -s ../DeepSpeech/native_client ./
-You can now use Bazel to build the main DeepSpeech library, ``libdeepspeech.so``\ , as well as the ``generate_trie`` binary. Add ``--config=cuda`` if you want a CUDA build.
+You can now use Bazel to build the main DeepSpeech library, ``libdeepspeech.so``\ . Add ``--config=cuda`` if you want a CUDA build.
.. code-block::
-bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
+bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic -c opt --copt=-O3 --copt="-D_GLIBCXX_USE_CXX11_ABI=0" --copt=-fvisibility=hidden //native_client:libdeepspeech.so
The generated binaries will be saved to ``bazel-bin/native_client/``.
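As a quick sanity check (a minimal sketch, assuming the build succeeded), you can confirm the library landed in the Bazel output tree:

.. code-block::

   # The shared library should appear under the package's bazel-bin directory
   ls -lh bazel-bin/native_client/libdeepspeech.so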
@@ -149,13 +149,13 @@ So your command line for ``RPi3`` and ``ARMv7`` should look like:
.. code-block::
-bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
+bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3 --config=rpi3_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so
And your command line for ``LePotato`` and ``ARM64`` should look like:
.. code-block::
-bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so //native_client:generate_trie
+bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" --config=monolithic --config=rpi3-armv8 --config=rpi3-armv8_opt -c opt --copt=-O3 --copt=-fvisibility=hidden //native_client:libdeepspeech.so
While we test only on RPi3 Raspbian Buster and LePotato ARMBian Buster, anything compatible with ``armv7-a cortex-a53`` or ``armv8-a cortex-a53`` should be fine.
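If you want to double-check which architecture a cross-built library actually targets (a hedged sketch; the exact output wording varies by system), inspect it with ``file``:

.. code-block::

   # --config=rpi3 should yield a 32-bit ARM shared object; --config=rpi3-armv8 a 64-bit aarch64 one
   file bazel-bin/native_client/libdeepspeech.so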