diff --git a/README.md b/README.md
index 9ed85feb..e10dd094 100644
--- a/README.md
+++ b/README.md
@@ -327,7 +327,7 @@ If you want to experiment with the TF Lite engine, you need to export a model th
 The `output_graph.pb` model file generated in the above step will be loaded in memory to be dealt with when running inference. This will result in extra loading time and memory consumption. One way to avoid this is to directly read data from the disk.
 
-TensorFlow has tooling to achieve this: it requires building the target `//tensorflow/contrib/util:convert_graphdef_memmapped_format` (binaries are produced by our TaskCluster for some systems including Linux/amd64 and macOS/amd64), use `util/taskcluster.py` tool to download, specifying `tensorflow` as a source.
+TensorFlow has tooling to achieve this: it requires building the target `//tensorflow/contrib/util:convert_graphdef_memmapped_format` (binaries are produced by our TaskCluster for some systems, including Linux/amd64 and macOS/amd64); use the `util/taskcluster.py` tool to download, specifying `tensorflow` as the source and `convert_graphdef_memmapped_format` as the artifact.
 
 Producing a mmap-able model is as simple as:
 ```
 $ convert_graphdef_memmapped_format --in_graph=output_graph.pb --out_graph=output_graph.pbmm