diff --git a/tensorflow/lite/g3doc/guide/inference.md b/tensorflow/lite/g3doc/guide/inference.md
index 24ff3fdcd9a..1d57c055978 100644
--- a/tensorflow/lite/g3doc/guide/inference.md
+++ b/tensorflow/lite/g3doc/guide/inference.md
@@ -50,15 +50,15 @@
 On Android, TensorFlow Lite inference can be performed using either Java or
 C++ APIs. The Java APIs provide convenience and can be used directly within
 your Android Activity classes. The C++ APIs on the other hand may offer more
 flexibility and speed, but may require writing JNI wrappers to move data between
-Java and C++ layers. You can find an example [here](./demo_android.md)
+Java and C++ layers. You can find an example [here](./android.md).
 
 #### iOS
 
 TensorFlow Lite provides Swift/Objective C++ APIs for inference on iOS. An
-example can be found [here](./demo_ios.md)
+example can be found [here](./ios.md).
 
 #### Linux
 
-On Linux platforms such as [Raspberry Pi](./rpi.md), TensorFlow Lite C++ and
-Python APIs can be used to run inference.
+On Linux platforms such as [Raspberry Pi](./build_rpi.md), TensorFlow Lite C++
+and Python APIs can be used to run inference.
 
 ## API Guides
 
@@ -68,9 +68,10 @@
 experimental bindings for several other languages (C, Swift, Objective-C). In
 most cases, the API design reflects a preference for performance over ease of
 use. TensorFlow Lite is designed for fast inference on small devices so it
 should be no surprise that the APIs try to avoid unnecessary copies at the
-expense of convenience. Similarly, consistency with TensorFlow APIs was not an explicit goal and some variance is to be expected.
+expense of convenience. Similarly, consistency with TensorFlow APIs was not an
+explicit goal and some variance is to be expected.
 
-There is also a [Python API for TensorFlow Lite](./convert/python_api.md).
+There is also a [Python API for TensorFlow Lite](./../convert/python_api.md).
 
 ### Loading a Model
@@ -202,9 +203,10 @@ interpreter.runForMultipleInputsOutputs(inputs, map_of_indices_to_outputs);
 where each entry in `inputs` corresponds to an input tensor and
 `map_of_indices_to_outputs` maps indices of output tensors to the corresponding
 output data. In both cases the tensor indices should correspond to the values
-given to the [TensorFlow Lite Optimized Converter](convert/cmdline_examples.md)
-when the model was created. Be aware that the order of tensors in `input` must
-match the order given to the `TensorFlow Lite Optimized Converter`.
+given to the
+[TensorFlow Lite Optimized Converter](./../convert/cmdline_examples.md) when the
+model was created. Be aware that the order of tensors in `input` must match the
+order given to the `TensorFlow Lite Optimized Converter`.
 
 The Java API also provides convenient functions for app developers to get the
 index of any model input or output using a tensor name:
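The last hunk documents `Interpreter.runForMultipleInputsOutputs(Object[], Map<Integer, Object>)`, where input order and output indices must match the converted model. A minimal sketch of how the two argument structures are assembled, assuming a hypothetical model with two inputs and one output; the tensor shapes and the `interpreter` variable are illustrative, so the actual TFLite call is left as a comment:

```java
import java.util.HashMap;
import java.util.Map;

public class MultiIOExample {
    // Builds the inputs array. The order of entries must match the tensor
    // order given to the TensorFlow Lite converter when the model was created.
    static Object[] buildInputs(float[][] imageInput, int[] lengthInput) {
        return new Object[] {imageInput, lengthInput};
    }

    // Builds the map from output tensor index to a pre-allocated buffer
    // that the interpreter fills in during inference.
    static Map<Integer, Object> buildOutputs(float[][] scores) {
        Map<Integer, Object> outputs = new HashMap<>();
        outputs.put(0, scores); // output tensor index 0 -> backing buffer
        return outputs;
    }

    public static void main(String[] args) {
        // Hypothetical shapes: a [1, 224] float input and a scalar int input.
        Object[] inputs = buildInputs(new float[1][224], new int[] {224});
        Map<Integer, Object> outputs = buildOutputs(new float[1][10]);

        // With a real model loaded into an org.tensorflow.lite.Interpreter,
        // inference would then be:
        // interpreter.runForMultipleInputsOutputs(inputs, outputs);

        System.out.println(inputs.length + " inputs, " + outputs.size() + " outputs");
    }
}
```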