Fix architecture link. Page moved.

PiperOrigin-RevId: 236699510
Pulkit Bhuwalka 2019-03-04 12:08:20 -08:00 committed by TensorFlower Gardener
parent 12606ff846
commit 09ebb22af3


@@ -7,7 +7,7 @@
 TensorFlow Lite inference is the process of executing a TensorFlow Lite
 model on-device and extracting meaningful results from it. Inference is the
 final step in using the model on-device in the
-[architecture](./overview.md#tensorflow-lite-architecture).
+[architecture](./index.md#tensorflow_lite_architecture).
 Inference for TensorFlow Lite models is run through an interpreter. This
 document outlines the various APIs for the interpreter along with the