Fix architecture link. Page moved.
PiperOrigin-RevId: 236699510
This commit is contained in:
parent
12606ff846
commit
09ebb22af3
@@ -7,7 +7,7 @@
 TensorFlow Lite inference is the process of executing a TensorFlow Lite
 model on-device and extracting meaningful results from it. Inference is the
 final step in using the model on-device in the
-[architecture](./overview.md#tensorflow-lite-architecture).
+[architecture](./index.md#tensorflow_lite_architecture).
 
 Inference for TensorFlow Lite models is run through an interpreter. This
 document outlines the various APIs for the interpreter along with the