From 09ebb22af39b1d6f4021c6141c80734bb6815d80 Mon Sep 17 00:00:00 2001
From: Pulkit Bhuwalka
Date: Mon, 4 Mar 2019 12:08:20 -0800
Subject: [PATCH] Fix architecture link. Page moved.

PiperOrigin-RevId: 236699510
---
 tensorflow/lite/g3doc/guide/inference.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tensorflow/lite/g3doc/guide/inference.md b/tensorflow/lite/g3doc/guide/inference.md
index ff21778fc99..24ff3fdcd9a 100644
--- a/tensorflow/lite/g3doc/guide/inference.md
+++ b/tensorflow/lite/g3doc/guide/inference.md
@@ -7,7 +7,7 @@
 TensorFlow Lite inference is the process of executing a TensorFlow Lite model
 on-device and extracting meaningful results from it. Inference is the final
 step in using the model on-device in the
-[architecture](./overview.md#tensorflow-lite-architecture).
+[architecture](./index.md#tensorflow_lite_architecture).
 
 Inference for TensorFlow Lite models is run through an interpreter. This
 document outlines the various APIs for the interpreter along with the
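
The context lines in this hunk note that TensorFlow Lite inference runs through an interpreter. For reference, the basic interpreter flow in Python looks roughly like the sketch below. This is a minimal illustration, not part of the patch; the model path "model.tflite" and the random input are hypothetical placeholders, and it assumes the tf.lite.Interpreter API available in TensorFlow 1.13+.

    import numpy as np
    import tensorflow as tf

    # Load a TFLite flatbuffer and allocate its tensors.
    # "model.tflite" is a placeholder path, not part of this patch.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a random input matching the model's expected shape and dtype.
    input_data = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)
    interpreter.set_tensor(input_details[0]["index"], input_data)

    # Run inference and read the result back out.
    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]["index"])
    print(output_data)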