Remove dead link to "quantization".

What it pointed to previously (the TFMOT post-training docs) didn't provide useful information beyond this paragraph itself. For more on what quantization is, users can find that information as they need it, when they use the different quantization tools.

PiperOrigin-RevId: 313424121
Change-Id: Idd1014d9fcdd3ea415ee07f3630d52a96f714f39
This commit is contained in:
Alan Chiao 2020-05-27 11:06:25 -07:00 committed by TensorFlower Gardener
parent 076bbc5edf
commit 14da8c0f32


@@ -79,10 +79,9 @@ with TensorFlow Lite.
 ### Quantization
-[Quantization](https://www.tensorflow.org/model_optimization/guide/quantization)
-works by reducing the precision of the numbers used to represent a model's
-parameters, which by default are 32-bit floating point numbers. This results in
-a smaller model size and faster computation.
+Quantization works by reducing the precision of the numbers used to represent a
+model's parameters, which by default are 32-bit floating point numbers. This
+results in a smaller model size and faster computation.
 The following types of quantization are available in TensorFlow Lite:
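The idea the paragraph describes can be sketched with affine quantization, the common scheme for mapping 32-bit floats to 8-bit integers via a scale and zero point. This is an illustrative sketch, not TensorFlow Lite's actual implementation; the `scale` and `zero_point` values below are hypothetical parameters chosen for this value range.

```python
def quantize(values, scale, zero_point):
    """Map floats to int8 range using q = round(v / scale) + zero_point."""
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(qvalues, scale, zero_point):
    """Approximately recover the floats: v ~ (q - zero_point) * scale."""
    return [(q - zero_point) * scale for q in qvalues]

# Hypothetical weight values and quantization parameters for illustration.
weights = [0.0, 0.1, -0.25, 0.5]
scale, zero_point = 0.004, 0

q = quantize(weights, scale, zero_point)        # stored as 8-bit integers
approx = dequantize(q, scale, zero_point)       # values used at inference
```

Each quantized weight takes 1 byte instead of 4, which is the source of the smaller model size; integer arithmetic on the quantized values is also typically faster than 32-bit float arithmetic on mobile hardware.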