- Add post_training_quantization to index under how it works

- Add post_training_quantization to devguide as an optional step
- Add GPU dev preview to index as an update
- Remove testimonials

PiperOrigin-RevId: 229823165
A. Unique TensorFlower 2019-01-17 14:54:37 -08:00 committed by TensorFlower Gardener
parent b797012b39
commit 92122d1818
3 changed files with 94 additions and 73 deletions


@@ -4,7 +4,7 @@ description: <!--no description-->
 landing_page:
   custom_css_path: /site-assets/css/style.css
   rows:
-  - heading: TensorFlow Lite is for mobile and embedded devices.
+  - heading: TensorFlow Lite is for mobile and embedded devices
     description: >
       <p style="max-width: 75%;">
         TensorFlow Lite is the official solution for running machine learning
@@ -13,9 +13,6 @@ landing_page:
         iOS, and other operating systems.
       </p>
       <style>
-        .tfo-landing-row-heading {
-          padding-top: 0 !important;
-        }
         .tfo-landing-row-heading h2 {
           margin-top: 0 !important;
         }
@@ -71,58 +68,16 @@ landing_page:
         icon_name: lens
       foreground: theme
-  - classname: devsite-landing-row-logos tfo-landing-row-heading
-    heading: Companies using TensorFlow Lite
-    items:
-    - custom_image:
-        path: ./images/landing-page/photos_logo.png
-      path: https://www.photos.google.com
-    - custom_image:
-        path: ./images/landing-page/gboard_logo.png
-      path: https://play.google.com/store/apps/details?id=com.google.android.inputmethod.latin&hl=en_US
-    - custom_image:
-        path: ./images/landing-page/gmail_logo.png
-      path: https://www.google.com/gmail/
-    - custom_image:
-        path: ./images/landing-page/assistant_logo.png
-      path: https://assistant.google.com/
-  - classname: devsite-landing-row-logos
-    items:
-    - custom_image:
-        path: ./images/landing-page/vsco_logo.png
-      path: https://vsco.co
-    - custom_image:
-        path: ./images/landing-page/shazam_logo.png
-      path: https://www.shazam.com/
-    - custom_image:
-        path: ./images/landing-page/nest_logo.png
-      path: https://nest.com/
-    - custom_image:
-        path: ./images/landing-page/loseit_logo.png
-      path: https://www.loseit.com/
-  - classname: devsite-landing-row-no-image-background devsite-landing-row-67
-    background: grey
-    items:
-    - description: >
-        <em>“TensorFlow Lite helped us introduce machine learning and AI into our
-        app in an easy and streamlined way. We could reduce the size of our
-        models while keeping the accuracy high. This helped us create an amazing
-        fishing experience for our users by allowing them to identify any fish
-        species with just a photo.”</em>
-      image_path: ./images/landing-page/fishbrain_logo_big.png
   - heading: How it works
     items:
-    - heading: Build
+    - heading: Pick a model
       icon:
         icon_name: build
       description: >
-        Build a new model or retrain an existing one, such as using transfer learning.
+        Pick a new model or retrain an existing one.
       buttons:
-      - label: Read the developer guide
-        path: /lite/devguide
+      - label: Pick
+        path: /lite/devguide#1_choose_a_model
         classname: button button-primary tfo-button-primary
     - heading: Convert
       icon:
@@ -131,18 +86,29 @@ landing_page:
         Convert a TensorFlow model into a compressed flat buffer with the
         TensorFlow Lite Converter.
       buttons:
-      - label: Read the converter guide
-        path: /lite/convert/
+      - label: Convert
+        path: /lite/devguide#2_convert_the_model_format
         classname: button button-primary tfo-button-primary
     - heading: Deploy
+      icon:
+        icon_name: settings_cell
+      description: >
+        Take the compressed <code>.tflite</code> file and load it into a mobile or embedded device.
+      buttons:
+      - label: Deploy
+        path: /lite/devguide#3_use_the_tensorflow_lite_model_for_inference_in_a_mobile_app
+        classname: button button-primary tfo-button-primary
+    - heading: Optimize
       icon:
         icon_name: bolt
       description: >
-        Take the compressed <code>.tflite</code> file and load it into a mobile
-        or embedded device.<br/>
-        See the <a href="#build-your-first-tensorflow-lite-app">tutorials below</a> to build an app.
+        [optional] Quantize by converting 32-bit floats to more efficient 8-bit integers or run on GPU.
+      buttons:
+      - label: Optimize
+        path: /lite/devguide#4_optimize_your_model_optional
+        classname: button button-primary tfo-button-primary
-  - heading: Build your first TensorFlow Lite app
+  - heading: Build your first TensorFlow Lite app with Codelabs
     background: grey
     items:
     - classname: tfo-landing-row-item-inset-white
@@ -160,28 +126,40 @@ landing_page:
         We love to hear what you're working on—it may even get highlighted on
         our social media! <a href="https://groups.google.com/a/tensorflow.org/forum/#!forum/discuss" class="external">Tell us</a>.
-  - classname: devsite-landing-row-no-image-background devsite-landing-row-67
+  - classname: devsite-landing-row-logos tfo-landing-row-heading
+    heading: TensorFlow Lite users
     items:
-    - description: >
-        <p>
-        <em>“The release of TensorFlow Lite has allowed us to deploy an engaging
-        real-time experience to our users that eliminates the requirement
-        for a data connection. TensorFlow Lite's ability to compress and
-        optimize the TensorFlow graph for mobile deployment has been
-        transformative in expanding the capabilities of Snap It.”</em>
-        </p>
-        <p>
-        <em>“Through TensorFlow Lite, our users can now enjoy a state of the
-        art, computer-vision-based food logging experience without worrying
-        about signal strength. We look forward to future collaborations
-        with the TensorFlow Lite team.”</em>
-        </p>
-      image_path: ./images/landing-page/loseit_logo_big.png
+    - custom_image:
+        path: ./images/landing-page/photos_logo.png
+    - custom_image:
+        path: ./images/landing-page/gboard_logo.png
+    - custom_image:
+        path: ./images/landing-page/gmail_logo.png
+    - custom_image:
+        path: ./images/landing-page/assistant_logo.png
+  - classname: devsite-landing-row-logos
+    items:
+    - custom_image:
+        path: ./images/landing-page/vsco_logo.png
+    - custom_image:
+        path: ./images/landing-page/shazam_logo.png
+    - custom_image:
+        path: ./images/landing-page/nest_logo.png
+    - custom_image:
+        path: ./images/landing-page/loseit_logo.png
   - classname: devsite-landing-row-cards
     background: grey
     heading: Updates
     items:
+    - heading: "TensorFlow Lite Now Faster with Mobile GPUs (Developer Preview)"
+      image_path: ./images/landing-page/facial_contour_detection.png
+      path: https://medium.com/tensorflow/tensorflow-lite-now-faster-with-mobile-gpus-developer-preview-e15797e6dee7
+      buttons:
+      - label: Read more
+        path: https://medium.com/tensorflow/tensorflow-lite-now-faster-with-mobile-gpus-developer-preview-e15797e6dee7
     - heading: "AI in motion: react in the real world"
       image_path: ./images/landing-page/ai_in_motion.png
       path: https://cloud.google.com/blog/products/ai-machine-learning/ai-motion-designing-simple-system-see-understand-and-react-real-world-part-ii


@@ -180,7 +180,6 @@ bazel run tensorflow/lite/tools:visualize -- model.tflite model_viz.html
 This generates an interactive HTML page listing subgraphs, operations, and a
 graph visualization.
 
 ## 3. Use the TensorFlow Lite model for inference in a mobile app
 
 After completing the prior steps, you should now have a `.tflite` model file.
@@ -221,3 +220,47 @@ devices. To use the converter, refer to the
 Compile Tensorflow Lite for a Raspberry Pi by following the
 [RPi build instructions](rpi.md). This compiles a static library file (`.a`) used
 to build your app. There are plans for Python bindings and a demo app.
## 4. Optimize your model (optional)

There are two options. If you plan to run on CPU, we recommend that you quantize
your weights and activation tensors. If the hardware is available, another
option is to run on GPU for massively parallelizable workloads.
### Quantization

Compress your model size by lowering the precision of the parameters (i.e.
neural network weights) from their training-time 32-bit floating-point
representations to much smaller, more efficient 8-bit integer ones.
This executes the heaviest computations in lower precision but keeps the
most sensitive ones in higher precision, typically resulting in little to
no loss of final accuracy for the task, yet a significant speed-up over pure
floating-point execution.
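
As a rough illustration of the idea (a toy sketch, not TensorFlow Lite's actual
quantization scheme), float weights can be mapped to 8-bit integers with a
single per-tensor scale factor:

```python
# Toy sketch of symmetric 8-bit quantization. Helper names are illustrative,
# not part of the TensorFlow Lite API.

def quantize(weights):
    """Map float weights onto integers in [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the quantized values."""
    return [q * scale for q in q_weights]

weights = [0.12, -0.5, 0.33, 0.99, -0.75]
q, scale = quantize(weights)
recovered = dequantize(q, scale)

# Each weight now fits in 1 byte instead of 4; the per-weight rounding
# error is bounded by scale / 2.
max_error = max(abs(w, ) - abs(r) if False else abs(w - r) for w, r in zip(weights, recovered))
print(q)
print(max_error)
```

The rounding error per weight is at most half the scale step, which is why
accuracy losses are usually small.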

The post-training quantization technique is integrated into the TensorFlow Lite
conversion tool. Getting started is easy: after building your TensorFlow model,
simply enable the `post_training_quantize` flag in the TensorFlow Lite
conversion tool. Assuming that the saved model is stored in `saved_model_dir`,
the quantized tflite flatbuffer can be generated as follows:

```python
import tensorflow as tf

converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)
```
Read the full documentation [here](performance/post_training_quantization) and see a tutorial [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/tutorials/post_training_quant.ipynb).

### GPU

GPUs are designed for high throughput on massively parallelizable workloads.
Thus, they are well-suited for deep neural nets, which consist of a huge number
of operators, each working on some input tensor(s) that can easily be divided
into smaller workloads and carried out in parallel, typically resulting in
lower latency.
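
As a toy CPU-side illustration of this divisibility (hypothetical helper names;
real GPU execution is handled by the TensorFlow Lite GPU delegate), an
elementwise op like ReLU splits into independent chunks that can be processed
concurrently:

```python
# Illustration only, not GPU code: elementwise neural-net ops parallelize
# well because each chunk of the tensor is independent of the others.
from concurrent.futures import ThreadPoolExecutor

def relu_chunk(chunk):
    """Apply ReLU to one slice of the tensor; no chunk depends on another."""
    return [max(0.0, x) for x in chunk]

def parallel_relu(tensor, num_workers=4):
    # Split the tensor into roughly equal chunks, one per worker.
    step = (len(tensor) + num_workers - 1) // num_workers
    chunks = [tensor[i:i + step] for i in range(0, len(tensor), step)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = pool.map(relu_chunk, chunks)
    # Reassemble the chunks in their original order.
    return [x for chunk in results for x in chunk]

print(parallel_relu([-1.0, 2.0, -3.0, 4.0]))  # [0.0, 2.0, 0.0, 4.0]
```

A GPU does the same thing at far finer granularity, with thousands of threads
instead of a handful of workers.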

Another benefit of GPU inference is power efficiency: GPUs carry out these
computations in a highly optimized manner, so they consume less power and
generate less heat than the same task run on a CPU.
Read the tutorial [here](performance/gpu) and full documentation [here](performance/gpu_advanced).

Binary file not shown (image added, 288 KiB).