# TFLite Model Task Evaluation
This page describes how to check the accuracy of a quantized model and verify that any degradation is within acceptable limits.
## Accuracy & correctness
TensorFlow Lite has two types of tooling to measure how accurately a delegate behaves for a given model: Task-Based and Task-Agnostic.
**Task-Based Evaluation**

TFLite has two tools to evaluate correctness on two image-based tasks:

- ILSVRC 2012 (Image Classification) with top-K accuracy
- COCO Object Detection (with bounding boxes) with mean Average Precision (mAP)
**Task-Agnostic Evaluation**

For tasks where there isn't an established on-device evaluation tool, or if you are experimenting with custom models, TensorFlow Lite has the Inference Diff tool.
## Tools

Three binaries are supported. A brief description of each is provided below.
### Inference Diff Tool

This binary compares TensorFlow Lite execution between single-threaded CPU inference and user-defined inference (for example, inference with a delegate).
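A typical invocation might look like the sketch below. The bazel target, binary path, and flag names are assumptions based on the TensorFlow Lite source tree, and the file paths are placeholders — verify them against your checkout.

```shell
# Build the Inference Diff binary (target path is an assumption; check your checkout).
bazel build -c opt //tensorflow/lite/tools/evaluation/tasks/inference_diff:run_eval

# Compare single-threaded CPU inference against delegate-backed inference.
# The model file path below is a hypothetical placeholder.
bazel-bin/tensorflow/lite/tools/evaluation/tasks/inference_diff/run_eval \
  --model_file=/path/to/model.tflite \
  --delegate=gpu \
  --output_file_path=/tmp/inference_diff_output.txt
```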
### Image Classification Evaluation
This binary evaluates TensorFlow Lite models trained for the ILSVRC 2012 image classification task.
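As a usage sketch (the bazel target and flag names are assumptions based on the TensorFlow Lite source tree; all file paths are placeholders):

```shell
# Build the image classification evaluation binary (target path is an assumption).
bazel build -c opt \
  //tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification:run_eval

# Evaluate top-K accuracy on ILSVRC 2012 images.
# All paths below are hypothetical placeholders.
bazel-bin/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/run_eval \
  --model_file=/path/to/model.tflite \
  --ground_truth_images_path=/path/to/ilsvrc_images \
  --ground_truth_labels=/path/to/ground_truth_labels.txt \
  --model_output_labels=/path/to/model_output_labels.txt \
  --output_file_path=/tmp/accuracy_output.txt
```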
### Object Detection Evaluation
This binary evaluates TensorFlow Lite models trained for the bounding box-based COCO Object Detection task.
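A sketch of typical usage follows; the bazel target and flag names are assumptions based on the TensorFlow Lite source tree, and all file paths are placeholders:

```shell
# Build the object detection evaluation binary (target path is an assumption).
bazel build -c opt \
  //tensorflow/lite/tools/evaluation/tasks/coco_object_detection:run_eval

# Compute mean Average Precision (mAP) against COCO ground truth.
# All paths below are hypothetical placeholders.
bazel-bin/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/run_eval \
  --model_file=/path/to/model.tflite \
  --ground_truth_images_path=/path/to/coco_images \
  --ground_truth_proto=/path/to/ground_truth.pb \
  --model_output_labels=/path/to/model_output_labels.txt \
  --output_file_path=/tmp/detection_output.txt
```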
For more information, see the Accuracy & correctness page in the TensorFlow Lite guide.