
# TFLite Model Task Evaluation

This page describes how you can check the accuracy of quantized models to verify that any degradation in accuracy is within acceptable limits.

## Accuracy & correctness

TensorFlow Lite has two types of tooling to measure how accurately a delegate behaves for a given model: Task-Based and Task-Agnostic.

**Task-Based Evaluation:** TFLite has two tools to evaluate correctness on two image-based tasks:

- ILSVRC 2012 (Image Classification) with top-K accuracy
- COCO Object Detection (with bounding boxes) with mean Average Precision (mAP)

**Task-Agnostic Evaluation:** For tasks where there isn't an established on-device evaluation tool, or if you are experimenting with custom models, TensorFlow Lite has the Inference Diff tool.

## Tools

Three different binaries are supported. A brief description of each is provided below.

### Inference Diff Tool

This binary compares TensorFlow Lite execution between two settings: single-threaded CPU inference and user-defined inference (for example, inference through a delegate).
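A typical invocation might look like the sketch below, assuming the binary is built with Bazel from the TensorFlow source root. The flag names and file paths here are illustrative assumptions; check the binary's `--help` output for the authoritative options.

```shell
# Build the inference_diff binary (run from the TensorFlow source root).
bazel build -c opt \
  //tensorflow/lite/tools/evaluation/tasks/inference_diff:run_eval

# Compare single-threaded CPU inference against delegate-based inference.
# Flag names and paths are illustrative; verify against --help.
bazel-bin/tensorflow/lite/tools/evaluation/tasks/inference_diff/run_eval \
  --model_file=/path/to/mobilenet_v1_1.0_224.tflite \
  --delegate=gpu \
  --output_file_path=/tmp/inference_diff_output.txt
```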

### Image Classification Evaluation

This binary evaluates TensorFlow Lite models trained for the ILSVRC 2012 image classification task.
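A usage sketch follows, assuming a Bazel build from the TensorFlow source root and a local copy of the ILSVRC 2012 validation set. Flag names and paths are illustrative assumptions; consult the binary's `--help` output for the actual options.

```shell
# Build the image classification evaluation binary.
bazel build -c opt \
  //tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification:run_eval

# Evaluate top-K accuracy against ILSVRC 2012 ground truth.
# Flag names and paths are illustrative; verify against --help.
bazel-bin/tensorflow/lite/tools/evaluation/tasks/imagenet_image_classification/run_eval \
  --model_file=/path/to/mobilenet_v1_1.0_224.tflite \
  --ground_truth_images_path=/path/to/ilsvrc_validation_images \
  --ground_truth_labels=/path/to/ilsvrc_validation_labels.txt \
  --model_output_labels=/path/to/model_output_labels.txt \
  --output_file_path=/tmp/classification_accuracy_output.txt
```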

### Object Detection Evaluation

This binary evaluates TensorFlow Lite models trained for the bounding box-based COCO Object Detection task.
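A usage sketch follows, assuming a Bazel build from the TensorFlow source root and COCO ground truth prepared locally. Flag names and paths are illustrative assumptions; consult the binary's `--help` output for the actual options.

```shell
# Build the COCO object detection evaluation binary.
bazel build -c opt \
  //tensorflow/lite/tools/evaluation/tasks/coco_object_detection:run_eval

# Evaluate mean Average Precision (mAP) against COCO ground truth.
# Flag names and paths are illustrative; verify against --help.
bazel-bin/tensorflow/lite/tools/evaluation/tasks/coco_object_detection/run_eval \
  --model_file=/path/to/ssd_mobilenet_v1.tflite \
  --ground_truth_images_path=/path/to/coco_images \
  --ground_truth_proto=/path/to/coco_ground_truth.pb \
  --model_output_labels=/path/to/model_output_labels.txt \
  --output_file_path=/tmp/detection_output.txt
```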


For more information, visit the Accuracy & correctness page of the TensorFlow Lite guide.