Object detection

Detect multiple objects with bounding boxes. Yes, dogs and cats too.

Download starter model and labels

Tutorials (coming soon)

iOS Android

What is object detection?

Given an image or a video stream, an object detection model can identify which of a known set of objects might be present and provide information about their positions within the image.

For example, this screenshot of our object detection sample app shows how several objects have been recognized and their positions annotated:

TODO: Insert image

An object detection model is trained to detect the presence and location of multiple classes of objects. For example, a model might be trained with images that contain various pieces of computer hardware, along with a label that specifies the class of hardware they represent (e.g. a laptop, a keyboard, or a monitor), and data specifying where each object appears in the image.

When we subsequently provide an image to the model, it will output a list of the objects it detects, the location of a bounding box that contains each object, and a score that indicates the confidence that the detection was correct.

Model output

| Class    | Score | Location             |
|----------|-------|----------------------|
| Laptop   | 0.92  | [18, 21, 57, 63]     |
| Keyboard | 0.88  | [100, 30, 180, 150]  |
| Monitor  | 0.87  | [7, 82, 89, 163]     |
| Keyboard | 0.23  | [42, 66, 57, 83]     |
| Monitor  | 0.11  | [6, 42, 31, 58]      |

Confidence score

To interpret these results, we can look at the score and the location for each detected object. The score is a number between 0 and 1 that indicates confidence that the object was genuinely detected. The closer the number is to 1, the more confident the model is.

Depending on your application, you can decide a cut-off threshold below which you will discard detection results. For our example, we might decide a sensible cut-off is a score of 0.5 (meaning a 50% probability that the detection is valid). In that case, we would ignore the last two objects in the array, because those confidence scores are below 0.5:

| Class    | Score | Location             |
|----------|-------|----------------------|
| Laptop   | 0.92  | [18, 21, 57, 63]     |
| Keyboard | 0.88  | [100, 30, 180, 150]  |
| Monitor  | 0.87  | [7, 82, 89, 163]     |
| ~~Keyboard~~ | ~~0.23~~ | ~~[42, 66, 57, 83]~~ |
| ~~Monitor~~  | ~~0.11~~ | ~~[6, 42, 31, 58]~~  |

The cut-off you use should be based on whether you are more comfortable with false positives (objects that are wrongly identified, or areas of the image that are erroneously identified as objects when they are not), or false negatives (genuine objects that are missed because their confidence was low).
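The thresholding described above can be sketched in a few lines of Python. The detection tuples below mirror the example table; the threshold value and the `filter_detections` helper are illustrative, not part of the TensorFlow Lite API:

```python
# Hypothetical detection results shaped like the table above: each entry
# pairs a class label with a confidence score and a
# [top, left, bottom, right] bounding box.
detections = [
    ("Laptop",   0.92, [18, 21, 57, 63]),
    ("Keyboard", 0.88, [100, 30, 180, 150]),
    ("Monitor",  0.87, [7, 82, 89, 163]),
    ("Keyboard", 0.23, [42, 66, 57, 83]),
    ("Monitor",  0.11, [6, 42, 31, 58]),
]

SCORE_THRESHOLD = 0.5  # cut-off chosen for this example

def filter_detections(results, threshold=SCORE_THRESHOLD):
    """Keep only detections whose confidence meets the threshold."""
    return [(label, score, box) for label, score, box in results
            if score >= threshold]

kept = filter_detections(detections)
# The two low-confidence results (0.23 and 0.11) are discarded.
```

Raising the threshold trades false positives for false negatives, as discussed above.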

TODO: Insert screenshot showing both

Location

For each detected object, the model will return an array of four numbers representing a bounding rectangle that surrounds its position. The numbers are ordered as follows:

[ top, left, bottom, right ]

The top value represents the distance of the rectangle's top edge from the top of the image, in pixels. The left value represents the left edge's distance from the left of the input image. The bottom and right values represent the bottom and right edges in the same way.

Note: Object detection models accept input images of a specific size. This is likely to be different from the size of the raw image captured by your device's camera, and you will have to write code to crop and scale your raw image to fit the model's input size (there are examples of this in our sample code).

The pixel values output by the model refer to the position in the cropped and scaled image, so you must scale them to fit the raw image in order to interpret them correctly.
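One way to map coordinates back to the raw image is a simple rescale, sketched below. This assumes the raw image was only resized (not cropped) to the model's 300x300 input; the `scale_box` helper is illustrative, not part of the TensorFlow Lite API:

```python
MODEL_INPUT_SIZE = 300  # input size of the starter model

def scale_box(box, raw_width, raw_height, input_size=MODEL_INPUT_SIZE):
    """Map a [top, left, bottom, right] box from model-input pixel
    coordinates back to coordinates in the raw image, assuming the raw
    image was resized (without cropping) to input_size x input_size."""
    top, left, bottom, right = box
    sx = raw_width / input_size   # horizontal scale factor
    sy = raw_height / input_size  # vertical scale factor
    return [round(top * sy), round(left * sx),
            round(bottom * sy), round(right * sx)]

# A 1200x900 raw image scaled down to 300x300 for inference:
# sx = 4, sy = 3, so [30, 60, 150, 240] maps to [90, 240, 450, 960].
scaled = scale_box([30, 60, 150, 240], raw_width=1200, raw_height=900)
```

If your preprocessing crops the image as well, you would also need to offset the coordinates by the crop origin before scaling.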

Uses and limitations

The object detection model we provide can identify and locate up to 10 objects in an image. It is trained to recognize 80 classes of object. For a full list of classes, see the labels file in the model zip.

If you want to train a model to recognize new classes, see Customize model.

For the following use cases, you should use a different type of model:

  • Predicting which single label the image most likely represents (see image classification)
  • Predicting the composition of an image, for example subject versus background (see segmentation)

Get started

If you are new to TensorFlow Lite and are working with Android or iOS, we recommend following the corresponding tutorial that will walk you through our sample code.

iOS Android

If you are using a platform other than Android or iOS, or you are already familiar with the TensorFlow Lite APIs, you can download our starter object detection model and the accompanying labels.

Download starter model and labels

The model will return 10 detection results...

Starter model

We recommend starting to implement object detection using the quantized COCO SSD MobileNet v1 model, available with labels from this download link:

Download starter model and labels

Input

The model takes an image as input. The expected image is 300x300 pixels, with three channels (red, blue, and green) per pixel. This should be fed to the model as a flattened buffer of 270,000 byte values (300x300x3). Since the model is quantized, each value should be a single byte representing a value between 0 and 255.
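As a rough sketch of how such a buffer can be built, the function below does a nearest-neighbour resize of an RGB image into a flat 300x300x3 byte buffer. It is a minimal stdlib-only illustration; in practice you would use your platform's image APIs or a library, and the `to_input_buffer` name and pixel-list representation are assumptions for this example:

```python
def to_input_buffer(pixels, src_w, src_h, size=300):
    """Nearest-neighbour resize of an RGB image (a row-major list of
    (r, g, b) tuples) into a flat buffer of size*size*3 uint8 values,
    as expected by the quantized model."""
    buf = bytearray()
    for y in range(size):
        sy = y * src_h // size          # nearest source row
        for x in range(size):
            sx = x * src_w // size      # nearest source column
            r, g, b = pixels[sy * src_w + sx]
            buf += bytes((r, g, b))
    return bytes(buf)

# A uniform grey 2x2 source image expands to a full 300x300x3 buffer
# of 270,000 byte values:
buf = to_input_buffer([(128, 128, 128)] * 4, src_w=2, src_h=2)
```

Because the model is quantized, no float conversion or normalization is needed; the raw 0-255 byte values are fed in directly.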

Output

The model outputs four arrays, mapped to the indices 0-3. Arrays 0, 1, and 2 describe 10 detected objects, with one element in each array corresponding to each object. There will always be 10 objects detected.

| Index | Name | Description |
|-------|------|-------------|
| 0 | Locations | Multidimensional array of [10][4] floating point values between 0 and 1, the inner arrays representing bounding boxes in the form [top, left, bottom, right] |
| 1 | Classes | Array of 10 integers (output as floating point values) each indicating the index of a class label from the labels file |
| 2 | Scores | Array of 10 floating point values between 0 and 1 representing probability that a class was detected |
| 3 | Number of detections | Array of length 1 containing a floating point value expressing the total number of detection results |
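Combining the four output arrays into usable detections can be sketched as below. The `parse_outputs` helper and the stand-in label list are assumptions for this example; only the array shapes come from the table above:

```python
def parse_outputs(locations, classes, scores, count, labels, threshold=0.5):
    """Combine the model's four output arrays into readable detections.

    locations: [10][4] floats in [0, 1], each [top, left, bottom, right]
    classes:   10 floats, each the index of a label in the labels file
    scores:    10 floats in [0, 1]
    count:     length-1 array holding the number of detection results
    """
    results = []
    for i in range(int(count[0])):
        if scores[i] >= threshold:
            results.append({
                "label": labels[int(classes[i])],
                "score": scores[i],
                # Coordinates are still relative (0-1); multiply by the
                # raw image dimensions before drawing.
                "box": locations[i],
            })
    return results

# Illustrative stand-ins for the real labels file:
labels = ["person", "bicycle", "car"]
parsed = parse_outputs(
    locations=[[0.1, 0.2, 0.5, 0.6], [0.0, 0.0, 1.0, 1.0]] + [[0, 0, 0, 0]] * 8,
    classes=[2.0, 0.0] + [0.0] * 8,
    scores=[0.9, 0.3] + [0.0] * 8,
    count=[10.0],
    labels=labels,
)
# Only the score-0.9 "car" survives the 0.5 threshold.
```

Note that unlike the pixel coordinates discussed earlier, these locations are relative values between 0 and 1, so they must be multiplied by the raw image's width and height.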

Customize model

The pre-trained models we provide are trained to detect 80 classes of object. For a full list of classes, see the labels file in the model zip.

You can use a technique known as transfer learning to re-train a model to recognize classes not in the original set. For example, you could re-train the model to detect multiple types of vegetable, despite there only being one vegetable in the original training data. To do this, you will need a set of training images for each of the new labels you wish to train.

Learn how to perform transfer learning in the Training and serving a real-time mobile object detector in 30 minutes blog post.

Read more about this

  • Blog post:
  • Object detection GitHub: