API Updates
This page describes updates made to the `tf.lite.TFLiteConverter` Python API
in TensorFlow 2.x.
Note: If any of the changes raise concerns, please file a GitHub issue.
- TensorFlow 2.3
  - Support integer (previously, only float) input/output types for integer
    quantized models using the new `inference_input_type` and
    `inference_output_type` attributes. See the first sketch after this list
    for example usage.
  - Support conversion and resizing of models with dynamic dimensions (second
    sketch after this list).
  - Added a new experimental quantization mode with 16-bit activations and
    8-bit weights (third sketch after this list).
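A minimal sketch of the new integer input/output attributes, assuming a
trained Keras model `model` and a calibration generator
`representative_dataset` (both placeholders, not part of this page):

```python
import tensorflow as tf

# Placeholders: `model` is a trained tf.keras model and
# `representative_dataset` is a generator yielding calibration inputs.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to integer-only builtin ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# New in TF 2.3: integer (rather than float) input/output tensors.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```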
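A sketch of resizing a dynamic input dimension at inference time with the
Python interpreter; `tflite_model` and the shapes are illustrative:

```python
import numpy as np
import tensorflow as tf

# Placeholder: `tflite_model` was converted from a model whose first
# (batch) dimension is dynamic.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_index = interpreter.get_input_details()[0]["index"]
# Resize the dynamic batch dimension from 1 to 4, then re-allocate tensors.
interpreter.resize_tensor_input(input_index, [4, 224, 224, 3])
interpreter.allocate_tensors()
interpreter.set_tensor(input_index, np.zeros((4, 224, 224, 3), np.float32))
interpreter.invoke()
```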
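And a sketch of the experimental 16x8 quantization mode; again `model` and
`representative_dataset` are placeholders:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)  # placeholder
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset  # placeholder
# Experimental mode: 16-bit activations with 8-bit weights.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
]
tflite_model = converter.convert()
```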
- TensorFlow 2.2
  - By default, leverage MLIR-based conversion, Google's cutting-edge compiler
    technology for machine learning. This enables conversion of new classes of
    models, including Mask R-CNN and Mobile BERT, and supports models with
    functional control flow (see the sketch below).
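For example, a hedged sketch of converting a function that uses functional
control flow (`tf.cond`), which the MLIR-based converter can handle; the
function itself is illustrative:

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[1], dtype=tf.float32)])
def control_flow_model(x):
  # Functional control flow: branches selected by tf.cond.
  return tf.cond(x[0] > 0.0, lambda: x * 2.0, lambda: x + 1.0)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [control_flow_model.get_concrete_function()])
tflite_model = converter.convert()
```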
- TensorFlow 2.0 vs TensorFlow 1.x
  - Renamed the `target_ops` attribute to `target_spec.supported_ops` (first
    sketch at the end of this section).
  - Removed the following attributes:
    - quantization: `inference_type`, `quantized_input_stats`,
      `post_training_quantize`, `default_ranges_stats`,
      `reorder_across_fake_quant`, `change_concat_input_ranges`,
      `get_input_arrays()`. Instead, quantization-aware training is supported
      through the `tf.keras` API, and post-training quantization uses fewer
      attributes (second sketch at the end of this section).
    - visualization: `output_format`, `dump_graphviz_dir`,
      `dump_graphviz_video`. Instead, the recommended approach for visualizing
      a TensorFlow Lite model is to use visualize.py.
    - frozen graphs: `drop_control_dependency`, as frozen graphs are
      unsupported in TensorFlow 2.x.
  - Removed other converter APIs such as `tf.lite.toco_convert` and
    `tf.lite.TocoConverter`.
  - Removed other related APIs such as `tf.lite.OpHint` and
    `tf.lite.constants` (the `tf.lite.constants.*` types have been mapped to
    `tf.*` TensorFlow data types, to reduce duplication).
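A before/after sketch of the `target_ops` rename; the saved-model path is
hypothetical:

```python
import tensorflow as tf

# Hypothetical saved-model directory.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")

# TensorFlow 1.x (removed):
#   converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]

# TensorFlow 2.x equivalent:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
tflite_model = converter.convert()
```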
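And a sketch of post-training (dynamic-range) quantization in TF 2.x, which
replaces the removed TF 1.x quantization attributes with a single
`optimizations` flag; the path is again hypothetical:

```python
import tensorflow as tf

# Hypothetical saved-model directory.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/saved_model")
# Replaces inference_type, quantized_input_stats, post_training_quantize, etc.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```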