Commit Graph

64 Commits

Author SHA1 Message Date
TensorFlower Gardener
3d868aa1c6 Merge pull request from wwwind:interface_16x8
PiperOrigin-RevId: 317232781
2020-06-18 19:47:52 -07:00
Elena Zhelezina
dcfc2175c7 Small change of comment per reviewer's note.
Change-Id: I1233b95282befebfa0e6c06173f5e928aef60b22
2020-06-09 22:35:57 +01:00
Elena Zhelezina
507c754931 Fix for pylint errors.
Change-Id: Idd96d7a41fd459c86ab0f6fbb63e5d543509145d
2020-06-09 17:47:16 +01:00
Elena Zhelezina
dbc7faeecd Addressed reviewer's comment.
Change-Id: I5bda332514d8070731b807b750ee7a423d6b4d78
2020-06-08 20:20:22 +01:00
Elena Zhelezina
9573987e54 Merge branch 'upstream/master' into interface_16x8
Change-Id: Ieb5783a0f182c92f003e7ae53da87ffbc2d62035
2020-06-08 17:03:06 +01:00
Elena Zhelezina
761d850ac6 Renamed option with the prefix EXPERIMENTAL_.
Change-Id: Idb84736507d5c07ebdf182b8a15d55906d0d7fc0
2020-06-03 17:56:39 +01:00
Feng Liu
2fc2651747 Expose the fully_quantize flag for the new mlir quantizer
PiperOrigin-RevId: 314547190
Change-Id: I14dfb095eefb5f9a565f726eb1ea760a8d0129b7
2020-06-03 09:42:47 -07:00
Elena Zhelezina
706dc11f1d Merge branch 'master' into interface_16x8
2020-06-02 10:44:46 +01:00
Jared Duke
869920697b [tf.lite] Use in-process conversion when the new converter is used
Out-of-process conversion was a workaround for the legacy converter,
which would generally crash the process when conversion failed. However,
out-of-process conversion also adds a good deal of complexity, so avoid
it when using the new conversion backend.

PiperOrigin-RevId: 312142994
Change-Id: I7ddc83df99ccf24be6e15f46d6a116dce8321933
2020-05-18 13:33:05 -07:00
Feng Liu
d33cb73389 Expose inference type in the mlir quantizer
This prepares for the 16-bit activation quantization release. The data type
specified by this flag is applied only to the activations.

PiperOrigin-RevId: 311478782
Change-Id: I5f63f0508011cc0b1b47a0debb35c17d3284eae9
2020-05-13 23:55:06 -07:00
Elena Zhelezina
0391c064f5 Merge branch 'upstream/master' into interface_16x8
2020-05-08 11:35:26 +01:00
A. Unique TensorFlower
5be613ef4f Expose disable_per_channel in MLIR to be used experimentally by tflite tooling
PiperOrigin-RevId: 310201122
Change-Id: I3fb460a182a23ae1cacb7f346d756a6e36eee748
2020-05-06 12:12:53 -07:00
Yunlu Li
7653317576 Use toco_pybind to invoke the model sparsification logic, for pip package size purposes.
PiperOrigin-RevId: 308133900
Change-Id: If33c8319cfff6d9a618a17dc90f3e2c33333132c
2020-04-23 14:50:54 -07:00
Elena Zhelezina
e7b615dc2e Merge branch 'master' into interface_16x8
2020-04-17 10:15:23 +01:00
Jaesung Chung
49b16040a7 Add verification code for input and output tensors in SavedModel importer
- Verify the given input and output names in the tf.entry_function in MLIR.
- Use input and output names with a colon in the saved model path.

PiperOrigin-RevId: 305810470
Change-Id: Id7f56ba216db2b60e6e1a11dbbcc0761a66b4635
2020-04-09 19:48:11 -07:00
Elena Zhelezina
8ea0aadd76 Merge branch 'master' into interface_16x8
2020-04-07 10:08:33 +01:00
Feng Liu
664d5b33bd Support _experimental_new_quantizer in the converter
If _experimental_new_quantizer is enabled, the converter will call the
calibration-only API of the post-training quantization calibrator, and then
invoke the MLIR model quantizer API to quantize the model.

The MLIR model quantizer is added via the toco pybind to reduce the binary size.

PiperOrigin-RevId: 305160381
Change-Id: Ib9f49dd36f3533f2e41b0565bbecb6591452c60c
2020-04-06 18:21:11 -07:00
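The calibrate-then-quantize flow described in the commit above collects per-tensor ranges from representative inputs before the quantizer assigns parameters. A toy sketch of that calibration idea in plain Python (illustrative only; the real flow goes through the post-training quantization calibrator and the MLIR quantizer, and the `Calibrator` class here is hypothetical):

```python
class Calibrator:
    """Toy calibration pass: tracks per-tensor min/max over representative inputs."""

    def __init__(self):
        self.stats = {}  # tensor name -> (min, max) seen so far

    def observe(self, name, values):
        # Widen the recorded range to cover this batch of activations.
        lo, hi = min(values), max(values)
        if name in self.stats:
            prev_lo, prev_hi = self.stats[name]
            lo, hi = min(lo, prev_lo), max(hi, prev_hi)
        self.stats[name] = (lo, hi)


cal = Calibrator()
for batch in ([0.1, -0.2], [0.7, 0.05]):
    cal.observe("conv1/output", batch)
# cal.stats["conv1/output"] now spans both batches: (-0.2, 0.7)
```

The quantizer can then turn each recorded (min, max) pair into a scale and zero point.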
Elena Zhelezina
abc4b57399 Merge branch 'master' into interface_16x8
2020-04-06 10:15:53 +01:00
Jaesung Chung
f21e640f0e Enable Keras/RNN case via MLIR SavedModel import in TFLiteConverterV2
PiperOrigin-RevId: 304694033
Change-Id: I3c2586b92e1b4a810036ed390cb5b4d83352bef8
2020-04-03 14:33:45 -07:00
Brian Zhao
9d89f59034 Automated g4 rollback of changelist 304482662.
PiperOrigin-RevId: 304649602
Change-Id: Iede54c48e16f5575d780a9d79d13b989aebabd9c
2020-04-03 10:58:19 -07:00
Elena Zhelezina
81e8675b45 Merge branch 'master' into interface_16x8
2020-04-03 10:44:47 +01:00
Feng Liu
637dde5d82 Support _experimental_new_quantizer in the converter
If _experimental_new_quantizer is enabled, the converter will call the
calibration-only API of the post-training quantization calibrator, and then
invoke the MLIR model quantizer API to quantize the model.

The MLIR model quantizer is added via the toco pybind to reduce the binary size.

PiperOrigin-RevId: 304482662
Change-Id: I1039cdc3e7f8fb244f9c2da73d96179f2a4f4985
2020-04-02 15:21:04 -07:00
A. Unique TensorFlower
2e229ed9e6 Enable Keras/RNN case via MLIR SavedModel import in TFLiteConverterV2
PiperOrigin-RevId: 304108750
Change-Id: Id6e4a3a1896e2a2a6e2c354646e3c782e430b007
2020-03-31 21:23:19 -07:00
Jaesung Chung
aad2694612 Enable Keras/RNN case via MLIR SavedModel import in TFLiteConverterV2
PiperOrigin-RevId: 304098351
Change-Id: Ide8275e35b1c59240f953bb614825f02eeca9a4b
2020-03-31 19:51:42 -07:00
Elena Zhelezina
eaffdc0340 Merge branch 'master' into interface_16x8
2020-03-03 10:18:40 +00:00
Pulkit Bhuwalka
867c320558 Support QAT conversion using TFLiteConverterV2
The V1 converter requires inference_type, inference_input_type
and quantized_input_stats for conversion, whereas the V2
converter uses the FQ (fake-quant) ops inside the graph for conversion
and input information.

This CL does the following:
  1. Move the input_stats check from the convert code into
     the V1 converter, since that check is now specific to it.
  2. Improve the condition checking for post-training calibration
     vs. weight-only quantization vs. training-time quantization.
  3. Actually handle training-time quantization by passing the
     necessary flags to TOCO.

Importantly, this approach leaves open the option of applying both
QAT and post-training calibration quantization together in the same
conversion.

PiperOrigin-RevId: 298533518
Change-Id: I48ec5b8db8f20242522ca7af70dcbe339b79aa2f
2020-03-02 23:10:10 -08:00
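The QAT path above derives quantization parameters from the min/max ranges recorded by the fake-quant (FQ) ops rather than from user-supplied stats. A minimal sketch of that derivation, assuming the standard affine scheme real = scale * (q - zero_point) (not TFLite's exact nudging logic):

```python
def quant_params_from_min_max(rmin, rmax, qmin=-128, qmax=127):
    """Derive (scale, zero_point) for the affine mapping real = scale * (q - zero_point)."""
    # The representable range must include real 0.0 so that zero is exact.
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, max(qmin, min(qmax, zero_point))


# e.g. an FQ op that recorded activations in [-0.5, 1.5], targeting int8:
scale, zp = quant_params_from_min_max(-0.5, 1.5)
```

Here scale works out to 2/255 and the zero point lands at -64, so the real range is covered by the int8 grid with 0.0 exactly representable.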
Elena Zhelezina
b2741883cc Merge branch 'master' into interface_16x8
2020-02-20 18:55:48 +00:00
Nupur Garg
55912083e2 Add support for unknown dimensions to TFLite using MLIR converter.
PiperOrigin-RevId: 292563455
Change-Id: Ib5700cfe6faee177027329e32089abb3bcc9adaf
2020-01-31 09:56:54 -08:00
A. Unique TensorFlower
37ed82cae8 Add support for unknown dimensions to TFLite using MLIR converter.
PiperOrigin-RevId: 292406202
Change-Id: Id4c54d43b8d6c8f6372838c2c0b78f77c2b6c86c
2020-01-30 12:58:55 -08:00
Nupur Garg
5952526b97 Add support for unknown dimensions to TFLite using MLIR converter.
PiperOrigin-RevId: 292396256
Change-Id: I4384e39f4d80ae80fd75cc3f2b1cf61a305ab2e7
2020-01-30 12:04:46 -08:00
Elena Zhelezina
a7899d7544 Added an option TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 to
enable symmetric quantization with activations in 16-bit and weights in 8-bit.
2020-01-27 14:01:11 +00:00
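In the 16x8 scheme introduced above, both activations (int16) and weights (int8) are quantized symmetrically, i.e. with zero point fixed at 0. A self-contained sketch of that mapping in plain Python (illustrative values; not the converter's implementation):

```python
def symmetric_scale(max_abs, bits):
    """Symmetric quantization scale: maps [-max_abs, max_abs] onto the signed range."""
    return max_abs / (2 ** (bits - 1) - 1)  # int8 -> /127, int16 -> /32767


def quantize(values, scale, bits):
    """Round to the nearest grid point and clamp to the symmetric signed range."""
    qmax = 2 ** (bits - 1) - 1
    return [max(-qmax, min(qmax, round(v / scale))) for v in values]


wt_scale = symmetric_scale(0.5, 8)    # weights in int8
act_scale = symmetric_scale(6.0, 16)  # activations in int16, finer grid
```

The wider int16 grid for activations keeps quantization error low where it hurts accuracy most, while weights stay at the compact int8 width.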
Feng Liu
d653517480 Requires quantized_input_stats when it is not post-training quantization
If either inference_type or inference_input_type is set to int8/uint8 and it is
not post-training quantization, then quantized_input_stats is required.

PiperOrigin-RevId: 291441023
Change-Id: Iaee998f10dc90c66ddafc392de250d0f9234388c
2020-01-24 14:22:32 -08:00
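quantized_input_stats supplies a (mean, std_dev) pair per input; the converter treats quantized inputs as real_value = (quantized_value - mean) / std_dev. A small sketch of that mapping (pure Python; `stats_for_range` is a hypothetical helper, not a converter API):

```python
def dequantize_input(q, mean, std_dev):
    """TFLite input mapping: real_value = (quantized_value - mean) / std_dev."""
    return (q - mean) / std_dev


def stats_for_range(rmin, rmax, qmin=0, qmax=255):
    """Hypothetical helper: pick (mean, std_dev) so [qmin, qmax] spans [rmin, rmax]."""
    std_dev = (qmax - qmin) / (rmax - rmin)
    return qmin - rmin * std_dev, std_dev


# uint8 inputs representing real values in [0.0, 1.0]:
mean, std = stats_for_range(0.0, 1.0)  # -> mean 0.0, std_dev 255.0
```

With these stats, quantized 0 maps back to 0.0 and quantized 255 maps back to 1.0.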
Feng Liu
e775da7749 Fix some comments and specifications for int8 quantization.
This CL made the following changes:
- add int8 to all the related argument comments
- disable grappler optimization when the "inference_type" is int8
- use "inference_type", rather than "inference_input_type", to verify that quant stats are specified when it is not post-training quantization

PiperOrigin-RevId: 285229735
Change-Id: Ie8da5c4d79fb60100c1041bd4573fe603cd304e6
2019-12-12 11:25:48 -08:00
Andrew Selle
5eea888e6f Give more helpful error when toco_from_protos binary is not found.
PiperOrigin-RevId: 281362420
Change-Id: I46d58f8859909da5121175e5595c4b61b79c9bd3
2019-11-19 16:02:49 -08:00
Nupur Garg
fab30f6349 Add flag --custom_opdefs to tflite_convert.
PiperOrigin-RevId: 280556264
Change-Id: Id9963930b26d55039c73993597ea0ab8ccc07d73
2019-11-14 18:03:22 -08:00
Pulkit Bhuwalka
cd128f0f30 Allow quantized stats to be used for new quant scheme.
quantized_input_stats is required to set quant parameters
for quantized inputs to a TFLite model. Currently, it is
only allowed for uint8, but we need to enable it for
int8 as well to support the new quantization scheme.

PiperOrigin-RevId: 278971599
Change-Id: I035ec4dc1529575dcdf59fe8d132248ac7496fc6
2019-11-06 17:27:24 -08:00
Haoliang Zhang
6b77079914 Add flag 'conversion_summary_dir' to tflite_converter. When the user passes this flag and uses the new MLIR converter (via the command line), conversion logs are generated under the specified folder.
PiperOrigin-RevId: 278743450
Change-Id: Ic840a56642629514816582390a267b037b0bbb24
2019-11-05 17:48:15 -08:00
Jared Duke
a229786d02 Tweak comments for experimental conversion path
PiperOrigin-RevId: 274658710
2019-10-14 20:10:49 -07:00
Hye Soo Yang
c81c00b6ca ICM PY3 Migration - //tensorflow/lite [1]
PiperOrigin-RevId: 274090960
2019-10-10 22:12:18 -07:00
Edward Loper
97029c72c7 Fix type-checking bug from PR . That PR checked if a value was a unicode string using isinstance(debug_info_str, str). But in Python 2.x, str is the byte-string type. So check against bytes instead.
PiperOrigin-RevId: 259873125
2019-07-24 20:35:21 -07:00
TensorFlower Gardener
59ee7f9138 Merge pull request from ROCmSoftwarePlatform:google_upstream_no_rocm_updates_190711
PiperOrigin-RevId: 259704234
2019-07-24 02:47:39 -07:00
Deven Desai
2b50159ffe Fixing a couple of unit-test failures that were caused by the (Python) code passing strings instead of bytes
2019-07-18 18:39:06 +00:00
Nupur Garg
11d2cb7efb Fix toco_python_api check for valid GraphDefInfo in Python 3.
PiperOrigin-RevId: 258662584
2019-07-17 16:19:37 -07:00
Nupur Garg
8d9b34c4cd Propagate node debug information.
PiperOrigin-RevId: 257286387
2019-07-09 15:50:21 -07:00
A. Unique TensorFlower
cd7f680dcd Enable Float16 conversion of model constants through Python API
PiperOrigin-RevId: 256460833
2019-07-03 18:23:05 -07:00
Feng Liu
2c171cdb26 Collect node debug information for frozen graphs
This CL added debug information support for the nodes in frozen graphs,
which are GraphDefs and will be sent to the new tf-tflite converter. A GraphDef
only serializes the node name from the original Graph object, so the full
stack trace defining each node is lost. To collect the stack traces (debug
information) for the nodes in the GraphDef, this CL makes a few changes:

- For TFLiteConverter (v1), an experimental function, which creates graph debug
  info from the original graph object, is passed to the converter constructor
  in addition to the GraphDef, so we can retrieve the stack trace for the nodes
  in the GraphDef. (TFLiteConverterV2 isn't an issue because the function object
  is passed to the constructor.)

- Propagate the original node name in the Grappler function inlining pass, so
  the original node name is stored in the GraphDef when a node is inlined, and
  we can use the stored name to look up the stack trace in the original graph.

- When a node name is looked up in the original graph, we need to consider the
  function library as well. For function libraries created by `@tf.function`
  and `@defun`, we use the sub-graphs in the original graph. However, functions
  created by `@Defun` only have FunctionDefs for their sub-graphs, so they aren't
  supported by this CL.

PiperOrigin-RevId: 253932770
2019-06-18 22:30:47 -07:00
Suharsh Sivakumar
78993c47e3 Add BUILTIN_INT8 support to gate integer only conversion.
PiperOrigin-RevId: 246077351
2019-04-30 21:35:40 -07:00
Tian Lin
f0fba04cd2 Automated rollback of commit 4ee64e012a
PiperOrigin-RevId: 244798543
2019-04-22 23:03:40 -07:00
Mark Daoust
70dfecf6a1 Apply tf1-tf2 renames to tensorflow/lite docstrings and comments.
No code changes, only doc-strings and comments.

PiperOrigin-RevId: 243841407
2019-04-16 11:23:05 -07:00
Nupur Garg
980ebdd6af Remove lite.OpHint, lite.experimental, and lite.constant from 2.0 API.
PiperOrigin-RevId: 243352761
2019-04-12 17:24:07 -07:00