STT-tensorflow/tensorflow/lite/tools/optimize

Latest commit 429c0b423e by Robert David (2020-06-19 19:33:41 -07:00): Integer LSTMs: Name scratch arrays based on the gate they represent; make naming consistent with the float/hybrid versions.
PiperOrigin-RevId: 317420201
Change-Id: Ia9447e51fce1530e75103c4db3759908592af983
| File | Last commit | Date |
|------|-------------|------|
| calibration/ | Integer LSTMs: Name scratch arrays based on the gate they represent; make naming consistent with the float/hybrid versions. | 2020-06-19 19:33:41 -07:00 |
| g3doc/ | | |
| python/ | Fix misspelling. | 2020-04-29 21:10:09 +09:00 |
| sparsity/ | Fix a bug in the format converter. | 2020-04-29 22:04:35 -07:00 |
| testdata/ | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |
| BUILD | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |
| model_utils_test.cc | Strengthen the IsQuantized rule to actually check quantization params rather than just the tensor type. | 2019-06-17 09:55:06 -07:00 |
| model_utils.cc | Modify op version in optimize only if the converter version < quantized version. | 2020-05-12 15:44:28 -07:00 |
| model_utils.h | Support unknown dimensions in quantized models. | 2020-05-06 18:30:12 -07:00 |
| modify_model_interface_main.cc | Fix a typo. | 2020-04-22 17:31:05 -07:00 |
| modify_model_interface_test.cc | Update the test case for modify_model_interface: the float input and output tensors are at the beginning and end of the model, respectively. | 2020-06-08 13:12:55 -07:00 |
| modify_model_interface.cc | Add a safeguard for tensor removal: if a tensor to be removed is at the beginning of the tensor list, keep it. | 2020-06-02 16:07:26 -07:00 |
| modify_model_interface.h | Create a helper function to change a float model's interface to uint8, for users to apply to inputs rather than relying on the inference_input and inference_output types in the 2.0 converter. | 2020-03-23 09:51:46 -07:00 |
| operator_property.cc | Full int8 quantization of BatchMatMul. | 2020-06-19 07:33:08 -07:00 |
| operator_property.h | Merge branch 'master' into interface_16x8. | 2020-06-02 10:44:46 +01:00 |
| quantization_utils_test.cc | Added an option TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 to … | 2020-01-27 14:01:11 +00:00 |
| quantization_utils.cc | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |
| quantization_utils.h | Added an option TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 to … | 2020-01-27 14:01:11 +00:00 |
| quantization_wrapper_utils_test.cc | Refactor the test to enable support of other LSTM variants. | 2019-11-26 11:38:07 -08:00 |
| quantization_wrapper_utils.cc | Add utility functions for integration; supports the calibration case where the model is initialized multiple times. | 2019-11-14 15:03:39 -08:00 |
| quantization_wrapper_utils.h | Add utility functions for integration; supports the calibration case where the model is initialized multiple times. | 2019-11-14 15:03:39 -08:00 |
| quantization_wrapper.cc | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |
| quantization_wrapper.h | Add utility functions for integration; supports the calibration case where the model is initialized multiple times. | 2019-11-14 15:03:39 -08:00 |
| quantize_model_test.cc | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |
| quantize_model.cc | Merge branch 'upstream/master' into interface_16x8. | 2020-06-08 17:03:06 +01:00 |
| quantize_model.h | Fix the 16-bit interface broken by recent changes to master. | 2020-03-27 20:14:56 +00:00 |
| quantize_weights_test.cc | Change the comment on the origin of the external-repo #include file. | 2020-03-23 10:42:53 -07:00 |
| quantize_weights.cc | Support unknown dimensions in quantized models. | 2020-05-06 18:30:12 -07:00 |
| quantize_weights.h | Support quantization to float16. | 2019-05-17 14:57:42 -07:00 |
| test_util.cc | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |
| test_util.h | Merge pull request #36251 from wwwind:interface_16x8. | 2020-06-18 19:47:52 -07:00 |