STT-tensorflow/tensorflow/lite/tools/optimize
a7c1a23ebb  Taehee Jeong  2021-01-16 07:10:35 -08:00
Refactor some quantization methods

Refactored the Bias (int32) and int16 quantization methods to factor the quantization logic out into separate functions. These will be used in the MLIR quantizer's legacy mode.

PiperOrigin-RevId: 352176296
Change-Id: I54c975ad3ba348f2b2cf77290772aff345865b63
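The commit above factors out the bias (int32) and int16 quantization logic used by this tool. As a rough illustration only (the function names below are hypothetical, not the actual `quantization_utils.cc` API), TFLite-style quantization maps int16 tensors symmetrically with a zero point of 0, and quantizes biases to int32 with scale = input_scale * weight_scale so they add directly into the int32 accumulator:

```python
import numpy as np

def quantize_symmetric_int16(values):
    """Symmetric int16 quantization, as in TFLite's 16x8 scheme.

    Zero point is fixed at 0 and the range is kept symmetric at
    [-32767, 32767] (the value -32768 is deliberately unused).
    """
    max_abs = float(np.max(np.abs(values)))
    scale = max_abs / 32767.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(values / scale), -32767, 32767).astype(np.int16)
    return q, scale

def quantize_bias_int32(bias, input_scale, weight_scale):
    """Bias quantization: int32 with scale = input_scale * weight_scale
    and zero point 0, matching the accumulator's fixed-point format."""
    scale = input_scale * weight_scale
    q = np.clip(np.round(bias / scale), -(2**31), 2**31 - 1).astype(np.int32)
    return q, scale
```

This is a sketch of the general scheme, not the refactored functions themselves; the real implementation also handles per-channel scales and FlatBuffer tensor metadata.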
calibration                                Internal changes on cleanup.                                      2020-11-19 11:35:13 -08:00
g3doc
python                                     Update the usage of the 'absl-py' python library                  2020-11-25 17:54:34 -08:00
sparsity                                   Remove unused include.                                            2020-12-09 18:35:04 -08:00
testdata                                   Add int8 and int16x8 support for BROADCAST_TO operator            2021-01-06 16:34:28 +00:00
BUILD                                      Refactor some quantization methods                                2021-01-16 07:10:35 -08:00
model_utils_test.cc
model_utils.cc                             Move Model read/write methods to model_utils.h                    2021-01-05 22:42:51 -08:00
model_utils.h                              Move Model read/write methods to model_utils.h                    2021-01-05 22:42:51 -08:00
modify_model_interface_main.cc
modify_model_interface_test.cc             Move Model read/write methods to model_utils.h                    2021-01-05 22:42:51 -08:00
modify_model_interface.cc                  Move Model read/write methods to model_utils.h                    2021-01-05 22:42:51 -08:00
modify_model_interface.h                   Update comments for ModifyModelInterface.                         2020-09-08 15:33:44 -07:00
operator_property.cc                       Merge pull request from MohamedNourArm:toupstream/broadcast_to    2021-01-13 15:15:05 -08:00
operator_property.h                        Clarify use case of OperatorProperty::restrict_scale              2020-12-01 08:18:22 -08:00
quantization_utils_test.cc                 Clamp f32->f16 quantization to max/min range of float16           2020-10-28 17:18:42 -07:00
quantization_utils.cc                      Refactor some quantization methods                                2021-01-16 07:10:35 -08:00
quantization_utils.h                       Refactor some quantization methods                                2021-01-16 07:10:35 -08:00
quantization_wrapper_utils_custom_test.cc  Support calibration of models with 8bit matmul output.            2020-10-30 08:20:41 -07:00
quantization_wrapper_utils_test.cc         Fix model_utils::GetOrInsertOpCodeIndex() method                  2020-10-12 15:01:40 -07:00
quantization_wrapper_utils.cc              Support calibration of models with 8bit matmul output.            2020-10-30 08:20:41 -07:00
quantization_wrapper_utils.h
quantization_wrapper.cc
quantization_wrapper.h
quantize_model_test.cc                     Merge pull request from MohamedNourArm:toupstream/broadcast_to    2021-01-13 15:15:05 -08:00
quantize_model.cc                          Merge pull request from wwwind:16x8_addsub_amend                  2021-01-11 23:55:13 -08:00
quantize_model.h
quantize_weights_test.cc                   Refactor reading builtin code in TFLite                           2020-10-05 15:36:21 -07:00
quantize_weights.cc                        Update Batch_MatMul op version during conversion.                 2020-12-11 13:28:16 -08:00
quantize_weights.h
test_util.cc                               Add int8 and int16x8 support for BROADCAST_TO operator            2021-01-06 16:34:28 +00:00
test_util.h