Test models for testing quantization

This directory contains test models for testing quantization.

Models

  • single_conv_weights_min_0_max_plus_10.bin
    A floating point model with a single convolution where all weights are integers in [0, 10] and are randomly distributed. It is not guaranteed that the minimum and maximum weight values appear in every channel. All activations have recorded min/max values, and activations are in the range [0, 10]. A sketch of how a similar model could be generated and quantized follows this list.
  • single_conv_weights_min_minus_127_max_plus_127.bin
    A floating point model with a single convolution where all weights are integers in the range [-127, 127]. The weights are arranged so that each channel contains at least one weight equal to -127 and one equal to 127. The activations are all in the range [-128, 127]. This means all bias computations should result in a scale of 1.0.
  • single_softmax_min_minus_5_max_plus_5.bin
    A floating point model with a single softmax. The input tensor has min and max in range [-5, 5], not necessarily -5 or +5.
  • single_avg_pool_min_minus_5_max_plus_5.bin
    A floating point model with a single average pool. The input tensor has min and max in range [-5, 5], not necessarily -5 or +5.
  • weight_shared_between_convs.bin
    A floating point model with two convolutions that share the same weight tensor.
  • multi_input_add_reshape.bin
    A floating point model with two inputs that are added together, followed by a reshape.
  • quantized_with_gather.bin
    A floating point model whose input feeds a gather operation, modeling the mapping of a categorical input to embeddings.
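
The scripts used to generate these flatbuffers are not included in this directory; the checked-in .bin files are consumed directly by the quantization tests in the parent directory. As a rough illustration only, the sketch below (assuming the Keras API and the Python TFLite converter, with made-up tensor shapes and a made-up output filename) builds a float model in the spirit of single_conv_weights_min_0_max_plus_10.bin and applies post-training integer quantization to it.

```python
import numpy as np
import tensorflow as tf

# Single-conv float model; weights are integers drawn from [0, 10].
# Shapes here are arbitrary placeholders, not the shapes used in the .bin files.
inputs = tf.keras.Input(shape=(8, 8, 3), batch_size=1)
conv = tf.keras.layers.Conv2D(filters=4, kernel_size=3)
outputs = conv(inputs)
model = tf.keras.Model(inputs, outputs)

rng = np.random.default_rng(0)
kernel = rng.integers(0, 11, size=tuple(conv.kernel.shape)).astype(np.float32)
bias = np.zeros(tuple(conv.bias.shape), dtype=np.float32)
conv.set_weights([kernel, bias])

# Representative inputs in [0, 10] so calibration sees activations in that range.
def representative_dataset():
    for _ in range(100):
        yield [rng.uniform(0.0, 10.0, size=(1, 8, 8, 3)).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
quantized_flatbuffer = converter.convert()

# Hypothetical output name; the real test fixtures are generated separately.
with open("single_conv_int8.tflite", "wb") as f:
    f.write(quantized_flatbuffer)
```

Constraining the representative dataset to [0, 10] mirrors the activation range described above, so the calibrated min/max values stay within that range.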