Test models for testing quantization

This directory contains test models for testing quantization.

Models

  • single_conv_weights_min_0_max_plus_10.bin
    A floating point model with a single convolution where all weights are integers randomly distributed in [0, 10]. The minimum and maximum weight values are not guaranteed to appear in every channel. All activations have recorded min/max values, and the activations are in the range [0, 10] (see the scale/zero-point sketch after this list).
  • single_conv_weights_min_minus_127_max_plus_127.bin
    A floating point model with a single convolution where all weights are integers in the range [-127, 127]. The weights are arranged so that each channel contains at least one weight equal to -127 and one equal to 127. The activations are all in the range [-128, 127]. This means all bias computations should result in a scale of 1.0 (see the bias-scale sketch after this list).
  • single_softmax_min_minus_5_max_plus_5.bin
    A floating point model with a single softmax. The input tensor has its min and max in the range [-5, 5], though not necessarily at -5 or +5.
  • single_avg_pool_min_minus_5_max_plus_5.bin
    A floating point model with a single average pool. The input tensor has its min and max in the range [-5, 5], though not necessarily at -5 or +5.
  • weight_shared_between_convs.bin
    A floating point model with two convolutions that share the same weight tensor.
  • multi_input_add_reshape.bin
    A floating point model with two inputs feeding an add operation, followed by a reshape.
  • quantized_with_gather.bin
    A floating point model whose input feeds a gather, modeling the mapping of a categorical input to embeddings.
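
The [0, 10] activation range recorded in single_conv_weights_min_0_max_plus_10.bin is the kind of min/max a quantizer turns into an asymmetric int8 scale and zero point. The sketch below shows that arithmetic under the common TFLite int8 convention of nudging the range to include zero; it is an illustration of the math, not the tool's actual routine.

```python
# Minimal sketch: deriving an asymmetric int8 scale/zero-point from a
# recorded min/max range such as the [0, 10] activations above.
# Assumes the common TFLite int8 convention of extending the range to
# include 0.0 so that zero is exactly representable.

def asymmetric_int8_params(rmin, rmax, qmin=-128, qmax=127):
    rmin = min(rmin, 0.0)  # representable range must contain zero
    rmax = max(rmax, 0.0)
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

scale, zero_point = asymmetric_int8_params(0.0, 10.0)
print(scale, zero_point)  # ~0.0392, -128
```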
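
For single_conv_weights_min_minus_127_max_plus_127.bin, the claim that all bias computations end up with a scale of 1.0 follows from the usual rule that a conv bias is quantized with scale = input_scale * weight_scale. A worked example of that arithmetic, assuming symmetric int8 weights and asymmetric int8 activations (illustration only):

```python
# Sketch of the bias-scale arithmetic for the [-128, 127] activations and
# [-127, 127] per-channel weights described above. Assumes the usual TFLite
# rule bias_scale = input_scale * weight_scale.

input_scale = (127.0 - (-128.0)) / (127 - (-128))  # asymmetric int8 -> 1.0
weight_scale = 127.0 / 127                          # symmetric int8 weights -> 1.0
bias_scale = input_scale * weight_scale
print(bias_scale)  # 1.0
```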