diff --git a/RELEASE.md b/RELEASE.md
index 17434a840b4..f2d3c3c6efe 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -11,7 +11,8 @@
     [TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/).
     Specifically, inputs to such ops are rounded from 23 bits of precision to 10
     bits of precision. This is unlikely to cause issues in practice for deep
-    learning models. TensorFloat-32 can be disabled by running
+    learning models. In some cases, TensorFloat-32 is also used for complex64 ops.
+    TensorFloat-32 can be disabled by running
     `config.experimental.enable_tensor_float_32_execution(False)`. The "Major
     Features and Improvements" section has more details.
 *   The byte layout for string tensors across the C-API has been updated to match
diff --git a/tensorflow/python/framework/config.py b/tensorflow/python/framework/config.py
index 4ad090eaf96..2691665ffce 100644
--- a/tensorflow/python/framework/config.py
+++ b/tensorflow/python/framework/config.py
@@ -81,6 +81,9 @@ def enable_tensor_float_32_execution(enabled):
   be added in the future. As a result, precision of float32 ops may decrease in
   minor versions of TensorFlow.
 
+  TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32
+  is used in fewer cases for complex64 than it is for float32.
+
   Args:
     enabled: Bool indicating whether to enable TensorFloat-32 execution.
   """
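
For context, a minimal sketch of how a user would toggle the setting this patch documents, assuming TensorFlow 2.4+ where `tf.config.experimental.enable_tensor_float_32_execution` and its companion query `tf.config.experimental.tensor_float_32_execution_enabled` are available; the complex64 behavior follows the docstring change above.

```python
import tensorflow as tf

# Turn TensorFloat-32 off so float32 (and the affected complex64) matmuls and
# convolutions keep full float32 precision on Ampere GPUs, at a possible
# performance cost.
tf.config.experimental.enable_tensor_float_32_execution(False)
print(tf.config.experimental.tensor_float_32_execution_enabled())  # False

# Restore the default (TensorFloat-32 enabled on supported hardware).
tf.config.experimental.enable_tensor_float_32_execution(True)
print(tf.config.experimental.tensor_float_32_execution_enabled())  # True
```

Disabling is mainly useful for precision-sensitive float32 or complex64 workloads; the default stays enabled because the reduced mantissa rarely affects deep learning models in practice.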