Update tf32 documentation to mention complex64.
Currently, tf32 is only used with complex64 for batch matrix multiplications, but more cases will likely be added in the future.

PiperOrigin-RevId: 338385413
Change-Id: I70e7b123c05f00c83fc19d86dd29a171e3a0b865
parent 9d266e05ac
commit 40a7f30b39
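As a rough illustration of the case the commit message describes, here is a minimal sketch of a batched complex64 matrix multiplication in TensorFlow. The shapes and variable names are hypothetical, and TensorFloat-32 only takes effect on GPUs that support it (Ampere and later).

```python
import tensorflow as tf

# Hypothetical batch of 64x64 complex64 matrices (real/imag parts are float32).
a = tf.complex(tf.random.normal([8, 64, 64]), tf.random.normal([8, 64, 64]))
b = tf.complex(tf.random.normal([8, 64, 64]), tf.random.normal([8, 64, 64]))

# On a TensorFloat-32-capable GPU with TF32 enabled, the float32 components
# feeding this batch matmul are rounded to 10 bits of mantissa.
c = tf.linalg.matmul(a, b)  # complex64, shape [8, 64, 64]
```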
@@ -11,7 +11,8 @@
 [TensorFloat-32](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/).
 Specifically, inputs to such ops are rounded from 23 bits of precision to 10
 bits of precision. This is unlikely to cause issues in practice for deep
-learning models. TensorFloat-32 can be disabled by running
+learning models. In some cases, TensorFloat-32 is also used for complex64 ops.
+TensorFloat-32 can be disabled by running
 `config.experimental.enable_tensor_float_32_execution(False)`. The "Major
 Features and Improvements" section has more details.
 * The byte layout for string tensors across the C-API has been updated to match
@@ -81,6 +81,9 @@ def enable_tensor_float_32_execution(enabled):
 be added in the future. As a result, precision of float32 ops may decrease in
 minor versions of TensorFlow.
 
+TensorFloat-32 is also used for some complex64 ops. Currently, TensorFloat-32
+is used in fewer cases for complex64 than it is for float32.
+
 Args:
   enabled: Bool indicating whether to enable TensorFloat-32 execution.
 """
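For reference, a minimal sketch of how the API documented in this hunk is used, assuming TensorFlow 2.4+ where these experimental functions are available. Per the release note above, TensorFloat-32 is on by default on supporting GPUs.

```python
import tensorflow as tf

# Turning TensorFloat-32 off keeps full float32 (and complex64) precision in
# matmuls and convolutions, at the cost of speed on TF32-capable GPUs.
tf.config.experimental.enable_tensor_float_32_execution(False)

# The current setting can be queried as well.
print(tf.config.experimental.tensor_float_32_execution_enabled())  # -> False
```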