[XLA/Docs] Update known issues list

PiperOrigin-RevId: 340502704
Change-Id: I0ddc1278852cc82622f4f4ff84faaf8613103d6a
George Karpenkov 2020-11-03 12:37:37 -08:00 committed by TensorFlower Gardener
parent 6c315885df
commit 5feee72aff


@ -5,7 +5,7 @@ the TensorFlow interop has a number of known sharp corners.
## TensorArray TF/XLA interconversion
The problem manifests itself as an error message
*Error message*:
`Support for TensorList crossing the XLA/TF boundary is not implemented`.
XLA supports `tf.TensorArray`. However, the _interconversion_ between TF and
@ -13,15 +13,31 @@ XLA representations is not implemented yet.
This error often arises when the `TensorArray` is used inside the compiled
block, but the derivative is taken outside.
Workaround: compile the outermost scope which is taking the derivative.
*Workaround*: compile the outermost scope which is taking the derivative.
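A minimal sketch of this workaround, assuming TF 2.x (the `value_and_grad`
helper below is illustrative; `experimental_compile=True` is the flag name of
this era, spelled `jit_compile=True` in newer releases). Taking the gradient
inside the compiled function means the `TensorArray` never has to cross the
XLA/TF boundary:

```python
import tensorflow as tf

# Both the forward pass and the gradient live inside one compiled scope.
@tf.function(experimental_compile=True)
def value_and_grad(x):
  with tf.GradientTape() as tape:
    tape.watch(x)
    ta = tf.TensorArray(tf.float32, size=3)
    for i in range(3):  # Python loop, unrolled while tracing.
      ta = ta.write(i, x * float(i + 1))
    y = tf.reduce_sum(ta.stack())
  # The gradient is computed inside the compiled block, so no TensorArray
  # needs to be converted back to a TF TensorList.
  return y, tape.gradient(y, x)

y, dy_dx = value_and_grad(tf.constant(2.0))
```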
## Dynamic `tf.TensorArray` is not supported
## TensorFlow while loops need to be bounded (or have backprop disabled)
*Error message*: `XLA compilation requires a fixed tensor list size. Set the max
number of elements. This could also happen if you're using a TensorArray in a
while loop that does not have its maximum_iteration set, you can fix this by
setting maximum_iteration to a suitable value`.
TF while [loops](https://www.tensorflow.org/api_docs/python/tf/while_loop)
created using `tf.while_loop` support backpropagation by accumulating all
intermediate results in a `TensorArray`, but XLA only supports bounded
`TensorArray`s.
*Workaround*: all compiled while loops need to either have the
`maximum_iterations` parameter set to a constant value known at compile time,
or have backpropagation disabled using `back_prop=False`.
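A minimal sketch of this workaround, assuming TF 2.x (the `bounded_loop`
function and the constant bound of 10 are illustrative):

```python
import tensorflow as tf

@tf.function(experimental_compile=True)
def bounded_loop(x):
  i0 = tf.constant(0)
  acc0 = tf.zeros_like(x)
  # The constant maximum_iterations lets XLA size the gradient accumulators.
  return tf.while_loop(
      cond=lambda i, acc: i < 10,
      body=lambda i, acc: (i + 1, acc + x),
      loop_vars=(i0, acc0),
      maximum_iterations=10)

# Alternative when gradients are not needed:
#   tf.while_loop(..., back_prop=False)
_, total = bounded_loop(tf.constant(1.0))
```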
## Dynamic `tf.TensorArray`
Writes into `tf.TensorArray(..., dynamic_size=True)` are not compilable with
XLA, as such writes require an unknown number of reallocations when the array
exceeds the original bound.
Workaround: provide a statically known bound to your arrays.
*Workaround*: provide a statically known bound to your arrays.
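A minimal sketch, assuming the number of writes is known up front (the
`accumulate` function and the `NUM_STEPS` constant are illustrative):

```python
import tensorflow as tf

NUM_STEPS = 16  # Statically known bound.

@tf.function(experimental_compile=True)
def accumulate(x):
  # Fixed size is compilable; dynamic_size=True would not be.
  ta = tf.TensorArray(tf.float32, size=NUM_STEPS, dynamic_size=False)
  for i in tf.range(NUM_STEPS):
    ta = ta.write(i, x + tf.cast(i, tf.float32))
  return ta.stack()

accumulate(tf.constant(0.0))
```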
## Random number generation
@ -29,13 +45,3 @@ XLA currently ignores TF seeds to random operations. This affects stateful TF
random operations, such as `tf.random.normal`, or `tf.nn.dropout`. XLA will
behave as if the compilation was seeded with a new unique seed at each run. This
limitation does not apply to stateless random ops.
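A minimal sketch of the stateless alternative, assuming TF 2.x (the `sample`
function is illustrative):

```python
import tensorflow as tf

@tf.function(experimental_compile=True)
def sample(seed):
  # Stateless ops take an explicit [2]-element seed tensor, so the same seed
  # yields the same values on every run, compiled or not.
  return tf.random.stateless_normal(shape=[2, 3], seed=seed)

print(sample(tf.constant([1, 2], dtype=tf.int32)))
```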
## TensorFlow while loops need to be bounded (or have backprop disabled)
TF while [loops](https://www.tensorflow.org/api_docs/python/tf/while_loop)
created using `tf.while_loop` support backpropagation by accumulating all
intermediate results in a `TensorArray`.
Since XLA only supports bounded `TensorArray`s, all compiled while loops need to
either have `maximum_iterations` parameter set to a constant value known at
compile time, or backpropagation disabled using `back_prop=False`.