[XLA] [Docs] Document another known issue: dynamic TensorArrays are not supported

Moves "known_issues" into a separate page.

PiperOrigin-RevId: 322819265
Change-Id: I42d79810267c3dc8cede4ca9b16fb875d2c80430
This commit is contained in:
George Karpenkov 2020-07-23 10:50:26 -07:00 committed by TensorFlower Gardener
parent 226644ff51
commit 24a399e470
3 changed files with 34 additions and 24 deletions

View File

@@ -17,6 +17,8 @@ upper_tabs:
path: /xla
- title: XLA architecture
path: /xla/architecture
- title: Known issues
path: /xla/known_issues
- title: Broadcasting semantics
path: /xla/broadcasting
- title: Develop a new backend for XLA

View File

@@ -177,30 +177,6 @@ a bug to a single XLA program by using the
[`replay_computation`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/tools/run_hlo_module_main.cc)
and iteratively running it on generated programs.
## Known Issues

Compilation with XLA can greatly improve the performance of your programs, but
the TensorFlow interop has a number of known sharp corners.

### TensorArray TF/XLA Interconversion

The problem manifests as the error message
`Support for TensorList crossing the XLA/TF boundary is not implemented`.

XLA supports `tf.TensorArray`. However, the _interconversion_ between TF and
XLA representations is not implemented yet.

This error often arises when the `TensorArray` is used inside the compiled
block, but the derivative is taken outside.

Workaround: compile the outermost scope that takes the derivative.

### Random Number Generation

XLA currently ignores TF seeds passed to random operations. This affects
stateful TF random operations such as `tf.random.normal` or `tf.nn.dropout`.
XLA behaves as if the compilation were seeded with a new unique seed at each
run. This limitation does not apply to stateless random ops.
## XLA Frontends
Apart from TensorFlow, XLA programs can be generated by:

View File

@@ -0,0 +1,32 @@
# Known Issues

Compilation with XLA can greatly improve the performance of your programs, but
the TensorFlow interop has a number of known sharp corners.
## TensorArray TF/XLA interconversion

The problem manifests as the error message
`Support for TensorList crossing the XLA/TF boundary is not implemented`.

XLA supports `tf.TensorArray`. However, the _interconversion_ between TF and
XLA representations is not implemented yet.

This error often arises when the `TensorArray` is used inside the compiled
block, but the derivative is taken outside.

Workaround: compile the outermost scope that takes the derivative.
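
For illustration, a minimal sketch of the workaround (not part of this change),
assuming TF 2.x: the function name `loss_and_grad` is made up, and the
`jit_compile=True` flag is spelled `experimental_compile=True` on older TF
releases.

```python
import tensorflow as tf

# Taking the gradient of the TensorArray computation from *outside* the
# compiled scope would force the TensorArray across the XLA/TF boundary and
# trigger the error above.  Instead, the outermost function that takes the
# derivative is compiled.
@tf.function(jit_compile=True)  # `experimental_compile=True` on older TF
def loss_and_grad(x):
  with tf.GradientTape() as tape:
    tape.watch(x)
    ta = tf.TensorArray(tf.float32, size=3)
    for i in range(3):
      ta = ta.write(i, x * float(i))
    loss = tf.reduce_sum(ta.stack())
  return loss, tape.gradient(loss, x)

loss, grad = loss_and_grad(tf.constant(2.0))
```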
## Dynamic `tf.TensorArray` is not supported

Writes into `tf.TensorArray(..., dynamic_size=True)` are not compilable with
XLA, as such writes require an unknown number of reallocations when the array
exceeds the original bound.

Workaround: provide a statically known bound to your arrays.
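
A minimal sketch of the failing pattern and the workaround (illustrative only,
not part of this change); the function names are hypothetical and the
`jit_compile` flag may be `experimental_compile` on older TF releases.

```python
import tensorflow as tf

@tf.function(jit_compile=True)
def fill_dynamic(xs):
  # Expected to fail to compile: dynamic_size=True lets the array grow past
  # its original bound, which XLA cannot express.
  ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
  for i in tf.range(tf.shape(xs)[0]):
    ta = ta.write(i, xs[i] * 2.0)
  return ta.stack()

@tf.function(jit_compile=True)
def fill_static(xs):
  # Workaround: give the array a statically known bound.
  ta = tf.TensorArray(tf.float32, size=4)
  for i in range(4):
    ta = ta.write(i, xs[i] * 2.0)
  return ta.stack()

print(fill_static(tf.constant([1.0, 2.0, 3.0, 4.0])))
```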
## Random number generation

XLA currently ignores TF seeds passed to random operations. This affects
stateful TF random operations such as `tf.random.normal` or `tf.nn.dropout`.
XLA behaves as if the compilation were seeded with a new unique seed at each
run. This limitation does not apply to stateless random ops.
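
A minimal sketch of the stateless alternative (illustrative, not part of this
change), assuming TF 2.x: the explicit seed argument of stateless ops is
honored under XLA, unlike the op-level seed of `tf.random.normal`.

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # `experimental_compile=True` on older TF
def stateful_sample():
  # Under XLA the op-level seed below is ignored, so results differ per run.
  return tf.random.normal([2, 2], seed=42)

@tf.function(jit_compile=True)
def stateless_sample(seed):
  # Stateless ops take the seed as an explicit argument and stay reproducible.
  return tf.random.stateless_normal(shape=[2, 2], seed=seed)

print(stateless_sample(tf.constant([1, 2])))  # same output for the same seed
```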