The session returned by cached_session uses soft placement, which we don't
want for the XLA_* devices. With soft placement, ops lacking XLA kernels
silently fall back to the CPU, misleading us into thinking we have more test
coverage than we actually do. With this change some tests (rightly) start
failing because they were testing ops with dtypes the XLA kernels do not
support; I've removed those dtypes from the tests.
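A minimal sketch of the difference, assuming a TF 1.x runtime built with XLA
device support (the device string and config flag below are standard TF 1.x
APIs; the op is arbitrary):

  import tensorflow as tf

  with tf.Graph().as_default():
    with tf.device("/device:XLA_CPU:0"):
      y = tf.constant([1.0, 2.0]) * 2.0
    # With allow_soft_placement=True, any op above lacking an XLA kernel is
    # silently re-placed on the CPU. With it disabled, Session.run raises an
    # InvalidArgumentError instead, so the coverage gap becomes visible.
    config = tf.ConfigProto(allow_soft_placement=False)
    with tf.Session(config=config) as sess:
      print(sess.run(y))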
This CL partially addresses b/132430685. It stubs out "cached_session" and
"test_session" to raise errors, so we have more confidence that the compiler
is actually being exercised. However, we still use XLA_* devices to exercise
XLA, which take a different code path than xla.compile and tpu.rewrite; this
needs to be fixed incrementally.
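A hypothetical sketch of the stubbing, assuming a test base class derived
from tf.test.TestCase; the class name, error type, and messages are
illustrative, not the actual implementation:

  import tensorflow as tf

  class XLATestCase(tf.test.TestCase):
    """Base class for XLA tests; forbids the soft-placing session helpers."""

    def cached_session(self, *args, **kwargs):
      raise NotImplementedError(
          "cached_session is not supported in XLA tests: it enables soft "
          "placement, which silently hides missing XLA kernels.")

    def test_session(self, *args, **kwargs):
      raise NotImplementedError(
          "test_session is not supported in XLA tests.")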
PiperOrigin-RevId: 248437673
self.test_session() has been deprecated in 9962eb5e84 because its name confuses readers of the test. Moving to cached_session() instead, which is more explicit about the two points below (see the sketch after the list):
* the session may be reused.
* the session is not closed even when leaving a "with self.test_session()" block.
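A small sketch of both points, assuming a test derived from tf.test.TestCase
and that cached_session memoizes the underlying session; the test name is
illustrative:

  import tensorflow as tf

  class CachedSessionTest(tf.test.TestCase):

    def testSessionIsReusedAndNotClosed(self):
      with self.cached_session() as sess1:
        pass  # leaving the block does NOT close the session
      with self.cached_session() as sess2:
        # cached_session hands back the same underlying session again.
        self.assertIs(sess1, sess2)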
PiperOrigin-RevId: 209837298
Create a new directory tensorflow/compiler/tf2xla/lib for XLA utility functions. Move the batch matmul implementation into the utility directory. Add helpers for batch matmul, triangular solve, and Cholesky decomposition.
Currently the implementation is fully unrolled, which can cause code-size blowups at large matrix sizes. We can explore reducing code size in a subsequent change.
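A hypothetical sketch of why full unrolling blows up code size; this is not
the actual tf2xla/lib implementation, just the shape of the problem. The
Python loops run at graph-construction time, so an n x n input emits O(n^3)
individual ops:

  import tensorflow as tf

  def unrolled_cholesky(a, n):
    """Emits ops computing the lower-triangular Cholesky factor of `a`."""
    l = [[None] * n for _ in range(n)]
    for i in range(n):          # every (i, j, k) iteration adds graph nodes
      for j in range(i + 1):
        s = a[i, j]
        for k in range(j):
          s = s - l[i][k] * l[j][k]
        l[i][j] = tf.sqrt(s) if i == j else s / l[j][j]
    return l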
PiperOrigin-RevId: 175338698