Without this init file, trying to import TensorFlow Python code from the
source tree fails with this error:
File "tensorflow/tensorflow/python/pywrap_tensorflow.py", line 25, in <module>
from tensorflow.python.platform import self_check
ImportError: No module named platform
With this init file, you instead get a much clearer error message telling you
not to do this:
File "tensorflow/tensorflow/python/pywrap_tensorflow.py", line 25, in <module>
from tensorflow.python.platform import self_check
File "tensorflow/tensorflow/python/platform/self_check.py", line 27, in <module>
raise ImportError("Could not import tensorflow. Do not import tensorflow "
ImportError: Could not import tensorflow. Do not import tensorflow from its source directory; change directory to outside the TensorFlow source tree, and relaunch your Python interpreter from there.
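The guard behaves roughly like the following sketch. The function name, the directory check, and the `source_root` parameter are illustrative assumptions, not the real `self_check.py` implementation:

```python
import os

def preload_check(source_root="tensorflow"):
    """Hypothetical sketch of a source-tree import guard.

    Importing the package from inside its source checkout shadows the
    installed build, so the compiled modules under tensorflow.python cannot
    be found. Raise a descriptive error instead of a confusing one.
    """
    # If the current directory looks like the TensorFlow source tree,
    # refuse to continue with a clear message.
    if os.path.isdir(os.path.join(os.getcwd(), source_root, "python")):
        raise ImportError(
            "Could not import tensorflow. Do not import tensorflow from its "
            "source directory; change directory to outside the TensorFlow "
            "source tree, and relaunch your Python interpreter from there.")
```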
PiperOrigin-RevId: 221662931
Change src_device_ to send_device and dst_device_ to recv_device. This complies
with TensorFlow naming conventions, so that VirtualScheduler can handle graphs
generated on inspectz with _Send/_Recv nodes, and AutoGrappler does not need to
remove them.
PiperOrigin-RevId: 221660625
Also fixes a bug in KubernetesClusterResolver.master() where the attribute was not being retrieved correctly, and adds a test for it.
PiperOrigin-RevId: 221660325
Before this change, fractional_avg_pool and fractional_max_pool would take 3
randomness-related args: `seed`, `seed2`, and `deterministic`. The intended
behavior was to get a deterministic execution if `deterministic` was true using
`seed` and `seed2`.
After this change, these ops take a single `seed` arg. If `seed` is zero, the
execution is random. Otherwise, we use the graph-level random ops to derive
kernel-level `seed` and `seed2` values from `seed`, and pass them to the kernel
with `deterministic` set to True.
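The new single-seed contract can be sketched in plain Python. The helper name and the seed-derivation details below are assumptions for illustration, not the actual TensorFlow implementation:

```python
import random

def derive_kernel_seeds(seed):
    """Hypothetical sketch of the single-seed contract.

    seed == 0  -> random execution: the kernel gets fresh, arbitrary seeds.
    seed != 0  -> deterministic execution: a stable (seed, seed2) pair is
                  derived from `seed`, so repeated runs match.
    """
    if seed == 0:
        # Non-deterministic: pick arbitrary op seeds each call.
        return random.randrange(1, 2**31), random.randrange(1, 2**31)
    # Deterministic: derive a reproducible (seed, seed2) pair from `seed`.
    rng = random.Random(seed)
    return rng.randrange(1, 2**31), rng.randrange(1, 2**31)
```

With this shape of API, callers no longer juggle three arguments; determinism is implied by passing a nonzero seed.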
PiperOrigin-RevId: 221648311
- We'd miscompile rewritable slices that have other rewritable slices as
  input. This was because we were caching the slice inputs from the first time
  we looked at a rewritable Slice, when a rewrite could have changed one of the
  inputs to that Slice. Fix this by not caching SliceInputs.
- We'd sometimes try to create (trivial) ConcatV2 nodes with one input, which
isn't legal. Fix this by not creating these trivial ConcatV2 nodes.
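The second fix amounts to a guard before node creation. A minimal Python sketch, using a hypothetical tuple-based node representation rather than Grappler's actual graph API:

```python
def build_concat(inputs):
    """Return a ConcatV2 node over `inputs`, but never a trivial one.

    Illustrative sketch only: nodes are modeled as ("OpName", inputs)
    tuples, not real Grappler NodeDefs.
    """
    if len(inputs) == 1:
        # A one-input ConcatV2 is not a legal node; reuse the input directly.
        return inputs[0]
    return ("ConcatV2", list(inputs))
```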
PiperOrigin-RevId: 221644004
Allocating tensors of the expected size is necessary for adding them
to TensorLists in the case of cond_v2 nested in while_v2.
PiperOrigin-RevId: 221637330
We use the terms reduced shape and unreduced shape to refer to the logical
shapes and the original shapes of a 0-2-1 transpose. Since reduced shape and
unreduced shape could also refer to the result shape and source shape of a
reduction operation, the purpose of this CL is mainly to change the 0-2-1
transpose related code outside reduction implementation to use the words
normalized/unnormalized instead of reduced/unreduced. The reduction
implementation will be fixed in another CL that migrates the implementation to
use the kernel mapping scheme.
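To illustrate the terminology: the normalized shape of a 0-2-1 transpose is the 3-D shape obtained by collapsing the unnormalized (original) dimensions into three groups, of which the transpose swaps the last two. The helper below is an illustrative sketch with assumed names and split points, not the XLA normalization rule:

```python
from functools import reduce
import operator

def normalize_021_shape(shape, i, j):
    """Collapse an arbitrary-rank shape into a 3-D "normalized" shape
    [batch, major, minor] by multiplying the dims in [0, i), [i, j),
    and [j, rank). A 0-2-1 transpose then swaps `major` and `minor`.
    (Hypothetical helper for illustration only.)
    """
    prod = lambda dims: reduce(operator.mul, dims, 1)
    return [prod(shape[:i]), prod(shape[i:j]), prod(shape[j:])]
```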
PiperOrigin-RevId: 221575772
This reduces the structure-handling boilerplate in `Dataset` implementations that have matching input and output structure.
PiperOrigin-RevId: 221569355
Also define a class-specific placement deallocation function for `ValueAndTensorBuffer`, because some compilers warn if it is not present.
PiperOrigin-RevId: 221562487
The implementation of `tf.data.Dataset` now depends on the version of
TensorFlow: in 1.x we export `DatasetV1` and in 2.x we export `DatasetV2`.
Currently, the internal `dataset_ops.Dataset` symbol maps to `DatasetV1`, but
this will change after all internal tests are updated to 2.x compatibility.
This change also removes the deprecated `Dataset.from_sparse_tensor_slices()`
method from `DatasetV2`, since its replacement has long been available in
`Dataset.from_tensor_slices()`.
PiperOrigin-RevId: 221560852
This reflects the two reasons why we decluster nodes:
- To reduce device-to-host copies
- To reduce recompilations
I'm about to add a third so I thought this cleanup made sense.
PiperOrigin-RevId: 221553369
Copy this file over to the tensorflow/ directory. Currently, tensorflow/__init__.py and tensorflow/compat/<v1 or v2>/__init__.py are the same file, which is both confusing and causes issues.
PiperOrigin-RevId: 221552254
The session_inter_op_thread_pool() option is enabled in SessionConfig. Use it
when running session Run() calls on the default inter-op thread pool (pool 0).
PiperOrigin-RevId: 221549960