The fix is to always create a host tensor: the inputs to
ConvertToEagerTensor are host Python objects, so it makes sense for
the created tensor to be a host tensor too. The user can control
GPU copies by using tf.identity.
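For example, a GPU copy could be made explicit roughly like this (a sketch,
assuming a GPU device is available):
  x = tf.constant([1.0, 2.0])  # created on the host (CPU)
  with tf.device('/GPU:0'):    # assumption: a GPU is present
    y = tf.identity(x)         # explicit host-to-device copy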
PiperOrigin-RevId: 297174740
Change-Id: I01f2aa9be3eb29fd49c7d81823e044db292b2d7c
Now iter(tensor) raises an error if the tensor is not iterable. Previously, iter(tensor) never raised an error, but next(iter(tensor)) could.
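A rough illustration of the new behavior (the exact error message may differ):
>>> iter(tf.constant(3))               # scalar: now raises at iter() time
>>> it = iter(tf.constant([1, 2, 3]))  # vector tensors remain iterable
>>> next(it)                           # yields the first element tensor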
PiperOrigin-RevId: 293717639
Change-Id: Ic01280ff174478b4dbd2954163f9e76d8ed00d02
This makes it possible to get rid of redundant H->D transfers. For example,
tf.gather(tf.constant([42.0]), 0)
would previously allocate both [42.0] and 0 on the CPU, and then transfer
both to the GPU to compute Gather.
This could potentially hurt ops with inputs pinned to host, e.g. Range.
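Placement can be checked roughly like this (a sketch; the exact devices
depend on what is available):
x = tf.constant([42.0])
idx = tf.constant(0)
print(x.device, idx.device)      # where the inputs were placed
print(tf.gather(x, idx).device)  # where Gather actually ran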
PiperOrigin-RevId: 275252442
Change-Id: I7d85d3314b9701e7b9df76acea12c2fcfdf2960e
This makes it possible to get rid of redundant H->D transfers. For example,
tf.gather(tf.constant([42.0]), 0)
would previously allocate both [42.0] and 0 on the CPU, and then transfer
both to the GPU to compute Gather.
This could potentially hurt ops with inputs pinned to host, e.g. Range.
PiperOrigin-RevId: 274806340
Change-Id: Id2797732ff3eb722302e4636b0bd6cc0ae0f3df1
1. When computing an eager tensor fails, have EagerTensor raise the proper computation exception instead of ValueError.
2. Stop using StreamingEnqueueAsync for the destroy-tensor-handle request.
With this change, turning on streaming RPC no longer breaks dataset iterators.
PiperOrigin-RevId: 267222175
This change removes a possible inconsistency between the global
reference to EagerContext and the EagerTensor.* context= argument. Specifically:
* EagerTensor.__init__ now fetches the context via GetPyEagerContext;
* EagerTensor._copy_to_device uses EagerTensor.context.
PiperOrigin-RevId: 260364374
The only current use of this reference is to ensure that Python
deletes the eager Context after deleting all tensors that use it.
For tensors created from Python (by calling the EagerTensor constructor),
this CL passes the whole Python Context object instead of just the pointer
to TFE_Context.
For tensors created from C++ (via EagerTensorFromHandle), this CL retrieves
the Context by calling the Python context() method. I tried passing the
Context around instead of retrieving it from Python, but it required a
fair amount of extra and mostly useless plumbing.
PiperOrigin-RevId: 259440004
Prior to this change, the conversion logic was duplicated between
EagerTensor_init and ConvertToTensor in pywrap_tfe_src.cc.
PiperOrigin-RevId: 258787751
While this change makes EagerTensor's __array__ redundant, it
does not remove __array__ due to numpy/numpy#13507.
Note also that unlike __array__, the buffer interface does not lead
to a performance regression when np.array infers the dimensionality
of [tensor]. See #27692 and numpy/numpy#8562 for details.
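A minimal sketch of what the buffer interface enables (assuming a
CPU-resident tensor):
>>> import numpy as np
>>> t = tf.constant([1.0, 2.0, 3.0])
>>> view = memoryview(t)  # buffer protocol exposes the tensor's memory
>>> arr = np.asarray(t)   # numpy can use the buffer instead of __array__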
PiperOrigin-RevId: 249888195
Previously, EagerTensor allowed lookups with non-scalar tensors, e.g.
>>> index = tf.constant([0])
>>> [42][index]
42
If this change breaks your code, apply tf.squeeze to the index tensor.
For the above example this would look like
>>> [42][tf.squeeze(index)]
42
PiperOrigin-RevId: 249245276
Two notes:
* Prior to this change it always produced a copy. See comment in
EagerTensor_numpy for details.
* EagerTensor.numpy still returns a copy to ensure no change in behavior.
This is likely to change in a follow-up CL.
PiperOrigin-RevId: 246378787
A number of tests used num_gpus(), which in turn initializes the context.
This seems unintended, and I plan to replace num_gpus() in a follow-up
change. For now, let's remove the uses in tests as much as possible.
PiperOrigin-RevId: 242531782
With the addition of many tf.config APIs that modify the context, we should
expose an explicit function to initialize the context. This forces
clients that require the context to be set up first to fail rather than
implicitly setting up the context. Doing so makes it very clear when we
are initializing the eager context.
One observation from this change is that function calls like
num_gpus() really shouldn't cause the context to be set up.
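A hedged sketch of the intended ordering (assuming the explicit entry point
is context.ensure_initialized()):
from tensorflow.python.eager import context
# Apply tf.config settings here, before anything touches the context.
context.ensure_initialized()  # assumed explicit initialization point
# Configuration changes that require a fresh context may fail after this.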
PiperOrigin-RevId: 242525826
Before this change:
Eager mode would always try to infer a dtype and convert it to int32 (since TF
prefers that), whereas graph mode would use the numpy dtype directly.
Eager would do this even when converting lists of scalars, whereas graph would
downcast.
After this change:
Eager and graph behave the same.
tf.convert_to_tensor(np.int64(1)).dtype == tf.int64
tf.convert_to_tensor([np.int64(1)]).dtype == tf.int32
PiperOrigin-RevId: 223823113
NumPy also casts non-bools to bool, so after this change EagerTensors whose dtype is not bool can be converted to Python bools. This also allows an EagerTensor of any shape with a single element to be converted to a Python bool.
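For example (a sketch of the new behavior):
>>> bool(tf.constant(2))        # non-bool dtype, nonzero value
True
>>> bool(tf.constant([[0.0]]))  # any shape, as long as there is one element
False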
PiperOrigin-RevId: 223409869
Previously, eager would always cast all values to the requested dtype.
This didn't match graph mode, which would only allow casting values between
'compatible' types (e.g. all integer types are compatible with each other, but
no floating type is compatible with any integer type).
Graph mode uses _AssertCompatible (dc10ac4559/tensorflow/python/framework/tensor_util.py (L345))
to determine type compatibility. Eager mode type inference is a little
different.
After this CL, the intention is that constant_op.constant behaves identically in graph and eager.
Note that this doesn't check correctly for overflows (in graph or eager). This means tf.constant(544444, dtype=tf.uint8) < 200 returns True in both modes.
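A rough illustration of the compatibility rule (exact error types may differ):
>>> tf.constant(1, dtype=tf.float32)     # int value, float dtype: compatible
>>> tf.constant(1.5, dtype=tf.int32)     # float value, int dtype: rejected
>>> tf.constant(544444, dtype=tf.uint8)  # overflow is not caught; value wraps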
PiperOrigin-RevId: 218717988
This fixes dir() calls on instances of eager tensors so that they correctly
access the __dict__ of EagerTensorType.
Earlier, dir() would fail due to an infinite "loop" in subtype_dict: 7e610bcdf1/Objects/typeobject.c (L2145)
get_builtin_base_with_dict returns the same type (though I'm not sure this is reasonable behavior given its name),
and the __dict__ getter for the type is subtype_dict, creating an infinite tail recursion.
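A trivial check of the fixed behavior:
>>> t = tf.constant(1.0)
>>> 'numpy' in dir(t)
True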
PiperOrigin-RevId: 212020695
This code still differs between py2 and py3 (__module__ returns
"__builtin__" in py2, and the correct value in py3), but it's strictly better
than before, since earlier it would differ between py2 and py3 and generate an
error in py3. We don't seem to correctly initialize tp_dict in py2, so even
when passing the correct, fully qualified name, we get back "__builtin__".
Fixes #20701
PiperOrigin-RevId: 204762170
becomes a float64 tensor.
Earlier, py_seq_tensor would fall back to float32 unless float64 was explicitly
requested (which would not happen if we had no other information).
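A hedged example of the intended inference (assuming the change covers numpy
float64 scalars):
>>> import numpy as np
>>> tf.convert_to_tensor(np.float64(3.14)).dtype
tf.float64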
PiperOrigin-RevId: 197977260
Allow type-inference from a different input tensor, similar to args_to_matching_eager.
- Update TFE_Py_TensorShapeSlice to take tuples.
- Update int values to allow int/long in py2
Automated g4 rollback of changelist 192184809
PiperOrigin-RevId: 193696790
Allow type-inference from a different input tensor, similar to args_to_matching_eager.
- Update TFE_Py_TensorShapeSlice to take tuples.
- Update int values to allow int/long in py2
PiperOrigin-RevId: 192160407
* Previously, strong assumptions were made about how numpy.ndarrays
are formatted as strings. This led to breakages due to certain
unclear changes in numpy or its dependencies. This CL relaxes those
assumptions and fixes the affected tests for tfdbg and eager.
* The tests in tensor_format_test.py are simplified through helper
methods.
PiperOrigin-RevId: 181494182