The existing TF_AllocateTensor & TFE_NewTensorHandle APIs do not take a
TFE_Context, which is undesirable because the TFE_Context indicates ownership
of the tensor. We therefore add new APIs that supersede the existing ones.
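For illustration, here is a rough sketch of what context-aware variants could
look like. The names TFE_AllocateHostTensor and TFE_NewTensorHandleFromTensor
below are assumptions standing in for whatever APIs this change introduces,
not a statement of the final surface:

#include "tensorflow/c/eager/c_api.h"
#include "tensorflow/c/eager/c_api_experimental.h"

// Sketch only: allocate a host tensor against a context, fill it, and wrap it
// in a handle tied to the same context, so ownership is unambiguous.
// TFE_AllocateHostTensor / TFE_NewTensorHandleFromTensor are assumed names.
void MakeHandle(TFE_Context* ctx, TF_Status* status) {
  int64_t dims[] = {2};
  TF_Tensor* t = TFE_AllocateHostTensor(ctx, TF_FLOAT, dims, 1, status);
  if (TF_GetCode(status) != TF_OK) return;
  ((float*)TF_TensorData(t))[0] = 1.0f;
  ((float*)TF_TensorData(t))[1] = 2.0f;
  TFE_TensorHandle* h = TFE_NewTensorHandleFromTensor(ctx, t, status);
  TF_DeleteTensor(t);  // assumes the handle keeps its own reference to the data
  if (TF_GetCode(status) == TF_OK) TFE_DeleteTensorHandle(h);
}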
PiperOrigin-RevId: 305126310
Change-Id: I9863ebc692d48875c61b79197ab418f29503a8c6
Adds an experimental C API for initializing TPUs and retrieving the topology, and runs it when initializing new logical devices in the eager context.
This does mean that running tf.tpu.experimental.initialize_tpu_system double-initializes the TPU system the first time if executed eagerly (the eager context initialization will always run first). I don't think this hurts too much.
My main motivation is to prepare for a device-agnostic distribution API: no "if TPU" branches, and the topology is available to intermediate APIs.
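As a rough sketch of how such a call could be shaped (the function name
TFE_InitializeTPUSystemForJob below is hypothetical; only "initialize the TPU
system and retrieve the topology" reflects what this change provides):

#include "tensorflow/c/c_api.h"
#include "tensorflow/c/eager/c_api.h"

// Hypothetical sketch: initialize the TPU system for a job and read back the
// topology as a serialized proto in a TF_Buffer.
void InitTPU(TFE_Context* ctx, TF_Status* status) {
  TF_Buffer* topology = TF_NewBuffer();
  TFE_InitializeTPUSystemForJob(ctx, "localhost", topology, status);  // hypothetical name
  if (TF_GetCode(status) == TF_OK) {
    // topology->data / topology->length hold the serialized topology proto.
  }
  TF_DeleteBuffer(topology);
}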
PiperOrigin-RevId: 289917514
Change-Id: I5fe22ef942bbdecac105d6a3fd7b9609b2e4e7bf
This code is no longer used by Swift, so it seems best to
remove it to simplify the uses of TensorHandle.
PiperOrigin-RevId: 281406036
Change-Id: Ib98a84cb767fe76eee6f41d7e86c02e2eee7e98f
Adds experimental C APIs for building a graph by symbolically executing a
sequence of eager ops. This involves the following changes:
1. Extended the eager C++ layer with symbolic tensor support, where a
tensorflow::TensorHandle represents a graph node whose execution yields the
"concrete" tensor value. In contrast, the existing tensor handles are concrete
tensors.
Symbolic tensors can be created from a graph node (represented as TF_Output) via
TFE_NewTensorHandleFromTFOutput(). The associated graph node can be retrieved
from a symbolic tensor via TFE_GetTFOutputFromTensorHandle().
2. Added another experimental C API, TFE_AddEagerOpToGraph(), for symbolic
execution of a TFE_Op: it converts the op to a graph node, and the graph is
associated with a newly added TFE_TraceContext object that is passed into the
C API call.
3. If an eager op OP takes any concrete tensors as input,
TFE_AddEagerOpToGraph() will create symbolic tensors for them to serve as
inputs to the graph node converted from OP. When executing that graph, those
concrete tensors will be fed as inputs (see the sketch after this list).
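Taken together, the surface described by the items above looks roughly like
the following declarations. The function and type names come from this change;
the exact signatures are assumptions:

// Sketch of the experimental tracing API; signatures are assumptions.
typedef struct TFE_TraceContext TFE_TraceContext;  // owns the graph being built

// Wrap a graph node as a symbolic tensor handle, and recover the node again.
TFE_TensorHandle* TFE_NewTensorHandleFromTFOutput(TF_Output output,
                                                  TF_DataType dtype);
TF_Output TFE_GetTFOutputFromTensorHandle(TFE_TensorHandle* handle,
                                          TF_Status* status);

// Convert an eager op into a graph node inside the trace context. Concrete
// inputs are replaced by symbolic (Placeholder-backed) tensors whose values
// are fed when the graph is executed.
void TFE_AddEagerOpToGraph(TFE_Op* op, TFE_TraceContext* trace_ctx,
                           TF_Output* outputs, int* num_outputs,
                           TF_Status* status);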
Here's an example tracing use case illustrated by Swift code:
let x = Tensor<Float>(1.0)
let y = x + x
The first statement is executed on the host, yielding a concrete tensor value
for x.
The second statement is first symbolically executed to create a
graph (containing the "Add" node). Next, the graph is executed to compute y.
During this symbolic execution, TFE_AddEagerOpToGraph() is called on the "Add"
eager op for x + x, to insert an "Add" graph node. The node takes as input a
symbolic tensor (a Placeholder graph node) corresponding to x. When executing
the graph, the concrete tensor of x is then fed as the input.
TFE_AddEagerOpToGraph() does not yet support TF op attributes.
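A C-level sketch of the same trace, under the assumed signatures above
(TFE_NewTraceContext is likewise a hypothetical constructor for the trace
context):

// Sketch: trace "x + x" into a graph instead of executing it eagerly.
// `x` is the concrete tensor handle produced by the first statement.
void TraceAdd(TFE_Context* ctx, TFE_TensorHandle* x, TF_Status* status) {
  TF_Graph* graph = TF_NewGraph();
  TFE_TraceContext* trace = TFE_NewTraceContext(graph);  // hypothetical constructor

  // Build the eager "Add" op as usual.
  TFE_Op* add = TFE_NewOp(ctx, "Add", status);
  TFE_OpAddInput(add, x, status);
  TFE_OpAddInput(add, x, status);

  // Instead of TFE_Execute, insert the op into the graph. The concrete input x
  // becomes a Placeholder-backed symbolic tensor to be fed at graph run time.
  TF_Output y;
  int num_outputs = 1;
  TFE_AddEagerOpToGraph(add, trace, &y, &num_outputs, status);

  TFE_DeleteOp(add);
  // ... run graph, feeding x's concrete tensor for its placeholder, to obtain y.
}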
An example call site of this feature is in Swift for TensorFlow at
https://github.com/apple/swift/pull/21589 (where some of the above C APIs are
subject to name changes).
PiperOrigin-RevId: 230018312