I forgot about the trouble of using '<' in pip dependencies on the CLI.
This time I verified the build works.
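For context (an illustrative command, not taken from this change): an unquoted requirement such as `pip install numpy<1.19.0` is parsed by the shell as an output redirection, so the constraint has to be quoted, e.g. `pip install 'numpy<1.19.0'`.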
PiperOrigin-RevId: 318332763
Change-Id: I347aee8121464232222e72d89409c600159ee80c
Since `numpy==1.19.0` contains at least one breaking ABI change (numpy/numpy#15355), we need to either put an upper bound on the dependency's range or fix our code to support both ABIs. Since the latter requires more changes, we prefer the former for now.
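A minimal sketch of the resulting pin (the package name and lower bound here are illustrative, not taken from this change):

    # Illustrative setup.py fragment: cap the dependency below the
    # ABI-breaking release.
    from setuptools import setup

    setup(
        name='example-package',
        install_requires=[
            'numpy >= 1.16.0, < 1.19.0',
        ],
    )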
PiperOrigin-RevId: 317699730
Change-Id: Ia62a779f9ec42d63d3fac1b69cd75e6084358d2f
RELNOTES=Add a `tf.distribute.cluster_resolver.TPUClusterResolver.connect` API to simplify TPU initialization.
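A minimal usage sketch ('my-tpu' is a placeholder TPU name or address):

    import tensorflow as tf

    # connect() builds the resolver and initializes the TPU system in one call.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver.connect(
        tpu='my-tpu')
    strategy = tf.distribute.experimental.TPUStrategy(resolver)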
PiperOrigin-RevId: 317439811
Change-Id: I2f1a944f3c440356b21da27a72855c969f1c3b3b
The function is called on a distributed iterator and returns an `Optional` that either contains the next value from the distributed iterator (the PerReplica input) or no value, if the iterator has reached the end of the sequence.
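A minimal sketch, assuming this refers to the iterator's `get_next_as_optional` method and using an illustrative strategy and dataset:

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    dataset = tf.data.Dataset.range(4).batch(2)
    iterator = iter(strategy.experimental_distribute_dataset(dataset))

    # The Optional is empty once the iterator is exhausted.
    optional = iterator.get_next_as_optional()
    if optional.has_value():
        per_replica = optional.get_value()  # the PerReplica input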
PiperOrigin-RevId: 317248910
Change-Id: Ide217da1aff1d62f8d0d8f43423be2d859d933d3
It was previously hidden in the `**kwargs`, and we were also missing documentation for it.
The existing test cases should already cover its functionality.
PiperOrigin-RevId: 317197835
Change-Id: Icfae1e177eeb886b41345078f6b93f282a94df5b
This sampler supports broadcasting of its input parameters and places the number of samples on the left of the output shape rather than on the right.
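Assuming this refers to `tf.random.stateless_parameterized_truncated_normal`, a sketch of the broadcasting and shape convention:

    import tensorflow as tf

    # Parameters broadcast against each other to [2, 3]; the 10 samples
    # occupy the leftmost dimension of the output shape.
    samples = tf.random.stateless_parameterized_truncated_normal(
        shape=[10, 2, 3], seed=[7, 17],
        means=0., stddevs=tf.ones([2, 3]),
        minvals=[-1., -2., -1000.], maxvals=[[10000.], [1.]])
    print(samples.shape)  # (10, 2, 3)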
PiperOrigin-RevId: 317129622
Change-Id: I4b62ad2e89a9637ae8b30b73af4b662ad0caa943
A new class `LoadOptions` is created, similar to the existing `SaveOptions`. The option `experimental_io_device` is the only option added at this time; it is used to set the io_device when loading a SavedModel for distributed training.
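A minimal usage sketch (the device string and model path are placeholders):

    import tensorflow as tf

    # Route SavedModel reads through a specific device when loading under
    # a distribution strategy.
    options = tf.saved_model.LoadOptions(
        experimental_io_device='/job:localhost')
    model = tf.saved_model.load('/tmp/saved_model', options=options)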
PiperOrigin-RevId: 316557681
Change-Id: If3f1eae18b09085ff11dc8a6882fabcb18f5f48e
- A data dump file set generated by tfdbg2 can contain
multiple subsets when multiple hosts are involved
in the instrumented TensorFlow job (e.g., TPUs and Parameter Servers).
Currently, nothing in those subsets of files indicates that
they belong to the same instrumented TF job.
- This CL addresses this problem by adding a field
(`tfdbg_run_id`) to the metadata proto used by those files.
- The DebugEventsWriter code is revised so that this new
field is written to the metadata file of the file set when the
writer is constructed.
- Also in this CL: remove the previous 1-arg `GetDebugEventsWriter(dump_root)`,
which creates the writer object if it doesn't exist at the specified
dump_root. Replace it with `LookUpDebugEventsWriter(dump_root)`, which only
looks up the writer object and returns a non-OK status if no such object
has been created at `dump_root`. This makes the code less error-prone by
keeping only the fully explicit, 3-arg `GetDebugEventsWriter()`.
PiperOrigin-RevId: 316537044
Change-Id: Id5be0b771fbf37c0fc796f1514ed858a0e6d38f0
This CL makes the following tf.data API-related changes:
1) `tf.data.Iterator` and `tf.data.IteratorSpec` are exposed in the v2 API
2) `tf.experimental.Optional` is exposed in the API (previously exposed as `tf.data.experimental.Optional`)
3) `tf.experimental.Optional.none_from_structure` and `tf.experimental.Optional.value_structure` are renamed to `tf.experimental.Optional.empty` and `tf.experimental.Optional.element_spec` respectively (see the sketch after this list)
4) `tf.OptionalSpec.value_structure` is renamed to `tf.OptionalSpec.element_spec`
5) reflects these changes in documentation and code
6) adds testable docstrings for the newly exposed APIs
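A minimal sketch of the renamed `Optional` API:

    import tensorflow as tf

    optional = tf.experimental.Optional.from_value(42)
    print(optional.has_value())   # True (as a scalar bool tensor)
    print(optional.element_spec)  # TensorSpec(shape=(), dtype=tf.int32, ...)

    # Formerly none_from_structure: an empty Optional with the same structure.
    empty = tf.experimental.Optional.empty(optional.element_spec)
    print(empty.has_value())      # False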
PiperOrigin-RevId: 316003328
Change-Id: I7b7e79942308b3d2f94b988c31729980fb69d961
This will make these built-in operators more amenable to dispatching for library developers.
This includes:
tf.__operators__.add
tf.__operators__.ne
tf.__operators__.eq
tf.__operators__.getitem
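A small illustration (the direct call mirrors what the Python operator already does on tensors; the other endpoints follow the same pattern):

    import tensorflow as tf

    x = tf.constant([1, 2])
    y = tf.constant([3, 4])

    # x + y routes through the same dispatchable endpoint:
    print(x + y)                       # [4 6]
    print(tf.__operators__.add(x, y))  # [4 6]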
PiperOrigin-RevId: 315998480
Change-Id: Icf61e24a2c037eaf2c4d170967eb2b8ac18f5961
This API returns details about physical devices. Right now, only GPUs are supported, and the only fields are "name" and "compute_capability". The primary motivation is to determine whether mixed precision will run well, as it only results in significant speedups on GPUs with compute capability 7.0 and greater. In general, querying device details is rarely necessary, as TensorFlow runs most ops well on all devices, but mixed precision is an exception.
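A minimal sketch (the mixed-precision decision shown is one possible use, not part of the API itself):

    import tensorflow as tf

    gpus = tf.config.list_physical_devices('GPU')
    if gpus:
        details = tf.config.experimental.get_device_details(gpus[0])
        cc = details.get('compute_capability')  # e.g., (7, 0)
        if cc is not None and cc >= (7, 0):
            print('Mixed precision should yield significant speedups.')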
PiperOrigin-RevId: 315943445
Change-Id: I077fdc8f87a713ace74037fd2d82eede48740067