This folder contains the ApiDef proto definitions of TensorFlow operations.
The canonical source of documentation for these operations is the base_api/ directory.
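
As an illustrative sketch (not a file copied from this repository), an ApiDef in base_api/ is a text-format proto along these lines; the file name `api_def_ExampleOp.pbtxt` and the op name `ExampleOp` are hypothetical:

```
# base_api/api_def_ExampleOp.pbtxt -- hypothetical example, for illustration only.
op {
  # Name of the op as registered in the TensorFlow graph.
  graph_op_name: "ExampleOp"
  # One-line summary shown in the generated API documentation.
  summary: "Short description of what the op does."
  # Longer, multi-line description using the <<END ... END heredoc form.
  description: <<END
A more detailed explanation of the op's behavior, written in Markdown.
END
}
```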