STT-tensorflow/tensorflow/python/distribute
cluster_resolver PY2 removal cleanup 2021-01-15 16:48:57 -08:00
coordinator PY2 removal cleanup 2021-01-15 16:48:57 -08:00
experimental PY2 removal cleanup 2021-01-15 16:48:57 -08:00
integration_test Raise meaningful error message when loading a ShardedVariable. 2020-12-21 15:59:59 -08:00
parallel_device PY2 removal cleanup 2021-01-15 16:48:57 -08:00
v1 PY2 removal cleanup 2021-01-15 16:48:57 -08:00
BUILD PY2 removal cleanup 2021-01-15 16:48:57 -08:00
central_storage_strategy.py Merge pull request from kushanam:distribute_dali_ctl 2020-10-19 09:25:22 -07:00
checkpoint_utils_test.py
checkpointing_test.py Add callable wrapper to CheckpointValueInitializer so that we can delay the variable restore until after variable creation scopes have been called. 2020-09-01 15:42:47 -07:00
collective_all_reduce_strategy_test.py Remove enable collective ops tests 2020-11-24 14:27:20 -08:00
collective_all_reduce_strategy.py Set a timeout to check health RPC 2020-10-21 13:02:25 -07:00
collective_util_test.py Fix constructor of CommunicationOptions 2020-11-11 13:17:33 -08:00
collective_util.py Fix constructor of CommunicationOptions 2020-11-11 13:17:33 -08:00
combinations_test.py Add convenient methods to write test combinations with and without tf.function 2021-01-12 12:30:57 -08:00
combinations.py Add convenient methods to write test combinations with and without tf.function 2021-01-12 12:30:57 -08:00
cross_device_ops_test.py Enable NCCL for all all-reduces 2020-11-24 14:55:12 -08:00
cross_device_ops.py Internal symbol name change. 2021-01-05 14:15:45 -08:00
cross_device_utils_test.py Refactor collective utils to be of one replica 2020-10-13 20:02:22 -07:00
cross_device_utils.py Enable NCCL for all all-reduces 2020-11-24 14:55:12 -08:00
custom_training_loop_gradient_test.py
custom_training_loop_input_test.py Rename "experimental_distribute_datasets_from_function" to "distribute_datasets_from_function". 2020-09-23 18:15:32 -07:00
device_util_test.py Try to deduce job, replica and task from config.list_logical_devices() again 2020-06-16 15:22:24 -07:00
device_util.py Use __slots__ for small classes 2020-06-28 18:41:22 +02:00
distribute_config.py
distribute_coordinator_context.py
distribute_coordinator_test.py Skip creating std server for evaluator. 2021-01-05 10:52:49 -08:00
distribute_coordinator.py Skip creating std server for evaluator. 2021-01-05 10:52:49 -08:00
distribute_lib_test.py Graduate experimental_hints to options in all_reduce/reduce/batch_reduce 2020-10-16 11:54:24 -07:00
distribute_lib.py Move the LossReduction class from tf to Keras. 2021-01-13 13:40:14 -08:00
distribute_utils_test.py Expand distribute_utils.regroup to work with collections.abc.Mapping-derived containers. 2021-01-15 12:46:22 -08:00
distribute_utils.py Expand distribute_utils.regroup to work with collections.abc.Mapping-derived containers. 2021-01-15 12:46:22 -08:00
distribution_strategy_context.py Generate replica_id tensor at call time 2020-07-27 19:21:33 -07:00
estimator_training.py
input_lib_test.py Fix flakiness in input_lib_test. 2021-01-14 14:27:13 -08:00
input_lib_type_spec_test.py Merge pull request from kushanam:keras_distribute_lib 2020-11-12 16:52:16 -08:00
input_lib.py Internal symbol name change. 2021-01-05 14:15:45 -08:00
input_ops_test.py
input_ops.py [tf.data + tf.distribute] Use RebatchDataset instead of LegacyRebatchDataset in distribution strategies when global batch size can be statically determined. 2020-09-30 12:18:30 -07:00
metrics_v1_test.py
mirrored_run.py Return the correct replica id within a sync group for MWMS. Currently we return the local replica id within a worker as opposed to within a sync group. 2020-10-09 13:20:06 -07:00
mirrored_strategy_test.py Retire MultiWorkerAllReduce 2020-08-27 00:12:37 -07:00
mirrored_strategy.py Turn on VariablePolicy for MirroredStrategy. 2020-10-29 14:41:21 -07:00
mirrored_variable_test.py Use utility to identify OnWrite and OnRead synchronized variables. 2020-07-27 14:14:19 -07:00
moving_averages_test.py Set 2 virtual cpus and 2 virtual gpus by default for test cases. 2020-11-03 16:57:08 -08:00
multi_process_lib.py Update multi_process_lib to handle file path for OSS keras build/test. 2020-12-07 15:19:40 -08:00
multi_process_runner_no_init_test.py TF Internal API: tf_export a few distribute-related symbols: 2020-10-07 14:38:53 -07:00
multi_process_runner_test.py Re-enable multi process pool runner tests 2020-10-26 11:58:41 -07:00
multi_process_runner.py MultiProcessPoolRunner: Comment correction as we're longer using atexit. Upon testing it seems we don't need _shutdown_all_pool_runners at the end of _pool_runner_worker either now. 2020-10-26 14:02:51 -07:00
multi_worker_continuous_run_test.py MultiProcessRunner: symbol replacement: barrier->get_barrier 2020-10-07 10:51:25 -07:00
multi_worker_test_base_test.py Use MPR for fault tolerance test 2020-08-21 00:08:42 -07:00
multi_worker_test_base.py [*.py,tensorflow/cc/framework/cc_op_gen.cc] Rename "Arguments:" to "Args:" 2020-12-22 09:24:04 +11:00
multi_worker_util_test.py Move away from deprecated asserts 2020-06-30 16:10:22 -07:00
multi_worker_util.py PSv2: Check that there is no more than one chief, and at least one ps/worker. Combine the validation logic with multi_worker_util. 2020-11-10 18:37:30 -08:00
numpy_dataset_test.py
numpy_dataset.py
one_device_strategy_test.py Add InputOption support to all remaining strategies. 2020-06-24 16:20:39 -07:00
one_device_strategy.py Merge pull request from kushanam:distribute_dali_ctl 2020-10-19 09:25:22 -07:00
packed_distributed_variable_test.py Support packed variable in DistributedVariable. Add an option to enable packed variable in TPUStrategy. 2020-06-18 20:12:02 -07:00
packed_distributed_variable.py Return the primary handle when it's in graph mode and not under a tpu context. 2020-12-15 22:10:51 -08:00
parameter_server_strategy_test.py fix typos in python directory 2020-10-29 16:21:24 +03:00
parameter_server_strategy_v2_test.py PSv2: Add checks that ParameterServerStrategy's run, reduce, experimental_distribute_dataset, and distribute_datasets_from_function are used with a ClusterCoordinator, and that run and reduce need to be used within a function that is used with schedule. 2020-11-25 12:30:24 -08:00
parameter_server_strategy_v2.py Raise meaningful error message when loading a ShardedVariable. 2020-12-21 15:59:59 -08:00
parameter_server_strategy.py PSv2: Dedup the legacy ParameterServerStrategy class (as the estimator usage of it uses ParameterServerStrategyV1). 2020-10-21 12:16:22 -07:00
ps_values_test.py Replace usages of Tensorflow DistributionStrategy method experimental_run_v2 with run. 2020-06-29 11:22:53 -07:00
ps_values.py [TF DistStrat] Add support for deepcopy on AggregatingVariable (PS) 2020-08-19 08:57:16 -07:00
random_generator_test.py Allows creating tf.random.Generator under distribution-strategy scopes. Different replicas will get different random-number streams. 2021-01-08 15:58:22 -08:00
README.md Graduate TPUStrategy from experimental. 2020-06-20 13:10:50 -07:00
reduce_util.py
remote_mirrored_strategy_eager_test.py
sharded_variable_test.py Raise meaningful error message when loading a ShardedVariable. 2020-12-21 15:59:59 -08:00
sharded_variable.py Raise meaningful error message when loading a ShardedVariable. 2020-12-21 15:59:59 -08:00
shared_variable_creator_test.py Move away from deprecated asserts 2020-06-30 16:10:22 -07:00
shared_variable_creator.py
single_loss_example.py Update minimize_loss_test to not rely on Keras. 2020-07-07 21:39:06 -07:00
step_fn.py
strategy_combinations_test.py Create different strategy based on TF1/2 in strategy_combinations 2020-10-09 17:02:10 -07:00
strategy_combinations.py Only call initialize_tpu_system once per process. 2020-10-28 17:31:50 -07:00
strategy_common_test.py Split strategy_common_test into two pieces as this test is currently timing out. 2020-10-13 10:11:48 -07:00
strategy_gather_test.py Fix and test all_gather gradient. 2020-10-21 03:47:11 -07:00
strategy_test_lib.py Remove numpy_datasets from V2 strategies 2020-10-12 14:30:17 -07:00
summary_op_util.py
test_util_test.py Order NCCL all-reduce with ordering token 2020-11-11 11:18:30 -08:00
test_util.py Order NCCL all-reduce with ordering token 2020-11-11 11:18:30 -08:00
tf_function_test.py Always retrace in tf.saved_model.save 2020-10-10 12:18:19 -07:00
tpu_strategy_compilation_test.py Pass non empty MLIR module serialized string when constructing TpuCompilationCacheKey. 2020-07-24 16:40:48 -07:00
tpu_strategy_test.py Preserve TPUDistributedVariables passed to TPUStrategy.run(). 2021-01-13 19:22:09 -08:00
tpu_strategy.py Preserve TPUDistributedVariables passed to TPUStrategy.run(). 2021-01-13 19:22:09 -08:00
tpu_values.py Return the primary handle when it's in graph mode and not under a tpu context. 2020-12-15 22:10:51 -08:00
values_test.py Disallow saving if the function cannot be used for inference 2020-10-15 21:08:51 -07:00
values_util.py Disallow saving if the function cannot be used for inference 2020-10-15 21:08:51 -07:00
values.py Turn on VariablePolicy for MirroredStrategy. 2020-10-29 14:41:21 -07:00
vars_test.py Add test_util.main() and test_util.set_logical_devices_to_at_least() 2020-10-06 16:30:51 -07:00
warm_starting_util_test.py
zero_batch_test.py

TensorFlow Distribute Libraries

Overview

tf.distribute.Strategy is a TensorFlow API to distribute training across multiple GPUs, multiple machines or TPUs. Using this API, users can distribute their existing models and training code with minimal code changes.

It can be used with TensorFlow's high-level APIs, tf.keras and tf.estimator, with just a few lines of code changes. It does so by making the underlying components of TensorFlow strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints.
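
As a small illustration of this (a minimal sketch, not taken from the examples below), a variable created inside a strategy's scope is automatically replaced by a distribution-aware variable:

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
  v = tf.Variable(1.0)  # created as a mirrored (per-device) variable, not a plain tf.Variable
print(type(v))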

Documentation

Distributed Training Guide

Distributed Training With Keras Tutorial

Distributed Training With Custom Training Loops Tutorial

Multi-worker Training With Keras Tutorial

Multi-worker Training With Estimator Tutorial

Save and Load with Distribution Strategy

Simple Examples

Using Keras compile/fit with GPUs.

import tensorflow as tf

# Create the strategy instance. It will automatically detect all the GPUs.
mirrored_strategy = tf.distribute.MirroredStrategy()

# Create and compile the keras model under strategy.scope()
with mirrored_strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
  model.compile(loss='mse', optimizer='sgd')

# Call model.fit and model.evaluate as before.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
model.fit(dataset, epochs=2)
model.evaluate(dataset)
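
Under MirroredStrategy, model.fit splits each global batch of 10 across the available replicas and keeps the mirrored variables in sync by all-reducing the gradients after every step. A common follow-up when adding more GPUs is to scale the global batch size with the number of replicas; a small sketch, reusing mirrored_strategy and model from above:

# Scale the global batch size by the number of replicas in sync.
batch_size_per_replica = 10
global_batch_size = batch_size_per_replica * mirrored_strategy.num_replicas_in_sync
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(global_batch_size)
model.fit(dataset, epochs=2)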

Custom training loop with TPUs.

# Connect to the TPU and create the strategy instance.
# (The tpu argument may need to name your TPU, depending on the environment.)
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
tpu_strategy = tf.distribute.TPUStrategy(resolver)


# Create the Keras model under strategy.scope().
with tpu_strategy.scope():
  model = tf.keras.layers.Dense(1, name="dense")

# Create custom training loop body as tf.function.
@tf.function
def train_step(iterator):
  def step_fn(inputs):
    images, targets = inputs
    with tf.GradientTape() as tape:
      outputs = model(images)
      loss = tf.reduce_sum(outputs - targets)
    grads = tape.gradient(loss, model.variables)
    return grads

  return tpu_strategy.run(
      step_fn, args=(next(iterator),))

# Run the loop body once on the dataset.
dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10)
input_iterator = iter(tpu_strategy.experimental_distribute_dataset(dataset))
train_step(input_iterator)
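
The step function above only computes and returns the per-replica gradients. In a full training loop an optimizer would normally be created under the same scope and the gradients applied inside step_fn; a minimal sketch of that variant (the optimizer choice is illustrative):

# Create the optimizer under the same scope as the model.
with tpu_strategy.scope():
  optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

@tf.function
def train_step_with_update(iterator):
  def step_fn(inputs):
    images, targets = inputs
    with tf.GradientTape() as tape:
      outputs = model(images)
      loss = tf.reduce_sum(outputs - targets)
    grads = tape.gradient(loss, model.variables)
    # apply_gradients aggregates the per-replica gradients before updating the variables.
    optimizer.apply_gradients(zip(grads, model.variables))
    return loss

  return tpu_strategy.run(step_fn, args=(next(iterator),))

train_step_with_update(input_iterator)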

Testing

Tests here should cover all distribution strategies to ensure feature parity. This can be done using the test decorators in combinations.py together with the strategy combinations defined in strategy_combinations.py.
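
Roughly, a strategy-parameterized test follows this pattern (a sketch; the test name and assertion are illustrative):

from absl.testing import parameterized

from tensorflow.python.distribute import combinations
from tensorflow.python.distribute import strategy_combinations
from tensorflow.python.distribute import test_util
from tensorflow.python.platform import test


class MyFeatureTest(test.TestCase, parameterized.TestCase):

  @combinations.generate(
      combinations.combine(
          distribution=strategy_combinations.all_strategies,
          mode=["eager"]))
  def testNumReplicas(self, distribution):
    # Runs once per strategy in all_strategies; every strategy reports
    # at least one replica participating in sync training.
    self.assertGreaterEqual(distribution.num_replicas_in_sync, 1)


if __name__ == "__main__":
  test_util.main()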