MultiWorkerMirroredStrategy. Possible choices: AUTO; RING, which uses `common_runtime/ring_reducer.{cc,h}`; and NCCL, which uses NVIDIA NCCL for all-reduce.

PiperOrigin-RevId: 236000699
path: "tensorflow.distribute.experimental.CollectiveCommunication"
tf_class {
  is_instance: "<enum \'CollectiveCommunication\'>"
  member {
    name: "AUTO"
    mtype: "<enum \'CollectiveCommunication\'>"
  }
  member {
    name: "NCCL"
    mtype: "<enum \'CollectiveCommunication\'>"
  }
  member {
    name: "RING"
    mtype: "<enum \'CollectiveCommunication\'>"
  }
}
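The golden file above pins the public surface of the `CollectiveCommunication` enum. As a rough sketch of what it is checking, the Python-side definition looks approximately like the following standalone reproduction (the real definition lives inside TensorFlow's distribute package; the string values here are an assumption for illustration):

```python
import enum


class CollectiveCommunication(enum.Enum):
    """Sketch of the enum the golden file tracks.

    Assumption: members carry their own names as string values;
    the comments reflect the commit message above.
    """
    AUTO = "AUTO"  # let the runtime pick the implementation
    RING = "RING"  # ring all-reduce via common_runtime/ring_reducer.{cc,h}
    NCCL = "NCCL"  # NVIDIA NCCL for all-reduce


# The golden file asserts exactly these three members exist.
print(sorted(m.name for m in CollectiveCommunication))  # prints ['AUTO', 'NCCL', 'RING']
```

The API-compatibility test regenerates a pbtxt like the one above from the live Python object and diffs it against this file, so adding or removing an enum member fails the test until the golden file is updated.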