Changed methods:
- adapt(data, batch_size=None, steps=None, reset_state)
Added methods:
- update_state
- merge_state
- finalize_state
- compile
- make_adapt_function
Reimplements adapt on top of existing Model.fit utilities.
In follow-up changes, each subclass will be migrated to use the new API methods
directly, and the CombinerPreprocessingLayer class will be removed.
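As a rough sketch of the intended lifecycle (the layer, its statistics, and the reset_state default below are illustrative assumptions, and the loop is a simplification of what the Model.fit utilities provide):

    import tensorflow as tf

    class RunningMean(tf.keras.layers.Layer):
        # Toy stateful layer that learns the feature mean during adapt().

        def __init__(self, **kwargs):
            super().__init__(**kwargs)
            self.total = tf.Variable(0.0, trainable=False)
            self.count = tf.Variable(0.0, trainable=False)
            self.mean = tf.Variable(0.0, trainable=False)

        def update_state(self, data):
            # Accumulate sufficient statistics for one batch.
            self.total.assign_add(tf.reduce_sum(tf.cast(data, tf.float32)))
            self.count.assign_add(tf.cast(tf.size(data), tf.float32))

        def finalize_state(self):
            # Fold the accumulated statistics into the layer's final state.
            self.mean.assign(self.total / self.count)

        def adapt(self, data, batch_size=None, steps=None, reset_state=True):
            if reset_state:
                self.total.assign(0.0)
                self.count.assign(0.0)
            ds = tf.data.Dataset.from_tensor_slices(data).batch(batch_size or 32)
            for batch in ds.take(steps if steps is not None else -1):
                self.update_state(batch)
            self.finalize_state()

        def call(self, inputs):
            return tf.cast(inputs, tf.float32) - self.mean

In this shape, merge_state would combine the total/count accumulators across layer copies, and compile/make_adapt_function would presumably play the roles that Model.compile and Model.make_train_function play for fit.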
PiperOrigin-RevId: 352102154
Change-Id: If993c07e68b6652896010a25a21dad0560e93329
We already deprecated them, so they should no longer be used at all. CI is failing because it picks up a Python 3.5 default; subsequent changes will fix the broken CI jobs.
PiperOrigin-RevId: 352044870
Change-Id: I50a62d322ec05781beea78a27704ee7849c277e1
This argument will allow specifying a value which should always map to the zero
index. For now, this will only be supported for a single tensor input as the
desired behavior when crossing multiple inputs is unclear.
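For illustration only, assuming the argument surfaces as mask_value on the Hashing layer (neither the argument name nor the layer is confirmed by the text above):

    import tensorflow as tf

    # "" is reserved and always maps to index 0; all other values hash
    # into the remaining bins.
    layer = tf.keras.layers.experimental.preprocessing.Hashing(
        num_bins=3, mask_value="")
    print(layer(tf.constant([[""], ["cat"], ["dog"]])))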
PiperOrigin-RevId: 351904657
Change-Id: I8ae3fd88ef94f7b1244cd1a6da7adbc2a40dfef1
Because the caller scripts use Bazelisk, the Docker job breaks whenever we want to use a Bazel version that is no longer the default one.
PiperOrigin-RevId: 351891433
Change-Id: I6caf0b5940934a3a90737f7fae288c42953a23f1
This enables a new mode of reading from the tf.data service, where consumers read from tasks in a coordinated fashion, instead of the normal first-come first-served.
The main use case for this is coordinated bucketization for synchronous training, where we want to ensure that at each step consumers get batches with elements of similar sizes. This mitigates the inefficiency of some consumers training slowly on large examples while others train quickly on small examples and then block, waiting for the slower examples to be processed.
When `consumer_index` and `num_consumers` are specified to `distribute`, each task will enforce a strict round-robin order, where its first element goes to consumer 0, second element to consumer 1, and so on. This requires that all consumers consume the same number of elements.
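A rough usage sketch (the dispatcher address and job name are placeholders):

    import tensorflow as tf

    dataset = tf.data.Dataset.range(100).repeat().batch(4)
    consumers = [
        dataset.apply(tf.data.experimental.service.distribute(
            processing_mode="parallel_epochs",
            service="grpc://dispatcher_address:5000",
            job_name="coordinated_read_job",  # consumers must share a job
            consumer_index=i,                 # this consumer's position
            num_consumers=2))                 # total number of consumers
        for i in range(2)
    ]
    # Within each task, element 0 goes to consumer 0, element 1 to
    # consumer 1, element 2 back to consumer 0, and so on.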
PiperOrigin-RevId: 351625063
Change-Id: I9b400f55ad61406cb125af8225096e7ff5dc4b0c
Caching the variable scope causes the layer to be "poisoned" when used within a tf.function, since if the layer is called for the first time inside a tf.function, then a FuncGraph scope is captured and then re-entered on every subsequent call. This caching was simply a graph-building (Python) performance optimization and can be skipped if Eager is enabled.
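The hazardous pattern looks roughly like this (an illustrative sketch, not the actual base_layer code):

    import tensorflow as tf

    class CachedScopeLayer(tf.keras.layers.Layer):
        def call(self, inputs):
            # Anti-pattern: cache graph-scoped state on the first call.
            if not hasattr(self, "_cached_graph"):
                self._cached_graph = tf.compat.v1.get_default_graph()
            # If the first call ran inside a tf.function, the cached object
            # is a FuncGraph, and re-entering it on later eager calls
            # "poisons" the layer.
            return inputs

    layer = CachedScopeLayer()
    tf.function(lambda x: layer(x))(tf.ones([2]))  # caches a FuncGraph
    layer(tf.ones([2]))  # eager call still sees the stale FuncGraph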
PiperOrigin-RevId: 351286068
Change-Id: Ia82958b8fca83bac36f6bc3ce4dd08a8e5011ca0
To enable optimization with sparsity, use:
converter.optimizations = [tf.lite.Optimize.SPARSITY]
converter.convert()
Note:
1) This feature is experimental
2) It requires during-training pruning to be effective.
3) Not all kernels have been optimized for sparse execution, so the initial benefit will primarily be in the model size on disk.
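A fuller sketch, assuming model is a tf.keras.Model that was pruned during training with the TF Model Optimization Toolkit:

    import tensorflow as tf

    # model: a Keras model pruned during training (assumed to exist).
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.SPARSITY]
    tflite_model = converter.convert()  # sparse tensors shrink the flatbuffer
    with open("model_sparse.tflite", "wb") as f:
        f.write(tflite_model)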
PiperOrigin-RevId: 351245576
Change-Id: I2a771f2a5ead92bdf93821af9f8058b0957b5aef
All strategies are supported except for CentralStorageStrategy and ParameterServerStrategy.
This CL also removes the CompositeTensor superclass from Generator. Generator is a wrapper around tf.Variable, and because tf.Variable is not a CompositeTensor, Generator can't be a CompositeTensor in theory. Previously we made it a CompositeTensor by returning Variable.handle, but that breaks down when the variable is a DistributedVariable (in cross-replica context).
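A minimal sketch of creating a generator under a strategy (device setup elided; MirroredStrategy stands in for any of the supported strategies):

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        gen = tf.random.Generator.from_seed(1234)

    @tf.function
    def sample():
        # Each replica advances its own copy of the generator state.
        return gen.normal(shape=[2])

    print(strategy.run(sample))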
PiperOrigin-RevId: 350851648
Change-Id: I5f4d77ddb990557fcc9c7336987203ecdaec5b9a
Remove the following experimental learning rate schedules from the API: LinearCosineDecay and NoisyLinearCosineDecay.
PiperOrigin-RevId: 350704711
Change-Id: Iebe9bc0eff38f79684e8d2f030fd838d06176494
1. Suggest the typical `call()` method signature for the base layer, and make it clear that *args and **kwargs should be avoided unless genuinely needed (see the sketch after this list).
2. Add *args to the `call()` method signature so that it aligns with the docstring.
3. Add a reference to the TF guide on writing custom layers to the docstring.
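The suggested signature is along these lines (a sketch, not the exact docstring text):

    import tensorflow as tf

    class MyLayer(tf.keras.layers.Layer):
        def call(self, inputs, training=None, mask=None):
            # inputs: input tensor(s); training: toggles training vs.
            # inference behavior; mask: optional mask propagated by Keras.
            return inputs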
PiperOrigin-RevId: 350499593
Change-Id: I4066d335a4a337cf7923bffc1e4f090f15e3f30f
mlir.convert_* are an experimental testing interface that returns a textual version of the input function/graph. This change also enables dumping location information.
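For example, via tf.mlir.experimental.convert_function:

    import tensorflow as tf

    @tf.function
    def add(a, b):
        return a + b

    concrete = add.get_concrete_function(
        tf.TensorSpec([], tf.float32), tf.TensorSpec([], tf.float32))
    print(tf.mlir.experimental.convert_function(concrete))
    # Prints the textual MLIR for the function; per the note above,
    # location information can now be dumped as well.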
PiperOrigin-RevId: 350450836
Change-Id: Ifc9d3377b7ceee186cd09b010ce4fd4371607e81
Since the implementations of tf.initializer and keras.initializer are just duplicates, copy the functions into Keras so that it is standalone. This allows us to freely delete the code in TF if preferred. Also update the build dependencies to be more explicit.
PiperOrigin-RevId: 349305978
Change-Id: Ic49a160037a5a0a77bc8826c597ffd7fbeaa5011