Address feedback
Add test for the Python method has_atomic_move
Remove old comment and fix indentation
Remove unnecessary imports
Remove the test which checks for reference cycles when saving. Since the file system check introduces a conditional op, it creates a reference cycle, so this check no longer applies.
Fighting lint
Fix lint errors
Use the returned status of hasAtomicMove
Also add support for saving iterators in a sharded fashion, to avoid unnecessary copying during checkpointing.
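A minimal sketch of such a test, assuming has_atomic_move is exposed through tensorflow.python.lib.io.file_io:

    import os

    from tensorflow.python.lib.io import file_io
    from tensorflow.python.platform import test


    class HasAtomicMoveTest(test.TestCase):

      def testHasAtomicMoveOnLocalFileSystem(self):
        # The local file system supports atomic renames, so the probe
        # should report True for a path under the test's temp directory.
        path = os.path.join(self.get_temp_dir(), "checkpoint")
        self.assertTrue(file_io.has_atomic_move(path))


    if __name__ == "__main__":
      test.main()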
PiperOrigin-RevId: 286310419
Change-Id: I1a957af783f7f69753992ce220b59eb43df2c02f
If some checkpoints listed in CheckpointState are absent on disk, recover_last_checkpoints incorrectly initializes the Saver's internal state.
In this example:
(1) CheckpointState.all_model_checkpoint_paths = ['ckpt-1', 'ckpt-2', 'ckpt-3']
(2) Actual checkpoints on disk: ['ckpt-2', 'ckpt-3']
last_checkpoints gets incorrectly initialized to ['ckpt-1', 'ckpt-2']. This is because get_checkpoint_mtimes silently ignores any absent checkpoints and returns a list of length 2 corresponding to checkpoints on disk, which then gets zipped with (1). After the fix, last_checkpoints would be ['ckpt-2', 'ckpt-3'].
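A worked sketch of the misalignment (illustrative only; the mtime values are made up and this is not the actual Saver code):

    all_model_checkpoint_paths = ['ckpt-1', 'ckpt-2', 'ckpt-3']

    # get_checkpoint_mtimes skips the missing 'ckpt-1' and returns two
    # mtimes, one each for 'ckpt-2' and 'ckpt-3'.
    mtimes = [1556000100.0, 1556000200.0]

    # zip truncates to the shorter list, silently pairing the mtimes
    # with the wrong paths:
    print(list(zip(all_model_checkpoint_paths, mtimes)))
    # -> [('ckpt-1', 1556000100.0), ('ckpt-2', 1556000200.0)]
    # so last_checkpoints becomes ['ckpt-1', 'ckpt-2'] before the fix.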
PiperOrigin-RevId: 245983586
Pulls some utilities out of saver.py which are necessary to actually use it. The functional saver takes only SaveableObjects, so these utilities convert whatever users pass in into SaveableObjects.
One other code move for object-based checkpointing to avoid circular imports.
Applications which need a SaverDef still use the old Saver. Serialization to SaverDef will be added to this saver in a followup.
Does not actually wrap the new Saver's methods in @tf.function yet, since there are memory issues which need to be fixed first.
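A rough sketch of the conversion path; the module path (saveable_object_util) and helper names (op_list_to_dict, validate_and_slice_inputs) are assumptions based on saver.py's existing internals:

    import tensorflow.compat.v1 as tf

    from tensorflow.python.training.saving import saveable_object_util

    v1 = tf.Variable(1.0, name="v1")
    v2 = tf.Variable(2.0, name="v2")

    # Normalize whatever the user passed (a plain list here) into a
    # name-to-saveable dict, then expand it into the SaveableObjects
    # that the functional saver consumes directly.
    names_to_saveables = saveable_object_util.op_list_to_dict([v1, v2])
    saveables = saveable_object_util.validate_and_slice_inputs(
        names_to_saveables)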
PiperOrigin-RevId: 224561069
Saver will be replaced by tf.train.Checkpoint (and tf.contrib.checkpoint.CheckpointManager) for training checkpoints, and by a simple Python representation of a SaverDef (which may not be a public symbol).
tf.train.Checkpoint does not write/merge sharded checkpoints at the moment, so v2 will want a solution for that (tf.train.ShardedCheckpoint?).
MetaGraph import and export will be replaced by object-based tf.saved_model.import/tf.saved_model.export.
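A sketch of the replacement workflow (shown with the tf.train.CheckpointManager name the utility later graduated to; the directory is a placeholder):

    import tensorflow as tf

    step = tf.Variable(0, dtype=tf.int64)
    ckpt = tf.train.Checkpoint(step=step)
    manager = tf.train.CheckpointManager(ckpt, "/tmp/ckpts", max_to_keep=3)

    # Restore the latest checkpoint if one exists, then save periodically.
    ckpt.restore(manager.latest_checkpoint)
    for _ in range(3):
      step.assign_add(1)
      manager.save()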
PiperOrigin-RevId: 218262301
This change contains no code changes. Only doc-strings.
We can't use relative links in code files, so we don't have much choice but to link to tensorflow.org/
The deleted links were to docs that no longer exist.
PiperOrigin-RevId: 209019572
Pure refactor, in preparation for adding a higher level checkpoint management utility. This utility will also need to work with the Checkpoint proto, and globbing it onto saver.py seems dirty.
PiperOrigin-RevId: 207179646
This change explicitly declares import_scope as a kwarg for tf.saved_model.loader.load. Previously, tf.saved_model.loader.load implicitly accepted import_scope and passed it through to import_meta_graph via **saver_kwargs.
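For example (the export directory and scope name are placeholders):

    import tensorflow.compat.v1 as tf

    export_dir = "/tmp/saved_model"  # placeholder path

    with tf.Session(graph=tf.Graph()) as sess:
      # import_scope is now declared explicitly instead of riding along
      # in **saver_kwargs on its way to import_meta_graph.
      meta_graph_def = tf.saved_model.loader.load(
          sess, [tf.saved_model.tag_constants.SERVING], export_dir,
          import_scope="imported")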
PiperOrigin-RevId: 200249417
Revert #18413. Too many internal test failures due to the name scope change it introduced.
Revert #18192. Cannot use re2::StringPiece internally. Need an alternative for the set call. Will pull and clean this up in a separate change.
PiperOrigin-RevId: 197991247
Need to add some new checkpointable files in core (specifically I had some checkpointable data structures in mind), and prefixing more files with "checkpointable_" in python/training/ seems dirty.
No functional changes, just some branching and build/import fiddling.
PiperOrigin-RevId: 196883136
Allows SaveableObjects to specify feed dict addition callbacks for object-based saving.
For now just saves get_config() with Layers. Doesn't do any loading, and there isn't quite enough information to reconstruct a Model yet (needs topology).
My plan is to get Models to the point where they can be reconstructed from object-based checkpoints (probably one more change), add in SavedModel export (assuming no dynamic control flow for now), then add this "SavedModel+Python" format to Model.save / load_model.
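To illustrate what is being serialized, a get_config() round trip at the layer level (this is the public Keras API, not the checkpoint plumbing itself):

    import tensorflow as tf

    layer = tf.keras.layers.Dense(3, activation="relu")
    config = layer.get_config()  # plain-Python description of the layer

    # from_config rebuilds an equivalent layer, but nothing here records
    # how layers are wired together -- hence "needs topology" above.
    clone = tf.keras.layers.Dense.from_config(config)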
PiperOrigin-RevId: 196043183
Pulls a couple build rules out of tensorflow/python:training. I'd like to use a SaveableObject in :checkpointable (for saving some Python state by default), which means the file with SaveableObject has to be essentially dependency-free.
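For reference, a minimal SaveableObject looks roughly like this; the module path and the PythonStateSaveable name are illustrative, not the actual code:

    from tensorflow.python.training.saving import saveable_object


    class PythonStateSaveable(saveable_object.SaveableObject):
      """Hypothetical saveable wrapping a tensor-valued piece of state."""

      def __init__(self, name, tensor):
        # A SaveSpec names one tensor (with an optional slice spec) to
        # write into the checkpoint under `name`.
        spec = saveable_object.SaveSpec(tensor, "", name)
        super(PythonStateSaveable, self).__init__(
            op=tensor, specs=[spec], name=name)

      def restore(self, restored_tensors, restored_shapes):
        # A real saveable would assign restored_tensors back to its state.
        del restored_shapes  # unused in this sketch
        return restored_tensors[0]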
PiperOrigin-RevId: 194473987
Previously exposed as tf.contrib.eager.Checkpoint / tfe.Checkpoint.
Spiffies up the documentation a bit, but otherwise just adds the export decorator.
Compatible in both directions with tf.train.Saver (object-based checkpoints can be fed to tf.train.Saver, and name-based checkpoints can be fed to tf.train.Checkpoint).
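A sketch of the round trip in one direction (graph mode; the checkpoint prefix is a placeholder):

    import tensorflow.compat.v1 as tf

    v = tf.Variable(1.0, name="v")
    object_ckpt = tf.train.Checkpoint(v=v)

    with tf.Session() as sess:
      sess.run(v.initializer)
      # Write an object-based checkpoint...
      save_path = object_ckpt.save("/tmp/object_based", session=sess)
      # ...and read it back through the name-based Saver.
      tf.train.Saver(var_list=[v]).restore(sess, save_path)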
PiperOrigin-RevId: 193439442
This is the second part of the compatibility story. Object-based checkpointing APIs can already read name-based checkpoints, and now the name-based APIs can read object-based checkpoints by looking up the modified keys in the object graph proto.
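A sketch of what the rewritten keys look like when inspected directly (the checkpoint path is a placeholder):

    import tensorflow as tf

    reader = tf.train.load_checkpoint("/tmp/object_based-1")
    # Object-based checkpoints store keys like
    # "v/.ATTRIBUTES/VARIABLE_VALUE" plus a serialized object graph; the
    # name-based APIs consult that graph to map a variable name such as
    # "v" onto its rewritten checkpoint key.
    for key in sorted(reader.get_variable_to_shape_map()):
      print(key)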
PiperOrigin-RevId: 192824907