Delete merged docs

`versioning_semantics` and `data_versioning` have been merged into `version_compat`

Also fix all links to the old docs.

PiperOrigin-RevId: 161203112
Mark Daoust 2017-07-07 08:15:14 -07:00 committed by TensorFlower Gardener
parent 97617f139f
commit aee58d0720
4 changed files with 5 additions and 302 deletions


@@ -1100,7 +1100,7 @@ In general, changes to existing, checked-in specifications must be
backwards-compatible: changing the specification of an op must not break prior
serialized `GraphDef` protocol buffers constructed from older specifications.
The details of `GraphDef` compatibility are
@{$version_semantics#graphs$described here}.
@{$version_compat#compatibility_of_graphs_and_checkpoints$described here}.
There are several ways to preserve backwards-compatibility.
@@ -1150,7 +1150,7 @@ callers. The Python API may be kept compatible by careful changes in a
hand-written Python wrapper, by keeping the old signature except possibly adding
new optional arguments to the end. Generally incompatible changes may only be
made when TensorFlow changes major versions, and must conform to the
@{$version_semantics#graphs$`GraphDef` version semantics}.
@{$version_compat#compatibility_of_graphs_and_checkpoints$`GraphDef` version semantics}.
### GPU Support {#gpu-support}


@@ -1,132 +0,0 @@
# TensorFlow Data Versioning: GraphDefs and Checkpoints
As described in
@{$version_semantics#compatibility-for-graphs-and-checkpoints$Compatibility for Graphs and Checkpoints},
TensorFlow marks each kind of data with version information in order to maintain
backward compatibility. This document provides additional details about the
versioning mechanism, and how to use it to safely change data formats.
## Backward and partial forward compatibility
The two core artifacts exported from and imported into TensorFlow are
checkpoints (serialized variable states) and `GraphDef`s (serialized computation
graphs). Any approach to versioning these artifacts must take into account the
following requirements:
* **Backward compatibility** to support loading `GraphDefs` created with older
versions of TensorFlow.
* **Forward compatibility** to support scenarios where the producer of a
`GraphDef` is upgraded to a newer version of TensorFlow before the consumer.
* Enable evolving TensorFlow in incompatible ways. For example, removing Ops,
adding attributes, and removing attributes.
For `GraphDef`s, backward compatibility is enforced within a major version. This
means functionality can only be removed between major versions. Forward
compatibility is enforced within patch releases (1.x.1 -> 1.x.2, for example).
In order to achieve backward and forward compatibility as well as know when to
enforce changes in formats, the serialized representations of graphs and
variable state need to have metadata that describes when they were produced. The
sections below detail the TensorFlow implementation and guidelines for evolving
`GraphDef` versions.
### Independent data version schemes
There are data versions for `GraphDef`s and checkpoints. Both data formats
evolve at different rates from each other, and at a different rate than
TensorFlow itself. Both versioning systems are defined in
[`core/public/version.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/public/version.h).
Whenever a new version is added, a note detailing what changed and the date is
added to the header.
### Data, producers, and consumers
This section discusses version information for **data**, binaries that produce
data (**producers**), and binaries that consume data (**consumers**):
* Producer binaries have a version (`producer`) and a minimum consumer version
that they are compatible with (`min_consumer`).
* Consumer binaries have a version (`consumer`) and a minimum producer version
that they are compatible with (`min_producer`).
* Each piece of versioned data has a [`VersionDef
versions`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/versions.proto)
field which records the `producer` that made the data, the `min_consumer`
that it is compatible with, and a list of `bad_consumers` versions that are
disallowed.
By default, when a producer makes some data, the data inherits the producer's
`producer` and `min_consumer` versions. `bad_consumers` can be set if specific
consumer versions are known to contain bugs and must be avoided. A consumer can
accept a piece of data if
* `consumer` >= data's `min_consumer`
* data's `producer` >= consumer's `min_producer`
* `consumer` not in data's `bad_consumers`
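The three acceptance conditions above can be sketched as a small check (a
minimal sketch; the helper name is ours, though the field names follow
`versions.proto`):

```python
def consumer_accepts(consumer, min_producer,
                     data_producer, data_min_consumer, data_bad_consumers):
    """Return True if a consumer binary can accept a piece of versioned data."""
    return (consumer >= data_min_consumer            # consumer is new enough
            and data_producer >= min_producer        # data was produced recently enough
            and consumer not in data_bad_consumers)  # consumer is not known-bad

# Example: a version-12 consumer with min_producer=3 reading data
# stamped with producer=10, min_consumer=8, bad_consumers=[11].
assert consumer_accepts(12, 3, 10, 8, [11])
assert not consumer_accepts(11, 3, 10, 8, [11])  # version 11 is banned
```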
Since both producers and consumers come from the same TensorFlow code base,
[`core/public/version.h`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/public/version.h)
contains a main binary version, which is treated as either `producer` or
`consumer` depending on context, as well as both `min_consumer` and
`min_producer` (needed by producers and consumers, respectively). Specifically,
* For `GraphDef` versions, we have `TF_GRAPH_DEF_VERSION`,
`TF_GRAPH_DEF_VERSION_MIN_CONSUMER`, and
`TF_GRAPH_DEF_VERSION_MIN_PRODUCER`.
* For checkpoint versions, we have `TF_CHECKPOINT_VERSION`,
`TF_CHECKPOINT_VERSION_MIN_CONSUMER`, and
`TF_CHECKPOINT_VERSION_MIN_PRODUCER`.
### Evolving GraphDef versions
This section presents examples of using this versioning mechanism to make
changes to the `GraphDef` format.
**Adding a new Op:**
1. Add the new Op to both consumers and producers at the same time, and do not
change any `GraphDef` versions. This type of change is automatically
backward compatible, and does not affect forward compatibility, since
existing producer scripts will not suddenly use the new functionality.
**Adding a new Op and switching existing Python wrappers to use it:**
1. Implement new consumer functionality and increment the binary version.
2. If it is possible to make the wrappers use the new functionality only in
cases that did not work before, the wrappers can be updated now.
3. Change Python wrappers to use the new functionality. Do not increment
`min_consumer`, since models which do not use this Op should not break.
**Removing an Op or restricting the functionality of an Op:**
1. Fix all producer scripts (not TensorFlow itself) to not use the banned Op or
functionality.
2. Increment the binary version and implement new consumer functionality that
bans the removed Op or functionality for GraphDefs at the new version and
above. If possible, make TensorFlow stop producing `GraphDefs` with the
banned functionality. This can be done with
[`REGISTER_OP(...).Deprecated(deprecated_at_version,
message)`](https://github.com/tensorflow/tensorflow/blob/b289bc7a50fc0254970c60aaeba01c33de61a728/tensorflow/core/ops/array_ops.cc#L1009).
3. Wait for a major release for backward compatibility purposes.
4. Increase `min_producer` to the GraphDef version from (2) and remove the
functionality entirely.
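The consumer-side ban in step 2 can be sketched as follows (a hypothetical
helper with made-up op names; the real checks live in the C++ graph loader):

```python
# Hypothetical: the Op "OldThing" is banned for GraphDefs at version 9 and above.
BANNED_FROM = {"OldThing": 9}

def check_graph_ops(graph_def_version, op_names):
    """Raise if the graph uses an Op banned at its GraphDef version."""
    for op in op_names:
        banned_at = BANNED_FROM.get(op)
        if banned_at is not None and graph_def_version >= banned_at:
            raise ValueError(
                f"Op {op} is not available in GraphDef version "
                f"{graph_def_version}; it was removed in version {banned_at}.")

check_graph_ops(8, ["OldThing", "Add"])  # old graphs still load
# check_graph_ops(9, ["OldThing"])       # would raise ValueError
```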
**Changing the functionality of an Op:**
1. Add a new similar Op named `SomethingV2` and go through the process of
adding it and switching existing Python wrappers to use it (may
take 3 weeks if forward compatibility is desired).
2. Remove the old Op (this can only take place with a major version change, due
to backward compatibility).
3. Increase `min_consumer` to rule out consumers with the old Op, add back the
old Op as an alias for `SomethingV2`, and go through the process to switch
existing Python wrappers to use it.
4. Go through the process to remove `SomethingV2`.
**Banning a single consumer version that cannot run safely:**
1. Bump the binary version and add the bad version to `bad_consumers` for all
new GraphDefs. If possible, add to `bad_consumers` only for GraphDefs which
contain a certain Op or similar.
2. If existing consumers have the bad version, push them out as soon as
possible.
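Step 1 can be sketched as producer-side stamping (hypothetical names and
versions; in reality this happens when the `VersionDef` is written out):

```python
BAD_CONSUMER = 17  # hypothetical consumer version known to be unsafe

def stamp_versions(producer, min_consumer, op_names, trigger_op="BuggyOp"):
    """Build a VersionDef-like dict, adding bad_consumers only for graphs
    that contain the Op the bad consumer mishandles."""
    versions = {"producer": producer,
                "min_consumer": min_consumer,
                "bad_consumers": []}
    if trigger_op in op_names:
        versions["bad_consumers"].append(BAD_CONSUMER)
    return versions

assert stamp_versions(20, 5, ["Add"])["bad_consumers"] == []
assert stamp_versions(20, 5, ["BuggyOp"])["bad_consumers"] == [17]
```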


@@ -40,14 +40,10 @@ documented in the following guide:
* @{$saved_model_cli$SavedModel CLI (Command-Line Interface)}.
To learn about the TensorFlow versioning scheme, consult the following two
guides:
To learn about the TensorFlow versioning scheme, consult:
* @{$version_semantics$TensorFlow Version Semantics}, which explains
TensorFlow's versioning nomenclature and compatibility rules.
* @{$data_versions$TensorFlow Data Versioning: GraphDefs and Checkpoints},
which explains how TensorFlow adds versioning information to computational
graphs and checkpoints in order to support compatibility across versions.
* @{$version_compat$The TensorFlow Version Compatibility Guide}, which explains
TensorFlow's versioning nomenclature and compatibility rules.
We conclude this section with a FAQ about TensorFlow programming:


@@ -1,161 +0,0 @@
# TensorFlow Version Semantics
## Semantic Versioning 2.0
TensorFlow follows Semantic Versioning 2.0 ([semver](http://semver.org)) for its
public API. Each release version of TensorFlow has the form `MAJOR.MINOR.PATCH`.
Changes to each number have the following meaning:
* **MAJOR**: Backwards incompatible changes. Code and data that worked with
a previous major release will not necessarily work with a new release.
However, in some cases existing TensorFlow data (graphs, checkpoints, and
other protobufs) may be migratable to the newer release; see below for details
on data compatibility.
* **MINOR**: Backwards compatible features, speed improvements, etc. Code and
data that worked with a previous minor release *and* which depends only on the
public API will continue to work unchanged. For details on what is and is
not the public API, see below.
* **PATCH**: Backwards compatible bug fixes.
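Under semver, whether code written against one release is covered by the
guarantee under another reduces to comparing the version components (a minimal
sketch; the helper name is ours):

```python
def api_compatible(built_against, running):
    """True if code built against `built_against` (MAJOR, MINOR, PATCH)
    is covered by the compatibility guarantee when run under `running`."""
    maj1, min1, pat1 = built_against
    maj2, min2, pat2 = running
    # Same major version, and the running release is at least as new.
    return maj1 == maj2 and (min2, pat2) >= (min1, pat1)

assert api_compatible((1, 2, 0), (1, 3, 1))      # minor/patch upgrades are safe
assert not api_compatible((1, 2, 0), (2, 0, 0))  # a major bump may break code
```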
## What is covered
Only the public APIs of TensorFlow are backwards compatible across minor and
patch versions. The public APIs consist of
* The documented public [Python](../api_docs/python) API, excluding `tf.contrib`.
This includes all public functions and classes (whose names do not start with
`_`) in the tensorflow module and its submodules. Note that the code in
the `examples/` and `tools/` directories is not reachable through the
tensorflow Python module and is thus not covered by the compatibility
guarantee.
If a symbol is available through the tensorflow Python module or its
submodules, but is not documented, then it is _not_ considered part of the
public API.
* The [C API](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/c/c_api.h).
* The following protocol buffer files:
[`attr_value`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/attr_value.proto),
[`config`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/config.proto),
[`event`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/util/event.proto),
[`graph`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/graph.proto),
[`op_def`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/op_def.proto),
[`reader_base`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/reader_base.proto),
[`summary`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/summary.proto),
[`tensor`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor.proto),
[`tensor_shape`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/tensor_shape.proto),
and [`types`](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/types.proto).
## What is *not* covered
Some API functions are explicitly marked as "experimental" and can change in
backward incompatible ways between minor releases. These include:
* **Experimental APIs**: The @{tf.contrib} module and its submodules in Python
and any functions in the C API or fields in protocol buffers that are
explicitly commented as being experimental.
* **Other languages**: TensorFlow APIs in languages other than Python and C,
such as:
- @{$cc/guide$C++} (exposed through header files in
[`tensorflow/cc`](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/cc)).
- [Java](../api_docs/java/reference/org/tensorflow/package-summary), and
- [Go](https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go)
* **Details of composite ops:** Many public functions in Python expand to
several primitive ops in the graph, and these details will be part of any
graphs saved to disk as `GraphDef`s. These details are allowed to change for
minor releases. In particular, regression tests that check for exact
matching between graphs are likely to break across minor releases, even
though the behavior of the graph should be unchanged and existing
checkpoints will still work.
* **Floating point numerical details:** The specific floating point values
computed by ops may change at any time: users should rely only on
approximate accuracy and numerical stability, not on the specific bits
computed. Changes to numerical formulas in minor and patch releases should
result in comparable or improved accuracy, with the caveat that in machine
learning improved accuracy of specific formulas may result in worse accuracy
for the overall system.
* **Random numbers:** The specific random numbers computed by the
@{$python/constant_op#Random_Tensors$random ops} may change at any time:
users should rely only on approximately correct distributions and
statistical strength, not the specific bits computed. However, we will make
changes to random bits rarely and ideally never for patch releases, and all
such intended changes will be documented.
* **Distributed TensorFlow:** Running two different versions of TensorFlow in a
single cluster is unsupported. There are no guarantees about backwards
compatibility of the wire protocol.
* **Bugs:** We reserve the right to make backwards incompatible behavior
(though not API) changes if the current implementation is clearly broken,
i.e., if it is contradicting the documentation, or if a well-known and
well-defined intended behavior is not properly implemented due to a bug.
For example, if an optimizer claims to implement a well-known optimization
algorithm but, due to a bug, does not match that algorithm we will fix the
optimizer. This may break code relying on the wrong behavior for
convergence. We will note such changes in the release notes.
* **Error messages:** We reserve the right to change the text of error
messages. In addition, the type of an error may change unless the type is
specified in the documentation. For example, if a function is documented to
raise an `InvalidArgument` exception under some condition, then it will continue
to raise `InvalidArgument`, but the human-readable message contents can change.
Furthermore, any API methods marked "deprecated" in the 1.0 release can
be deleted in any subsequent minor release.
## Compatibility for Graphs and Checkpoints
Many users of TensorFlow will be saving graphs and trained models to disk for
later evaluation or more training, often changing versions of TensorFlow in the
process. First, following semver, any graph or checkpoint written out with one
version of TensorFlow can be loaded and evaluated with a later version of
TensorFlow with the same major release. However, we will endeavor to preserve
backwards compatibility even across major releases when possible, so that the
serialized files are usable over long periods of time.
There are two main classes of saved TensorFlow data: graphs and checkpoints.
Graphs describe the data flow graphs of ops to be run during training and
inference, and checkpoints contain the saved tensor values of variables in a
graph.
Graphs are serialized via the `GraphDef` protocol buffer. To facilitate (rare)
backwards incompatible changes to graphs, each `GraphDef` has an integer version
separate from the TensorFlow version. The semantics are:
* Each version of TensorFlow supports an interval of `GraphDef` versions. This
interval will be constant across patch releases, and will only grow across
minor releases. Dropping support for a `GraphDef` version will only occur
for a major release of TensorFlow.
* Newly created graphs use the newest `GraphDef` version.
* If a given version of TensorFlow supports the `GraphDef` version of a graph,
it will load and evaluate with the same behavior as when it was written out
(except for floating point numerical details and random numbers), regardless
of the major version of TensorFlow. In particular, all checkpoint files will
be compatible.
* If the `GraphDef` upper bound is increased to X in a (minor) release, there
will be at least six months before the lower bound is increased to X.
For example (numbers and versions hypothetical), TensorFlow 1.2 might support
`GraphDef` versions 4 to 7. TensorFlow 1.3 could add `GraphDef` version 8 and
support versions 4 to 8. At least six months later, TensorFlow 2.0.0 could drop
support for versions 4 to 7, leaving version 8 only.
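The support interval described above amounts to a range check against the
constants in `core/public/version.h` (a sketch; the constant values here are
the hypothetical ones from the example, matching TensorFlow 1.3):

```python
# Hypothetical values matching the example: this build supports
# GraphDef versions 4 through 8.
TF_GRAPH_DEF_VERSION = 8               # newest version; used for new graphs
TF_GRAPH_DEF_VERSION_MIN_PRODUCER = 4  # oldest GraphDef version still loadable

def graph_def_supported(graph_version):
    """True if this build of TensorFlow can load a graph at `graph_version`."""
    return (TF_GRAPH_DEF_VERSION_MIN_PRODUCER
            <= graph_version
            <= TF_GRAPH_DEF_VERSION)

assert graph_def_supported(4) and graph_def_supported(8)
assert not graph_def_supported(3) and not graph_def_supported(9)
```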
Finally, when support for a `GraphDef` version is dropped, we will attempt to
provide tools for automatically converting graphs to a newer supported
`GraphDef` version.
For developer-level details about `GraphDef` versioning, including how to evolve
the versions to account for changes, see
@{$data_versions$TensorFlow Data Versioning}.