Update the release notes with information about tf.data.

Also adds a short porting guide to the tf.contrib.data README.

PiperOrigin-RevId: 171097798
This commit is contained in:
Derek Murray 2017-10-04 19:07:31 -07:00 committed by TensorFlower Gardener
parent 2ae5bfce55
commit f6e187acdd
2 changed files with 48 additions and 4 deletions
RELEASE.md
tensorflow/contrib/data


@@ -1,6 +1,16 @@
# Release 1.4.0
## Major Features And Improvements
* `tf.data` is now part of the core TensorFlow API.
* The API is now subject to backwards compatibility guarantees.
* For a guide to migrating from the `tf.contrib.data` API, see the
[README](https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/data/README.md).
* Major new features include `Dataset.from_generator()` (for building an input
pipeline from a Python generator), and the `Dataset.apply()` method for
applying custom transformation functions.
* Several custom transformation functions have been added, including
`tf.contrib.data.batch_and_drop_remainder()` and
`tf.contrib.data.sloppy_interleave()`.
* Java:
* Generics (e.g., `Tensor<Integer>`) for improved type-safety (courtesy @andrewcmyers).
* Support for multi-dimensional string tensors.
@@ -16,6 +26,11 @@
flexible and reproducible package, is available via the new
`tf.contrib.data.Dataset.from_generator` method!
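The `from_generator` pipeline can be sketched in a few lines; the toy generator below is a hypothetical stand-in for a real data source:

```python
import tensorflow as tf

# Hypothetical toy generator: yields the integers 0..4 in place of a
# real data source (files, a queue, a third-party reader, etc.).
def gen():
    for i in range(5):
        yield i

# Build an input pipeline directly from the Python generator.
dataset = tf.data.Dataset.from_generator(gen, output_types=tf.int64)
```

Elements are produced lazily, so the generator can stream data that does not fit in memory.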
## Breaking Changes to the API
* The signature of the `tf.contrib.data.rejection_resample()` function has been
changed. It now returns a function that can be used as an argument to
`Dataset.apply()`.
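The new signature means resampling composes like any other transformation: the function returns a `Dataset`-to-`Dataset` function that you hand to `Dataset.apply()`. A minimal sketch of that call shape, using a hypothetical stand-in transformation so it runs without class/distribution setup:

```python
import tensorflow as tf

# New style: tf.contrib.data.rejection_resample(class_func, target_dist)
# RETURNS a dataset-to-dataset function, used as
#   dataset = dataset.apply(
#       tf.contrib.data.rejection_resample(class_func, target_dist))
#
# The same shape with a hypothetical stand-in transformation:
def keep_even():
    def _apply_fn(dataset):
        return dataset.filter(lambda x: tf.equal(x % 2, 0))
    return _apply_fn

dataset = tf.data.Dataset.range(10).apply(keep_even())
```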
# Release 1.3.0
See also [TensorBoard 0.1.4](https://github.com/tensorflow/tensorboard/releases/tag/0.1.4) release notes.


@@ -2,9 +2,38 @@
=====================
NOTE: The `tf.contrib.data` module has been deprecated. Use `tf.data` instead.
We are continuing to support existing code using the `tf.contrib.data` APIs in
the current version of TensorFlow, but will eventually remove support. The
`tf.data` APIs are subject to backwards compatibility guarantees.
This directory contains the Python API for the `tf.contrib.data.Dataset` and
`tf.contrib.data.Iterator` classes, which can be used to build input pipelines.
Porting your code to `tf.data`
------------------------------
The documentation for the `tf.data` API has moved to the programmers'
guide, [here](../../docs_src/programmers_guide/datasets.md).
The `tf.contrib.data.Dataset` class has been renamed to `tf.data.Dataset`, and
the `tf.contrib.data.Iterator` class has been renamed to `tf.data.Iterator`.
Most code can be ported by removing `.contrib` from the names of the classes.
However, there are some small differences, which are outlined below.
The arguments accepted by the `Dataset.map()` transformation have changed:
* `dataset.map(..., num_threads=T)` is now
`dataset.map(..., num_parallel_calls=T)`.
* `dataset.map(..., output_buffer_size=B)` is now
`dataset.map(...).prefetch(B)`.
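The two `map()` changes can be sketched side by side; the lambda is a hypothetical element function:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(4)

# Old (tf.contrib.data):
#   dataset.map(fn, num_threads=T, output_buffer_size=B)
# New (tf.data): parallelism moves to num_parallel_calls, and output
# buffering becomes an explicit prefetch() stage on the result:
dataset = dataset.map(lambda x: x + 1, num_parallel_calls=2).prefetch(2)
```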
Some transformations have been removed from `tf.data.Dataset`; you must
instead apply them using the `Dataset.apply()` method. The full list of
changes is as follows:
* `dataset.dense_to_sparse_batch(...)` is now
`dataset.apply(tf.contrib.data.dense_to_sparse_batch(...))`.
* `dataset.enumerate(...)` is now
`dataset.apply(tf.contrib.data.enumerate_dataset(...))`.
* `dataset.group_by_window(...)` is now
`dataset.apply(tf.contrib.data.group_by_window(...))`.
* `dataset.ignore_errors()` is now
`dataset.apply(tf.contrib.data.ignore_errors())`.
* `dataset.unbatch()` is now `dataset.apply(tf.contrib.data.unbatch())`.
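Each migration above follows the same shape: a removed method becomes a `Dataset.apply()` of a function from `tf.contrib.data`. A hand-rolled sketch of an enumerate-style transformation shows what `Dataset.apply()` expects (a function from `Dataset` to `Dataset`); this is an illustration, not the library implementation:

```python
import tensorflow as tf

# e.g. dataset.enumerate() becomes
#   dataset.apply(tf.contrib.data.enumerate_dataset())
#
# Sketch of such a transformation, written by hand:
def enumerate_dataset(start=0):
    def _apply_fn(dataset):
        # Zip with an (effectively unbounded) index stream; zip stops
        # at the shorter of its inputs.
        indices = tf.data.Dataset.range(start, tf.int64.max)
        return tf.data.Dataset.zip((indices, dataset))
    return _apply_fn

dataset = tf.data.Dataset.from_tensor_slices([10, 20, 30]).apply(
    enumerate_dataset())
```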
The `Dataset.make_dataset_resource()` and `Iterator.dispose_op()` methods have
been removed from the API. Please open a GitHub issue if you have a need for
either of these.