Small typo fixes.

Change: 140389905
This commit is contained in:
A. Unique TensorFlower 2016-11-28 12:44:54 -08:00 committed by TensorFlower Gardener
parent 641281657b
commit e83d3f68ac
8 changed files with 25 additions and 31 deletions


@@ -244,7 +244,7 @@ packages needed by TensorFlow.
* Activate the conda environment and install TensorFlow in it.
* After the install you will activate the conda environment each time you
want to use TensorFlow.
* Optionally install ipython and other packages into the conda environment
* Optionally install ipython and other packages into the conda environment.
Install Anaconda:


@@ -37,7 +37,7 @@ any [attrs](#attrs) the Op might require.
To see how this works, suppose you'd like to create an Op that takes a tensor of
`int32`s and outputs a copy of the tensor, with all but the first element set to
zero. Create file [`tensorflow/core/user_ops`][user_ops]`/zero_out.cc` and
zero. Create file `tensorflow/core/user_ops/zero_out.cc` and
add a call to the `REGISTER_OP` macro that defines the interface for such an Op:
```c++
@@ -321,11 +321,10 @@ using the `Attr` method, which expects a spec of the form:
where `<name>` begins with a letter and can be composed of alphanumeric
characters and underscores, and `<attr-type-expr>` is a type expression of the
form [described below](#attr-types)
form [described below](#attr-types).
For example, if you'd like the `ZeroOut` Op to preserve a user-specified index,
instead of only the 0th element, you can register the Op like so:
<code class="lang-c++"><pre>
REGISTER\_OP("ZeroOut")
<b>.Attr("preserve\_index: int")</b>
@@ -335,7 +334,6 @@ REGISTER\_OP("ZeroOut")
Your kernel can then access this attr in its constructor via the `context`
parameter:
<code class="lang-c++"><pre>
class ZeroOutOp : public OpKernel {
public:
@@ -357,7 +355,6 @@ class ZeroOutOp : public OpKernel {
</pre></code>
which can then be used in the `Compute` method:
<code class="lang-c++"><pre>
void Compute(OpKernelContext\* context) override {
// ...
@@ -512,7 +509,6 @@ you would then register an `OpKernel` for each supported type.
For instance, if you'd like the `ZeroOut` Op to work on `float`s
in addition to `int32`s, your Op registration might look like:
<code class="lang-c++"><pre>
REGISTER\_OP("ZeroOut")
<b>.Attr("T: {float, int32}")</b>
@@ -632,7 +628,6 @@ REGISTER\_KERNEL\_BUILDER(
> </pre></code>
Let's say you want to add more types, say `double`:
<code class="lang-c++"><pre>
REGISTER\_OP("ZeroOut")
<b>.Attr("T: {float, <b>double,</b> int32}")</b>
@@ -643,7 +638,6 @@ REGISTER\_OP("ZeroOut")
Instead of writing another `OpKernel` with redundant code as above, you can often use a C++ template. You will still have one kernel
registration (`REGISTER_KERNEL_BUILDER` call) per overload.
<code class="lang-c++"><pre>
<b>template &lt;typename T&gt;</b>
class ZeroOutOp : public OpKernel {


@@ -33,9 +33,9 @@ with tf.name_scope('hidden') as scope:
This results in the following three op names:
* *hidden*/alpha
* *hidden*/weights
* *hidden*/biases
* `hidden/alpha`
* `hidden/weights`
* `hidden/biases`
By default, the visualization will collapse all three into a node labeled `hidden`.
The extra detail isn't lost. You can double-click, or click
@@ -253,7 +253,7 @@ The images below show the CIFAR-10 model with tensor shape information:
Often it is useful to collect runtime metadata for a run, such as total memory
usage, total compute time, and tensor shapes for nodes. The code example below
is a snippet from the train and test section of a modification of the
[simple MNIST tutorial](http://tensorflow.org/tutorials/mnist/beginners/index.md),
[simple MNIST tutorial](../../tutorials/mnist/beginners/index.md),
in which we have recorded summaries and runtime statistics. See the [Summaries Tutorial](../../how_tos/summaries_and_tensorboard/index.md#serializing-the-data)
for details on how to record summaries.
Full source is [here](https://www.tensorflow.org/code/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py).


@@ -29,22 +29,22 @@ be set:
set this environment variable by running:
```shell
source $HADOOP_HOME/libexec/hadoop-config.sh
source ${HADOOP_HOME}/libexec/hadoop-config.sh
```
* **LD_LIBRARY_PATH**: To include the path to libjvm.so. On Linux:
```shell
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JAVA_HOME/jre/lib/amd64/server
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${JAVA_HOME}/jre/lib/amd64/server
```
* **CLASSPATH**: The Hadoop jars must be added prior to running your
TensorFlow program. The CLASSPATH set by
`$HADOOP_HOME/libexec/hadoop-config.sh` is insufficient. Globs must be
`${HADOOP_HOME}/libexec/hadoop-config.sh` is insufficient. Globs must be
expanded as described in the libhdfs documentation:
```shell
CLASSPATH=$($HADOOP_HDFS_HOME/bin/hadoop classpath --glob) python your_script.py
CLASSPATH=$(${HADOOP_HDFS_HOME}/bin/hadoop classpath --glob) python your_script.py
```
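Putting the three variables together, a launch script might look like the following sketch. The installation paths here are illustrative and will differ on your system:

```shell
# Illustrative paths -- adjust for your installation.
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

# Set the Hadoop environment, then make libjvm.so visible to the loader.
source ${HADOOP_HOME}/libexec/hadoop-config.sh
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${JAVA_HOME}/jre/lib/amd64/server

# Expand the Hadoop classpath globs before launching the program.
CLASSPATH=$(${HADOOP_HDFS_HOME}/bin/hadoop classpath --glob) python your_script.py
```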
If you are running [Distributed TensorFlow](../distributed/index.md), then all


@@ -28,7 +28,7 @@ creating these operations.
Now that you have a bit of a feel for queues, let's dive into the details...
## Queue Use Overview
## Queue use overview
Queues, such as `FIFOQueue` and `RandomShuffleQueue`, are important TensorFlow
objects for computing tensors asynchronously in a graph.
@@ -149,7 +149,7 @@ coord.request_stop()
coord.join(enqueue_threads)
```
## Handling Exceptions
## Handling exceptions
Threads started by queue runners do more than just run the enqueue ops. They
also catch and handle exceptions generated by queues, including
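The runner/coordinator pattern above can be sketched with the Python standard library alone, with no TensorFlow required. `SimpleCoordinator` below is a hypothetical stand-in for `tf.train.Coordinator`, not TensorFlow API:

```python
import queue
import threading

class SimpleCoordinator:
    """Hypothetical stand-in for tf.train.Coordinator: lets threads check
    for a stop request and records any exception they raise."""
    def __init__(self):
        self._stop = threading.Event()
        self._lock = threading.Lock()
        self.exceptions = []

    def should_stop(self):
        return self._stop.is_set()

    def request_stop(self):
        self._stop.set()

    def record_exception(self, exc):
        # Remember the error and ask every other thread to shut down.
        with self._lock:
            self.exceptions.append(exc)
        self.request_stop()

    def join(self, threads):
        for t in threads:
            t.join()

def enqueue_loop(coord, q, items):
    # Analogue of a queue-runner thread: enqueue until asked to stop, and
    # report exceptions to the coordinator instead of dying silently.
    try:
        for item in items:
            if coord.should_stop():
                return
            q.put(item)
    except Exception as exc:
        coord.record_exception(exc)

q = queue.Queue()
coord = SimpleCoordinator()
enqueue_threads = [
    threading.Thread(target=enqueue_loop, args=(coord, q, range(10)))
    for _ in range(2)
]
for t in enqueue_threads:
    t.start()
coord.request_stop()
coord.join(enqueue_threads)
```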


@@ -69,7 +69,7 @@ def my_image_filter(input_images, variables_dict):
strides=[1, 1, 1, 1], padding='SAME')
return tf.nn.relu(conv2 + variables_dict["conv2_biases"])
# The 2 calls to my_image_filter() now use the same variables
# Both calls to my_image_filter() now use the same variables
result1 = my_image_filter(image1, variables_dict)
result2 = my_image_filter(image2, variables_dict)
```
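The dictionary-sharing idea can be seen without TensorFlow at all. In this sketch a plain Python object stands in for a TF variable; `FakeVariable` and the shapes are illustrative, not TF API:

```python
class FakeVariable:
    """Illustrative stand-in for a TF variable: just a named parameter."""
    def __init__(self, name, shape):
        self.name = name
        self.shape = shape

variables_dict = {
    "conv1_weights": FakeVariable("conv1_weights", [5, 5, 32, 32]),
    "conv1_biases": FakeVariable("conv1_biases", [32]),
}

def my_image_filter(input_images, variables_dict):
    # Every call reads the *same* objects out of the dict,
    # so no duplicate parameters are ever created.
    return (variables_dict["conv1_weights"], variables_dict["conv1_biases"])

result1 = my_image_filter("image1", variables_dict)
result2 = my_image_filter("image2", variables_dict)
assert result1[0] is result2[0]  # identical weights object in both calls
```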
@@ -90,7 +90,7 @@ while constructing a graph.
## Variable Scope Example
Variable Scope mechanism in TensorFlow consists of 2 main functions:
The variable scope mechanism in TensorFlow consists of two main functions:
* `tf.get_variable(<name>, <shape>, <initializer>)`:
Creates or returns a variable with a given name.
@@ -280,9 +280,9 @@ when opening a new variable scope.
```python
with tf.variable_scope("foo") as foo_scope:
v = tf.get_variable("v", [1])
with tf.variable_scope(foo_scope)
with tf.variable_scope(foo_scope):
w = tf.get_variable("w", [1])
with tf.variable_scope(foo_scope, reuse=True)
with tf.variable_scope(foo_scope, reuse=True):
v1 = tf.get_variable("v", [1])
w1 = tf.get_variable("w", [1])
assert v1 is v
@@ -296,7 +296,7 @@ different one. This is fully independent of where we do it.
```python
with tf.variable_scope("foo") as foo_scope:
assert foo_scope.name == "foo"
with tf.variable_scope("bar")
with tf.variable_scope("bar"):
with tf.variable_scope("baz") as other_scope:
assert other_scope.name == "bar/baz"
with tf.variable_scope(foo_scope) as foo_scope2:


@@ -35,7 +35,7 @@ File | What's in it?
`models/rnn/translate/translate.py` | Binary that trains and runs the translation model.
## Sequence-to-Sequence Basics
## Sequence-to-sequence basics
A basic sequence-to-sequence model, as introduced in
[Cho et al., 2014](http://arxiv.org/abs/1406.1078)
@@ -69,7 +69,7 @@ attention mechanism in the decoder looks like this.
<img style="width:100%" src="../../images/attention_seq2seq.png" />
</div>
## TensorFlow seq2seq Library
## TensorFlow seq2seq library
As you can see above, there are many different sequence-to-sequence
models. Each of these models can use different RNN cells, but all
@@ -148,7 +148,7 @@ more sequence-to-sequence models in `seq2seq.py`, take a look there. They all
have similar interfaces, so we will not describe them in detail. We will use
`embedding_attention_seq2seq` for our translation model below.
## Neural Translation Model
## Neural translation model
While the core of the sequence-to-sequence model is constructed by
the functions in `python/ops/seq2seq.py`, there are still a few tricks
@@ -238,7 +238,7 @@ with encoder inputs representing `[PAD PAD "." "go" "I"]` and decoder
inputs `[GO "Je" "vais" "." EOS PAD PAD PAD PAD PAD]`.
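The padding-and-reversal convention just described can be sketched in a few lines of plain Python. `prepare_pair` is illustrative, not the actual `translate.py` code:

```python
# Sketch of the bucketing convention described above: the source sentence
# is reversed and left-padded, the target gets GO/EOS plus right-padding.
PAD, GO, EOS = "PAD", "GO", "EOS"

def prepare_pair(source_tokens, target_tokens, bucket):
    """Pad and reverse one sentence pair for a given (encoder, decoder) bucket."""
    enc_size, dec_size = bucket
    # Encoder inputs: reverse the source, then left-pad to the bucket size.
    encoder_input = [PAD] * (enc_size - len(source_tokens)) + list(reversed(source_tokens))
    # Decoder inputs: GO, the target, EOS, then right-padding.
    decoder_input = [GO] + target_tokens + [EOS]
    decoder_input += [PAD] * (dec_size - len(decoder_input))
    return encoder_input, decoder_input

enc, dec = prepare_pair(["I", "go", "."], ["Je", "vais", "."], (5, 10))
assert enc == [PAD, PAD, ".", "go", "I"]
assert dec == [GO, "Je", "vais", ".", EOS, PAD, PAD, PAD, PAD, PAD]
```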
## Let's Run It
## Let's run it
To train the model described above, we need a large English-French corpus.
We will use the *10^9-French-English corpus* from the
@@ -312,7 +312,7 @@ Reading model parameters from /tmp/translate.ckpt-340000
Qui est le président des États-Unis ?
```
## What Next?
## What next?
The example above shows how you can build your own English-to-French
translator, end-to-end. Run it and see how the model performs for yourself.


@@ -102,7 +102,7 @@ $$
\begin{align}
P(w_t | h) &= \text{softmax}(\text{score}(w_t, h)) \\
&= \frac{\exp \{ \text{score}(w_t, h) \} }
{\sum_\text{Word w' in Vocab} \exp \{ \text{score}(w', h) \} }.
{\sum_\text{Word w' in Vocab} \exp \{ \text{score}(w', h) \} }
\end{align}
$$
@@ -115,7 +115,7 @@ $$
\begin{align}
J_\text{ML} &= \log P(w_t | h) \\
&= \text{score}(w_t, h) -
\log \left( \sum_\text{Word w' in Vocab} \exp \{ \text{score}(w', h) \} \right)
\log \left( \sum_\text{Word w' in Vocab} \exp \{ \text{score}(w', h) \} \right).
\end{align}
$$
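As a quick numeric sanity check of the two formulas above, here is the softmax and its log-likelihood computed for a made-up three-word vocabulary (the scores are arbitrary illustrative values):

```python
import math

# Toy scores for score(w', h) over a three-word vocabulary.
scores = {"quick": 2.0, "brown": 0.5, "fox": -1.0}
target = "quick"  # w_t

# log of the softmax normalizer: log sum_{w'} exp(score(w', h))
log_z = math.log(sum(math.exp(s) for s in scores.values()))

# P(w_t | h) = softmax(score(w_t, h))
p_target = math.exp(scores[target] - log_z)

# J_ML = score(w_t, h) - log sum_{w'} exp(score(w', h))
j_ml = scores[target] - log_z

# log P(w_t | h) equals the score minus the log normalizer.
assert abs(math.log(p_target) - j_ml) < 1e-9
# The softmax probabilities sum to 1 over the vocabulary.
assert abs(sum(math.exp(s - log_z) for s in scores.values()) - 1.0) < 1e-9
```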