Commit

Update
floydzhang315 committed Nov 10, 2015
1 parent 977c284 commit 5cb35a0
Showing 6 changed files with 789 additions and 1 deletion.
293 changes: 293 additions & 0 deletions SOURCE/get_started/basic_usage.md
@@ -0,0 +1,293 @@
# Basic Usage <a class="md-anchor" id="AUTOGENERATED-basic-usage"></a>

To use TensorFlow you need to understand how TensorFlow:

* Represents computations as graphs.
* Executes graphs in the context of `Sessions`.
* Represents data as tensors.
* Maintains state with `Variables`.
* Uses feeds and fetches to get data into and out of arbitrary operations.

## Overview <a class="md-anchor" id="AUTOGENERATED-overview"></a>

TensorFlow is a programming system in which you represent computations as
graphs. Nodes in the graph are called *ops* (short for operations). An op
takes zero or more `Tensors`, performs some computation, and produces zero or
more `Tensors`. A `Tensor` is a typed multi-dimensional array. For example,
you can represent a mini-batch of images as a 4-D array of floating point
numbers with dimensions `[batch, height, width, channels]`.
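
For example, a minimal sketch (the shapes here are purely illustrative):

```python
import tensorflow as tf

# A hypothetical mini-batch of 10 RGB images, each 28x28 pixels,
# as a 4-D tensor with shape [batch, height, width, channels].
images = tf.zeros([10, 28, 28, 3])

# The static shape recorded in the graph is [10, 28, 28, 3].
print images.get_shape()
```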

A TensorFlow graph is a *description* of computations. To compute anything,
a graph must be launched in a `Session`. A `Session` places the graph ops onto
`Devices`, such as CPUs or GPUs, and provides methods to execute them. These
methods return tensors produced by ops as [numpy](http://www.numpy.org)
`ndarray` objects in Python, and as `tensorflow::Tensor` instances in C and
C++.

## The computation graph <a class="md-anchor" id="AUTOGENERATED-the-computation-graph"></a>

TensorFlow programs are usually structured into a construction phase, which
assembles a graph, and an execution phase, which uses a session to execute ops
in the graph.

For example, it is common to create a graph to represent and train a neural
network in the construction phase, and then repeatedly execute a set of
training ops in the graph in the execution phase.

TensorFlow can be used from C, C++, and Python programs. It is presently much
easier to use the Python library to assemble graphs, as it provides a large set
of helper functions not available in the C and C++ libraries.

The session libraries have equivalent functionality for the three languages.

### Building the graph <a class="md-anchor" id="AUTOGENERATED-building-the-graph"></a>

To build a graph, start with ops that do not need any input (source ops), such
as `Constant`, and pass their output to other ops that do computation.

The op constructors in the Python library return objects that stand for the
output of the constructed ops. You can pass these to other op constructors to
use as inputs.

The TensorFlow Python library has a *default graph* to which ops constructors
add nodes. The default graph is sufficient for many applications. See the
[Graph class](../api_docs/python/framework.md#Graph) documentation for how
to explicitly manage multiple graphs.

```python
import tensorflow as tf

# Create a Constant op that produces a 1x2 matrix. The op is
# added as a node to the default graph.
#
# The value returned by the constructor represents the output
# of the Constant op.
matrix1 = tf.constant([[3., 3.]])

# Create another Constant that produces a 2x1 matrix.
matrix2 = tf.constant([[2.],[2.]])

# Create a Matmul op that takes 'matrix1' and 'matrix2' as inputs.
# The returned value, 'product', represents the result of the matrix
# multiplication.
product = tf.matmul(matrix1, matrix2)
```

The default graph now has three nodes: two `constant()` ops and one `matmul()`
op. To actually multiply the matrices, and get the result of the multiplication,
you must launch the graph in a session.
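
If you do need more than one graph, a minimal sketch of explicit graph
management (using only the `Graph` API linked above) looks like this:

```python
g = tf.Graph()
with g.as_default():
  # Ops constructed in this block are added to 'g' rather than to the
  # default graph.
  c = tf.constant(30.0)
assert c.graph is g
```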

### Launching the graph in a session <a class="md-anchor" id="AUTOGENERATED-launching-the-graph-in-a-session"></a>

Launching follows construction. To launch a graph, create a `Session` object.
Without arguments the session constructor launches the default graph.

See the [Session class](../api_docs/python/client.md#session-management) for
the complete session API.

```python
# Launch the default graph.
sess = tf.Session()

# To run the matmul op we call the session 'run()' method, passing 'product'
# which represents the output of the matmul op. This indicates to the call
# that we want to get the output of the matmul op back.
#
# All inputs needed by the op are run automatically by the session. They
# typically are run in parallel.
#
# The call 'run(product)' thus causes the execution of three ops in the
# graph: the two constants and matmul.
#
# The output of the op is returned in 'result' as a numpy `ndarray` object.
result = sess.run(product)
print result
# ==> [[ 12.]]

# Close the Session when we're done.
sess.close()
```

Sessions should be closed to release resources. You can also enter a `Session`
with a "with" block. The `Session` closes automatically at the end of the
`with` block.

```python
with tf.Session() as sess:
  result = sess.run([product])
  print result
```

The TensorFlow implementation translates the graph definition into executable
operations distributed across available compute resources, such as the CPU or
one of your computer's GPU cards. In general you do not have to specify CPUs
or GPUs explicitly. TensorFlow uses your first GPU, if you have one, for as
many operations as possible.

If you have more than one GPU available on your machine, to use a GPU beyond
the first you must assign ops to it explicitly. Use `with...Device` statements
to specify which CPU or GPU to use for operations:

```python
with tf.Session() as sess:
  with tf.device("/gpu:1"):
    matrix1 = tf.constant([[3., 3.]])
    matrix2 = tf.constant([[2.],[2.]])
    product = tf.matmul(matrix1, matrix2)
    ...
```

Devices are specified with strings. The currently supported devices are:

* `"/cpu:0"`: The CPU of your machine.
* `"/gpu:0"`: The GPU of your machine, if you have one.
* `"/gpu:1"`: The second GPU of your machine, etc.

See [Using GPUs](../how_tos/using_gpu/index.md) for more information about GPUs
and TensorFlow.
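
If you want to check where ops were actually placed, one option is to turn on
device placement logging when creating the session (a sketch; see the guide
above for details):

```python
# Create a session that logs the device each op is assigned to.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
```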

## Interactive Usage <a class="md-anchor" id="AUTOGENERATED-interactive-usage"></a>

The Python examples in the documentation launch the graph with a
[`Session`](../api_docs/python/client.md#Session) and use the
[`Session.run()`](../api_docs/python/client.md#Session.run) method to execute
operations.

For ease of use in interactive Python environments, such as
[IPython](http://ipython.org), you can instead use the
[`InteractiveSession`](../api_docs/python/client.md#InteractiveSession) class,
and the [`Tensor.eval()`](../api_docs/python/framework.md#Tensor.eval) and
[`Operation.run()`](../api_docs/python/framework.md#Operation.run) methods. This
avoids having to keep a variable holding the session.

```python
# Enter an interactive TensorFlow Session.
import tensorflow as tf
sess = tf.InteractiveSession()

x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])

# Initialize 'x' using the run() method of its initializer op.
x.initializer.run()

# Add an op to subtract 'a' from 'x'.  Run it and print the result.
sub = tf.sub(x, a)
print sub.eval()
# ==> [-2. -1.]
```

## Tensors <a class="md-anchor" id="AUTOGENERATED-tensors"></a>

TensorFlow programs use a tensor data structure to represent all data -- only
tensors are passed between operations in the computation graph. You can think
of a TensorFlow tensor as an n-dimensional array or list. A tensor has a
static type, a rank, and a shape. To learn more about how TensorFlow handles
these concepts, see the [Rank, Shape, and Type](../resources/dims_types.md)
reference.
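
As a rough sketch of those three attributes:

```python
# Tensors of rank 0, 1, and 2 (a scalar, a vector, and a matrix).
scalar = tf.constant(3.0)               # shape [], type float32
vector = tf.constant([1.0, 2.0])        # shape [2], type float32
matrix = tf.constant([[1, 2], [3, 4]])  # shape [2, 2], type int32

print matrix.get_shape()  # the static shape
print matrix.dtype        # the static type
```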

## Variables <a class="md-anchor" id="AUTOGENERATED-variables"></a>

Variables maintain state across executions of the graph. The following example
shows a variable serving as a simple counter. See
[Variables](../how_tos/variables/index.md) for more details.

```python
# Create a Variable that will be initialized to the scalar value 0.
state = tf.Variable(0, name="counter")

# Create an Op to add one to `state`.
one = tf.constant(1)
new_value = tf.add(state, one)
update = tf.assign(state, new_value)

# Variables must be initialized by running an `init` Op after having
# launched the graph.  We first have to add the `init` Op to the graph.
init_op = tf.initialize_all_variables()

# Launch the graph and run the ops.
with tf.Session() as sess:
  # Run the 'init' op.
  sess.run(init_op)
  # Print the initial value of 'state'.
  print sess.run(state)
  # Run the op that updates 'state' and print 'state'.
  for _ in range(3):
    sess.run(update)
    print sess.run(state)

# output:

# 0
# 1
# 2
# 3
```

The `assign()` operation in this code is a part of the expression graph just
like the `add()` operation, so it does not actually perform the assignment
until `run()` executes the expression.

You typically represent the parameters of a statistical model as a set of
Variables. For example, you would store the weights for a neural network as a
tensor in a Variable. During training you update this tensor by running a
training graph repeatedly.
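
For example, a hypothetical weight matrix for a fully connected layer with 784
inputs and 10 outputs might live in a Variable like this (the names and sizes
are illustrative only):

```python
# Illustrative weights and biases for a fully connected layer.
weights = tf.Variable(tf.random_uniform([784, 10], -1.0, 1.0), name="weights")
biases = tf.Variable(tf.zeros([10]), name="biases")
```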

## Fetches <a class="md-anchor" id="AUTOGENERATED-fetches"></a>

To fetch the outputs of operations, execute the graph with a `run()` call on
the `Session` object and pass in the tensors to retrieve. In the previous
example we fetched the single node `state`, but you can also fetch multiple
tensors:

```python
input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(5.0)
intermed = tf.add(input2, input3)
mul = tf.mul(input1, intermed)

with tf.Session() as sess:
  result = sess.run([mul, intermed])
  print result

# output:
# [array([ 21.], dtype=float32), array([ 7.], dtype=float32)]
```

All the ops needed to produce the values of the requested tensors are run once
(not once per requested tensor).

## Feeds <a class="md-anchor" id="AUTOGENERATED-feeds"></a>

The examples above introduce tensors into the computation graph by storing them
in `Constants` and `Variables`. TensorFlow also provides a feed mechanism for
patching a tensor directly into any operation in the graph.

A feed temporarily replaces the output of an operation with a tensor value.
You supply feed data as an argument to a `run()` call. The feed is only used for
the run call to which it is passed. The most common use case involves
designating specific operations to be "feed" operations by using
`tf.placeholder()` to create them:

```python
input1 = tf.placeholder(tf.types.float32)
input2 = tf.placeholder(tf.types.float32)
output = tf.mul(input1, input2)

with tf.Session() as sess:
  print sess.run([output], feed_dict={input1:[7.], input2:[2.]})

# output:
# [array([ 14.], dtype=float32)]
```

A `placeholder()` operation generates an error if you do not supply a feed for
it. See the
[MNIST fully-connected feed tutorial](../tutorials/mnist/tf/index.md)
([source code](https://tensorflow.googlesource.com/tensorflow/+/master/tensorflow/g3doc/tutorials/mnist/fully_connected_feed.py))
for a larger-scale example of feeds.

82 changes: 82 additions & 0 deletions SOURCE/get_started/index.md
@@ -0,0 +1,82 @@
# Introduction <a class="md-anchor" id="AUTOGENERATED-introduction"></a>

Let's get you up and running with TensorFlow!

But before we even get started, let's give you a sneak peek at what TensorFlow
code looks like in the Python API, just so you have a sense of where we're
headed.

Here's a little Python program that makes up some data in three dimensions, and
then fits a plane to it.

```python
import tensorflow as tf
import numpy as np

# Make 100 phony data points in NumPy.
x_data = np.float32(np.random.rand(2, 100)) # Random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Construct a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# For initializing the variables.
init = tf.initialize_all_variables()

# Launch the graph
sess = tf.Session()
sess.run(init)

# Fit the plane.
for step in xrange(0, 201):
sess.run(train)
if step % 20 == 0:
print step, sess.run(W), sess.run(b)

# Learns best fit is W: [[0.100 0.200]], b: [0.300]
```

To whet your appetite further, we suggest you check out what a classical
machine learning problem looks like in TensorFlow. In the land of neural
networks the most "classic" classical problem is the MNIST handwritten digit
classification. We offer two introductions here, one for machine learning
newbies, and one for pros. If you've already trained dozens of MNIST models in
other software packages, please take the red pill. If you've never even heard
of MNIST, definitely take the blue pill. If you're somewhere in between, we
suggest skimming blue, then red.

<div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px; display: flex; flex-direction: row">
<a href="../tutorials/mnist/beginners/index.md" title="MNIST for ML Beginners tutorial">
<img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="blue_pill.png" alt="MNIST for machine learning beginners tutorial" />
</a>
<a href="../tutorials/mnist/pros/index.md" title="Deep MNIST for ML Experts tutorial">
<img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="red_pill.png" alt="Deep MNIST for machine learning experts tutorial" />
</a>
</div>
<p style="font-size:10px;">Images licensed CC BY-SA 4.0; original by W. Carter</p>

If you're already sure you want to learn and install TensorFlow you can skip
these and charge ahead. Don't worry, you'll still get to see MNIST -- we'll
also use MNIST as an example in our technical tutorial where we elaborate on
TensorFlow features.

## Recommended Next Steps: <a class="md-anchor" id="AUTOGENERATED-recommended-next-steps-"></a>
* [Download and Setup](../get_started/os_setup.md)
* [Basic Usage](../get_started/basic_usage.md)
* [TensorFlow Mechanics 101](../tutorials/mnist/tf/index.md)


<div class='sections-order' style="display: none;">
<!--
<!-- os_setup.md -->
<!-- basic_usage.md -->
-->
</div>
