Understanding How TensorFlow Runs, Through MNIST

Submitted by 不打扰是莪最后的温柔 on 2019-12-05 07:50:22


How TensorFlow works:

  • Computation is represented as a graph
  • The graph is executed through sessions
  • Data is represented as tensors
  • State (such as weights) is held in Variables
  • Data is fed in with feeds and retrieved with fetches
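The first two of these points, building a graph first and executing it only afterwards, can be illustrated with a toy deferred-evaluation sketch in plain Python. This is an analogy only, not TensorFlow's actual machinery:

```python
# Toy sketch of the deferred-execution model: nodes record an operation,
# and nothing is computed until a "session" walks the graph.
class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, inputs, value

def constant(v):
    return Node('const', value=v)

def add(a, b):
    return Node('add', (a, b))

def mul(a, b):
    return Node('mul', (a, b))

def run(node):
    # The "session": recursively evaluate the graph rooted at `node`.
    if node.op == 'const':
        return node.value
    a, b = (run(n) for n in node.inputs)
    return a + b if node.op == 'add' else a * b

y = mul(add(constant(2), constant(3)), constant(4))  # graph built, nothing computed yet
print(run(y))  # evaluation happens only here: (2 + 3) * 4 = 20
```

Nothing is computed when `y` is constructed; the arithmetic happens only when `run` walks the graph, which mirrors the split TensorFlow makes between graph construction and `Session.run`.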

Directed graphs:

TensorFlow computation is represented as a graph. Below is an example adapted from the MNIST tutorial, used both to show how a TensorFlow directed graph is defined and to demonstrate TensorBoard.

[Figure: TensorBoard visualization of the computation graph described below]

  1. Nodes: nodes such as layer1 and cross_entropy represent operations; input can also be viewed as an input operation
  2. Directed edges indicate the direction of data flow

As the graph shows, once data is input it can be reshaped with tf.reshape(); a network can also be built on it: a first layer layer1, a dropout layer, and a second layer layer2; then the cross entropy is computed (from input and layer2) and the accuracy is computed (also from input and layer2). The graph gives us all the information about the computation except the dimensions of each layer.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
max_steps = 1000
learning_rate = 0.001
dropout = 0.9
data_dir = "MNIST_data"
log_dir = "log"

mnist = input_data.read_data_sets(data_dir,one_hot = True)
sess = tf.InteractiveSession()

with tf.name_scope("input"):
    x = tf.placeholder(tf.float32, [None,784], name = 'x-input')
    y_ = tf.placeholder(tf.float32, [None, 10], name = 'y-input')

with tf.name_scope('input-reshape'):
    image_shaped_input = tf.reshape(x, [-1,28,28,1])
    tf.summary.image('input',image_shaped_input,10)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape = shape)
    return tf.Variable(initial)

def nn_layer(input_tensor, input_dim, output_dim, layer_name, act = tf.nn.relu):
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            weights = weight_variable([input_dim, output_dim])
        with tf.name_scope('biases'):
            biases = bias_variable([output_dim])
        with tf.name_scope('Wx_plus_b'):
            preactive = tf.matmul(input_tensor, weights) + biases
        activations = act(preactive, name = 'activation')
        return activations

hidden1 = nn_layer(x, 784, 500, 'layer1')

with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    dropped = tf.nn.dropout(hidden1, keep_prob)
y = nn_layer(dropped, 500, 10, 'layer2', act = tf.identity)

with tf.name_scope('cross_entropy'):
    diff = tf.nn.softmax_cross_entropy_with_logits(logits=y, labels = y_)
    with tf.name_scope('total'):
        cross_entropy = tf.reduce_mean(diff)
with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)
with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
        correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))
    with tf.name_scope('accuracy'):
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(log_dir,sess.graph)
tf.global_variables_initializer().run()

def feed_dict(train):
    if train:
        xs,ys = mnist.train.next_batch(100)
        k = dropout
    else:
        xs,ys = mnist.test.images, mnist.test.labels
        k = 1.0
    return {x: xs, y_: ys, keep_prob: k}

saver = tf.train.Saver()
for i in range(max_steps):
    if i%100 == 99:
        # Every 100 steps, record full trace metadata and save a checkpoint
        run_options = tf.RunOptions(trace_level = tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        summary, _ = sess.run([merged, train_step], feed_dict = feed_dict(True), options = run_options, run_metadata = run_metadata)
        train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
        train_writer.add_summary(summary, i)
        saver.save(sess, log_dir + '/model.ckpt', global_step = i)
        print('Adding run metadata for ',i)
    else:
        summary, _ = sess.run([merged, train_step], feed_dict = feed_dict(True))
        train_writer.add_summary(summary, i)
train_writer.close()
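To make the cross_entropy and accuracy nodes above concrete, here is a NumPy sketch of what those ops compute per batch. The example logits and labels are made up for illustration; this is a re-implementation for clarity, not TensorFlow's own code:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable version of what tf.nn.softmax_cross_entropy_with_logits
    # computes per example: -sum(labels * log(softmax(logits))).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -(labels * log_probs).sum(axis=1)

def accuracy(logits, labels):
    # Mirrors tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) followed by
    # tf.reduce_mean(tf.cast(..., tf.float32)).
    correct = np.argmax(logits, axis=1) == np.argmax(labels, axis=1)
    return correct.astype(np.float32).mean()

logits = np.array([[2.0, 1.0, 0.1], [0.2, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # second prediction is wrong
print(softmax_cross_entropy(logits, labels).mean())
print(accuracy(logits, labels))  # 0.5
```

The `tf.reduce_mean` over `diff` in the graph corresponds to the `.mean()` call here, averaging the per-example losses over the batch.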

Sessions

In both the earlier code and the code above we see a familiar object: sess, the session. In TensorFlow, the graph defines the computation; a Session places the graph's nodes on a compute device (CPU or GPU), provides methods to execute them, and returns tensors.

# Define the session
sess = tf.InteractiveSession()

# Execute the operations defined by the graph through the session
summary, _ = sess.run([merged, train_step], feed_dict = feed_dict(True), options = run_options, run_metadata = run_metadata)

Tensor

A tensor can be thought of as a high-dimensional array. For example, in the code above,

with tf.name_scope("input"):
    x = tf.placeholder(tf.float32, [None,784], name = 'x-input')
    y_ = tf.placeholder(tf.float32, [None, 10], name = 'y-input')

with tf.name_scope('input-reshape'):
    image_shaped_input = tf.reshape(x, [-1,28,28,1])

x and y_ are both tensors with many members; a tensor's shape can also be changed with tf.reshape().
In general, the first dimension of a tensor is batch_size, the number of samples processed in one batch (usually set to None in a placeholder); the remaining dimensions depend on the data being processed. Since the MNIST images are flattened into one-dimensional arrays, x has shape [None, 784].
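The reshape performed in the graph can be tried directly in NumPy, which uses the same `-1` convention to infer the batch dimension:

```python
import numpy as np

# A batch of flattened 784-pixel images becomes a batch of
# 28x28 single-channel images, as in tf.reshape(x, [-1, 28, 28, 1]).
batch = np.zeros((100, 784), dtype=np.float32)   # batch_size = 100
images = batch.reshape(-1, 28, 28, 1)            # -1 lets NumPy infer the batch dim
print(images.shape)  # (100, 28, 28, 1)
```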

Variables and constants

A constant is simple; here is how to define one:

con = tf.constant(0.0,shape=[1,2],dtype=tf.float32,name='constant1')

A variable holds state across graph executions; for example, a variable storing weights keeps the weight values of a network layer:

weight = tf.Variable(tf.random_normal([2,3], stddev=1, seed = 1), name = 'weight')

Note that variables must be initialized before they are used:

tf.global_variables_initializer().run()
# (older code uses the now-deprecated tf.initialize_all_variables())
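For reference, the weight_variable helper earlier draws initial weights from a truncated normal; tf.truncated_normal redraws any sample that falls more than two standard deviations from the mean. A NumPy sketch of that initializer:

```python
import numpy as np

def truncated_normal(shape, stddev=0.1, rng=None):
    # Redraw samples farther than 2 standard deviations from the mean,
    # mimicking tf.truncated_normal's behavior for weight initialization.
    rng = np.random.default_rng() if rng is None else rng
    out = rng.normal(0.0, stddev, size=shape)
    bad = np.abs(out) > 2 * stddev
    while bad.any():
        out[bad] = rng.normal(0.0, stddev, size=int(bad.sum()))
        bad = np.abs(out) > 2 * stddev
    return out

w = truncated_normal((784, 500))  # same shape as the layer1 weights
print(w.shape, float(np.abs(w).max()))
```

Truncation keeps all initial weights within [-0.2, 0.2] here, which avoids the occasional extreme draw a plain normal would produce.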

Fetching data with fetches, feeding data with feeds

Consider the following code:

input1 = tf.constant(1.0)
input2 = tf.constant(2.0)
mul = tf.multiply(input1, input2)

with tf.Session() as sess:
    result = sess.run([mul])
    print(result)

In this code, result is the fetched data, i.e. the result of the mul operation we defined.

Feeding data:
TensorFlow uses placeholder as a container (a placeholder) for input data.

# The example above, reworked to feed data in at run time
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
mul = tf.multiply(input1, input2)

with tf.Session() as sess:
    result = sess.run([mul], feed_dict={input1: [7.0], input2: [2.0]})
    print(result)