TensorFlow model saving and loading


Question


How can I save a TensorFlow model together with its graph, like we do in Keras? Instead of defining the whole graph again in the prediction file, can we save the whole model (weights and graph) and import it later?

In Keras:

from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('RightLane-{epoch:03d}.h5', monitor='val_loss',
                             verbose=0, save_best_only=False, mode='auto')

passed to model.fit via callbacks=[checkpoint], this will give one .h5 file per epoch that we can use for prediction:

from keras.models import load_model

model = load_model("RightLane-030.h5")

How can I do the same in native TensorFlow?


Answer 1:


Method 1: Freeze graph and weights in one file (retraining might not be possible)

This option shows how to save the graph and weights in one file. Its intended use case is for deploying/sharing a model after it has been trained. To this end, we will use the protobuf (pb) format.

Given a TensorFlow session (and graph), you can generate a protobuf with:

# freeze variables
output_graph_def = tf.graph_util.convert_variables_to_constants(
    sess=sess,
    input_graph_def=sess.graph.as_graph_def(),
    output_node_names=['myMode/conv/output'])

# write protobuf to disk
with tf.gfile.GFile('graph.pb', "wb") as f:
    f.write(output_graph_def.SerializeToString())

where output_node_names expects a list of name strings for the result nodes of the graph (cf. the TensorFlow documentation).
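If you are not sure what your output nodes are called, you can print every operation name in the graph before freezing, e.g.

# list all node names in the current graph
for node in sess.graph.as_graph_def().node:
    print(node.name)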

Then, you can load the protobuf and get back the graph with its weights, ready to perform forward passes:

def load_graph(path_to_pb):
    with tf.gfile.GFile(path_to_pb, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name='')
        return graph
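With the graph loaded, a forward pass is just a matter of looking up the input and output tensors by name and running them in a session. A minimal sketch, assuming an input placeholder named myInput and the output node frozen above (tensor names and input shape here are illustrative):

import numpy as np

graph = load_graph('graph.pb')
x = graph.get_tensor_by_name('myInput:0')             # assumed input placeholder
y = graph.get_tensor_by_name('myMode/conv/output:0')  # output node frozen above

with tf.Session(graph=graph) as sess:
    dummy_input = np.zeros((1, 784), dtype=np.float32)  # shape is illustrative
    prediction = sess.run(y, feed_dict={x: dummy_input})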

Method 2: Restoring metagraph and checkpoint (easy retraining)

If you want to be able to continue training the model, you need to restore the full graph, i.e. not only the weights but also the loss function, the optimiser state (the slot variables of the Adam optimiser, for instance), etc.

You need the meta and checkpoint files that TensorFlow generates when you use:

saver = tf.train.Saver(...variables...)
saver.save(sess, 'my-model')

This will generate the checkpoint files: my-model.meta holds the graph structure, while my-model.index and my-model.data-* hold the variable values (older TensorFlow versions wrote just my-model and my-model.meta).

From these files, you can reload the graph and restore the weights with:

new_saver = tf.train.import_meta_graph('my-model.meta')
new_saver.restore(sess, 'my-model')
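After restoring, you can fetch the tensors and ops from the default graph by name and simply keep training; the optimiser's slot variables are restored along with the weights. A minimal sketch, assuming the original graph used these names (all of them are illustrative):

import numpy as np

graph = tf.get_default_graph()
x = graph.get_tensor_by_name('x:0')    # assumed input placeholder name
y_ = graph.get_tensor_by_name('y_:0')  # assumed label placeholder name
# assumed explicit name given to the training op when the graph was built
train_step = graph.get_operation_by_name('adam_optimizer/train_step')

batch_x = np.zeros((50, 784), dtype=np.float32)  # illustrative dummy batch
batch_y = np.zeros((50,), dtype=np.int64)
sess.run(train_step, feed_dict={x: batch_x, y_: batch_y})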

For more details, you can look at the official documentation.




Answer 2:


This is a complete example based on the TensorFlow GitHub tutorials. I copied it from another reply I made elsewhere on SO. There are probably other/better ways to do this.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tempfile

import numpy as np
import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

def deepnn(x):
  """deepnn builds the graph for a deep net for classifying digits.
​
  Args:
    x: an input tensor with the dimensions (N_examples, 784), where 784 is the
    number of pixels in a standard MNIST image.
​
  Returns:
    A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with values
    equal to the logits of classifying the digit into one of 10 classes (the
    digits 0-9). keep_prob is a scalar placeholder for the probability of
    dropout.
  """
  # Reshape to use within a convolutional neural net.
  # Last dimension is for "features" - there is only one here, since images are
  # grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
  with tf.name_scope('reshape'):
    x_image = tf.reshape(x, [-1, 28, 28, 1])
​
  # First convolutional layer - maps one grayscale image to 32 feature maps.
  with tf.name_scope('conv1'):
    W_conv1 = weight_variable([5, 5, 1, 32])
    b_conv1 = bias_variable([32])
    h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
​
  # Pooling layer - downsamples by 2X.
  with tf.name_scope('pool1'):
    h_pool1 = max_pool_2x2(h_conv1)
​
  # Second convolutional layer -- maps 32 feature maps to 64.
  with tf.name_scope('conv2'):
    W_conv2 = weight_variable([5, 5, 32, 64])
    b_conv2 = bias_variable([64])
    h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
​
  # Second pooling layer.
  with tf.name_scope('pool2'):
    h_pool2 = max_pool_2x2(h_conv2)
​
  # Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image
  # is down to 7x7x64 feature maps -- maps this to 1024 features.
  with tf.name_scope('fc1'):
    W_fc1 = weight_variable([7 * 7 * 64, 1024])
    b_fc1 = bias_variable([1024])
​
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
​
  # Dropout - controls the complexity of the model, prevents co-adaptation of
  # features.
​
  keep_prob = tf.placeholder_with_default(1.0,())
  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
​
  # Map the 1024 features to 10 classes, one for each digit
  with tf.name_scope('fc2'):
    W_fc2 = weight_variable([1024, 10])
    b_fc2 = bias_variable([10])
​
    y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
  return y_conv, keep_prob
​
​
def conv2d(x, W):
  """conv2d returns a 2d convolution layer with full stride."""
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


def max_pool_2x2(x):
  """max_pool_2x2 downsamples a feature map by 2X."""
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
                        strides=[1, 2, 2, 1], padding='SAME')


def weight_variable(shape):
  """weight_variable generates a weight variable of a given shape."""
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)


def bias_variable(shape):
  """bias_variable generates a bias variable of a given shape."""
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

# Import data
mnist = input_data.read_data_sets("/tmp")
# Create the model; name the input placeholder so it can be retrieved later.
x = tf.placeholder(tf.float32, [None, 784], name="x")
# Define loss and optimizer
y_ = tf.placeholder(tf.int64, [None])
# Build the graph for the deep net
y_conv, keep_prob = deepnn(x)

with tf.name_scope('loss'):
    cross_entropy = tf.losses.sparse_softmax_cross_entropy(
        labels=y_, logits=y_conv)
    cross_entropy = tf.reduce_mean(cross_entropy)

with tf.name_scope('adam_optimizer'):
    train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

with tf.name_scope('accuracy'):
    correct_prediction = tf.equal(tf.argmax(y_conv, 1), y_)
    correct_prediction = tf.cast(correct_prediction, tf.float32)
    accuracy = tf.reduce_mean(correct_prediction)

graph_location = tempfile.mkdtemp()
print('Saving graph to: %s' % graph_location)
train_writer = tf.summary.FileWriter(graph_location)
train_writer.add_graph(tf.get_default_graph())

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        batch = mnist.train.next_batch(50)
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={
                x: batch[0], y_: batch[1], keep_prob: 1.0})
            print('step %d, training accuracy %g' % (i, train_accuracy))
        train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

    print('test accuracy %g' % accuracy.eval(feed_dict={
        x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))

    # Run a single test image through the network as a sanity check.
    simg = np.reshape(mnist.test.images[0], (-1, 784))
    output = sess.run(y_conv, feed_dict={x: simg, keep_prob: 1.0})
    print(tf.argmax(output, 1).eval())

    # Save both the metagraph (network.meta) and the checkpoint files.
    saver = tf.train.Saver()
    saver.save(sess, "/tmp/network")

Restore in a new Python run:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import numpy as np
import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

sess = tf.Session()
# Rebuild the graph from the .meta file, then load the weights from the
# latest checkpoint in the same directory.
saver = tf.train.import_meta_graph('/tmp/network.meta')
saver.restore(sess, tf.train.latest_checkpoint('/tmp'))
graph = tf.get_default_graph()

mnist = input_data.read_data_sets("/tmp")
simg = np.reshape(mnist.test.images[0], (-1, 784))

# Look up the tensors by name; 'fc2/y_conv' was named explicitly above, and
# keep_prob defaults to 1.0, so it does not need to be fed at inference time.
op_to_restore = graph.get_tensor_by_name("fc2/y_conv:0")
x = graph.get_tensor_by_name("x:0")
output = sess.run(op_to_restore, feed_dict={x: simg})
print("Result = ", np.argmax(output))


Source: https://stackoverflow.com/questions/51322381/tensorflow-model-saving-and-loading
