How to test tensorflow cifar10 cnn tutorial model

灰色年华 2020-12-15 00:19

I am relatively new to machine learning and currently have almost no experience developing it.

So my question is: after training and evaluating the CIFAR-10 CNN from the tutorial, how can I use the trained model to classify my own sample images?

3 Answers
  • 2020-12-15 00:52

    The example below is not for the MNIST tutorial, but a simple XOR example. Note the train() and test() methods. All that we declare and keep globally are the weights, biases, and session. In the test method we redefine the shape of the input and reuse the same weights and biases (and session) that we refined in training.

    import tensorflow as tf
    
    #parameters for the net
    w1 = tf.Variable(tf.random_uniform([2, 2], minval=-1, maxval=1), name='weights1')
    w2 = tf.Variable(tf.random_uniform([2, 1], minval=-1, maxval=1), name='weights2')
    
    #biases
    b1 = tf.Variable(tf.zeros([2]), name='bias1')
    b2 = tf.Variable(tf.zeros([1]), name='bias2')
    
    #tensorflow session
    sess = tf.Session()
    
    
    def train():
    
        #placeholders for the training inputs (4 inputs with 2 features each) and outputs (4 outputs which have a value of 0 or 1)
        x = tf.placeholder(tf.float32, [4, 2], name='x-inputs')
        y = tf.placeholder(tf.float32, [4, 1], name='y-inputs')
    
        #set up the model calculations
        temp = tf.sigmoid(tf.matmul(x, w1) + b1)
        output = tf.sigmoid(tf.matmul(temp, w2) + b2)
    
        #cost function is avg error over training samples
        cost = tf.reduce_mean(((y * tf.log(output)) + ((1 - y) * tf.log(1.0 - output))) * -1)
    
        #training step is gradient descent
        train_step = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
    
        #declare training data
        training_x = [[0,1], [0,0], [1,0], [1,1]]
        training_y = [[1], [0], [1], [0]]
    
        #init session
        init = tf.global_variables_initializer()  # tf.initialize_all_variables() in older TF releases
        sess.run(init)
    
        #training
        for i in range(100000):
            sess.run(train_step, feed_dict={x:training_x, y:training_y})
    
            if i % 1000 == 0:
                print (i, sess.run(cost, feed_dict={x:training_x, y:training_y}))
    
        print('\ntraining done\n')
    
    
    def test(inputs):
        #redefine the shape of the input to a single unit with 2 features
        xtest = tf.placeholder(tf.float32, [1, 2], name='x-inputs')
    
        #redefine the model in terms of that new input shape
        temp = tf.sigmoid(tf.matmul(xtest, w1) + b1)
        output = tf.sigmoid(tf.matmul(temp, w2) + b2)
    
        print (inputs, sess.run(output, feed_dict={xtest:[inputs]})[0, 0] >= 0.5)
    
    
    train()
    
    test([0,1])
    test([0,0])
    test([1,1])
    test([1,0])
    
  • 2020-12-15 01:02

    This isn't 100% the answer to the question, but it's a similar way of solving it, based on an MNIST NN training example suggested in the comments on the question.

    Based on the TensorFlow beginner MNIST tutorial, and thanks to this tutorial, this is a way of training and using your neural network with custom data.

    Please note that something similar should be done for tutorials such as CIFAR-10, as @Yaroslav Bulatov mentioned in the comments.

    import input_data
    import datetime
    import numpy as np
    import tensorflow as tf
    import cv2
    from matplotlib import pyplot as plt
    import matplotlib.image as mpimg
    from random import randint
    
    
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
    
    x = tf.placeholder("float", [None, 784])
    
    W = tf.Variable(tf.zeros([784,10]))
    b = tf.Variable(tf.zeros([10]))
    
    y = tf.nn.softmax(tf.matmul(x,W) + b)
    y_ = tf.placeholder("float", [None,10])
    
    cross_entropy = -tf.reduce_sum(y_*tf.log(y))
    
    train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
    
    init = tf.global_variables_initializer()  # tf.initialize_all_variables() in older TF releases
    
    sess = tf.Session()
    sess.run(init)
    
    #Train our model
    iter = 1000
    for i in range(iter):
      batch_xs, batch_ys = mnist.train.next_batch(100)
      sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    
    #Evaluating our model:
    correct_prediction=tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
    
    accuracy=tf.reduce_mean(tf.cast(correct_prediction,"float"))
    print "Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    
    #1: Using our model to classify a random MNIST image from the original test set:
    num = randint(0, mnist.test.images.shape[0] - 1)
    img = mnist.test.images[num]
    
    classification = sess.run(tf.argmax(y, 1), feed_dict={x: [img]})
    '''
    #Uncomment this part if you want to plot the classified image.
    plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
    plt.show()
    '''
    print('Neural Network predicted', classification[0])
    print('Real label is:', np.argmax(mnist.test.labels[num]))
    
    
    #2: Using our model to classify MNIST digit from a custom image:
    
    # create an array where we can store 1 picture
    images = np.zeros((1,784))
    # and the correct values
    correct_vals = np.zeros((1,10))
    
    # read the image
    gray = cv2.imread("my_digit.png", 0 ) #0=cv2.CV_LOAD_IMAGE_GRAYSCALE #must be .png!
    
    # rescale it
    gray = cv2.resize(255-gray, (28, 28))
    
    # save the processed images
    cv2.imwrite("my_grayscale_digit.png", gray)
    """
    all images in the training set have an range from 0-1
    and not from 0-255 so we divide our flatten images
    (a one dimensional vector with our 784 pixels)
    to use the same 0-1 based range
    """
    flatten = gray.flatten() / 255.0
    """
    we need to store the flattened image and generate
    the correct_vals array
    correct_val for a digit (9) would be
    [0,0,0,0,0,0,0,0,0,1]
    """
    images[0] = flatten
    
    
    my_classification = sess.run(tf.argmax(y, 1), feed_dict={x: [images[0]]})
    
    """
    we want to run the prediction and the accuracy function
    using our generated arrays (images and correct_vals)
    """
    print('Neural Network predicted', my_classification[0], 'for your digit')
    

    For further image conditioning (digits should be completely dark on a white background) and better NN training (accuracy > 91%), please check the Advanced MNIST tutorial from TensorFlow or the 2nd tutorial I've mentioned.
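
    A minimal sketch (not from the original answer; the filenames are placeholders) of one way to do that conditioning with OpenCV: binarize the scanned digit with Otsu thresholding so it ends up completely dark on a pure white background before the resize step above.

    import cv2

    # load the custom digit as grayscale
    gray = cv2.imread("my_digit.png", 0)

    # Otsu's method picks the threshold automatically; THRESH_BINARY maps
    # pixels above the threshold to white (255) and the rest to black (0),
    # so a dark digit on a light background stays dark on pure white
    _, binarized = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    cv2.imwrite("my_conditioned_digit.png", binarized)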

  • 2020-12-15 01:10

    I recommend taking a look at the basic MNIST tutorial on the TensorFlow website. It looks like you define some function that generates the type of output that you want, and then run your session, passing it this evaluation function (correct_prediction below), and a dictionary containing whatever arguments you require (x and y_ below).

    If you have defined and trained some network that takes an input x, generates a response y based on your inputs, and you know your expected responses for your testing set y_, you may be able to print out every response to your testing set with something like:

    correct_prediction = tf.equal(y, y_)  # Check whether your prediction is correct
    print(sess.run(correct_prediction, feed_dict={x: test_images, y_: test_labels}))
    

    This is just a modification of what is done in the tutorial, where instead of trying to print each response, they determine the percent of correct responses. Also note that the tutorial uses one-hot vectors for the prediction y and actual value y_, so in order to return the associated numeral, they have to find which index of these vectors is equal to one with tf.argmax(y, 1).
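
    For example (just a sketch, reusing the x, y, y_, sess, test_images and test_labels names assumed above), you could fetch the argmax of both tensors and print the predicted digit next to the true one for every test example:

    predicted_class = tf.argmax(y, 1)   # index of the largest softmax output
    true_class = tf.argmax(y_, 1)       # index of the 1 in the one-hot label

    pred, truth = sess.run([predicted_class, true_class],
                           feed_dict={x: test_images, y_: test_labels})
    for p, t in zip(pred, truth):
        print(p, t)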

    Edit

    In general, if you define something in your graph, you can output it later when you run your graph. Say you define something that determines the result of the softmax function on your output logits as:

    graph = tf.Graph()
    with graph.as_default():
      ...
      prediction = tf.nn.softmax(logits)
      ...
    

    then you can output this at run time with:

    with tf.Session(graph=graph) as sess:
      ...
      feed_dict = { ... }  # define your feed dictionary
      pred = sess.run([prediction], feed_dict=feed_dict)
      # do stuff with your prediction vector
    