TensorFlow for binary classification

深忆病人 2020-12-05 03:12

I am trying to adapt this MNIST example to binary classification.

But when changing my NLABELS from NLABELS=2 to NLABELS=1, the loss function always returns 0 (and accuracy 1).

2 Answers
  • 2020-12-05 03:36

    I've been looking for good examples of how to implement binary classification in TensorFlow in a similar manner to the way it would be done in Keras. I didn't find any, but after digging through the code a bit, I think I have it figured out. I modified the problem above to implement a solution that uses sigmoid_cross_entropy_with_logits, the way Keras does under the hood.

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    from tensorflow.examples.tutorials.mnist import input_data
    import tensorflow as tf
    
    # Import data
    mnist = input_data.read_data_sets('data', one_hot=True)
    NLABELS = 1
    
    sess = tf.InteractiveSession()
    
    # Create the model
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    # Fold the 0.1 scale into the initializer; multiplying the variable by 0.1
    # afterwards would turn W into a plain tensor rather than a tf.Variable.
    W = tf.get_variable('weights', [784, NLABELS],
                        initializer=tf.truncated_normal_initializer(stddev=0.1))
    b = tf.Variable(tf.zeros([NLABELS]), name='bias')
    logits = tf.matmul(x, W) + b
    
    # Define loss and optimizer
    y_ = tf.placeholder(tf.float32, [None, NLABELS], name='y-input')
    
    # More name scopes will clean up the graph representation
    with tf.name_scope('cross_entropy'):
    
        # Manual calculation: the math that sigmoid_cross_entropy_with_logits
        # performs under the hood. Don't use it directly; it is numerically
        # unstable and can produce NaN gradients when the sigmoid saturates.
        # entropy = tf.multiply(tf.log(tf.sigmoid(logits)), y_) + tf.multiply((1 - y_), tf.log(1 - tf.sigmoid(logits)))
        # loss = -tf.reduce_mean(entropy, name='loss')
    
        entropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_, logits=logits)
        loss = tf.reduce_mean(entropy, name='loss')
    
    with tf.name_scope('train'):
        # Using Adam instead
        # train_step = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
        train_step = tf.train.AdamOptimizer(learning_rate=0.002).minimize(loss)
    
    with tf.name_scope('test'):
        # Threshold the sigmoid probability at 0.5 (equivalently, the logit at 0);
        # thresholding the raw logits at 0.5 would be wrong.
        preds = tf.cast(tf.sigmoid(logits) > 0.5, tf.float32)
        correct_prediction = tf.equal(preds, y_)
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    tf.global_variables_initializer().run()
    
    # Train the model; every 100 steps, evaluate loss and accuracy on the test set
    
    for i in range(2000):
        if i % 100 == 0:  # Record the test-set accuracy and loss
            labels = mnist.test.labels[:, 0:NLABELS]  # column 0 of the one-hot labels: 1.0 iff the digit is 0
            feed = {x: mnist.test.images, y_: labels}
            result = sess.run([loss, accuracy], feed_dict=feed)
            print('Accuracy at step %s: %s - loss: %f' % (i, result[1], result[0]))
        else:
            batch_xs, batch_ys = mnist.train.next_batch(100)
            batch_ys = batch_ys[:, 0:NLABELS]
            feed = {x: batch_xs, y_: batch_ys}
            sess.run(train_step, feed_dict=feed)  # train only on mini-batches, never on the test set
    

    Training:

    Accuracy at step 0: 0.7373 - loss: 0.758670
    Accuracy at step 100: 0.9017 - loss: 0.423321
    Accuracy at step 200: 0.9031 - loss: 0.322541
    Accuracy at step 300: 0.9085 - loss: 0.255705
    Accuracy at step 400: 0.9188 - loss: 0.209892
    Accuracy at step 500: 0.9308 - loss: 0.178372
    Accuracy at step 600: 0.9453 - loss: 0.155927
    Accuracy at step 700: 0.9507 - loss: 0.139031
    Accuracy at step 800: 0.9556 - loss: 0.125855
    Accuracy at step 900: 0.9607 - loss: 0.115340
    Accuracy at step 1000: 0.9633 - loss: 0.106709
    Accuracy at step 1100: 0.9667 - loss: 0.099286
    Accuracy at step 1200: 0.971 - loss: 0.093048
    Accuracy at step 1300: 0.9714 - loss: 0.087915
    Accuracy at step 1400: 0.9745 - loss: 0.083300
    Accuracy at step 1500: 0.9745 - loss: 0.079019
    Accuracy at step 1600: 0.9761 - loss: 0.075164
    Accuracy at step 1700: 0.9768 - loss: 0.071803
    Accuracy at step 1800: 0.9777 - loss: 0.068825
    Accuracy at step 1900: 0.9788 - loss: 0.066270
    
  • 2020-12-05 03:38

    The original MNIST example uses a one-hot encoding to represent the labels in the data: this means that if there are NLABELS = 10 classes (as in MNIST), the target output is [1 0 0 0 0 0 0 0 0 0] for class 0, [0 1 0 0 0 0 0 0 0 0] for class 1, etc. The tf.nn.softmax() operator converts the logits computed by tf.matmul(x, W) + b into a probability distribution across the different output classes, which is then compared to the fed-in value for y_.
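
    As a quick illustration of what tf.nn.softmax() does (a minimal sketch in the same TF 1.x style as the answer above; the logit values are made up):

    import tensorflow as tf

    logits = tf.constant([[2.0, 1.0, 0.1]])   # one row of made-up logits
    probs = tf.nn.softmax(logits)             # each row now sums to 1.0

    with tf.Session() as sess:
        print(sess.run(probs))  # approx. [[0.659, 0.242, 0.099]]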

    If NLABELS = 1, this acts as if there were only a single class, and the tf.nn.softmax() op would compute a probability of 1.0 for that class, leading to a cross-entropy of 0.0, since tf.log(1.0) is 0.0 for all of the examples.
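
    You can see this degenerate behaviour directly (a minimal sketch under the same TF 1.x assumptions; the logit values are arbitrary):

    import tensorflow as tf

    logits = tf.constant([[3.7], [-1.2]])  # NLABELS = 1: one logit per example
    probs = tf.nn.softmax(logits)          # softmax over a single value is always 1.0
    xent = -tf.log(probs)                  # so the cross-entropy is always 0.0

    with tf.Session() as sess:
        print(sess.run(probs))  # [[1.], [1.]]
        print(sess.run(xent))   # [[0.], [0.]]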

    There are (at least) two approaches you could try for binary classification (both are sketched in the code after this list):

    1. The simplest would be to set NLABELS = 2 for the two possible classes, and encode your training data as [1 0] for label 0 and [0 1] for label 1. This answer has a suggestion for how to do that.

    2. You could keep the labels as integers 0 and 1 and use tf.nn.sparse_softmax_cross_entropy_with_logits(), as suggested in this answer.
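
    Both approaches can be sketched as follows (TF 1.x, assuming logits of shape [batch, 2] computed as in the question with NLABELS = 2; the placeholder names are illustrative):

    import tensorflow as tf

    logits = tf.placeholder(tf.float32, [None, 2])

    # Approach 1: one-hot targets, [1, 0] for label 0 and [0, 1] for label 1.
    y_onehot = tf.placeholder(tf.float32, [None, 2])
    loss1 = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_onehot, logits=logits))

    # Approach 2: plain integer targets 0/1 with the sparse variant.
    y_int = tf.placeholder(tf.int64, [None])
    loss2 = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_int, logits=logits))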
