Importing and managing datasets
import numpy as np
import pandas as pd
import sklearn.preprocessing
import tensorflow as tf

bank = pd.read_csv("bank4.csv", index_col=False)
tf.reset_default_graph()

keep_prob = tf.placeholder(tf.float32)
learning_rate = 0.003

x_data = bank.ix[:, 0:9]
print(x_data)
y_data = bank.ix[:, [-1]]
print(y_data)

x_data = sklearn.preprocessing.scale(x_data).astype(np.float32)
print(x_data)
y_data = y_data.astype(np.float32)
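(Side note on the slicing, not part of the original question: `DataFrame.ix` is deprecated and has been removed from recent pandas, so on a newer environment the positional `.iloc` indexer is the drop-in replacement. A rough sketch with a stand-in frame, assuming `bank4.csv` has nine feature columns followed by one label column:)

```python
import numpy as np
import pandas as pd
from sklearn import preprocessing

# Hypothetical stand-in for bank4.csv: 9 feature columns + 1 label column.
bank = pd.DataFrame(np.random.rand(5, 10), columns=[f"c{i}" for i in range(10)])

# .ix was removed from pandas; .iloc does the same positional slicing.
x_data = bank.iloc[:, 0:9]
y_data = bank.iloc[:, [-1]]

# scale() standardizes each column to zero mean and unit variance.
x_data = preprocessing.scale(x_data).astype(np.float32)
y_data = y_data.to_numpy().astype(np.float32)
```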
Setting up placeholders and weights for three hidden layers.
X = tf.placeholder(tf.float32, [None, 9])
print(X)
Y = tf.placeholder(tf.float32, [None, 1])

# Layer 1
W1 = tf.get_variable("weight1", shape=[9, 15], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable("bias1", shape=[15], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
layer1 = tf.nn.relu(tf.matmul(X, W1) + b1)
layer1 = tf.nn.dropout(layer1, keep_prob=keep_prob)

# Layer 2
W2 = tf.get_variable("weight2", shape=[15, 15], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable("bias2", shape=[15], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
layer2 = tf.nn.relu(tf.matmul(layer1, W2) + b2)
layer2 = tf.nn.dropout(layer2, keep_prob=keep_prob)

# Layer 3
W3 = tf.get_variable("weight3", shape=[15, 15], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b3 = tf.get_variable("bias3", shape=[15], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
layer3 = tf.nn.relu(tf.matmul(layer2, W3) + b3)
layer3 = tf.nn.dropout(layer3, keep_prob=keep_prob)

# Output layer
W4 = tf.get_variable("weight4", shape=[15, 1], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
b4 = tf.get_variable("bias4", shape=[1], dtype=tf.float32,
                     initializer=tf.contrib.layers.xavier_initializer())
hypothesis = tf.sigmoid(tf.matmul(layer3, W4) + b4)
hypothesis = tf.nn.dropout(hypothesis, keep_prob=keep_prob)
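(One detail worth flagging, as an observation rather than something from the original post: applying dropout after the final sigmoid scales the surviving activations by 1/keep_prob, so the "probabilities" fed into the cost can exceed 1 and log(1 - hypothesis) is then undefined. A tiny numpy sketch of that scaling, with made-up values:)

```python
import numpy as np

keep_prob = 0.7
probs = np.array([0.2, 0.8, 0.95], dtype=np.float32)  # pretend sigmoid outputs

# Inverted dropout (what tf.nn.dropout does): drop units with prob 1 - keep_prob,
# then scale the survivors by 1/keep_prob so the expected value is unchanged.
mask = np.array([1.0, 1.0, 1.0], dtype=np.float32)  # suppose nothing got dropped
dropped = probs * mask / keep_prob
# 0.8 / 0.7 > 1, so the result is no longer a valid probability.
```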
Defining the cost function and optimizer.
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
train = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)

predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
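(As an aside, not part of the original post: the hand-written cross-entropy takes tf.log of the sigmoid output, which becomes log(0) = -inf once the network saturates. tf.nn.sigmoid_cross_entropy_with_logits applied to the pre-sigmoid logits computes the same loss in a numerically stable way. A small sketch, using tf.compat.v1 so it also runs under TF 2.x; the values are illustrative:)

```python
import numpy as np
import tensorflow.compat.v1 as tf  # on TF 1.x, plain `import tensorflow as tf` works

tf.disable_eager_execution()
tf.reset_default_graph()

logits = tf.placeholder(tf.float32, [None, 1])  # pre-sigmoid output, e.g. matmul(layer3, W4) + b4
Y = tf.placeholder(tf.float32, [None, 1])

# Stable form of -mean(Y*log(sigmoid(z)) + (1-Y)*log(1-sigmoid(z))).
cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=logits))

with tf.Session() as sess:
    z = np.array([[0.0], [2.0]], dtype=np.float32)
    y = np.array([[1.0], [0.0]], dtype=np.float32)
    c = sess.run(cost, feed_dict={logits: z, Y: y})
```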
Training and accuracy test
# Launch graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        sess.run(train, feed_dict={X: x_data, Y: y_data})
        if step % 1000 == 0:
            print("step: ", step,
                  sess.run(cost, feed_dict={X: x_data, Y: y_data}), sep="\n")

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
I can't figure out why my NN is not working.
Every run fails with the message "You must feed a value for placeholder tensor 'Placeholder' with dtype float", even though every placeholder I define is float32.
The dropout rate also triggers a feed_dict error. Please run the code and tell me what's wrong.
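(A note on the likely cause, offered as a sketch rather than a verified fix: keep_prob is created with tf.placeholder but never appears in any feed_dict, and since it is created without a name the error reports it as plain 'Placeholder'. Every sess.run that evaluates anything downstream of a dropout op has to feed it. A minimal self-contained illustration, using tf.compat.v1 so it also runs under TF 2.x; names and shapes are made up:)

```python
import numpy as np
import tensorflow.compat.v1 as tf  # on TF 1.x, plain `import tensorflow as tf` works

tf.disable_eager_execution()
tf.reset_default_graph()

# Naming the placeholder makes a "must feed a value" error point at it by name.
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
X = tf.placeholder(tf.float32, [None, 3], name="X")
layer = tf.nn.dropout(tf.nn.relu(X), keep_prob=keep_prob)

with tf.Session() as sess:
    x = np.ones((2, 3), dtype=np.float32)
    # Training step: keep 70% of the activations ...
    out_train = sess.run(layer, feed_dict={X: x, keep_prob: 0.7})
    # ... evaluation: feed 1.0 so dropout is a no-op.
    out_eval = sess.run(layer, feed_dict={X: x, keep_prob: 1.0})
```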