Tensorflow: feed dict error: You must feed a value for placeholder tensor

Anonymous (unverified), submitted 2019-12-03 02:56:01

Question:

I have a bug whose cause I cannot find. Here is the code:

with tf.Graph().as_default():
    global_step = tf.Variable(0, trainable=False)

    images = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 33, 33, 1])
    labels = tf.placeholder(tf.float32, shape=[FLAGS.batch_size, 21, 21, 1])

    logits = inference(images)
    losses = loss(logits, labels)
    train_op = train(losses, global_step)
    saver = tf.train.Saver(tf.all_variables())
    summary_op = tf.merge_all_summaries()
    init = tf.initialize_all_variables()

    sess = tf.Session()
    sess.run(init)

    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph)

    for step in xrange(FLAGS.max_steps):
        start_time = time.time()

        data_batch, label_batch = SRCNN_inputs.next_batch(np_data, np_label,
                                                          FLAGS.batch_size)

        _, loss_value = sess.run([train_op, losses],
                                 feed_dict={images: data_batch, labels: label_batch})

        duration = time.time() - start_time

def next_batch(np_data, np_label, batchsize,
               training_number=NUM_EXAMPLES_PER_EPOCH_TRAIN):
    perm = np.arange(training_number)
    np.random.shuffle(perm)
    data = np_data[perm]
    label = np_label[perm]
    data_batch = data[0:batchsize, :]
    label_batch = label[0:batchsize, :]
    return data_batch, label_batch

where np_data is the whole set of training samples read from an HDF5 file, and likewise for np_label.
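The shuffle-and-slice behavior of next_batch above can be exercised on its own with plain NumPy (a minimal sketch with made-up shapes; NUM_EXAMPLES stands in for NUM_EXAMPLES_PER_EPOCH_TRAIN, and the small arrays here are only for illustration):

```python
import numpy as np

NUM_EXAMPLES = 10


def next_batch(np_data, np_label, batchsize, training_number=NUM_EXAMPLES):
    # Shuffle data and labels with the same permutation so each
    # data row stays aligned with its label row
    perm = np.arange(training_number)
    np.random.shuffle(perm)
    data = np_data[perm]
    label = np_label[perm]
    # Take the first `batchsize` rows of the shuffled arrays
    return data[0:batchsize, :], label[0:batchsize, :]


data = np.arange(NUM_EXAMPLES).reshape(NUM_EXAMPLES, 1)
labels = data * 2  # toy labels: twice the data value

db, lb = next_batch(data, labels, 4)
assert db.shape == (4, 1) and lb.shape == (4, 1)
# Pairing is preserved through the shuffle
assert np.all(lb == db * 2)
```

Because one permutation indexes both arrays, the data/label pairing survives the shuffle, which is the property the training loop relies on.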

After I run the code, I get this error:

2016-07-07 11:16:36.900831: step 0, loss = 55.22 (218.9 examples/sec; 0.585 sec/batch)
Traceback (most recent call last):
  File "<ipython-input-1-19672e1f8f12>", line 1, in <module>
    runfile('/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py', wdir='/home/kang/Documents/work_code_PC1/tf_SRCNN')
  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 685, in runfile
    execfile(filename, namespace)
  File "/usr/lib/python3/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 85, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 155, in <module>
    train_test()
  File "/home/kang/Documents/work_code_PC1/tf_SRCNN/SRCNN_train.py", line 146, in train_test
    summary_str = sess.run(summary_op)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 372, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 636, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
    raise type(e)(node_def, op, message)

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [128,33,33,1]
    [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[128,33,33,1], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
    [[Node: truediv/_74 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_56_truediv", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Caused by op 'Placeholder', defined at:

So it prints a result for step 0, which means data was successfully fed into the placeholders at least once.

But why does the error about feeding the placeholder appear on the next run?

When I comment out the line summary_op = tf.merge_all_summaries(), the code works fine. Why is that?

Answer 1:

When I comment out the line summary_op = tf.merge_all_summaries(), the code works fine. Why is that?

summary_op is an operation like any other. If a summary operation is attached to the result of an operation that depends on the placeholders (and that is true in your case), then running summary_op requires you to feed the graph those placeholder values too.

So your line summary_str = sess.run(summary_op) also needs the feed_dict of values to summarize; without it, the placeholders have no values and TensorFlow raises the InvalidArgumentError you saw.

Usually, instead of re-executing the operations just to log their values, you run the training operations and summary_op together in a single sess.run call at some logging interval.

Do something like this inside the training loop:

if step % LOGGING_TIME_STEP == 0:
    # Run the training ops and the summary op in one call, sharing one feed_dict
    _, loss_value, summary_str = sess.run([train_op, losses, summary_op],
                                          feed_dict={images: data_batch, labels: label_batch})
    # Write the serialized summaries with the SummaryWriter created earlier
    summary_writer.add_summary(summary_str, step)
else:
    _, loss_value = sess.run([train_op, losses],
                             feed_dict={images: data_batch, labels: label_batch})

