TensorFlow ValueError: Too many values to unpack (expected 2)


Question


I have looked this up on Reddit, Stack Overflow, tech forums, documentation, GitHub issues, etc., and still can't solve this issue.

For reference, I am using Python 3 with TensorFlow on Windows 10, 64-bit.

I am trying to use my own dataset (300 pictures of cats, 512x512, .png format) in TensorFlow to train it to recognize what a cat looks like. If this works, I will train it with other animals and eventually objects.

I can't seem to figure out why I am getting the error ValueError: too many values to unpack (expected 2). The error appears in the line images,labal = create_batches(10), which points to my function create_batches (see below). I don't know what could be causing this, as I am fairly new to TensorFlow. I am trying to make my own neural network based on the MNIST dataset. Code below:

import tensorflow as tf
import numpy as np
import os
import sys
import cv2


content = []
labels_list = []
with open("data/cats/files.txt") as ff:
    for line in ff:
        line = line.rstrip()
        content.append(line)

with open("data/cats/labels.txt") as fff:
    for linee in fff:
        linee = linee.rstrip()
        labels_list.append(linee)

def create_batches(batch_size):
    images = []
    for img in content:
        #f = open(img,'rb')
        #thedata = f.read().decode('utf8')
        thedata = cv2.imread(img)
        thedata = tf.contrib.layers.flatten(thedata)
        images.append(thedata)
    images = np.asarray(images)

    labels =tf.convert_to_tensor(labels_list,dtype=tf.string)

    print(content)
    #print(labels_list)

    while(True):
        for i in range(0,298,10):
            yield images[i:i+batch_size],labels_list[i:i+batch_size]


imgs = tf.placeholder(dtype=tf.float32,shape=[None,262144])
lbls = tf.placeholder(dtype=tf.float32,shape=[None,10])

W = tf.Variable(tf.zeros([262144,10]))
b = tf.Variable(tf.zeros([10]))

y_ = tf.nn.softmax(tf.matmul(imgs,W) + b)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(lbls * tf.log(y_),reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for i in range(10000):
    images,labal = create_batches(10)
    sess.run(train_step, feed_dict={imgs:images, lbls: labal})

correct_prediction = tf.equal(tf.argmax(y_,1),tf.argmax(lbls,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

print(sess.run(accuracy, feed_dict={imgs:content, lbls:labels_list}))

And the Error:

Traceback (most recent call last):
  File "B:\Josh\Programming\Python\imgpredict\predict.py", line 54, in <module>

    images,labal = create_batches(2)
ValueError: too many values to unpack (expected 2)
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
(A few hundred lines of this)
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile

My GitHub link if anyone needs it. The project folder is "imgpredict".


Answer 1:


You are yielding your results in an incorrect way:

yield(images[i:i+batch_size]) #,labels_list[i:i+batch_size])

which yields only one value, but when you call your method you are expecting two yielded values:

images,labal = create_batches(10)
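To see why this raises the error, here is a minimal standalone sketch (a toy generator, not the asker's code) that reproduces the same ValueError:

def batches():
    for i in range(5):
        yield i  # yields five values, one at a time

try:
    # Calling the function returns a generator object; unpacking it into
    # two names pulls values from it and fails as soon as a third appears:
    a, b = batches()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)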

Either yield two values, like:

yield (images[i:i+batch_size] , labels_list[i:i+batch_size])

(i.e., uncomment the second item) or just expect one.

Edit: You should use parentheses both on the yield and when receiving the results, like this:

#when yielding, the parentheses group the two values into a single tuple
#(the function is a generator because it contains yield)
yield (images[i:i+batch_size] , labels_list[i:i+batch_size])

#when receiving as well, although the parentheses here are optional
(images,labal) = create_batches(10)

However, this is not how yield is usually consumed; one normally iterates over the generator that the method returns. In your case it should look something like this:

#do the training several times as you have
for i in range(10000):
    #now here you should iterate over your generator, in order to gain its benefits
    #that is, you don't load the entire result set into memory at once
    #remember to receive with () as mentioned
    for (images, labal) in create_batches(10):
        #do whatever you want with that data
        sess.run(train_step, feed_dict={imgs:images, lbls: labal})
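One caveat: because create_batches contains a while(True) loop, the inner for loop above would never terminate as written; the while(True) would need to be removed (or a break added) for the outer training loop to ever advance.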

You can also check this question regarding the use of yield and generators.
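Putting the pattern together, here is a minimal self-contained sketch (a hypothetical create_pairs with toy data standing in for the images and labels):

def create_pairs(batch_size):
    # toy stand-ins for the asker's image and label lists
    data = list(range(20))
    labels = ["even" if d % 2 == 0 else "odd" for d in data]
    for i in range(0, len(data), batch_size):
        # each iteration yields one (batch, labels) tuple
        yield data[i:i + batch_size], labels[i:i + batch_size]

for batch, batch_labels in create_pairs(5):
    print(batch, batch_labels)
# prints four (batch, labels) pairs, e.g.
# [0, 1, 2, 3, 4] ['even', 'odd', 'even', 'odd', 'even']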




Answer 2:


You commented out the second return item.

        yield(images[i:i+batch_size])    #,labels_list[i:i+batch_size])

You yield a single list to assign to images, and there's nothing left for labal. Remove that comment mark, or yield a dummy value if you're in debugging mode.
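For example, a minimal sketch of that debugging fallback (the all-zero labels are hypothetical placeholders, not real data):

def create_batches(batch_size, images):
    # debug variant: pairs each image slice with dummy labels so that
    # the two-value unpack in the caller still works
    for i in range(0, len(images), batch_size):
        yield images[i:i + batch_size], [0] * batch_size  # dummy labels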


UPDATE

Separate this line and check what you're trying to return:

result = (images[i:i+batch_size],
          labels_list[i:i+batch_size])
print(len(result), result)  # parentheses needed, since this is Python 3
return result
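With the second item restored, this should print 2 followed by the tuple of the image slice and the label slice, confirming there are exactly two values to unpack.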


Source: https://stackoverflow.com/questions/45022315/tensorflow-valueerror-too-many-vaues-to-unpack-expected-2
