how to feed DataGenerator for KERAS multilabel issue?

Submitted by 烂漫一生 on 2020-02-25 04:15:31

Question


I am working on a multilabel classification problem with Keras. When I execute the code as shown below, I get the following error:

ValueError: Error when checking target: expected activation_19 to have 2 dimensions, but got array with shape (32, 6, 6)

This is because the labels dictionary contains lists of 0s and 1s, which don't fit keras.utils.to_categorical in the return statement, as I learned recently. Softmax can't handle more than one 1 either.

I guess I first need a LabelEncoder and then one-hot encoding for the labels, to avoid multiple 1s per label vector, which don't go together with softmax.

I hope someone can give me a hint on how to preprocess or transform the label data to get the code fixed. I would appreciate it a lot. Even a code snippet would be awesome.

The CSV looks like this:

Filename  label1  label2  label3  label4  ...   ID
abc1.jpg    1       0       0       1     ...  id-1
def2.jpg    0       1       0       1     ...  id-2
ghi3.jpg    0       0       0       1     ...  id-3
...

The current data to feed looks like this:

partition: {'train': ['id-1','id-2','id-3',...], 'validation': ['id-7','id-14','id-21',...]}
labels:    {'id-0': [1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
            'id-1': [0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
            'id-2': [0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
             ...}

My images are converted to arrays and saved as single .npy files in a separate folder, as you can see in DataGenerator: id-1.npy, id-2.npy, ...

import numpy as np
import keras
from keras.layers import *
from keras.models import Sequential

class DataGenerator(keras.utils.Sequence):
    'Generates data for Keras'
    def __init__(self, list_IDs, labels, batch_size=32, dim=(224,224), n_channels=3,
                 n_classes=21, shuffle=True):
        'Initialization'
        self.dim = dim
        self.batch_size = batch_size
        self.labels = labels
        self.list_IDs = list_IDs
        self.n_channels = n_channels
        self.n_classes = n_classes
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.list_IDs) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]

        # Find list of IDs
        list_IDs_temp = [self.list_IDs[k] for k in indexes]

        # Generate data
        X, y = self.__data_generation(list_IDs_temp)

        return X, y

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.list_IDs))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples' # X : (n_samples, *dim, n_channels)
        # Initialization
        X = np.empty((self.batch_size, *self.dim, self.n_channels))
        y = np.empty((self.batch_size, self.n_classes), dtype=int)

        # Generate data
        for i, ID in enumerate(list_IDs_temp):
            # Store sample
            X[i,] = np.load('Folder with npy files/' + ID + '.npy')

            # Store class
            y[i] = self.labels[ID]

        return X, keras.utils.to_categorical(y, num_classes=self.n_classes)

-----------------------

# Parameters
params = {'dim': (224, 224),
          'batch_size': 32,
          'n_classes': 21,
          'n_channels': 3,
          'shuffle': True}

# Datasets
partition = partition
labels = labels

# Generators
training_generator = DataGenerator(partition['train'], labels, **params)
validation_generator = DataGenerator(partition['validation'], labels, **params)

# Design model
model = Sequential()

model.add(Conv2D(32, (3,3), input_shape=(224, 224, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2,2)))

...

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dense(21))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

# Train model on dataset
model.fit_generator(generator=training_generator,
                    validation_data=validation_generator)

Answer 1:


Since you already have each label as a vector of 21 elements of 0 and 1, you shouldn't use keras.utils.to_categorical in the method __data_generation(self, list_IDs_temp). Just return X and y.
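For clarity, a minimal sketch of the adjusted method, reusing the attributes and placeholder folder path from the question:

    def __data_generation(self, list_IDs_temp):
        'Generates data containing batch_size samples'  # X : (n_samples, *dim, n_channels)
        X = np.empty((self.batch_size, *self.dim, self.n_channels))
        y = np.empty((self.batch_size, self.n_classes), dtype=int)

        for i, ID in enumerate(list_IDs_temp):
            X[i,] = np.load('Folder with npy files/' + ID + '.npy')
            y[i] = self.labels[ID]  # each entry is already a multi-hot vector of length n_classes

        return X, y  # no to_categorical: the labels are already 0/1 encoded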




Answer 2:


OK, I have a solution, but I'm not sure it's the best one:

from sklearn import preprocessing  # for LabelEncoder
from keras.utils import to_categorical  # used below to one-hot encode the class indexes


labels_list = [x[1] for x in labels.items()]  # get the list of all label sequences

def convert(seq):
    # join a 0/1 sequence into a single int
    res = int("".join(map(str, seq)))
    return res

label_int = [convert(i) for i in labels_list]  # convert each sequence to an int

print(label_int)  # e.g. [1, 2, 3] becomes 123


le = preprocessing.LabelEncoder()
le.fit(label_int)
labels = le.classes_   # the unique label ints (sorted)
print(labels)
d = dict([(y, x) for x, y in enumerate(labels)])   # map each unique sequence to a class index 0, 1, 2, 3, ...
print(d)

labels_encoded = [d[i] for i in label_int]  # encode every sequence with its class index
print(labels_encoded)

labels_encoded = to_categorical(labels_encoded)  # one-hot encode the class indexes
print(labels_encoded)

This is not really clean, I think, but it works.

You need to change your last Dense layer to have a number of neurons equal to the number of unique label combinations, i.e. the length of the labels_encoded vectors.

For the predictions, you will have the dict d that maps each predicted value back to your original sequence style.
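As a rough sketch of both steps (X_test and inv_d are illustrative names, not from the question; zfill(21) just restores the leading zeros that the int conversion drops):

model.add(Dense(len(d)))           # one neuron per unique label combination
model.add(Activation('softmax'))

...

inv_d = {v: k for k, v in d.items()}                  # class index -> original int form
pred_class = np.argmax(model.predict(X_test), axis=1)
pred_sequences = [str(inv_d[c]).zfill(21) for c in pred_class]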

Tell me if you need clarifications!

For a few test sequences, it gives you this:

labels = {'id-0': [1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1],
          'id-1': [0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
          'id-2': [0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1],
          'id-3': [1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1],
          'id-4': [0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]}

[100100001100000001011, 10100001100000000001, 100001100010000001, 100100001100000001011, 10100001100000000001]
[100001100010000001 10100001100000000001 100100001100000001011]
{100001100010000001: 0, 10100001100000000001: 1, 100100001100000001011: 2}
[2, 1, 0, 2, 1]
[[0. 0. 1.]
 [0. 1. 0.]
 [1. 0. 0.]
 [0. 0. 1.]
 [0. 1. 0.]]

EDIT after clarification:

OK, I read a little more about the subject. Once more, the problem with softmax is that it tries to maximize one class while minimizing the others.
So I would suggest keeping your arrays of 21 ones and zeros, but instead of using softmax, use sigmoid (to predict a probability between 0 and 1 for each class) with binary_crossentropy.
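A minimal sketch of that change applied to the model from the question (only the final activation and the loss change):

model.add(Dense(21))               # one output per label
model.add(Activation('sigmoid'))   # independent probability per label instead of softmax

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])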

And use a threshold for your predictions:

preds = model.predict(X_test)
preds[preds>=0.5] = 1
preds[preds<0.5] = 0

Keep me posted on the results!



Source: https://stackoverflow.com/questions/60322652/how-to-feed-datagenerator-for-keras-multilabel-issue
