pretrained VGG16 model misclassifies even though val accuracy is high and val loss is low [closed]



I am new to deep learning and started with some tutorials, in which I implemented VGG16 from scratch. I wanted to classify integrated circuits into defect and non-defect classes. I played around with it, changed the hyperparameters, and got a really good result: high accuracy (93%) and low loss. It classified really well, with only 20 misclassifications out of 580 test images. (Source: https://towardsdatascience.com/step-by-step-vgg16-implementation-in-keras-for-beginners-a833c686ae6c)

Now I wanted to try out the pretrained models in Keras and loaded VGG16 with ImageNet weights. First I tried not freezing any layers and training it from "scratch" again. I got really good metrics again, but the model classified all test images as defect. Then I froze the first 7 layers, fine-tuned the rest, and got better results again, as my plot shows.

But when I let the model predict my test images, it still classifies every IC as defect with 100% probability.
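
For reference, the prediction step looks roughly like this (a simplified sketch, not my exact code; the test-image path is a placeholder, and the class order comes from train_generator.class_indices):

    from keras.preprocessing import image
    import numpy as np

    # Load one test image at the training resolution (placeholder path)
    img = image.load_img('test/ic_001.png', target_size=(384, 384))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = x / 255.0  # same rescaling as the training ImageDataGenerator

    probs = model.predict(x)       # e.g. [[1.0, 0.0]]
    print(probs.argmax(axis=-1))   # index into train_generator.class_indices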

I don't understand why the metrics look so good while the predictions are so bad. Here is my model_create method:

    import keras
    import matplotlib.pyplot as plt
    from keras.applications.vgg16 import VGG16
    from keras.layers import Dense, Flatten
    from keras.models import Model
    from keras.optimizers import Adam
    from keras.preprocessing.image import ImageDataGenerator

    train_data_path = train_path
    train_datagen = ImageDataGenerator(rescale=1./255, validation_split=0.25)

    train_generator = train_datagen.flow_from_directory(
                train_data_path,
                target_size=(self.img_height, self.img_width),
                batch_size=self.batch_size,
                class_mode='categorical',
                subset='training')

    validation_generator = train_datagen.flow_from_directory(
                train_data_path,
                target_size=(self.img_height, self.img_width),
                batch_size=self.batch_size,
                class_mode='categorical',
                subset='validation')

    # Using VGG16 for a different task, so drop the ImageNet classifier top
    vgg = VGG16(weights='imagenet', input_shape=(384, 384, 3), include_top=False)

    # Freeze the first 7 layers (the input layer plus blocks 1 and 2); fine-tune the rest
    for layer in vgg.layers[0:7]:
        layer.trainable = False

    # New classifier head: flatten the conv features into a single 2-class softmax layer
    x = Flatten()(vgg.output)
    prediction = Dense(2, activation='softmax')(x)
    model = Model(inputs=vgg.input, outputs=prediction)
    model.summary()


    opt = Adam(lr=0.00001)
    # opt = RMSprop(lr=0.00001)
    model.compile(optimizer=opt, loss=keras.losses.binary_crossentropy, metrics=['acc'])

    from keras.callbacks import ModelCheckpoint, EarlyStopping

    checkpoint = ModelCheckpoint('ScratchModel.h5', monitor='val_acc', verbose=1, save_best_only=True,
                                 save_weights_only=False, mode='auto', period=1)

    early = EarlyStopping(monitor='val_acc', min_delta=0, patience=20, verbose=1, mode='auto')

    # Train from the generators; the checkpoint keeps the best model by val_acc
    hist = model.fit_generator(generator=train_generator, steps_per_epoch=self.batch_size,
                               validation_data=validation_generator, validation_steps=8,
                               epochs=self.epochs, callbacks=[checkpoint, early])

    # Plot training/validation accuracy and loss curves
    plt.plot(hist.history['acc'])
    plt.plot(hist.history['val_acc'])
    plt.plot(hist.history['loss'])
    plt.plot(hist.history['val_loss'])
    plt.title("Model accuracy and loss")
    plt.ylabel("Accuracy / Loss")
    plt.xlabel("Epoch")
    plt.legend(["Accuracy", "Validation Accuracy", "Loss", "Validation Loss"])
    plt.show()

Do I need to resize my images to the original 224x224 pixels? Or is it because of my fully connected layer? In the from-scratch implementation I used multiple Dense layers, while here I use just one.
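
For comparison, a head with multiple Dense layers on the pretrained base would look roughly like this (a sketch; the 4096-unit sizes are the classic VGG16 top, not necessarily what this task needs):

    # Multi-layer head, mirroring the original VGG16 classifier
    x = Flatten()(vgg.output)
    x = Dense(4096, activation='relu')(x)
    x = Dense(4096, activation='relu')(x)
    prediction = Dense(2, activation='softmax')(x)
    model = Model(inputs=vgg.input, outputs=prediction)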

I could keep using the model I trained from scratch, but I want to try out other architectures like ResNet or Inception, and I'd like to just use the models from Keras.
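
Swapping in another backbone would follow the same pattern, e.g. ResNet50 (a sketch, assuming the same 384x384 input and two-class head; with ResNet a GlobalAveragePooling2D layer is often used instead of Flatten):

    from keras.applications.resnet50 import ResNet50
    from keras.layers import GlobalAveragePooling2D

    # Same transfer-learning pattern with a ResNet50 base
    base = ResNet50(weights='imagenet', input_shape=(384, 384, 3), include_top=False)
    x = GlobalAveragePooling2D()(base.output)
    prediction = Dense(2, activation='softmax')(x)
    model = Model(inputs=base.input, outputs=prediction)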

Source: https://stackoverflow.com/questions/63011042/pretrained-vgg16-model-misclassifies-even-though-val-accuracy-is-high-and-val-lo
