Why does my model predict the same label?

盖世英雄少女心 2021-01-21 01:15

I am training a small network and the training seems to go fine: the val loss decreases, I reach a validation accuracy around 80%, and it actually stops training once there is no more improvement (early stopping). Yet the model predicts the same label for every input.

2 Answers
  •  难免孤独
    2021-01-21 01:50

    One of the problems that could lead to such behavior is an imbalanced dataset. Your model has found that if it predicts the dominant class every time, it gets good results.
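
    A quick way to confirm this diagnosis is to count the labels in your training set. A minimal sketch (the `train_labels` list here is hypothetical; substitute your own labels):

    ```python
    from collections import Counter

    # Hypothetical label list -- replace with your actual training labels.
    train_labels = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]

    counts = Counter(train_labels)
    total = sum(counts.values())
    for label, n in sorted(counts.items()):
        # Print the share of each class; a strong skew suggests imbalance.
        print(f"class {label}: {n} samples ({n / total:.0%})")
    ```

    If one class dominates (say, 80% or more), a model that always predicts it already matches your reported accuracy.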

    There are many ways to tackle an imbalanced dataset. Here is a good tutorial. One of the easiest yet most effective solutions is to apply a higher penalty to your loss when the model wrongly predicts the smaller class. This can be implemented in Keras by setting the class_weight parameter in the fit or fit_generator function.

    It can be a dictionary, for example:

    class_weight = {0: 0.75, 1: 0.25}  # the weights do not need to sum to 1
    history = model.fit_generator(train_generator,
                                  steps_per_epoch=train_generator.n // train_generator.batch_size,
                                  epochs=epochs,
                                  class_weight=class_weight,  # this is the important part
                                  validation_data=val_generator,
                                  validation_steps=val_generator.n // val_generator.batch_size,
                                  callbacks=[earlyStopping, mcp_save])  # , reduce_lr_loss])
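
    Rather than picking the weights by hand, you can derive them from the label frequencies. A minimal sketch using the common "balanced" heuristic (weight inversely proportional to class frequency; the `train_labels` array is hypothetical):

    ```python
    import numpy as np

    # Hypothetical integer labels for an imbalanced binary problem.
    train_labels = np.array([0] * 80 + [1] * 20)

    classes, counts = np.unique(train_labels, return_counts=True)
    # Each class weight is n_samples / (n_classes * class_count),
    # so the minority class receives the larger weight.
    weights = len(train_labels) / (len(classes) * counts)
    class_weight = {int(c): float(w) for c, w in zip(classes, weights)}
    print(class_weight)  # the minority class gets the larger weight
    ```

    The resulting dictionary can be passed directly as the class_weight argument above.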
    
