Why is accuracy from fit_generator different to that from evaluate_generator in Keras?

别跟我提以往  2020-12-16 20:40

What I do:

  • I am training a pre-trained CNN with Keras fit_generator(). This produces evaluation metrics (loss, acc, val_loss, val_acc) for each epoch. Evaluating the trained model with evaluate_generator() afterwards yields an accuracy that differs from the one reported by fit_generator().
3 Answers
  •  天涯浪人
    2020-12-16 21:05

    I now managed to get the same evaluation metrics. I changed the following:

    • I set a seed in flow_from_directory() as suggested by @Anakin:
    def generate_data(path, imagesize, nBatches):
            datagen = ImageDataGenerator(rescale=1./255)
            generator = datagen.flow_from_directory(directory=path,     # path to the target directory
                 target_size=(imagesize,imagesize),                     # dimensions to which all images found will be resized
                 color_mode='rgb',                                      # whether the images will be converted to have 1, 3, or 4 channels
                 classes=None,                                          # optional list of class subdirectories
                 class_mode='categorical',                              # type of label arrays that are returned
                 batch_size=nBatches,                                   # size of the batches of data
                 shuffle=True,                                          # whether to shuffle the data
                 seed=42)                                               # random seed for shuffling and transformations
            return generator
    
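    The effect of that seed argument can be illustrated with a minimal, framework-free sketch (stdlib random only; `shuffled_indices` is an illustrative helper, not part of the Keras API): a fixed seed makes the shuffle order identical on every run, so two passes over the data see batches in the same order.

    ```python
    import random

    def shuffled_indices(n_samples, seed):
        # Mimics what a seeded data generator does internally:
        # a fixed seed yields the same shuffle order on every run.
        rng = random.Random(seed)
        indices = list(range(n_samples))
        rng.shuffle(indices)
        return indices

    # Two runs with the same seed produce the same batch order,
    # so metrics computed over them are directly comparable.
    run1 = shuffled_indices(10, seed=42)
    run2 = shuffled_indices(10, seed=42)
    assert run1 == run2
    ```

    Without the seed, each generator instance shuffles differently, which is one source of the metric mismatch between training and evaluation passes.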

    • I set use_multiprocessing=False in fit_generator() according to the warning: "use_multiprocessing=True and multiple workers may duplicate your data":
    history = model.fit_generator(generator=trainGenerator,
                                      steps_per_epoch=trainGenerator.samples//nBatches,     # total number of steps (batches of samples)
                                      epochs=nEpochs,                   # number of epochs to train the model
                                      verbose=2,                        # verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch
                                      callbacks=callback,               # keras.callbacks.Callback instances to apply during training
                                      validation_data=valGenerator,     # generator or tuple on which to evaluate the loss and any model metrics at the end of each epoch
                                      validation_steps=valGenerator.samples//nBatches,   # number of steps (batches of samples) to yield from validation_data generator before stopping at the end of every epoch
                                      class_weight=None,                # optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function
                                      max_queue_size=10,                # maximum size for the generator queue
                                      workers=1,                        # maximum number of processes to spin up when using process-based threading
                                      use_multiprocessing=False,        # whether to use process-based threading
                                      shuffle=False,                    # whether to shuffle the order of the batches at the beginning of each epoch
                                      initial_epoch=0)                  # epoch at which to start training
    

    • I unified my Python setup as suggested in the Keras documentation on how to obtain reproducible results using Keras during development:
    import numpy as np
    import tensorflow as tf
    import random as rn
    from keras import backend as K
    
    np.random.seed(42)
    rn.seed(12345)
    session_conf = tf.ConfigProto(intra_op_parallelism_threads=1,
                                  inter_op_parallelism_threads=1)
    tf.set_random_seed(1234)
    sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
    K.set_session(sess)
    

    • Instead of rescaling input images with datagen = ImageDataGenerator(rescale=1./255), I now generate my data with:
    from keras.applications.resnet50 import preprocess_input
    datagen = ImageDataGenerator(preprocessing_function=preprocess_input)
    
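    The two preprocessing schemes produce very different inputs: rescale=1./255 maps pixels to [0, 1], while ResNet50's preprocess_input (Caffe-style) converts RGB to BGR and subtracts the ImageNet per-channel means. A plain-Python sketch of the difference (illustrative only, not the actual Keras implementation):

    ```python
    # ImageNet channel means used by the Caffe-style preprocessing, in BGR order.
    IMAGENET_MEAN_BGR = [103.939, 116.779, 123.68]

    def rescale(pixel_rgb):
        # ImageDataGenerator(rescale=1./255): maps [0, 255] to [0, 1].
        return [c / 255.0 for c in pixel_rgb]

    def caffe_preprocess(pixel_rgb):
        # Sketch of what resnet50.preprocess_input does:
        # RGB -> BGR, then subtract the ImageNet mean per channel.
        r, g, b = pixel_rgb
        bgr = [b, g, r]
        return [c - m for c, m in zip(bgr, IMAGENET_MEAN_BGR)]

    pixel = [255.0, 128.0, 0.0]    # an RGB pixel
    print(rescale(pixel))          # values in [0, 1]
    print(caffe_preprocess(pixel)) # zero-centred BGR values
    ```

    Feeding a pre-trained network inputs preprocessed differently from how it was trained degrades its accuracy, which is why matching the model's own preprocess_input matters here.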

    With this, I managed to get similar accuracy and loss from fit_generator() and evaluate_generator(). Also, using the same data for training and testing now results in similar metrics. Reasons for the remaining differences are given in the Keras documentation.
