Training a Keras model on multiple feature files that are read in sequentially to save memory


Question


I'm running into memory issues when trying to read in massive feature files (see below). I figured I'd split the training files and read them in sequentially. What is the best approach to do that?

x_train = np.load(path_features + 'x_train.npy')
y_train = np.load(path_features + 'y_train.npy')
x_test = np.load(path_features + 'x_test.npy')
y_test = np.load(path_features + 'y_test.npy')

path_models = '../pipelines/' + pipeline + '/models/'

# global params
verbose_level = 1
inp_shape = x_train.shape[1:]

# models
if model_type == 'standard_4':
    print('Starting to train ' + feature_type + '_' + model_type + '.')
    num_classes = 1
    dropout_prob = 0.5
    activation_function = 'relu'
    loss_function = 'binary_crossentropy'
    batch_size = 32
    epoch_count = 100
    opt = SGD(lr=0.001)

    model = Sequential()
    model.add(Conv2D(filters=16, kernel_size=(3, 3), input_shape=inp_shape))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(filters=32, kernel_size=(3, 3)))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Flatten())
    model.add(Dense(64, activation=activation_function))
    model.add(Dropout(rate=dropout_prob))
    model.add(Dense(32, activation=activation_function))
    model.add(Dense(num_classes, activation='sigmoid'))
    model.summary()
    model.compile(loss=loss_function, optimizer=opt, metrics=['accuracy'])
    hist = model.fit(x_train, y_train, batch_size=batch_size, epochs=epoch_count,
                     verbose=verbose_level,
                     validation_data=(x_test, y_test))

    model.save(path_models + category + '_' + feature_type + '_' + model_type + '.h5')
    print('Finished training ' + model_type + '.')

    plot_model(hist, path_models, category, feature_type, model_type)
    print('Saved model charts.')

Answer 1:


You can use either a Python generator or a Keras Sequence.

The generator should yield your batches indefinitely:

def myReader(trainOrTest):
    while True:
        # loop over your split feature files; feature_dirs stands in for
        # however you enumerate the chunk locations on disk
        for path_features in feature_dirs:
            x = np.load(path_features + 'x_' + trainOrTest + '.npy')
            y = np.load(path_features + 'y_' + trainOrTest + '.npy')

            # if each file is already in a shape accepted by your model,
            # yield it directly as one batch
            yield (x, y)

You can then use fit_generator to train and predict_generator to predict values:

model.fit_generator(myReader(trainOrTest), steps_per_epoch=howManyFiles, epochs=.......)
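
The answer also mentions a Keras Sequence as the other option; a minimal sketch of that alternative is below (the class name, the file_dirs list, and the one-.npy-pair-per-directory layout are assumptions about how the split files are stored, not something from the original answer):

import numpy as np
from keras.utils import Sequence

class NpyChunkSequence(Sequence):
    # Hypothetical Sequence that serves one pre-split .npy pair per batch.
    def __init__(self, file_dirs, train_or_test):
        self.file_dirs = file_dirs          # e.g. ['features/part0/', 'features/part1/']
        self.train_or_test = train_or_test  # 'train' or 'test'

    def __len__(self):
        # one batch per feature file
        return len(self.file_dirs)

    def __getitem__(self, idx):
        path = self.file_dirs[idx]
        x = np.load(path + 'x_' + self.train_or_test + '.npy')
        y = np.load(path + 'y_' + self.train_or_test + '.npy')
        return x, y

Because a Sequence reports its own length, fit_generator does not need steps_per_epoch, and Keras can shuffle the chunk order and load files in parallel workers, e.g. model.fit_generator(NpyChunkSequence(train_dirs, 'train'), validation_data=NpyChunkSequence(test_dirs, 'test'), epochs=epoch_count).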


Source: https://stackoverflow.com/questions/46229966/training-a-keras-model-on-multiple-feature-files-that-are-read-in-sequentially-t
