Finetuning VGG-16 on GPU in Keras: memory consumption


Question


I'm fine-tuning VGG-16 for my task. The idea is that I load the pretrained weights, remove the last layer (a softmax with 1000 outputs) and replace it with a softmax with a few outputs, then freeze all the layers but the last one and train the model. (A sketch of this modification is shown after the model definition below.)

Here is the code that builds the original model and loads the weights.

from keras.models import Sequential
from keras.layers import Conv2D, ZeroPadding2D, MaxPooling2D, Flatten, Dense, Dropout

def VGG_16(weights_path=None):
    model = Sequential()

    # Block 1: two 64-filter conv layers
    model.add(ZeroPadding2D((1,1), input_shape=(224,224,3)))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    # Block 2: two 128-filter conv layers
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    # Block 3: three 256-filter conv layers
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(256, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(256, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(256, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    # Block 4: three 512-filter conv layers
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    # Block 5: three 512-filter conv layers
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(ZeroPadding2D((1,1)))
    model.add(Conv2D(512, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))

    # Classifier head: two FC-4096 layers and a 1000-way softmax
    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(4096, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(1000, activation='softmax'))

    if weights_path:
        model.load_weights(weights_path)

    return model
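
For reference, here is a minimal sketch (my own illustration, not part of the original question) of the fine-tuning step described above: load the pretrained weights, drop the 1000-way softmax, attach a small softmax head, and freeze everything but the new layer. num_classes and the weights file name are placeholders.

from keras.layers import Dense
from keras.optimizers import SGD

num_classes = 5  # placeholder: number of outputs for your task

model = VGG_16('vgg16_weights.h5')  # placeholder weights file

# Drop the original 1000-way softmax and attach a small softmax head
model.pop()
model.add(Dense(num_classes, activation='softmax'))

# Freeze every layer except the newly added one
for layer in model.layers[:-1]:
    layer.trainable = False

model.compile(optimizer=SGD(lr=1e-3, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['accuracy'])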

In my case Keras uses TensorFlow as its backend, and TensorFlow is built with GPU support (CUDA). I currently have a rather old card: a GTX 760 with 2 GB of memory.

On my card I cannot even load the whole model (the code above) because of an out-of-memory error.

Here the author says that 4 GB is not enough either.

Here a GTX 1070 is able even to train VGG-16 (not just load it into memory), but only with certain batch sizes and in other frameworks (not in Keras). The GTX 1070 always has exactly 8 GB of memory.

So it seems that 4 GB is clearly not enough for fine-tuning VGG-16, while 8 GB may be enough.

And the question is: how much memory is enough to fine-tune VGG-16 with Keras+TF? Will 6 GB be enough, or is 8 GB the minimum, or is something even bigger needed?


Answer 1:


I have fine-tuned VGG-16 in TensorFlow with a batch size of 32 (GPU: 8 GB). I think it would be the same in your case, since Keras uses TensorFlow. However, if you want to train with a larger batch size, you might need a 12 GB or 16 GB GPU.
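
For illustration only (this is not the answerer's actual code), the batch size in Keras is set when calling fit; x_train and y_train are placeholders for your own data:

model.fit(x_train, y_train,
          batch_size=32,   # the batch size the answer reports using on an 8 GB GPU
          epochs=10,
          validation_split=0.1)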



Source: https://stackoverflow.com/questions/50899502/finetuning-vgg-16-on-gpu-in-keras-memory-consumption
