How can you load all batch data into GPU memory in Keras (Theano backend)?

Submitted by 痴心易碎 on 2019-12-07 02:04:06

Question


Keras loads data onto the GPU batch-by-batch (noted by the author here).

For small datasets, this is very inefficient. Is there a way to modify Keras, or to call Theano functions directly after defining the model in Keras, so that all batches are moved to the GPU up front and training runs on batches already resident in GPU memory?

(Someone asked the same question on the Keras mailing list a few weeks ago, but it has received no replies so far.)


Answer 1:


Just hard-wire your data into the model as a non-trainable embedding matrix: an Embedding layer whose weight matrix is initialized to your dataset and then frozen. Then, instead of the training data itself, you pass a batch of sample indices to the model.fit method; the embedding lookup retrieves the corresponding rows, which already live in GPU memory.
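A minimal sketch of this trick, written with tf.keras rather than the Theano-backend Keras the original answer targets (the model architecture, dataset shapes, and variable names here are illustrative assumptions, not from the original):

```python
import numpy as np
import tensorflow as tf

# Toy dataset standing in for "all batch data": 1000 samples, 20 features.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

# Frozen Embedding layer: row i of its weight matrix will be sample i,
# so the whole dataset is stored as model weights (kept on the device).
emb = tf.keras.layers.Embedding(
    input_dim=X.shape[0],   # one "embedding" per sample
    output_dim=X.shape[1],  # each row is a feature vector
    trainable=False,
)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(), dtype="int32"),  # one index per sample
    emb,
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Hard-wire the dataset into the (now built) embedding layer.
emb.set_weights([X])

model.compile(optimizer="adam", loss="binary_crossentropy")

# Train on sample *indices*; the embedding lookup fetches the actual rows.
indices = np.arange(X.shape[0])
model.fit(indices, y, batch_size=128, epochs=1, verbose=0)
```

Because the input to fit is just an integer index array, the per-batch host-to-device transfer is tiny; the feature rows themselves never leave device memory after the initial set_weights call.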



Source: https://stackoverflow.com/questions/38944174/how-can-you-load-all-batch-data-into-gpu-memory-in-keras-theano-backend
