Is there a workaround for running out of memory on GPU with tensorflow?

若如初见 · Submitted on 2020-01-03 10:45:10

Question


I am currently building a 3D convolutional network for video classification. The main problem is that I run out of memory too easily: even with batch_size set to 1, there is not enough memory to train the CNN the way I want.

I am using a GTX 970 with 4 GB of VRAM (about 3.2 GB of it free for TensorFlow to use). I was expecting it to still train my network, perhaps falling back on system RAM or doing the calculations in parts. But so far I have only been able to run it by making the CNN simpler, which directly affects performance.
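(For context on the "3.2 GB free" figure: by default TensorFlow pre-allocates nearly all free VRAM at startup. A minimal sketch, assuming the TF 2.x tf.config API, of switching to on-demand allocation; this avoids OOMs caused by the pre-allocation itself, though it does not create more memory:)

    import tensorflow as tf

    # Must run before any op touches the GPU. With memory growth enabled,
    # TensorFlow allocates VRAM on demand instead of grabbing it all up
    # front; peak usage is unchanged, but pre-allocation OOMs go away.
    for gpu in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)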

I could run the training on the CPU instead, but it is significantly slower, so that is not a good solution either.
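(For reference, a minimal sketch of pinning ops to the CPU with tf.device; the shapes and layer sizes are hypothetical. This uses system RAM instead of VRAM, at the speed cost mentioned above:)

    import tensorflow as tf

    # Pin ops to the CPU so they use system RAM instead of VRAM.
    with tf.device('/CPU:0'):
        x = tf.random.normal([1, 16, 112, 112, 3])  # batch of 1 video clip
        y = tf.keras.layers.Conv3D(8, 3, activation='relu')(x)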

Is there a better solution than to buy a better GPU?

Thanks in advance.


Answer 1:


Using gradient checkpointing will help with memory limits: instead of keeping every intermediate activation in memory for backpropagation, selected segments of the network are recomputed during the backward pass, trading extra compute for lower peak memory.
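A minimal sketch of how this might look, assuming a TF 2.x environment where tf.recompute_grad is available; the layer sizes and input shape are hypothetical stand-ins for one segment of the 3D CNN:

    import tensorflow as tf

    # Two Conv3D layers as a stand-in for one segment of the network.
    block = tf.keras.Sequential([
        tf.keras.layers.Conv3D(32, 3, padding='same', activation='relu'),
        tf.keras.layers.Conv3D(32, 3, padding='same', activation='relu'),
    ])
    block.build((None, 16, 64, 64, 3))  # create variables up front

    # tf.recompute_grad discards the block's intermediate activations
    # after the forward pass and recomputes them during backprop.
    @tf.recompute_grad
    def checkpointed_block(x):
        return block(x)

    x = tf.random.normal([1, 16, 64, 64, 3])  # batch of 1 video clip
    with tf.GradientTape() as tape:
        y = checkpointed_block(x)
        loss = tf.reduce_mean(y)
    grads = tape.gradient(loss, block.trainable_variables)

The cost is roughly one extra forward pass per checkpointed segment, which is usually a good trade when VRAM, not compute, is the bottleneck.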



Source: https://stackoverflow.com/questions/48466255/is-there-a-workaround-for-running-out-of-memory-on-gpu-with-tensorflow
