In PyTorch, I want to allocate all of the GPU memory up front, the way TensorFlow does by default. However, PyTorch only allocates part of the total GPU memory and grows it as needed. How can I make it reserve the whole GPU?
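The closest workaround I can think of is just a sketch, assuming the goal is only to reserve the free memory so other processes cannot grab it: query the free memory with torch.cuda.mem_get_info and allocate one large uint8 tensor. The margin value below is my own guess, not anything official. Is this a reasonable approach, or is there a proper setting for it?

```python
import torch

# Sketch of a workaround: reserve (most of) the free GPU memory by
# allocating one large byte tensor. PyTorch's caching allocator keeps
# this memory reserved for the process.

device = torch.device("cuda:0")
torch.cuda.init()

# (free, total) GPU memory in bytes
free_bytes, total_bytes = torch.cuda.mem_get_info(device)

# Leave a margin (here 512 MiB, an arbitrary guess) for the CUDA
# context and fragmentation; it may need tuning per GPU.
margin = 512 * 1024 ** 2
reserve_bytes = max(free_bytes - margin, 0)

# One torch.uint8 element is one byte, so this reserves ~reserve_bytes.
placeholder = torch.empty(reserve_bytes, dtype=torch.uint8, device=device)

print(f"Reserved {placeholder.numel() / 1024**3:.2f} GiB "
      f"of {total_bytes / 1024**3:.2f} GiB total")
```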