I have a program that contains a large number of objects, many of them Numpy arrays. My program is swapping miserably, and I'm trying to reduce the memory usage, because it
Have a look at memory_profiler. It provides line-by-line profiling and IPython
integration, which makes it very easy to use:
In [1]: import numpy as np
In [2]: %memit np.zeros(int(1e7))
maximum of 3: 70.847656 MB per loop
Update
As mentioned by @WickedGrey, there seems to be a bug (see the GitHub issue tracker) when calling a function more than once, which I can reproduce:
In [2]: for i in range(10):
   ...: %memit np.zeros(int(1e7))
...:
maximum of 1: 70.894531 MB per loop
maximum of 1: 70.894531 MB per loop
maximum of 1: 70.894531 MB per loop
maximum of 1: 70.894531 MB per loop
maximum of 1: 70.894531 MB per loop
maximum of 1: 70.894531 MB per loop
maximum of 1: 70.902344 MB per loop
maximum of 1: 70.902344 MB per loop
maximum of 1: 70.902344 MB per loop
maximum of 1: 70.902344 MB per loop
However, I don't know to what extent the results may be influenced by it (it seems to be not that much in my example, so depending on your use case it may still be useful), or when this issue may be fixed. I asked about it on GitHub.
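As a sanity check on the numbers %memit reports, a NumPy array's own buffer size is available via `nbytes`, independent of any profiler. A minimal sketch:

```python
import numpy as np

a = np.zeros(int(1e7))
# 1e7 float64 elements * 8 bytes each = 80,000,000 bytes ~= 76.3 MiB,
# in the same ballpark as the ~70.9 MB %memit measured per loop
print(a.nbytes / 1024**2)
```

Note `nbytes` counts only the array's data buffer, while %memit measures the whole process, so the two will never match exactly.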