Is there a way to reduce scipy/numpy precision to reduce memory consumption?

It doesn't look like there's any function to do this in scipy's fftpack (see http://www.astro.rug.nl/efidad/scipy.fftpack.basic.html).

Unless you're able to find a fixed-point FFT library for Python, it's unlikely that the function you want exists, since the FFT is computed in your hardware's native double precision and returned as complex128 (two 64-bit floats per element). It does look like you could use the rfft method, which exploits the symmetry of the FFT of real-valued input and returns only the non-redundant half of the spectrum, and that would save roughly half your RAM.
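As a quick sanity check (my addition, not part of the original answer): for a length-n real input, rfft returns only the n//2 + 1 non-redundant frequencies along the transformed axis, which is where the saving comes from:

>>> from numpy import fft, zeros
>>> fft.fft(zeros(512)).shape
(512,)
>>> fft.rfft(zeros(512)).shape
(257,)

Both outputs are complex128, so halving the element count halves the memory.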

I ran the following in interactive python:

>>> from numpy import *
>>> v = array(10000*random.random([512,512,512]), dtype=int16)
>>> shape(v)
(512, 512, 512)
>>> type(v[0,0,0])
<type 'numpy.int16'>

At this point the RSS (Resident Set Size) of python was 265MB.
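That figure is consistent with the array itself (my arithmetic, not from the original answer): 512**3 int16 elements occupy 512**3 * 2 bytes = 256MB, which nbytes confirms, with the rest being interpreter overhead:

>>> v.nbytes
268435456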

>>> f = fft.fft(v)

And at this point the RSS of python was 2.3GB.

>>> type(f)
<type 'numpy.ndarray'>
>>> type(f[0,0,0]) 
<type 'numpy.complex128'>
>>> v = []

And at this point the RSS goes down to 2.0GB, since I've freed up v.

Using fft.rfft(v) to compute only the non-redundant half of the spectrum results in a 1.3GB RSS (almost half, as expected).
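That figure also matches the arithmetic (again my check, not from the original answer): the half-spectrum has shape (512, 512, 257), and in complex128 that comes to about 1.0GB:

>>> 512 * 512 * 257 * 16
1077936128

Add the 256MB int16 input and you land at roughly 1.3GB.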

Doing:

>>> f = complex64(fft.fft(v))

is the worst of both worlds: it first computes the complex128 version (2.3GB) and then copies that into the complex64 version (1.3GB), which means the peak RSS on my machine was 3.6GB before settling down to 1.3GB again.
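If that 3.6GB peak is a problem, one workaround (my sketch, not part of the original answer) is to allocate the complex64 result up front and fill it one slab at a time. Since fft.fft transforms along the last axis, each slab is independent, and the full complex128 array never exists:

>>> f = empty(shape(v), dtype=complex64)
>>> for i in range(shape(v)[0]):
...     # each complex128 temporary is one 512x512 slab, only 4MB
...     f[i] = complex64(fft.fft(v[i]))

The peak RSS then stays near the 1.3GB of the final result instead of 3.6GB.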

I think that if you've got 4GB RAM, this should all work just fine (as it does for me). What's the issue?

Scipy 0.8 will have single-precision support for almost all the fft code (the code is already in the trunk, so you can install scipy from svn if you need the feature now).
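For what it's worth (this goes beyond the original answer): in modern SciPy, the scipy.fft module preserves single precision end to end, so float32 input produces complex64 output with no double-precision intermediate:

>>> import numpy as np, scipy.fft
>>> v32 = np.zeros((512, 512, 512), dtype=np.float32)
>>> scipy.fft.fft(v32).dtype
dtype('complex64')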
