strange numpy fft performance

悲哀的现实 2021-01-04 18:27

During testing I have noticed something strange.

I’m FFT’ing a lot of vectors, and from time to time the numpy FFT function seemed to crash.

I briefly debu…
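A quick timing sketch reproduces the kind of slowdown described (the prime length 165037 is taken from the answers below; exact timings will vary by machine):

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Compare a prime-length vector against a nearby power-of-2 length.
for n in (165037, 2**17):
    x = rng.standard_normal(n)
    t0 = time.perf_counter()
    X = np.fft.fft(x)
    dt = time.perf_counter() - t0
    print(f"n={n}: {dt:.4f} s")
```

On most machines the prime-length transform is noticeably slower, even though it is the shorter of the two vectors.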

3 Answers
  • 2021-01-04 19:11

    Divide-and-conquer FFT algorithms, such as Cooley-Tukey, work much better the more factors the input length has. Powers of 2 work especially well, whereas primes (like 165037) require alternate, slower implementations. If you can pad your input to a power-of-2 length, you may be able to drastically speed up slow FFTs.
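A minimal sketch of that suggestion (the `next_pow2` helper is made up here for illustration): `np.fft.fft` zero-pads automatically when the requested transform length is longer than the input.

```python
import numpy as np

def next_pow2(n):
    # Smallest power of 2 >= n (hypothetical helper)
    return 1 << (n - 1).bit_length()

x = np.ones(165037)            # prime length: slow FFT path
n = next_pow2(len(x))          # 262144 == 2**18
X = np.fft.fft(x, n)           # zero-pads x up to length n
print(len(X))                  # 262144
```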

  • 2021-01-04 19:17

    I think that padding an array to a power-of-2 length sometimes has disadvantages:

    1. If you truncate the array to a power-of-2 length, you lose a large amount of data for big arrays.
    2. If you pad the array with zeros, it produces the so-called "edge effect".

    I have found in this topic that FFT performance depends on the prime factorization of the array length. If the length is a prime number, the calculation takes a long time. So I propose the following code, which decreases the array length until it has an acceptable factorization.

    from sympy.ntheory import factorint
    import numpy as np

    FACTOR_LIMIT = 100

    def bestFFTlength(n):
        # Decrease n until its largest prime factor is below FACTOR_LIMIT
        # (factorint returns {prime: exponent}, so max() is the largest factor)
        while max(factorint(n)) >= FACTOR_LIMIT:
            n -= 1
        return n

    a = np.zeros(166400)
    audio_fft = np.fft.fft(a, bestFFTlength(len(a)))
    
  • 2021-01-04 19:24

    A simple way to solve this - not mentioned in the other answers - is to use scipy.fftpack.next_fast_len to pad the array. Given a target length, it returns the smallest length >= target that is composed only of the factors 2, 3, and 5.

    As the other answers have pointed out, the FFT performs worst when the array's length is a prime number, and best when the length factors into small primes (2, 3, and 5).
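A short sketch of that approach (assuming SciPy is installed; the padded length is whatever `next_fast_len` returns for this input):

```python
import numpy as np
from scipy.fftpack import next_fast_len

x = np.ones(165037)          # prime length, slow to transform directly
n = next_fast_len(len(x))    # smallest 5-smooth length >= 165037
X = np.fft.fft(x, n)         # zero-pads x up to length n
print(n)
```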
