Interpolate whole arrays of complex numbers


There are two ways of looking at complex numbers:

  1. Cartesian Form ( a + bi ) and
  2. Polar/Euler Form ( A * exp(i * phi) )

When you say you want to interpolate between two polar coordinates, do you want to interpolate with respect to the real/imaginary components (1), or with respect to the number's magnitude and phase (2)?

You CAN break things down into real and imaginary components,

import numpy as np

X = 2 + 5j
X_real = np.real(X)
X_imag = np.imag(X)

# Interpolate the X_real and X_imag

# Reconstruct X
X2 = X_real + 1j * X_imag
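
If the goal is to interpolate whole arrays rather than a single value, a minimal sketch of this component-wise approach could look like the following (using np.interp; the sample points x, samples X and query points x_new below are made-up illustrations):

import numpy as np

# Hypothetical sample locations and complex samples
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.array([2 + 5j, 1 - 1j, -0.5 + 2j, 3 + 0j])

# Locations where interpolated values are wanted
x_new = np.linspace(0.0, 3.0, 13)

# Interpolate the real and imaginary parts independently
real_interp = np.interp(x_new, x, X.real)
imag_interp = np.interp(x_new, x, X.imag)

# Reconstruct the complex result
X2 = real_interp + 1j * imag_interp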

However, in real-life applications involving complex numbers, such as digital filter design, you quite often want to work with the numbers in polar/exponential form.

Therefore, instead of interpolating the np.real() and np.imag() components, you may want to break the numbers down into magnitude and phase using np.abs() and np.angle() (or np.arctan2()), and interpolate those separately. You might do this, for example, when interpolating the Fourier transform of a digital filter.

Y = 1+2j
mag = np.abs(Y)
phase = np.angle(Y)

The interpolated values can be converted back into complex (Cartesian) numbers using Euler's formula:

# Complex number
y = mag * np.exp( 1j * phase)

# Or if you want the real and imaginary complex components separately,
realPart, imagPart = mag * np.cos(phase), mag * np.sin(phase)
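
For whole arrays, a minimal sketch of this magnitude/phase route could look like the code below (again with made-up x, Y and x_new; note that the phase usually needs unwrapping before interpolation):

import numpy as np

# Hypothetical sample locations and complex samples
x = np.array([0.0, 1.0, 2.0, 3.0])
Y = np.array([1 + 2j, 2 + 1j, -1 + 1j, -2 - 1j])
x_new = np.linspace(0.0, 3.0, 13)

mag = np.abs(Y)
# Unwrap so the phase doesn't jump across the -pi/+pi boundary during interpolation
phase = np.unwrap(np.angle(Y))

mag_interp = np.interp(x_new, x, mag)
phase_interp = np.interp(x_new, x, phase)

# Back to Cartesian form via Euler's formula
Y2 = mag_interp * np.exp(1j * phase_interp)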

Depending on what you're doing, this gives you some real flexibility with the interpolation methods you use.

I ended up working around the problem, but after learning a good deal more about response surfaces and the like, I now understand that this is a far-from-trivial problem. I could not have expected a simple solution in numpy, and the question would have probably been better placed in a forum on mathematics than on programming.

If I had to tackle such a task again, I'd probably use scikit-learn to try to establish either a co-Kriging interpolation of both components, or two separate Kriging (or, more generally, Gaussian Process) models that share a common set of model constants, optimized to minimize the combined error amplitude (i.e. the full model's squared error is the sum of the two partial models' squared errors)

-- but first I'd go and have a look if there aren't any useful papers on the topic already.
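
For what it's worth, here is a much-simplified sketch of the two-model idea, using scikit-learn's GaussianProcessRegressor with two independently fitted models (no shared, jointly optimized constants) on entirely hypothetical sample data:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical scattered 2-D sample points and complex responses
rng = np.random.default_rng(0)
pts = rng.random((50, 2))
vals = (1 + pts[:, 1]) * np.exp(1j * 4 * np.pi * pts[:, 0])

kernel = RBF(length_scale=0.2)

# One Gaussian Process per component; a true co-Kriging model would couple them
gp_real = GaussianProcessRegressor(kernel=kernel).fit(pts, vals.real)
gp_imag = GaussianProcessRegressor(kernel=kernel).fit(pts, vals.imag)

# Predict at new locations and recombine into complex values
new_pts = rng.random((10, 2))
pred = gp_real.predict(new_pts) + 1j * gp_imag.predict(new_pts)

Tying the two models' constants together as described above would need a custom optimization loop; scikit-learn doesn't provide co-Kriging out of the box.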
