numerical-stability

What are the constraints on the divisor argument of scipy.signal.deconvolve to ensure numerical stability?

别来无恙 submitted on 2019-12-21 20:36:45
Question: Here is my problem: I am going to process data coming from a system for which I will have a good idea of the impulse response. Having used Python for some basic scripting before, I am getting to know the scipy.signal.convolve and scipy.signal.deconvolve functions. In order to gain some confidence in my final solution, I would like to understand their requirements and limitations. I used the following test: 1. I built a basic signal made of two Gaussians. 2. I built a Gaussian impulse response. 3. I convolved my initial signal with this impulse response. 4. I deconvolved this convolved signal. 5. …
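A minimal sketch of that round trip in SciPy, with made-up Gaussian widths and grid sizes that are not from the question. Note that deconvolve is essentially a polynomial long division (inverse filtering), so a divisor whose leading samples are nearly zero, as with a truncated Gaussian, tends to make the quotient blow up; that is the stability concern being asked about:

```python
import numpy as np
from scipy import signal

def gaussian(x, mu, sigma):
    """Unnormalised Gaussian bump."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Step 1: a basic signal made of two Gaussians.
x = np.linspace(0.0, 100.0, 1001)
original = gaussian(x, 30.0, 4.0) + gaussian(x, 65.0, 6.0)

# Step 2: a Gaussian impulse response, normalised to unit area.
impulse = gaussian(np.linspace(-10.0, 10.0, 201), 0.0, 2.0)
impulse /= impulse.sum()

# Step 3: convolve the signal with the impulse response.
convolved = signal.convolve(original, impulse, mode="full")

# Step 4: deconvolve; the quotient has the same length as `original`
# and is the candidate reconstruction, the remainder is what is left over.
recovered, remainder = signal.deconvolve(convolved, impulse)

print(recovered.shape, np.max(np.abs(recovered - original)))
```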

numerically stable inverse of a 2x2 matrix

折月煮酒 submitted on 2019-12-21 18:29:49
Question: In a numerical solver I am working on in C, I need to invert a 2x2 matrix, which then gets multiplied on the right side by another matrix: C = B . inv(A). I have been using the following definition of the inverse of a 2x2 matrix: a = A[0][0]; b = A[0][1]; c = A[1][0]; d = A[1][1]; invA[0][0] = d/(a*d-b*c); invA[0][1] = -b/(a*d-b*c); invA[1][0] = -c/(a*d-b*c); invA[1][1] = a/(a*d-b*c); In the first few iterations of my solver this seems to give the correct answers; however, after a few steps things start to grow and eventually explode. Now, comparing to an implementation using SciPy, I found that the …
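A standard way to sidestep the explicit inverse (my suggestion, not something stated in the excerpt) is to obtain C = B . inv(A) by solving a linear system instead of dividing by the determinant. A small NumPy sketch with made-up 2x2 matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
B = rng.normal(size=(2, 2))

# Explicit cofactor inverse, as in the question.
a, b = A[0, 0], A[0, 1]
c, d = A[1, 0], A[1, 1]
det = a * d - b * c
invA = np.array([[d, -b], [-c, a]]) / det
C_explicit = B @ invA

# Same product without forming inv(A): C = B @ inv(A) is the solution X of X A = B,
# i.e. A.T @ X.T = B.T, so it can be obtained with one call to solve().
C_solve = np.linalg.solve(A.T, B.T).T

print(np.max(np.abs(C_explicit - C_solve)))
```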

Avoiding numerical overflow when calculating the value AND gradient of the Logistic loss function

╄→гoц情女王★ submitted on 2019-12-21 16:53:31
Question: I am currently trying to implement a machine learning algorithm that involves the logistic loss function in MATLAB. Unfortunately, I am having some trouble due to numerical overflow. In general, for a given input s, the value of the logistic loss function is log(1 + exp(s)) and its slope is exp(s)./(1 + exp(s)) = 1./(1 + exp(-s)). In my algorithm, s = X*beta, where X is a matrix with N data points and P features per data point (i.e. size(X)=[N,P]) …
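The question is about MATLAB, but the overflow-safe evaluation of both quantities is the same trick in any language. Here is a NumPy sketch (function names and test values are mine):

```python
import numpy as np

def softplus(s):
    """log(1 + exp(s)) without overflow; np.logaddexp(0, s) == log(exp(0) + exp(s))."""
    return np.logaddexp(0.0, s)

def sigmoid(s):
    """1 / (1 + exp(-s)) evaluated so the exp() argument is never large and positive."""
    s = np.asarray(s, dtype=float)
    out = np.empty_like(s)
    pos = s >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-s[pos]))   # exponent <= 0 here
    es = np.exp(s[~pos])                        # s < 0, so exp(s) <= 1
    out[~pos] = es / (1.0 + es)
    return out

s = np.array([-800.0, -30.0, 0.0, 30.0, 800.0])
print(softplus(s))   # roughly [0, ~0, 0.693, 30, 800]
print(sigmoid(s))    # roughly [0, ~0, 0.5, ~1, 1]
```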

How can I avoid value errors when using numpy.random.multinomial?

六眼飞鱼酱① submitted on 2019-12-19 07:52:09
Question: When I use the random generator numpy.random.multinomial, I keep getting ValueError: sum(pvals[:-1]) > 1.0. I am always passing in the output of this softmax function: def softmax(w, t=1.0): e = numpy.exp(numpy.array(w) / t) dist = e / np.sum(e) return dist. Except now that I am getting this error, I also added this for the pvals parameter: while numpy.sum(pvals) > 1: pvals /= (1+1e-5), but that didn't solve it. What is the right way to make sure I avoid this error? EDIT: here is the function …
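A common cause of that ValueError is computing the probabilities in float32, or with exponents large enough to lose precision, so that the probabilities sum to slightly more than 1 when NumPy checks them. A sketch of a max-shifted softmax in float64 plus a final renormalisation, offered as my suggested workaround rather than the poster's eventual fix:

```python
import numpy as np

def stable_softmax(w, t=1.0):
    """Max-shifted softmax computed in float64 so exp() cannot overflow."""
    w = np.asarray(w, dtype=np.float64) / t
    e = np.exp(w - w.max())      # largest exponent is 0
    return e / e.sum()

w = [10.0, 200.0, 5.0, 3.0]
p = stable_softmax(w)

# Renormalise in float64 before drawing; the reported check involves sum(pvals[:-1]),
# so keeping everything in double precision avoids a tiny overshoot above 1.0.
p = p / p.sum()
print(np.random.multinomial(100, p))
```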

Symmetrical Lerp & compiler optimizations

感情迁移 submitted on 2019-12-12 21:31:13
Question: I had a function: float lerp(float alpha, float x0, float x1) { return (1.0f - alpha) * x0 + alpha * x1; } For those who haven't seen it, this is preferable to x0 + (x1 - x0) * alpha because the latter doesn't guarantee that lerp(1.0f, x0, x1) == x1. Now I want my lerp function to have an additional property: I'd like lerp(alpha, x0, x1) == lerp(1 - alpha, x1, x0). (As for why: this is a toy example of a more complicated function.) The solution I came up with that seems to work is float lerp …
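The question concerns C++ float, but the two properties mentioned in the excerpt are easy to probe with NumPy float32. This is only a numerical check of those claims, not the poster's eventual solution:

```python
import numpy as np

f32 = np.float32

def lerp(alpha, x0, x1):
    # The form from the question: (1 - alpha)*x0 + alpha*x1 is exact at alpha == 1.
    return (f32(1) - alpha) * x0 + alpha * x1

def lerp_alt(alpha, x0, x1):
    # x0 + (x1 - x0)*alpha: rounding of (x1 - x0) can make the alpha == 1 result miss x1.
    return x0 + (x1 - x0) * alpha

rng = np.random.default_rng(1)
n = 100_000
x0 = rng.random(n, dtype=np.float32)
x1 = rng.random(n, dtype=np.float32) * f32(100)
alpha = rng.random(n, dtype=np.float32)

print("endpoint misses, question's form:", np.count_nonzero(lerp(f32(1), x0, x1) != x1))
print("endpoint misses, alternative form:", np.count_nonzero(lerp_alt(f32(1), x0, x1) != x1))
print("symmetry misses, question's form:",
      np.count_nonzero(lerp(alpha, x0, x1) != lerp(f32(1) - alpha, x1, x0)))
```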

Getting y from x co-ord for cubic bezier curve, fast Newton-Raphson method

老子叫甜甜 submitted on 2019-12-12 05:36:19
Question: Given the control points of a cubic Bezier curve (P0, P1, P2, P3) in 2D, I would like to find the y coordinate for a given x coordinate. The problem is well defined because of the following restrictions: P0 = (0,0), P3 = (1,1); P1 = (t, 1-t) for t between 0 and 1; P2 = 1 - P1 (in both x and y). I have the following function to calculate the answer, having substituted all the restrictions above into the cubic Bezier formula (CubicBezier.html). I am using Newton-Raphson to work out the curve parameter of the point I want, and I …
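A sketch of that setup in Python, with the stated restrictions baked into the control values; the tolerance, iteration cap, and starting guess are my choices, not the poster's:

```python
def bezier_coord(u, c0, c1, c2, c3):
    """One coordinate of a cubic Bezier curve at parameter u."""
    v = 1.0 - u
    return c0 * v**3 + 3.0 * c1 * u * v**2 + 3.0 * c2 * u**2 * v + c3 * u**3

def bezier_coord_deriv(u, c0, c1, c2, c3):
    """Derivative of that coordinate with respect to u."""
    v = 1.0 - u
    return 3.0 * ((c1 - c0) * v**2 + 2.0 * (c2 - c1) * u * v + (c3 - c2) * u**2)

def y_for_x(x_target, t, tol=1e-9, max_iter=50):
    """Newton-Raphson on x(u) = x_target, then evaluate y(u).

    Control points follow the question's restrictions:
    P0 = (0,0), P1 = (t, 1-t), P2 = (1-t, t), P3 = (1,1).
    """
    x_ctrl = (0.0, t, 1.0 - t, 1.0)
    y_ctrl = (0.0, 1.0 - t, t, 1.0)
    u = x_target                      # with these constraints x(u) is close to u
    for _ in range(max_iter):
        err = bezier_coord(u, *x_ctrl) - x_target
        if abs(err) < tol:
            break
        u -= err / bezier_coord_deriv(u, *x_ctrl)
    return bezier_coord(u, *y_ctrl)

print(y_for_x(0.25, t=0.2))
```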

Converting float to UInt32 - which expression is more precise

邮差的信 submitted on 2019-12-10 17:38:41
Question: I have a float x which should be in the <0,1> range, but it undergoes several numerical operations, so the result may be slightly outside <0,1>. I need to convert this result to a uint y using the entire range of UInt32. Of course, I need to clamp x to the <0,1> range and scale it. But which order of operations is better? y = (uint)round(min(max(x, 0.0F), 1.0F) * UInt32.MaxValue) or y = (uint)round(min(max(x * UInt32.MaxValue, 0.0F), UInt32.MaxValue)). In other words, is it better to scale first, …
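One detail worth checking (my observation, not part of the excerpt) is that UInt32.MaxValue itself is not representable in single precision: the nearest float is 2^32, one past the top of the uint range, and that rounded constant appears in both candidate expressions. A quick NumPy check of the rounding:

```python
import numpy as np

max_u32 = np.iinfo(np.uint32).max        # 4294967295 == 2**32 - 1
as_f32 = np.float32(max_u32)             # rounds up: the nearest float32 is 2**32

print(max_u32, float(as_f32), float(as_f32) == 2.0 ** 32)   # 4294967295 4294967296.0 True

# The same rounded constant is the scale factor in both expressions (and the clamp
# bound in the second one), so even a perfectly clamped x == 1.0f scales to a value
# one past UInt32.MaxValue before the cast.
print(float(np.float32(1.0) * as_f32))                       # 4294967296.0
```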

Interpreting error from computing Jordan form of 36-by-36 matrix

廉价感情. submitted on 2019-12-10 13:02:32
Question: I've been trying to compute the Jordan normal form of a 36-by-36 matrix composed of only three distinct entries: 1, 1/2, and 0. The matrix is a probability transition matrix, so, given these entries, it is quite sparse. The issue I've been having is the following: whenever I try to compute [V, J] = jordan(A) or [V, J] = jordan(sym(A)), I get the following error message: Error using mupadmex. Error in MuPAD command: Similarity matrix too large. Error in sym/mupadmexnout (line …
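Not from the thread, but one way to cross-check a matrix of exact entries 0, 1/2, and 1 outside MATLAB is SymPy's rational arithmetic; Matrix.jordan_form returns the similarity transform and the Jordan blocks. The 3-by-3 matrix below is only a stand-in, since the actual 36-by-36 transition matrix is not given in the excerpt:

```python
from sympy import Matrix, Rational

# Stand-in row-stochastic matrix with the same kind of entries (0, 1/2, 1).
half = Rational(1, 2)
A = Matrix([
    [half, half, 0],
    [0,    half, half],
    [0,    0,    1],
])

P, J = A.jordan_form()   # A == P * J * P**-1, computed in exact arithmetic
print(J)
```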
