numerical

Does Python have a function to reduce fractions?

Submitted by 爱⌒轻易说出口 on 2019-12-22 01:33:21
Question: For example, when I calculate 98/42 I want to get 7/3, not 2.3333333. Is there a function for that using Python or NumPy?

Answer 1: The fractions module can do that:

>>> from fractions import Fraction
>>> Fraction(98, 42)
Fraction(7, 3)

There's a recipe over here for a NumPy gcd, which you could then use to divide your fraction:

>>> def numpy_gcd(a, b):
...     a, b = np.broadcast_arrays(a, b)
...     a = a.copy()
...     b = b.copy()
...     pos = np.nonzero(b)[0]
...     while len(pos) > 0:
...         b2 = b[pos]
...         a
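The recipe above is cut off; a minimal sketch of how an elementwise NumPy gcd can be used to reduce fractions stored as numerator/denominator arrays (my reconstruction, not necessarily the original recipe) might look like this:

import numpy as np

def numpy_gcd(a, b):
    # Elementwise Euclidean algorithm on integer arrays.
    a, b = np.broadcast_arrays(a, b)
    a, b = a.copy(), b.copy()
    pos = np.nonzero(b)[0]
    while len(pos) > 0:
        b2 = b[pos]
        a[pos], b[pos] = b2, a[pos] % b2
        pos = pos[b[pos] != 0]
    return a

num = np.array([98, 10, 4])
den = np.array([42, 4, 8])
g = numpy_gcd(num, den)
print(num // g, den // g)   # [7 5 1] [3 2 2]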

Decimal accuracy of binary floating point numbers

Submitted by 情到浓时终转凉″ on 2019-12-21 20:55:30
Question: I've found this problem in many interview exams, but don't see how to work out the proper solution myself. The problem is: how many digits of accuracy can be represented by a floating point number stored in two 16-bit words? The solution is apparently approximately 6 digits. Where does this come from, and how would you work it out?

Answer 1: It's quite simple: a 32-bit IEEE-754 float has 23+1 bits for the mantissa (AKA significand, in IEEE-speak). The size of the mantissa more or less
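To make the arithmetic concrete: 24 significand bits distinguish about 2^24 ≈ 1.7 × 10^7 values, which is 24 · log10(2) ≈ 7.2 decimal digits, and only 6 of those are guaranteed to survive a decimal round-trip (C's FLT_DIG is 6). A quick check:

import math

bits = 24                        # 23 stored bits + 1 implicit leading bit
print(bits * math.log10(2))      # ~7.22 decimal digits of precision
print(math.log10(2.0 ** bits))   # same figure, via 2^24 ~ 1.7e7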

logsumexp implementation in C?

Submitted by 拟墨画扇 on 2019-12-21 09:02:38
Question: Does anybody know of an open source numerical C library that provides the logsumexp function? The logsumexp(a) function computes the logarithm of the sum of exponentials, log(e^{a_1} + ... + e^{a_n}), of the components of the array a, avoiding numerical overflow.

Answer 1: Here's a very simple implementation from scratch (tested, at least minimally):

double logsumexp(double nums[], size_t ct) {
    double max_exp = nums[0], sum = 0.0;
    size_t i;
    for (i = 1; i < ct; i++)
        if (nums[i] > max_exp)
            max_exp = nums[i];
    for (i = 0;
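The C excerpt above is cut off, but the idea is the usual max-shift trick: logsumexp(a) = max(a) + log(sum_i exp(a_i - max(a))), so every exponent is at most 0 and nothing overflows. A minimal Python sketch of the same approach (illustrative, not the C library the question asks for):

import math

def logsumexp(nums):
    m = max(nums)   # shift by the maximum so every exponent is <= 0
    return m + math.log(sum(math.exp(x - m) for x in nums))

print(logsumexp([1000.0, 1000.0]))   # ~1000.693, where a naive exp(1000) would overflow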

When to use Fixed Point these days

Submitted by 允我心安 on 2019-12-21 06:55:48
Question: For intense number-crunching I'm considering using fixed point instead of floating point. Of course it'll matter how many bytes the fixed-point type is in size, on what CPU it'll be running, and whether I can use (for Intel) MMX or SSE or whatever new things come up. I'm wondering: these days, when floating point runs faster than ever, is it ever worth considering fixed point? Are there general rules of thumb where we can say it'll matter by more than a few percent? What is the overview from
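For readers unfamiliar with the term: fixed point just means storing a scaled integer and tracking the scale factor yourself, so all arithmetic is integer arithmetic. A toy Q16.16 sketch in Python (purely illustrative; the performance question above depends entirely on the target CPU):

SHIFT = 16                 # Q16.16: 16 integer bits, 16 fractional bits
ONE = 1 << SHIFT

def to_fixed(x):
    return int(round(x * ONE))

def fixed_mul(a, b):
    # The raw product carries 32 fractional bits, so shift back down to 16.
    return (a * b) >> SHIFT

def to_float(a):
    return a / ONE

a, b = to_fixed(3.25), to_fixed(0.5)
print(to_float(fixed_mul(a, b)))   # 1.625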

QP solver for Java [closed]

Submitted by 烂漫一生 on 2019-12-21 01:58:09
Question: As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 8 years ago. I'm looking for a good, easy-to-use, Java-based Quadratic Programming (QP) solver. Googling around I came across ojAlgo (http://ojalgo

Solving nonlinear equations numerically

Submitted by 主宰稳场 on 2019-12-20 20:38:19
Question: I need to solve nonlinear minimization problems (least residual squares of N unknowns) in my Java program. The usual way to solve these is the Levenberg-Marquardt algorithm. I have a couple of questions. Does anybody have experience with the different LM implementations available? There exist slightly different flavors of LM, and I've heard that the exact implementation of the algorithm has a major effect on its numerical stability. My functions are pretty well-behaved, so this will probably
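Not Java, but to make it concrete what a Levenberg-Marquardt least-squares fit looks like in practice, here is a short sketch using SciPy's MINPACK-backed solver (the model and data are made up for the example):

import numpy as np
from scipy.optimize import least_squares

# Fit y = a * exp(b * x) to noisy samples by minimizing the residual vector.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x) + 0.01 * np.random.randn(x.size)

def residuals(p):
    a, b = p
    return a * np.exp(b * x) - y

result = least_squares(residuals, x0=[1.0, 1.0], method="lm")  # Levenberg-Marquardt
print(result.x)   # should be close to [2.0, 1.5]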

What's a good way to add a large number of small floats together?

Submitted by ☆樱花仙子☆ on 2019-12-20 12:31:42
Question: Say you have 100000000 32-bit floating point values in an array, and each of these floats has a value between 0.0 and 1.0. If you tried to sum them all up like this:

result = 0.0;
for (i = 0; i < 100000000; i++) {
    result += array[i];
}

you'd run into problems as result gets much larger than 1.0. So what are some of the ways to more accurately perform the summation?

Answer 1: Sounds like you want to use Kahan summation. According to Wikipedia, the Kahan summation algorithm (also known as compensated
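Kahan summation carries a running compensation term that recovers the low-order bits lost in each addition. A minimal sketch in Python (illustrative; the question's loop is C-style):

def kahan_sum(values):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c            # apply the correction from the previous step
        t = total + y        # low-order bits of y may be lost in this add...
        c = (t - total) - y  # ...so capture them for the next iteration
        total = t
    return total

vals = [0.1] * 1_000_000
print(kahan_sum(vals), sum(vals))   # the compensated sum stays closer to 100000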

How do I approximate the Jacobian and Hessian of a function numerically?

Submitted by *爱你&永不变心* on 2019-12-20 10:05:47
Question: I have a function in Python:

def f(x):
    return x[0]**3 + x[1]**2 + 7   # Actually more than this.
                                   # No analytical expression

It's a scalar-valued function of a vector. How can I approximate the Jacobian and Hessian of this function in numpy or scipy numerically?

Answer 1: (Updated in late 2017 because there have been a lot of updates in this space.) Your best bet is probably automatic differentiation. There are now many packages for this, because it's the standard approach in deep learning: Autograd
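If you do want a plain finite-difference approximation rather than automatic differentiation, a minimal central-difference sketch for the gradient (the Jacobian of a scalar-valued function) and Hessian could look like this (step sizes are rough defaults, not tuned):

import numpy as np

def gradient(f, x, h=1e-5):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference
    return g

def hessian(f, x, h=1e-4):
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        for j in range(n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

f = lambda x: x[0]**3 + x[1]**2 + 7
print(gradient(f, [1.0, 2.0]))   # approximately [3, 4]
print(hessian(f, [1.0, 2.0]))    # approximately [[6, 0], [0, 2]]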

Bisection method (Numerical analysis)

Submitted by 主宰稳场 on 2019-12-20 04:13:38
Question: How many recursions are made before every single root is found? Also, which ones are the roots? Here's my code:

e = 0.000001;
f1 = @(x) 14.*x.*exp(x-2) - 12.*exp(x-2) - 7.*x.^3 + 20.*x.^2 - 26.*x + 12;
a = 0;
c = 3;
while abs(c-a) > e
    b = (c+a)/2;
    if f1(a)*f1(b) < 0
        c = b;
    else
        a = b;
    end
    disp(b);
end

Answer 1: Bisection works by taking the endpoints of some initial interval [a,b] and finding which half of the interval must contain the root (it evaluates the midpoint, and identifies which half has the sign change). Then
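The answer is cut off above, but the iteration count follows directly from the halving: shrinking an interval of width c - a below a tolerance e takes about ceil(log2((c - a)/e)) bisections, here ceil(log2(3/1e-6)) = 22. A quick Python translation of the MATLAB loop (illustrative only) confirms it:

import math

def bisect(f, a, c, e=1e-6):
    steps = 0
    while abs(c - a) > e:
        b = (a + c) / 2.0
        if f(a) * f(b) < 0:   # sign change in [a, b]: keep the left half
            c = b
        else:                 # otherwise keep [b, c]
            a = b
        steps += 1
    return (a + c) / 2.0, steps

f1 = lambda x: 14*x*math.exp(x-2) - 12*math.exp(x-2) - 7*x**3 + 20*x**2 - 26*x + 12
root, steps = bisect(f1, 0.0, 3.0)
print(root, steps)   # converges to one of the roots in [0, 3] after 22 steps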