double-precision

Why is JavaScript's number *display* for large numbers inaccurate?

独自空忆成欢 submitted on 2019-12-13 15:12:11
Question: So in JavaScript, 111111111111111111111 == 111111111111111110000. Just type any long number – at least about 17 digits – to see it in action ;-) That is because JavaScript uses double-precision floating-point numbers, and certain very long numeric literals cannot be expressed exactly. Instead, those numbers get rounded to the nearest representable number. See e.g. What is JavaScript's highest integer value that a Number can go to without losing precision? However, doing the math…
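The same rounding is easy to reproduce in any language that stores numbers as IEEE-754 doubles. A minimal Python sketch (not part of the original question) showing that both literals collapse to the same nearest representable double:

    a = float(111111111111111111111)   # 21-digit literal, far beyond 2**53
    b = float(111111111111111110000)   # the value JavaScript displays
    print(a == b)                      # True: both round to the same double
    print(int(a))                      # 111111111111111114752, the actual stored value
    print(2.0 ** 53)                   # 9007199254740992.0: integers up to here are exact

JavaScript's console shows 111111111111111110000 because number-to-string conversion picks the shortest decimal that rounds back to the same double, which is why the two literals compare equal.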

Precision of calculations [duplicate]

笑着哭i submitted on 2019-12-13 10:43:22
Question: This question already has an answer here: How to obtain Fortran precision in MatLAB (1 answer). Closed 4 years ago. I am doing a calculation in Fortran on a double-precision variable, and after the calculation the variable gets the value -7.217301636365630e-24. However, when I do the same computation in Matlab, the variable just gets the value 0. Is there a way to increase the precision of MATLAB when doing calculations such that I would also be able to get something on the order of 7e-24…
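Both MATLAB and Fortran's double precision are the same 64-bit IEEE-754 format, so -7.2e-24 versus an exact 0 is usually not a precision gap but rounding noise whose exact value depends on the order of operations. A small Python illustration (not the poster's computation) of how a mathematically zero sum can leave different tiny residues depending on evaluation order:

    a, b, c = 0.1, 0.2, -0.3      # the true sum is exactly 0

    print((a + b) + c)            # 5.551115123125783e-17
    print(a + (b + c))            # 2.7755575615628914e-17

If the intermediate terms of the real calculation are large compared with 7e-24, both the Fortran residue and MATLAB's 0 are "zero to within roundoff", and only variable-precision arithmetic (e.g. vpa in the Symbolic Math Toolbox) could resolve digits below that noise floor.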

Data type mismatch in Fortran

半城伤御伤魂 submitted on 2019-12-13 02:55:25
Question: I've written a rudimentary algorithm in Fortran 95 to calculate the gradient of a function (an example of which is prescribed in the code) using central differences augmented with a procedure known as Richardson extrapolation.

    function f(n,x)  ! The scalar multivariable function to be differentiated
        integer :: n
        real(kind = kind(1d0)) :: x(n), f
        f = x(1)**5.d0 + cos(x(2)) + log(x(3)) - sqrt(x(4))
    end function f
    !=====!
    !=====!
    !=====!
    program gradient
    !=========================================…
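For reference, the scheme the question describes, a central difference plus one step of Richardson extrapolation, is compact enough to sketch. The Python below (with a hypothetical helper name, grad_richardson) illustrates the method only; it does not address the Fortran kind-mismatch error itself:

    import math

    def grad_richardson(f, x, h=1e-2):
        """Gradient via central differences with one Richardson step:
        D(h) = (f(x+h) - f(x-h)) / (2h) has error O(h^2);
        (4*D(h/2) - D(h)) / 3 cancels that term, leaving error O(h^4)."""
        g = []
        for i in range(len(x)):
            def D(step):
                xp, xm = list(x), list(x)
                xp[i] += step
                xm[i] -= step
                return (f(xp) - f(xm)) / (2.0 * step)
            g.append((4.0 * D(h / 2) - D(h)) / 3.0)
        return g

    # The function from the post: f = x1^5 + cos(x2) + log(x3) - sqrt(x4)
    f = lambda x: x[0]**5 + math.cos(x[1]) + math.log(x[2]) - math.sqrt(x[3])
    print(grad_richardson(f, [1.0, 1.0, 1.0, 1.0]))
    # analytic gradient at (1,1,1,1) is [5, -sin(1), 1, -0.5]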

How to use fmod and avoid precision issues

我怕爱的太早我们不能终老 submitted on 2019-12-13 01:57:08
Question: I'm going to boil this problem down to the simplest form: let's iterate over [0 .. 5.0] with a step of 0.05 and print out 'X' for every multiple of 0.25.

    for (double d = 0.0; d <= 5.0; d += 0.05) {
        if (fmod(d, 0.25) == 0) print 'X';
    }

This will of course not work, since d will be [0, 0.05000000001, 0.100000000002, ...], causing fmod() to fail. An extreme example is when d = 1.999999999998 and fmod(d, 0.25) = 1. How to tackle this? Here is an editable online example.
Answer 1: I'd solve this by simply not…
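One common fix, sketched below in Python (the question's loop is C++), is to avoid accumulating the floating-point step altogether: drive the loop with an integer counter, derive d from it, and test "multiple of 0.25" on the integers, since 0.25 is exactly five steps of 0.05. This shows the general idea and is not necessarily what the quoted answer goes on to recommend:

    STEPS_PER_QUARTER = 5            # 0.25 / 0.05 is an exact integer relationship

    for i in range(101):             # i = 0 .. 100 covers d = 0.00 .. 5.00
        d = i * 0.05                 # computed once per step, no accumulated drift
        if i % STEPS_PER_QUARTER == 0:
            print(f"{d:.2f} X")
        else:
            print(f"{d:.2f}")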

Java: Trigonometry and double inaccuracy causing NaN

故事扮演 submitted on 2019-12-12 23:47:39
Question: I have a distance formula using latitude and longitude:

    distance = EARTH_MILES_RADIUS * Math.acos(Math.sin(lat1 / RADIAN_CONV) * Math.sin(lat2 / RADIAN_CONV)
             + Math.cos(lat1 / RADIAN_CONV) * Math.cos(lat2 / RADIAN_CONV) * Math.cos((lng2 - lng1) / RADIAN_CONV));

lat1, lng1, lat2, lng2 are double primitives. They come to me as double primitives and there is nothing I can do about it. The problem is that when I have a pair of longitudes or latitudes that are the same, the formula sometimes returns…
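For identical (or nearly identical) points the acos argument should be exactly 1, but rounding can push it slightly above 1, and acos of anything outside [-1, 1] is NaN. A common workaround is to clamp the argument first; the Python sketch below mirrors the formula (the radius constant and function name are stand-ins, and the original code is Java):

    import math

    EARTH_MILES_RADIUS = 3958.8      # assumed value for the constant in the post

    def distance_miles(lat1, lng1, lat2, lng2):
        """Great-circle distance in miles; inputs in degrees."""
        lat1, lng1, lat2, lng2 = map(math.radians, (lat1, lng1, lat2, lng2))
        cos_angle = (math.sin(lat1) * math.sin(lat2)
                     + math.cos(lat1) * math.cos(lat2) * math.cos(lng2 - lng1))
        cos_angle = max(-1.0, min(1.0, cos_angle))   # clamp away the rounding overshoot
        return EARTH_MILES_RADIUS * math.acos(cos_angle)

    print(distance_miles(40.0, -75.0, 40.0, -75.0))  # identical points -> 0.0, not NaN

For points that are very close together, the haversine form of the formula is also better conditioned than this acos form.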

Calculating Markov chain probabilities with values too large to exponentiate

[亡魂溺海] submitted on 2019-12-12 18:13:22
Question: I use the formula exp(X) as the rate for a Markov chain, so the ratio of selecting one link over another is exp(X1)/exp(X2). My problem is that sometimes X is very large, so exp(X) will exceed the range of double. Alternatively: given an array of X[i], with some X[i] so large that exp(X[i]) overflows the range of double, calculate, for each i, exp(X[i]) / S, where S is the sum of all the exp(X[i]).
Answer 1: This pseudo-code should work: let M = the largest X[i]; for each i, subtract M from X[i]…
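The max-subtraction step in the answer is the standard log-sum-exp (softmax) trick: dividing numerator and denominator by exp(M) leaves every ratio unchanged but keeps all exponents at or below zero, so nothing overflows. A short Python sketch of the idea:

    import math

    def normalized_rates(xs):
        """Return exp(x_i) / sum_j exp(x_j) without overflowing double."""
        m = max(xs)                                 # the largest X[i]
        shifted = [math.exp(x - m) for x in xs]     # every argument is <= 0
        s = sum(shifted)
        return [v / s for v in shifted]

    print(normalized_rates([1000.0, 1001.0, 999.0]))
    # math.exp(1000) alone would overflow, but the ratios come out fine:
    # roughly [0.245, 0.665, 0.090]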

GCD algorithms for a large integers

可紊 submitted on 2019-12-12 08:56:27
Question: I am looking for information about fast GCD computation algorithms; in particular, I would like to look at implementations of them. The most interesting for me are:
- the Lehmer GCD algorithm,
- the accelerated GCD algorithm,
- the k-ary algorithm,
- Knuth-Schönhage with FFT.
I have no information at all about the accelerated GCD algorithm; I have only seen a few articles where it was mentioned as the most effective and fastest GCD computation method for medium-sized inputs (~1000 bits). They look much…
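For scale, the baseline all of these methods compete with is the classical Euclidean or binary GCD, which is quadratic in the bit length for huge operands; Lehmer, k-ary, and Knuth-Schönhage mainly reduce the cost of the full-width big-integer operations. The Python sketch below is only that baseline (Stein's binary GCD), not one of the algorithms the question asks about:

    def binary_gcd(a, b):
        """Stein's binary GCD: only shifts, comparisons, and subtractions."""
        if a == 0:
            return b
        if b == 0:
            return a
        shift = 0
        while (a | b) & 1 == 0:      # strip common factors of two
            a >>= 1
            b >>= 1
            shift += 1
        while a & 1 == 0:
            a >>= 1
        while b != 0:
            while b & 1 == 0:
                b >>= 1
            if a > b:
                a, b = b, a
            b -= a
        return a << shift

    print(binary_gcd(2**1000 - 1, 2**600 - 1) == 2**200 - 1)   # True, since gcd(1000, 600) = 200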

What is the precision of std::erf?

两盒软妹~` submitted on 2019-12-11 02:32:00
Question: C++11 introduced very useful math functions in the standard, like erf and erfc. There are mentions of "guaranteed underflow" for inputs greater or smaller than certain values, but I don't know enough about floating-point representation to understand clearly what this means in terms of precision. If this question makes sense: what precision (at least the order of magnitude) can I expect from the approximation implemented by the standard library (if it is specified at all)?
Answer 1: This depends on the…
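The C++ standard does not specify accuracy requirements for std::erf (or the other <cmath> functions), so in practice the error is a property of the particular math library and is best measured against an independent reference. The Python sketch below shows one way to do such a measurement, using a 50-digit Maclaurin-series reference built on the decimal module; math.erf stands in for std::erf, and the digit and term counts are illustrative choices, not guarantees about any libm:

    import math
    from decimal import Decimal, getcontext

    getcontext().prec = 50
    PI = Decimal("3.14159265358979323846264338327950288419716939937510")

    def erf_reference(x, terms=60):
        """erf via its Maclaurin series in 50-digit arithmetic (adequate for |x| <= 2)."""
        xd = Decimal(x)
        total = Decimal(0)
        factorial = Decimal(1)
        for n in range(terms):
            if n:
                factorial *= n
            total += (-1) ** n * xd ** (2 * n + 1) / (factorial * (2 * n + 1))
        return 2 / PI.sqrt() * total

    for x in (0.1, 0.5, 1.0, 2.0):
        lib = math.erf(x)
        ref = erf_reference(x)
        rel = abs((Decimal(lib) - ref) / ref)
        print(f"x={x}: erf={lib!r}, relative error ~ {float(rel):.1e}")

On common implementations this kind of check tends to show relative errors within a few ulps of double precision (around 1e-16), but since nothing in the standard mandates that, measuring on your own platform is the only firm answer.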

double precision error when converting to scientific notation

不羁岁月 submitted on 2019-12-11 01:53:22
Question: I'm building a program to convert double values into scientific notation (mantissa and exponent). Then I noticed the following:

    369.7900000000000 -> 3.6978999999999997428
    68600000          -> 6.8599999999999994316

I noticed the same pattern for several other values as well. The maximum fractional error is 0.000 000 000 000 001 = 1e-15. I know about the inaccuracy of representing double values in a computer. Can it be concluded that the maximum fractional error we would get is 1e-15? What is significant…
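A useful yardstick here is machine epsilon: for IEEE-754 doubles it is 2^-52, about 2.22e-16, so merely storing a decimal value as a double contributes a relative error of at most roughly 1.1e-16. An observed fractional error near 1e-15 therefore has to include error from the subsequent mantissa/exponent arithmetic as well, not just from representation. A quick Python check (a sketch, not the asker's conversion code):

    import sys
    from decimal import Decimal

    print(sys.float_info.epsilon)        # 2.220446049250313e-16, i.e. 2**-52

    for text in ("369.79", "68600000"):
        wanted = Decimal(text)           # the decimal value we asked for
        stored = Decimal(float(text))    # exact expansion of the double actually stored
        rel = abs(stored - wanted) / wanted
        print(text, "stored as", stored, "relative error", float(rel))

Note that 68600000 is an integer below 2**53, so it is stored exactly; the 6.8599999... in the post comes entirely from the division by a power of ten during the conversion, because 6.86 itself is not exactly representable.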

Why is there a significant double-precision difference between Matlab and Mathematica?

我与影子孤独终老i submitted on 2019-12-11 00:07:30
Question: I created a random double-precision value in Matlab with x = rand(1,1); then displayed all of its digits with vpa(x,100) and obtained:

    0.22381193949113697971853298440692014992237091064453125

I save x to a .mat file and import it into Mathematica, then convert it with y = N[FromDigits[RealDigits[x]],100] and obtain:

    0.22381193949113690000

Then I go back to Matlab and use (copying and pasting all the Mathematica digits into Matlab) vpa(0.22381193949113690000,100) and obtain:

    0.22381193949113689…
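The difference is in what each tool is printing. vpa(x,100) prints the exact decimal expansion of the 64-bit double stored in x, while the RealDigits/FromDigits route evidently kept only the first 16 significant decimal digits, so N[...,100] pads with zeros and now denotes a slightly different number. The same effect can be reproduced with Python's decimal module (a sketch; the literal is just the value quoted in the post):

    from decimal import Decimal

    # Nearest double to the value in the post; any literal carrying 17+ correct
    # significant digits selects this same 64-bit double.
    x = 0.22381193949113697971853298440692

    print(Decimal(x))                 # exact expansion of the stored double (what vpa(x,100) shows)
    print(repr(x))                    # shortest decimal that round-trips to the same double

    y = float("0.2238119394911369")   # keeping only 16 digits, as the Mathematica round trip did
    print(Decimal(y))                 # expands to 0.22381193949113689..., a different double

Feeding the truncated 16-digit value back into vpa then shows the expansion of that different double, which is exactly the 0.22381193949113689... that Matlab reports.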