floating-accuracy

Floating point less-than-equal comparisons after addition and subtraction

Submitted by ℡╲_俬逩灬. on 2019-12-17 10:06:23
Question: Is there a "best practice" for less-than-equal comparisons with floating-point numbers after a series of floating-point arithmetic operations? I have the following example in R (although the question applies to any language using floating point). I have a double x = 1 on which I apply a series of additions and subtractions. In the end x should be exactly one but is not, due to floating-point arithmetic (from what I gather). Here is the example: > stop_times <- seq(0.25, 2, by = .25) > expr <-
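
A minimal sketch of the usual workaround, written in Python rather than R and not taken from the original thread; it uses a simpler sum as a stand-in for the series of operations, and compares against the target with an explicit tolerance instead of relying on exact results.

    import math

    x = 0.1 + 0.2          # mathematically 0.3, actually 0.30000000000000004
    print(x <= 0.3)        # False: the exact comparison fails

    # Tolerance-based "less than or approximately equal":
    eps = 1e-9
    print(x < 0.3 or math.isclose(x, 0.3, abs_tol=eps))   # True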

C++ floating point precision [duplicate]

Submitted by 拈花ヽ惹草 on 2019-12-17 02:26:25
Question: This question already has answers here (closed 9 years ago). Possible duplicate: Floating point inaccuracy examples. double a = 0.3; std::cout.precision(20); std::cout << a << std::endl; result: 0.2999999999999999889 double a, b; a = 0.3; b = 0; for (char i = 1; i <= 50; i++) { b = b + a; }; std::cout.precision(20); std::cout << b << std::endl; result: 15.000000000000014211 So 'a' is smaller than it should be, but if we add 'a' 50 times the result is bigger than it should be. Why is
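
The same accumulation effect, sketched in Python rather than C++ (Python floats are the same IEEE 754 doubles), together with one common mitigation, exact summation; this illustration is not part of the original question.

    import math

    a = 0.3                    # stored as the nearest double, slightly below 0.3
    b = 0.0
    for _ in range(50):
        b += a                 # each addition rounds, and the errors accumulate
    print(b)                   # typically 15.000000000000014, matching the C++ output

    # math.fsum tracks the partial sums exactly and rounds only once at the end;
    # here that single rounding lands back on 15.0.
    print(math.fsum([a] * 50))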

PHP - Floating Number Precision [duplicate]

Submitted by 匆匆过客 on 2019-12-16 19:47:07
Question: This question already has answers here: Is floating point math broken? (31 answers, closed 2 years ago). $a = '35'; $b = '-34.99'; echo ($a + $b); Results in 0.009999999999998. What is up with that? I wondered why my program kept reporting odd results. Why doesn't PHP return the expected 0.01? Answer 1: Because floating-point arithmetic != real-number arithmetic. An illustration of the difference due to imprecision is that, for some floats a and b, (a+b)-b != a. This applies to any language using
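
A short illustration of both points, shown in Python rather than PHP (so treat it as an analogy): the 35 + (-34.99) case, and the (a+b)-b != a property mentioned in the answer.

    a = 35.0
    b = -34.99                # the nearest double is not exactly -34.99
    print(a + b)              # prints roughly 0.009999999999998 rather than 0.01

    # The property from the answer: with floats, (x + y) - y is not always x.
    x, y = 0.1, 1e16
    print((x + y) - y == x)   # False: adding y discards the low-order bits of x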

How accurate/precise is java.lang.Math.pow(x, n) for large n?

Submitted by 家住魔仙堡 on 2019-12-14 02:18:25
Question: I would like to calculate (1.0-p)^n where p is a double between 0 and 1 (often very close to 0) and n is a positive integer that might be on the order of hundreds or thousands (perhaps larger; I'm not sure yet). If possible I would love to just use Java's built-in java.lang.Math.pow(1.0-p, n) for this, but I'm slightly concerned that there might be a gigantic loss of accuracy/precision in doing so with the range of values that I'm interested in. Does anybody have a rough idea of what kind of
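
Not from the original thread, but a common technique when p is tiny is to avoid forming 1.0 - p explicitly (which throws away most of p's bits) and compute exp(n * log1p(-p)) instead. A hedged Python sketch of the comparison, with arbitrary example values:

    import math

    p = 1e-12       # very small probability
    n = 100_000     # large exponent

    naive = (1.0 - p) ** n                  # 1.0 - p already loses most of p
    better = math.exp(n * math.log1p(-p))   # log1p keeps the tiny argument accurate

    print(naive)
    print(better)
    print(math.expm1(n * math.log1p(-p)))   # (1-p)^n - 1, computed without cancellation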

Is floating point math broken?

Submitted by 别等时光非礼了梦想. on 2019-12-14 00:36:03
Question: Consider the following code: 0.1 + 0.2 == 0.3 -> false 0.1 + 0.2 -> 0.30000000000000004 Why do these inaccuracies happen? Answer 1: Binary floating-point math is like this. In most programming languages, it is based on the IEEE 754 standard. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.1, which is 1/10) whose denominator is not a power of two cannot be exactly represented. For 0.1 in the standard
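
A small illustration, in Python and consistent with the IEEE 754 explanation above (not part of the original answer), of the exact binary values that actually stand behind the literals 0.1 and 0.3:

    from decimal import Decimal

    print(0.1 + 0.2)           # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)    # False

    # Decimal(float) prints the exact value the binary double really holds.
    print(Decimal(0.1))        # 0.1000000000000000055511151231257827021181583404541015625
    print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875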

Python representation of floating point numbers [duplicate]

Submitted by 谁说我不能喝 on 2019-12-14 00:10:19
Question: This question already has answers here: Floating Point Limitations (3 answers, closed 6 years ago). I spent an hour today trying to figure out why return abs(val-desired) <= 0.1 was occasionally returning False, despite val and desired having an absolute difference of <= 0.1. After some debugging, I found out that -13.2 + 13.3 = 0.10000000000000142. Now I understand that CPUs cannot easily represent most real numbers, but this is an exception, because you can subtract 0.00000000000000142
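
A minimal Python sketch, not from the original thread, of why the check fails and of the usual remedy of giving the tolerance a little slack:

    diff = abs(-13.2 + 13.3)
    print(diff)               # 0.10000000000000142, just above 0.1
    print(diff <= 0.1)        # False, although the intended difference is exactly 0.1

    # Add a small slack to the tolerance so rounding noise does not flip the result.
    eps = 1e-9
    print(diff <= 0.1 + eps)  # True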

Why is the result of '0.3 * 3' 0.89999999999999 in Scala? [duplicate]

Submitted by 自闭症网瘾萝莉.ら on 2019-12-13 11:27:49
Question: This question already has answers here: Floating point error in representation? (2 answers, closed 6 years ago). Why is the result of multiplying 0.3 by 3 equal to 0.89999999999999 in Scala? Answer 1: Floating-point calculation is a reasonably complex subject. This has to do with the binary representation of a floating-point number, which (obviously) cannot guarantee that every possible number has an exact representation; this can lead to errors in operations, and yes, these errors can propagate. Here
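
The same values, printed with extra digits to show where the error comes from; this is Python rather than Scala, but both use IEEE 754 doubles, so the stored values should match:

    print(0.3 * 3)            # 0.8999999999999999
    print(f"{0.3:.20f}")      # 0.29999999999999998890 -- the literal 0.3 is already inexact
    print(f"{0.3 * 3:.20f}")  # 0.89999999999999991118 -- multiplying propagates that error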

Strange arithmetic with SWI-Prolog [duplicate]

Submitted by 不羁的心 on 2019-12-13 08:05:45
Question: This question already has answers here: Is floating point math broken? (31 answers, closed 5 years ago). I find the result very strange. Why not 0.3? Can somebody tell me why this happens, and is it possible to fix it? ?- X is 5.3-5. X = 0.2999999999999998. ?- My second question is how I would transform 'hour' notation, e.g. '13.45' ----> '15.30', into a number of hours. For example, the period above, calculated as 15.30-13.45, would be 1.85. But I need to operate on parts of the hour and not the
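
Not the Prolog answer itself, but a sketch in Python (with a hypothetical helper name) of the usual fix for the second part: treat HH.MM as hours and minutes, convert to integer minutes, subtract, and convert back, so no fractional binary arithmetic is involved.

    def hhmm_to_minutes(t):
        # Treat a value like 13.45 as 13 hours 45 minutes and return total minutes.
        hours = int(t)
        minutes = round((t - hours) * 100)   # .45 means 45 minutes, not 0.45 hours
        return hours * 60 + minutes

    start = hhmm_to_minutes(13.45)
    end = hhmm_to_minutes(15.30)
    print((end - start) / 60)                # 1.75 hours, i.e. 1 hour 45 minutes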

C float and double comparisons

Submitted by 人盡茶涼 on 2019-12-13 07:19:04
Question: I'm comparing simple floats and doubles in C, specifically the value 8.7 for both of them. I assign 8.7 to each variable, and when I print I get a result of 8.7000 for both values. Why has the compiler added these zeros? And the main question I wanted to ask: are there any further digits that I'm not seeing, hidden after the trailing zeros? I read that I shouldn't do comparisons like this with float because of a lack of precision, but I thought with such a small value surely it can
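
A small sketch of the hidden digits, in Python rather than C (struct is used here to emulate a 32-bit C float, which is an assumption about the setup): both variables hold a value close to, but not exactly, 8.7, and the float and double versions differ from each other.

    import struct

    d = 8.7                                            # like a C double
    f = struct.unpack('f', struct.pack('f', 8.7))[0]   # round-tripped through a 32-bit float

    print(f"{d:.20f}")    # 8.69999999999999928946 -- digits hidden behind "8.7000"
    print(f"{f:.20f}")    # 8.69999980926513671875 -- the float is even further off
    print(d == f)         # False: comparing the float and double 8.7 directly fails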