floating-accuracy

How to keep float/double arithmetic deterministic?

落花浮王杯 submitted on 2021-01-27 14:34:55
Question: If we use algorithms with double and float arithmetic, how can we guarantee that the results are the same running them in Python and in C, on x86 and x64, on Linux and Windows computers, and on ARM microcontrollers? We're using an algorithm that performs:

    double + double
    double + float
    double exp(double)
    float * float

On the same computer, compiling it for x86 and for x64 with MinGW gives different results. The algorithm does a lot of math, so any small error makes a difference in the end. Right now the ARM mcu…
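
IEEE 754 makes the basic operations (+, -, *, /, sqrt) correctly rounded, so they are bit-reproducible as long as every intermediate result is forced to an explicit width; exp() is not required to be correctly rounded, so different libm implementations can legitimately disagree in the last bits. As a minimal stdlib-only sketch (the helper name f32 is made up here), intermediate widths can be pinned from Python like this:

    import struct

    def f32(x):
        # Round a binary64 Python float to the nearest binary32 and widen
        # it back, mimicking a C 'float' intermediate on IEEE 754 hardware.
        return struct.unpack('f', struct.pack('f', x))[0]

    f = f32(0.1)     # a 'float' operand
    d = 2.0 ** -10   # a 'double' operand

    s = d + f        # double + float: one correctly rounded binary64 add
    p = f32(f * f)   # float * float: product rounded back to binary32

The x86/x64 MinGW difference is consistent with the 32-bit build using the x87 unit, which keeps 80-bit intermediates and double-rounds them; compiling the 32-bit build with SSE2 math (e.g. -mfpmath=sse -msse2 on GCC/MinGW) usually removes that particular source of divergence.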

Exhausting floating point precision in a (seemingly) infinite loop

本小妞迷上赌 submitted on 2021-01-27 06:40:09
Question: I've got the following Python script:

    x = 300000000.0
    while (x < x + x):
        x = x + x
        print "exec: " + str(x)
    print "terminated" + str(x)

This seemingly infinite loop terminates pretty quickly if x is a floating-point number. But if I change x to 300000000 instead, it gets into an infinite loop (it ran for longer than a minute in my test). I think this has to do with the fact that it exhausts the precision of a floating-point number that can be represented in memory. Can someone provide a more…
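
What actually stops the float version is overflow, not exhausted precision: doubling a binary64 is exact (only the exponent grows) until x + x overflows to infinity, and inf < inf is false, so the loop exits with x == inf; a Python int has arbitrary precision and never overflows, so the integer version really is endless. A small Python 3 sketch of the float case, counting the doublings:

    import math

    x = 300000000.0
    steps = 0
    while x < x + x:
        x = x + x
        steps += 1

    # 3e8 is roughly 2**28 and the largest double is just under 2**1024,
    # so the loop overflows to inf after roughly a thousand doublings.
    print(steps, x, math.isinf(x))   # e.g. 996 inf True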

Is floating point math broken?

折月煮酒 submitted on 2021-01-20 13:53:52
Question: Consider the following code:

    0.1 + 0.2 == 0.3  ->  false
    0.1 + 0.2         ->  0.30000000000000004

Why do these inaccuracies happen?

Answer 1: Binary floating-point math is like this. In most programming languages it is based on the IEEE 754 standard. The crux of the problem is that numbers are represented in this format as a whole number times a power of two; rational numbers (such as 0.1, which is 1/10) whose denominator is not a power of two cannot be represented exactly. For 0.1 in the standard…
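
The stored values can be inspected directly; in Python, Decimal shows a binary64's full decimal expansion and float.hex() shows the bits, which makes the answer's point concrete:

    from decimal import Decimal

    # The binary64 value closest to 0.1 is slightly above 1/10:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # 0.1 + 0.2 rounds to a double one ulp above the double nearest 0.3:
    print((0.1 + 0.2).hex())   # 0x1.3333333333334p-2
    print((0.3).hex())         # 0x1.3333333333333p-2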

Ruby float precision

你说的曾经没有我的故事 submitted on 2020-12-25 01:00:34
Question: As I understand it, Ruby (1.9.2) floats have a precision of 15 decimal digits. Therefore, I would expect that rounding a float x to 15 decimal places would equal x. For this calculation that isn't the case:

    x = (0.33 * 10)
    x == x.round(15)  # => false

Incidentally, rounding to 16 places returns true. Can you please explain this to me?

Answer 1: Part of the problem is that 0.33 does not have an exact representation in the underlying format, because it cannot be expressed by a finite series of 1/2^n terms. So,…
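
Ruby and Python both use IEEE 754 binary64, so the effect reproduces verbatim in Python; the 16th significant digit is exactly where 0.33 * 10 and 3.3 part ways:

    from decimal import Decimal

    x = 0.33 * 10
    print(x)           # 3.3000000000000003
    print(Decimal(x))  # 3.300000000000000266453525910037569701671600341796875

    # Rounding to 15 decimal places lands on the double nearest 3.3,
    # which is a different binary64 value, so the comparison fails;
    # 16 places keeps the stray digit and round-trips back to x itself.
    print(x == round(x, 15))   # False
    print(x == round(x, 16))   # True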
