floating-accuracy

SQL Server 2005 numeric precision loss

Question: While debugging some finance-related SQL code I found a strange issue with NUMERIC(24,8) arithmetic precision. Running the following query on MSSQL, the A + B * C expression evaluates to 0.123457:

    SELECT A, B, C, A + B * C
    FROM (
        SELECT CAST(0.12345678 AS NUMERIC(24,8)) AS A,
               CAST(0 AS NUMERIC(24,8)) AS B,
               CAST(500 AS NUMERIC(24,8)) AS C
    ) T

So we have lost two significant digits. Trying to fix this in different ways, I found that converting the intermediate multiplication result …
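
The root cause is SQL Server's rules for deriving the type of intermediate results: NUMERIC(24,8) * NUMERIC(24,8) calls for NUMERIC(49,16), which exceeds the 38-digit maximum, so the engine cuts the scale back (but not below 6). Below is a minimal Python sketch of those documented rules; the function name and the capping logic are my own simplification for illustration, not engine code:

    from decimal import Decimal, ROUND_HALF_UP

    def multiply_result_type(p1, s1, p2, s2, max_precision=38, min_scale=6):
        precision = p1 + p2 + 1        # documented rule for NUMERIC multiplication
        scale = s1 + s2
        if precision > max_precision:  # overflow: precision is capped ...
            overflow = precision - max_precision
            scale = max(min_scale, scale - overflow)  # ... and scale is sacrificed, but kept >= 6
            precision = max_precision
        return precision, scale

    print(multiply_result_type(24, 8, 24, 8))   # (38, 6): only 6 decimal places survive

    # 0.12345678 rounded to the surviving scale of 6 reproduces the observed 0.123457
    print(Decimal("0.12345678").quantize(Decimal("1.000000"), rounding=ROUND_HALF_UP))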

Understanding floating point representation errors; what's wrong with my thinking?

Question: I'm having some trouble understanding why some figures can't be represented as floating point numbers. As we know, a normal float has a sign bit, an exponent, and a mantissa. Why can't, for example, 0.1 be represented accurately in this system? The way I think of it, you would put 10 (1010 in binary) into the mantissa and -2 into the exponent. As far as I know, both of those numbers can be stored exactly in the mantissa and exponent fields. So why can't we represent 0.1 accurately?

Answer 1: If your exponent …
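
The catch is that the exponent of a binary float scales by powers of two, not ten: "mantissa 10, exponent -2" means 10 * 2^-2 = 2.5, not 0.1, and no integer mantissa times a power of two equals 1/10 exactly. A short Python illustration of the value that actually gets stored (added here for illustration, not taken from any particular answer above):

    from decimal import Decimal
    from fractions import Fraction

    # 1/10 has a factor of 5 in its denominator that no power of two can cancel,
    # so its binary expansion repeats forever and has to be cut off.
    print(Decimal(0.1))            # exact value of the double closest to 0.1
    print(Fraction(0.1))           # the same value as a ratio with a power-of-two denominator
    print(0.1 == Fraction(1, 10))  # False: the stored double is not exactly 1/10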

Ruby: converting from float to integer in Ruby produces strange results

Question:

    ree-1.8.7-2010.02 :003 > (10015.8*100.0).to_i
     => 1001579
    ree-1.8.7-2010.02 :004 > 10015.8*100.0
     => 1001580.0
    ree-1.8.7-2010.02 :005 > 1001580.0.to_i
     => 1001580

ruby 1.8.7 produces the same. Does anybody know how to eradicate this heresy? =)

Answer 1: Actually, all of this makes sense. Because 0.8 cannot be represented exactly by any finite sum of terms 1 / 2 ** x, it must be stored approximately, and the stored value happens to be slightly less than 10015.8. So, when you just print it, it is …
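
Python uses the same IEEE 754 doubles as Ruby, so the effect can be reproduced and inspected there; a small sketch:

    from decimal import Decimal

    # The double nearest to 10015.8 is a shade below the true value, so the product
    # is a shade below 1001580 and truncating it drops to 1001579.
    print(Decimal(10015.8))          # slightly less than 10015.8
    print(Decimal(10015.8 * 100.0))  # slightly less than 1001580
    print(int(10015.8 * 100.0))      # 1001579 - to_i/int() truncate toward zero
    print(round(10015.8 * 100.0))    # 1001580 - rounding gives the intended value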

Why does ghci say that 1.1 + 1.1 + 1.1 > 3.3 is True?

Question: I've been going through a Haskell tutorial recently and noticed this behaviour when trying some simple expressions in the interactive ghci shell:

    Prelude> 1.1 + 1.1 == 2.2
    True
    Prelude> 1.1 + 1.1 + 1.1 == 3.3
    False
    Prelude> 1.1 + 1.1 + 1.1 > 3.3
    True
    Prelude> 1.1 + 1.1 + 1.1
    3.3000000000000003

Does anybody know why that is?

Answer 1: Because 1.1 and 3.3 are floating point numbers. Decimal fractions such as .1 or .3 are not exactly representable in a binary floating point number. .1 means …
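
GHC's Double is an IEEE 754 double, so the same experiment can be reproduced in any language with doubles; here is a Python sketch of the stored values (Python is used only to keep all the added examples in one language):

    from decimal import Decimal

    # Neither 1.1 nor 3.3 is exact in binary; the accumulated error in
    # 1.1 + 1.1 + 1.1 lands one ulp above the double stored for the literal 3.3.
    print(1.1 + 1.1 + 1.1)        # 3.3000000000000003
    print(1.1 + 1.1 + 1.1 > 3.3)  # True
    print(Decimal(1.1))           # exact value actually stored for the literal 1.1
    print(Decimal(3.3))           # exact value actually stored for the literal 3.3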

How to get bc to handle numbers in scientific (aka exponential) notation?

Question: bc doesn't like numbers expressed in scientific notation (aka exponential notation).

    $ echo "3.1e1*2" | bc -l
    (standard_in) 1: parse error

but I need to use it to handle a few records that are expressed in this notation. Is there a way to get bc to understand exponential notation? If not, what can I do to translate them into a format that bc will understand?

Answer 1: Unfortunately, bc doesn't support scientific notation. However, it can be translated into a format that bc can handle, using …
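
One possible translation (my own illustration, not the command from the original answer) is to rewrite "3.1e1" as "3.1*10^1", which bc does understand, and then pipe the rewritten expression to bc -l. The sketch below assumes GNU bc is on PATH, and the regex is deliberately simple rather than complete:

    import re
    import subprocess

    expr = "3.1e1*2"
    bc_expr = re.sub(r'([0-9.]+)[eE]\+?(-?[0-9]+)', r'(\1*10^\2)', expr)
    print(bc_expr)  # (3.1*10^1)*2

    result = subprocess.run(["bc", "-l"], input=bc_expr + "\n",
                            capture_output=True, text=True)
    print(result.stdout.strip())  # 62.0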

Getting the decimal part of a double in Swift

I'm trying to separate the decimal and integer parts of a double in Swift. I've tried a number of approaches, but they all run into the same issue:

    let x:Double = 1234.5678
    let n1:Double = x % 1.0           // n1 = 0.567800000000034
    let n2:Double = x - 1234.0        // same result
    let n3:Double = modf(x, &integer) // same result

Is there a way to get 0.5678 instead of 0.567800000000034 without converting the number to a string?

Without converting it to a string, you can round to a number of decimal places like this:

    let x:Double = 1234.5678
    let numberOfPlaces:Double = 4.0
    let powerOfTen:Double = pow(10 …
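
The noise is in the stored double itself rather than in the % operator, so any language shows it; here is a Python sketch of the two usual remedies (round to a fixed number of places, or do the arithmetic in a decimal type), offered as an illustration rather than a Swift-specific answer:

    from decimal import Decimal

    x = 1234.5678
    print(x % 1.0)                   # 0.5678 plus representation noise in the last digits
    print(round(x % 1.0, 4))         # 0.5678 - round to a chosen number of places
    print(Decimal("1234.5678") % 1)  # 0.5678 exactly, using decimal arithmetic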

Precise sum of floating point numbers

Question: I am aware of a similar question, but I want to ask for people's opinions on my algorithm for summing floating point numbers as accurately as possible at a practical cost. Here is my first solution (a Python sketch of it follows below):

    put all numbers into a min-absolute-heap   // EDIT: as suggested in the comments below
    pop the 2 smallest ones
    add them
    put the result back into the heap
    continue until there is only 1 number in the heap

This one takes O(n*log n) instead of the normal O(n). Is that really worth it? The second solution comes …
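
A sketch of the first (heap-based) approach, using Python's heapq keyed on absolute value; heap_sum is my own naming, and math.fsum is included only as an exactly rounded reference to compare against:

    import heapq
    import math
    import random

    def heap_sum(values):
        # Repeatedly add the two numbers of smallest magnitude: O(n log n).
        heap = [(abs(v), v) for v in values]
        heapq.heapify(heap)
        while len(heap) > 1:
            _, a = heapq.heappop(heap)
            _, b = heapq.heappop(heap)
            s = a + b
            heapq.heappush(heap, (abs(s), s))
        return heap[0][1] if heap else 0.0

    random.seed(0)
    data = [random.uniform(-1, 1) * 10 ** random.randint(-8, 8) for _ in range(10000)]
    print(sum(data))        # naive left-to-right summation
    print(heap_sum(data))   # heap-ordered summation, typically closer to ...
    print(math.fsum(data))  # ... the exactly rounded sum used here as a reference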

Fast Exp calculation: possible to improve accuracy without losing too much performance?

Question: I am trying out the fast Exp(x) function that was previously described in this answer to an SO question on improving calculation speed in C#:

    public static double Exp(double x)
    {
        var tmp = (long)(1512775 * x + 1072632447);
        return BitConverter.Int64BitsToDouble(tmp << 32);
    }

The expression uses some IEEE floating point "tricks" and is primarily intended for use in neural nets. The function is approximately 5 times faster than the regular Math.Exp(x) function. Unfortunately, the numeric …
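
For reference, the same bit-level trick can be transcribed outside C# (a Python illustration; fast_exp is my own naming, and the constants are the ones from the snippet above). The idea is that scaling x and shifting it into the upper 32 bits lands the value in the exponent (and upper mantissa) bits of an IEEE 754 double:

    import math
    import struct

    def fast_exp(x):
        tmp = int(1512775 * x + 1072632447) << 32
        # Reinterpret the 64-bit integer pattern as a double, like Int64BitsToDouble.
        return struct.unpack('<d', struct.pack('<q', tmp))[0]

    for x in (0.5, 1.0, 5.0):
        approx, exact = fast_exp(x), math.exp(x)
        print(x, approx, exact, abs(approx - exact) / exact)  # relative error of a few percent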

What is the difference between these two comparisons? [duplicate]

Question: Possible Duplicate: Why are these numbers not equal?

    0.9 == 1-0.1    >>> TRUE
    0.9 == 1.1-0.2  >>> FALSE

Answer 1: Answer to fix your program:

    > all.equal(0.9, 1.1-0.2)
    [1] TRUE
    > all.equal(0.9, 1.1-0.3)
    [1] "Mean relative difference: 0.1111111"
    > isTRUE(all.equal(0.9, 1.1-0.3))
    [1] FALSE

and if used in code:

    if (isTRUE(all.equal(0.9, 1.1-0.2))) { .... }

or with vectors:

    > vec1 = 0.9
    > vec2 = c(1.1-0.2, 1.3-0.4, 1.0-0.2)
    > mapply(function(...) isTRUE(all.equal(...)), vec1, vec2)
    [1]  TRUE  TRUE FALSE

Answer for …
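
The same pitfall can be reproduced outside R; in Python, math.isclose plays the role that all.equal plays in the answer above (this is an added illustration, not part of the original answer):

    import math

    print(0.9 == 1 - 0.1)                # True  - the result rounds to the same double as 0.9
    print(0.9 == 1.1 - 0.2)              # False - it rounds to the next double up instead
    print(math.isclose(0.9, 1.1 - 0.2))  # True  - compare with a relative tolerance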

Precision lost while using read_csv in pandas

Question: I have files in the format below which I am trying to read into a pandas dataframe.

    895|2015-4-23|19|10000|LA|0.4677978806|0.4773469340|0.4089938425|0.8224291972|0.8652525793|0.6829942860|0.5139162227|

As you can see, there are 10 digits after the decimal point in the input file.

    df = pd.read_csv('mockup.txt', header=None, delimiter='|')

When I try to read it into a dataframe, I am not getting the last 4 digits:

    df[5].head()
    0    0.467798
    1    0.258165
    2    0.860384
    3    0.803388
    4    0.249820
    …
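
In cases like this the digits are usually still present in the underlying float64 values and only pandas' default display precision (6 significant digits) hides them; below is a sketch using an inline copy of the sample row (io.StringIO stands in for the real mockup.txt):

    import io
    import pandas as pd

    data = ("895|2015-4-23|19|10000|LA|0.4677978806|0.4773469340|0.4089938425|"
            "0.8224291972|0.8652525793|0.6829942860|0.5139162227|\n")
    df = pd.read_csv(io.StringIO(data), header=None, delimiter='|')

    print(df[5].iloc[0])                    # 0.4677978806 - the full value is there
    pd.set_option('display.precision', 10)  # widen the printed precision
    print(df[5].head())

    # If exact decimal round-tripping matters, read_csv also accepts
    # float_precision='round_trip'.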