floating-accuracy

How to get around rounding issues in floating point arithmetic in C++?

倖福魔咒の submitted on 2019-12-06 11:51:32
I'm running into some issues with floating-point arithmetic not being accurate. I'm trying to calculate a score based on a weighted formula where each input variable weighs about 20 times as much as the next most significant one. The inputs are real numbers, however, so I ended up using a double to store the result. The code below has the problem of losing the difference between E1 and E2. This code is performance sensitive, so I need to find an efficient answer to this problem. I thought of multiplying my inputs by a hundred and then using an int (since that would be precise enough, I think), but I…
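
A minimal sketch of the integer idea the asker raises at the end, assuming (as the question does) that two decimal places suffice and that the weights really are powers of 20; the function name, the three-input signature, and the weight values are made up for illustration:

    #include <cstdint>

    // Hypothetical three-term weighted score. Inputs are scaled by 100 so
    // two decimal places survive exactly, then combined with the integer
    // weights 20^2, 20^1, 20^0 -- no floating-point rounding at all.
    int64_t weighted_score(double e1, double e2, double e3) {
        const int64_t i1 = static_cast<int64_t>(e1 * 100.0 + 0.5);
        const int64_t i2 = static_cast<int64_t>(e2 * 100.0 + 0.5);
        const int64_t i3 = static_cast<int64_t>(e3 * 100.0 + 0.5);
        return i1 * 400 + i2 * 20 + i3;
    }

The +0.5 rounds positive inputs to the nearest hundredth before truncation; negative inputs would need std::lround or similar.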

Understanding pandas.read_csv() float parsing

给你一囗甜甜゛ submitted on 2019-12-06 11:18:36
I am having problems reading probabilities from CSV using pandas.read_csv; some of the values are read as floats with > 1.0. Specifically, I am confused about the following behavior:

    >>> pandas.read_csv(io.StringIO("column\n0.99999999999999998"))["column"][0]
    1.0
    >>> pandas.read_csv(io.StringIO("column\n0.99999999999999999"))["column"][0]
    1.0000000000000002
    >>> pandas.read_csv(io.StringIO("column\n1.00000000000000000"))["column"][0]
    1.0
    >>> pandas.read_csv(io.StringIO("column\n1.00000000000000001"))["column"][0]
    1.0
    >>> pandas.read_csv(io.StringIO("column\n1.00000000000000008"))["column"][0]
    …
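
The behavior comes down to how those decimal strings map onto doubles. A correctly rounded parse (what C's strtod does, and what pandas does with float_precision='round_trip') sends both 0.99999999999999998 and 0.99999999999999999 to exactly 1.0, because 1.0 is the nearest double to each; pandas' default fast parser trades that guarantee for speed, which is where the 1.0000000000000002 comes from. A C++ sketch of the correctly rounded case:

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // strtod performs correctly rounded decimal-to-double conversion:
        // each of these strings lies within half an ulp of 1.0, so each
        // parses to exactly 1.0.
        const char* inputs[] = { "0.99999999999999998",
                                 "0.99999999999999999",
                                 "1.00000000000000001" };
        for (int i = 0; i < 3; ++i)
            std::printf("%s -> %.17g\n", inputs[i], std::strtod(inputs[i], 0));
    }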

Dealing with small numbers and accuracy

瘦欲@ submitted on 2019-12-06 08:57:49
I have a program where I deal with a lot of very small numbers (towards the lower end of the double limits). During the execution of my application, some of these numbers progressively get smaller, meaning their "estimation" is less accurate. My solution at the moment is scaling them up before I do any calculations and then scaling them back down again, but it has got me thinking: am I actually gaining any more "accuracy" by doing this? Thoughts? Are your numbers really in the region between 10^-308 (smallest normalized double) and 10^-324 (smallest representable double, denormalized, i.e.…
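
For context, the precision loss the asker describes only kicks in below the normalized range, where doubles become subnormal and significand bits are progressively sacrificed. A small C++ probe of that effect (the specific constants are just illustrative):

    #include <cstdio>

    int main() {
        double normal    = 1.2345678901234567e-300; // full 53-bit significand
        double subnormal = 1.2345678901234567e-320; // below ~2.2e-308: subnormal
        std::printf("normal:    %.17g\n", normal);
        std::printf("subnormal: %.17g\n", subnormal); // only a few digits survive
        // Scaling up *after* a value has gone subnormal cannot restore the
        // lost bits, so the asker's rescaling only helps if it happens before
        // intermediate results shrink into the subnormal range.
        std::printf("rescaled:  %.17g\n", subnormal * 1e20);
    }

So yes, the scaling genuinely buys accuracy, but only if applied before the values underflow into the subnormal region.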

Java float is more precise than double?

情到浓时终转凉″ submitted on 2019-12-06 02:22:20
Question: Code:

    class Main {
        public static void main(String[] args) {
            System.out.print("float: ");
            System.out.println(1.35f - 0.00026f);
            System.out.print("double: ");
            System.out.println(1.35 - 0.00026);
        }
    }

Output:

    float: 1.34974
    double: 1.3497400000000002

float got the right answer, but double is adding extra digits from nowhere. Why? Isn't double supposed to be more precise than float?

Answer 1: A float is 4 bytes wide, whereas a double is 8 bytes wide. Check What Every Computer Scientist Should Know About Floating-Point Arithmetic…
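
The effect is not Java-specific; any IEEE-754 float/double pair reproduces it. Neither 1.35 nor 0.00026 is exactly representable in binary, and the float result is coarse enough that the rounding error hides below the printable digits, while the double carries enough of the true binary error for it to show up. A C++ rendering of the same experiment (the format widths are chosen to mimic Java's shortest-round-trip printing):

    #include <cstdio>

    int main() {
        // 6 significant digits happen to match Java's short form here;
        // 17 significant digits expose the double's full round-trip value.
        std::printf("float:  %g\n",    1.35f - 0.00026f); // 1.34974
        std::printf("double: %.17g\n", 1.35  - 0.00026);  // 1.3497400000000002
    }

So float is not "more precise"; it simply rounds away the evidence.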

Alternative to C++11's std::nextafter and std::nexttoward for C++03?

那年仲夏 submitted on 2019-12-06 01:22:45
As the title says, the functionality I'm after is provided by C++11's math library to find the next floating-point value towards a particular value. Aside from pulling the code out of the std library (which I may have to resort to), are there any alternatives for doing this with C++03 (using GCC 4.4.6)?

Platform-dependently, assuming IEEE 754, and modulo endianness, you can store the bits of the floating-point number in an integer, increment by one, and retrieve the result:

    float input = 3.15;
    uint32_t tmp;
    unsigned char * p = reinterpret_cast<unsigned char *>(&tmp);
    unsigned char * q = reinterpret_cast…
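
A complete C++03 version of the trick the excerpt starts, restricted to the easy case; it assumes IEEE-754 floats and handles only positive, finite input (zero, negatives, infinities and NaN all need extra branches, which std::nextafter covers for you in C++11):

    #include <cassert>
    #include <cstring>
    #include <stdint.h>   // C++03: <cstdint> arrives only with C++11

    float next_up(float x) {
        assert(x > 0.0f);  // negative, zero and NaN are not handled here
        uint32_t bits;
        std::memcpy(&bits, &x, sizeof bits);  // avoids strict-aliasing traps
        ++bits;            // for positive IEEE-754 floats, bit patterns are
        std::memcpy(&x, &bits, sizeof x);     // ordered the same as the values
        return x;
    }

Using memcpy rather than reinterpret_cast is also what sidesteps the aliasing concerns the pointer-based variant raises.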

NumberFormat Parse Issue

人走茶凉 submitted on 2019-12-06 00:53:48
I am quite confused about this peculiar 'error' I am getting when parsing a String to a Double. I've already set up the NumberFormat properties and symbols. When passing a String with 15 digits and 2 decimals (e.g. str = "333333333333333,33") and parsing it with Number num = NumberFormat.parse(str), the result omits a digit: the actual value of num is 3.333333333333333E14. It seems to work with Strings of all 1's, 2's and 4's, though... Can anyone enlighten me? Cheers, Enrico

The short answer: due to rounding error, (double) 111111111111111.11 != (double) 111111111111111.1, but (double)…
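
The digit count is the whole story: 333333333333333,33 carries 17 significant digits, while a double only guarantees round-tripping of 15 and has an ulp of 1/16 at this magnitude, so the final decimals cannot survive the parse. The same thing is visible outside Java; a C++ sketch (using a dot instead of the locale's comma separator):

    #include <cstdio>
    #include <cstdlib>

    int main() {
        // At ~3.3e14 the spacing between adjacent doubles is 2^-4 = 0.0625,
        // so ...333.33 lands on the nearest representable value ...333.3125.
        double d = std::strtod("333333333333333.33", 0);
        std::printf("%.17g\n", d);   // 333333333333333.31
    }

Strings of repeated 1's, 2's or 4's merely happen to sit close enough to a representable double that the shortest printed form still shows the expected digits.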

Why and how does Python truncate numerical data?

社会主义新天地 submitted on 2019-12-06 00:51:40
I am dealing with two variables here, but I am confused because their values seem to be changing (they lose precision) when I want to send them as URL parameters as they are. Look at this scenario as I reproduce it here from the Python interpreter:

    >>> lat = 0.33245794180134
    >>> long = 32.57355093956
    >>> lat
    0.33245794180133997
    >>> long
    32.57355093956
    >>> nl = str(lat)
    >>> nl
    '0.332457941801'
    >>> nlo = str(long)
    >>> nlo
    '32.5735509396'

So what is happening? And how can I ensure that when I serialize lat and long to strings and send them as part of a URL's query string I don't lose their exact…
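
What is happening is Python 2's str(), which formatted floats to only 12 significant digits, while repr() used 17 (which is why lat echoes as 0.33245794180133997). Using repr(lat), or formatting with 17 significant digits, preserves the value exactly; Python 2.7's repr() and Python 3's str() both emit the shortest round-trip string instead. The same contrast in C++ terms:

    #include <cstdio>

    int main() {
        double lat = 0.33245794180134;
        std::printf("%g\n", lat);     // 6 digits, lossy -- like the short str()
        std::printf("%.17g\n", lat);  // 17 digits always round-trip a double
    }

For the URL use case: serialize with the round-trip form, and the double parsed back on the other end is bit-identical to the original.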

Formatting floating-point numbers without loss of precision in AngularJS

大城市里の小女人 submitted on 2019-12-05 22:05:57
In AngularJS, how do I output a floating-point number on an HTML page without loss of precision and without unnecessary padding with 0's? I've considered the "number" ng-filter (https://docs.angularjs.org/api/ng/filter/number), but the fractionSize parameter causes a fixed number of decimals:

    {{ number_expression | number : fractionSize }}

I'm looking for what in various other languages is referred to as "exact reproducibility", "canonical string representation", repr, round-trip, etc., but I haven't been able to find anything similar for AngularJS. For example:

    1 => "1"
    1.2 => "1.2"
    1.23456789…
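
The term to search for is the shortest round-trip representation. JavaScript's Number#toString is specified to produce exactly that, so binding the raw value without the number filter should already give "1", "1.2", "1.23456789". For comparison, C++17 exposes the same canonical form through std::to_chars (the buffer size below is an arbitrary assumption):

    #include <charconv>
    #include <cstdio>

    int main() {
        // With no precision argument, std::to_chars emits the shortest
        // decimal string that parses back to the identical double -- the
        // "canonical representation" the question asks for.
        const double values[] = { 1.0, 1.2, 1.23456789 };
        for (double v : values) {
            char buf[32];
            std::to_chars_result r = std::to_chars(buf, buf + sizeof buf, v);
            std::printf("%.*s\n", static_cast<int>(r.ptr - buf), buf);
        }
    }

Output: 1, 1.2, 1.23456789, with no padding and no precision loss.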

sprintf(buf, "%.20g", x) // how large should buf be?

≯℡__Kan透↙ submitted on 2019-12-05 21:47:27
I am converting double values to strings like this:

    std::string conv(double x) {
        char buf[30];
        sprintf(buf, "%.20g", x);
        return buf;
    }

I have hardcoded the buffer size to 30, but am not sure if this is large enough for all cases. How can I find out the maximum buffer size I need? Does the precision get higher (and does the buffer therefore need to grow) when switching from 32-bit to 64-bit?

PS: I cannot use ostringstream or boost::lexical_cast for performance reasons (see this).

I have hardcoded the buffer size to 30, but am not sure if this is large enough for all cases.

It is. %.20g specifies 20 digits…
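
A back-of-the-envelope bound for why 30 suffices: sign (1) + leading digit and decimal point (2) + 19 more digits + an exponent suffix like "e-308" (5, or "e-324" for subnormals) + the terminating NUL comes to well under 30. The bound does not change between 32-bit and 64-bit builds, because double stays an 8-byte IEEE-754 type either way. A defensive variant, sketched with snprintf so that truncation would be detectable rather than undefined behaviour:

    #include <cstdio>
    #include <string>

    std::string conv(double x) {
        char buf[32];  // worst case for "%.20g" needs fewer than 30 characters
        int n = std::snprintf(buf, sizeof buf, "%.20g", x);
        // n is the length that *would* have been written; reaching
        // sizeof buf would mean the buffer was too small.
        if (n < 0 || n >= static_cast<int>(sizeof buf))
            return std::string();  // cannot happen given the bound above
        return std::string(buf, n);
    }

snprintf costs essentially the same as sprintf, so the performance constraint from the question is preserved.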

Error due to limited precision of float and double

心已入冬 submitted on 2019-12-05 21:05:01
In C++, I use the following code to work out the order of magnitude of the error due to the limited precision of float and double:

    float n = 1;
    float dec = 1;
    while (n != (n - dec)) {
        dec = dec / 10;
    }
    cout << dec << endl;

(in the double case, all I do is exchange float for double in lines 1 and 2)

Now when I compile and run this using g++ on a Unix system, the results are:

    float:  10^-8
    double: 10^-17

However, when I compile and run it using MinGW on Windows 7, the results are:

    float:  10^-20
    double: 10^-20

What is the reason for this?

I guess I'll make my comment an answer and expand on it. This is my…
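
The usual explanation for the MinGW numbers is the x87 FPU: 32-bit MinGW builds evaluate float and double expressions in 80-bit x87 registers by default, so the comparison n != (n - dec) is effectively done in extended precision (roughly 19-20 digits) for both types, while g++ on x86-64 uses SSE registers with true 32-/64-bit arithmetic. Forcing each intermediate through memory restores per-type rounding; a sketch:

    #include <cstdio>

    // Same probe as the question, but each volatile store rounds the
    // intermediate to T's real precision, defeating 80-bit x87 registers.
    template <typename T>
    T precision_probe() {
        volatile T n = 1, dec = 1, diff = n - dec;
        while (n != diff) {
            dec = dec / 10;
            diff = n - dec;
        }
        return dec;   // first dec too small to change n
    }

    int main() {
        std::printf("float:  %g\n", precision_probe<float>());
        std::printf("double: %g\n", precision_probe<double>());
    }

Compiling the original with -ffloat-store, or with -mfpmath=sse on MinGW, should have the same effect.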