floating-accuracy

How to get around rounding issues in floating point arithmetic in C++?

Submitted by 梦想与她 on 2019-12-07 18:43:43
Question: I'm running into issues with floating point arithmetic not being accurate. I'm trying to calculate a score based on a weighted formula where every input variable weighs about 20 times as much as the next most significant one. The inputs, however, are real numbers, so I ended up using a double to store the result. The code below has the problem of losing the difference between E1 and E2. This code is performance sensitive, so I need an efficient answer to this problem. I thought of…
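A minimal C++ sketch (with made-up magnitudes, not the asker's actual formula) of why the small term disappears: a double carries roughly 16 significant decimal digits, so once the weighted terms span more than that, adding the least significant input changes nothing.

    #include <iostream>

    int main() {
        // With weights ~20x apart, a handful of weighting levels already
        // spans the ~16 significant decimal digits a double can hold.
        double big   = 1e16;  // heavily weighted term
        double small = 1.0;   // least significant input
        std::cout << (big + small == big) << '\n';  // prints 1: "small" is absorbed
    }

Accumulating the score in separate buckets per weight level, or using compensated (Kahan) summation, are the usual ways out when this happens.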

NumberFormat Parse Issue

Submitted by 萝らか妹 on 2019-12-07 14:13:38
Question: I am quite confused about a peculiar 'error' I am getting when parsing a String to a Double. I've already set up the NumberFormat properties and symbols. When passing a String with 15 digits and 2 decimals (e.g. str = "333333333333333,33" ) and parsing it with Number num = NumberFormat.parse(str), the result omits a digit: the actual value of num is 3.333333333333333E14. It seems to work with Strings of all 1's, 2's and 4's, though... Can anyone enlighten me? Cheers, Enrico. Answer 1: …
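The cause is the parse target rather than the parser: the Number returned here is a Double, and an IEEE 754 double cannot hold 17 significant decimal digits. The same limit is easy to show in C++ (used here only because the underlying binary format is identical):

    #include <cstdio>

    int main() {
        // 333333333333333.33 needs 17 significant decimal digits; a double
        // reliably round-trips only 15, so what gets stored is the nearest
        // representable neighbour, not the parsed string.
        double d = 333333333333333.33;
        std::printf("%.17g\n", d);  // not exactly the literal above
    }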

lossless conversion of float to string and back: is it possible?

Submitted by こ雲淡風輕ζ on 2019-12-07 11:17:30
Question: This question refers to the IEEE-standard floating point numbers used on C/x86. Is it possible to represent any numeric float or double (i.e. excluding special values such as NaN) as a decimal string such that converting that string back to a float/double always yields exactly the original number? If not, what algorithm tells me whether a given number will suffer a conversion error? If so, consider this: some decimal fractions, when converted to binary, will not be numerically the same as…
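Yes, lossless round-tripping is possible: printing max_digits10 significant digits (9 for float, 17 for double) guarantees the string parses back to the identical value, assuming a correctly rounding conversion in the library. A small C++ demonstration:

    #include <iostream>
    #include <limits>
    #include <sstream>

    int main() {
        double original = 0.1 * 3.0;  // not exactly 0.3 in binary

        std::ostringstream out;
        out.precision(std::numeric_limits<double>::max_digits10);  // 17
        out << original;

        double roundtrip;
        std::istringstream(out.str()) >> roundtrip;
        std::cout << out.str() << ' ' << (roundtrip == original) << '\n';  // ... 1
    }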

R's t-distribution says “full precision may not have been achieved”

Submitted by 感情迁移 on 2019-12-07 05:45:13
Question: I am working on a problem that routinely needs to compute the density of the t distribution rather far in the tails in R. For example, using R's t distribution function, dt(1.424781, 1486, -5) returns [1] 2.75818e-10. Some of my final outputs (which use this density as an input) do not match reference values from analogous computations performed in MATLAB by my colleague, which I think may be due to imprecision in the t distribution's tails in R. If I compare to MATLAB's t distribution…
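One standard workaround for far-tail densities is to evaluate them in log space, so the value never loses relative precision on the way to the answer. A sketch in C++ for the central t density only (log_dt is a hypothetical helper; the question's third argument is a noncentrality parameter, and the noncentral t needs a more elaborate algorithm):

    #include <cmath>
    #include <cstdio>

    // Log-density of the central Student-t distribution. Working in logs
    // keeps full relative precision deep into the tails; exponentiate only
    // at the very end, if at all.
    double log_dt(double x, double nu) {
        const double pi = 3.141592653589793;
        return std::lgamma((nu + 1.0) / 2.0) - std::lgamma(nu / 2.0)
             - 0.5 * std::log(nu * pi)
             - (nu + 1.0) / 2.0 * std::log1p(x * x / nu);
    }

    int main() {
        std::printf("%.6e\n", std::exp(log_dt(1.424781, 1486.0)));
    }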

C++ vs Python precision

Submitted by 与世无争的帅哥 on 2019-12-07 03:54:24
Question: Trying out a problem of finding the first k digits of num^num, I wrote the same program in C++ and Python.

C++:
long double intpart, num, f_digit, k;
cin >> num >> k;
f_digit = pow(10.0, modf(num*log10(num), &intpart) + k - 1);
cout << f_digit;

Python:
(a, b) = modf(num*log10(num))
f_digits = pow(10, b+k-1)
print f_digits

Input: 19423474 9
Output: C++ > 163074912, Python > 163074908

I checked the results; the C++ solution is the accurate one (verified at http://www.wolframalpha.com/input/?i=19423474^19423474 ). Any…
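The discrepancy is most likely precision rather than a bug in either language: on x86, a C++ long double is typically the 80-bit extended type, while Python's float is a plain 64-bit double, so the modf/log10/pow chain rounds differently in the last digits. A hedged comparison of the two precisions in C++ alone:

    #include <cmath>
    #include <cstdio>

    int main() {
        // Same computation at two precisions. Exact last digits depend on
        // the platform's libm, but the long double pipeline keeps roughly
        // three more decimal digits through the intermediate steps.
        double nd = 19423474.0, kd = 9.0, ipd;
        double fd = std::pow(10.0, std::modf(nd * std::log10(nd), &ipd) + kd - 1);

        long double nl = 19423474.0L, kl = 9.0L, ipl;
        long double fl = std::pow(10.0L, std::modf(nl * std::log10(nl), &ipl) + kl - 1);

        std::printf("double:      %.0f\nlong double: %.0Lf\n", fd, fl);
    }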

How to force 32bits floating point calculation consistency across different platforms?

Submitted by 筅森魡賤 on 2019-12-06 21:09:36
I have a simple piece of code that operates on floating points: a few multiplications, divisions, exp(), subtractions and additions in a loop. When I run the same piece of code on different platforms (PCs, Android phones, iPhones) I get slightly different results. The results are nearly equal on all platforms but show a very small discrepancy, typically 1/1000000 of the floating point value. I suppose the reason is that some phones don't have floating point registers and just simulate those calculations with integers, while some do have floating point registers but with different…
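Exact cross-platform equality is hard to guarantee, but the usual first steps are to force every intermediate back to a genuine 32-bit float (blocking x87 extended precision and fused multiply-adds) and to avoid the platform libm for transcendentals, since exp() implementations legitimately differ by an ULP or two. A sketch of the first idea (f32 and step are hypothetical helpers):

    // Passing a value through a volatile float makes the compiler round it
    // to true 32-bit precision instead of keeping it in a wider register.
    static inline float f32(float x) {
        volatile float v = x;
        return v;
    }

    // Every operation rounded separately, so all platforms round identically.
    float step(float x, float a, float b) {
        return f32(f32(x * a) + b);
    }

Compiler flags such as GCC/Clang's -ffp-contract=off and MSVC's /fp:strict address the same issues at build level; exp() still needs a shared implementation shipped with the code for bit-identical results.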

How to convert strings to floats with perfect accuracy?

Submitted by Deadly on 2019-12-06 20:42:55
Question: I'm trying to write a function in the D programming language to replace calls to C's strtold. (Rationale: to use strtold from D, you have to convert D strings to C strings, which is inefficient. Also, strtold can't be executed at compile time.) I've come up with an implementation that mostly works, but I seem to lose some precision in the least significant bits. The code for the interesting part of the algorithm is below; I can see where the precision loss comes from, but I don't know…
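The likely culprit is double rounding: a naive conversion accumulates the digits in a float and then scales by a power of ten, rounding twice, whereas a correct strtold rounds exactly once (real implementations fall back to arbitrary-precision arithmetic in the hard cases). The failure mode, sketched in C++ for brevity (naive_parse is a hypothetical stand-in, not the asker's D code):

    #include <cctype>
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    // Naive parse: accumulate digits, then scale. Both steps round, so the
    // result can be a few ULPs off compared to a single correct rounding.
    double naive_parse(const char* s) {
        double value = 0.0;
        int exponent = 0;
        bool seen_point = false;
        for (; *s; ++s) {
            if (*s == '.') { seen_point = true; continue; }
            if (!std::isdigit((unsigned char)*s)) break;
            value = value * 10.0 + (*s - '0');  // rounds once digits pass 2^53
            if (seen_point) --exponent;
        }
        return value * std::pow(10.0, exponent);  // rounds again
    }

    int main() {
        const char* s = "0.1000000000000000055511151231257827";
        std::printf("naive:  %a\nstrtod: %a\n",
                    naive_parse(s), std::strtod(s, nullptr));
    }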

Numpy: Difference between dot(a,b) and (a*b).sum()

Submitted by 半城伤御伤魂 on 2019-12-06 17:18:24
Question: For 1-D numpy arrays, these two expressions should (theoretically) yield the same result: (a*b).sum()/a.sum() and dot(a, b)/a.sum(). The latter uses dot() and is faster. But which one is more accurate? And why? Some context follows. I wanted to compute the weighted variance of a sample using numpy. I found the dot() expression in another answer, with a comment stating that it should be more accurate, but no explanation was given there. Answer 1: Numpy dot is one of the routines that calls the BLAS library…
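The accuracy difference usually comes down to summation order: a straight left-to-right sum lets rounding error grow roughly linearly in n, while a pairwise (tree-shaped) order, which numpy's sum and BLAS-style blocked loops approximate, grows it only logarithmically. A C++ illustration contrasting the two orders on identical data:

    #include <cstdio>
    #include <vector>

    // Left-to-right summation: error grows ~O(n).
    double sequential(const std::vector<double>& v) {
        double s = 0.0;
        for (double x : v) s += x;
        return s;
    }

    // Pairwise summation: error grows ~O(log n).
    double pairwise(const std::vector<double>& v, size_t lo, size_t hi) {
        if (hi - lo <= 8) {
            double s = 0.0;
            for (size_t i = lo; i < hi; ++i) s += v[i];
            return s;
        }
        size_t mid = lo + (hi - lo) / 2;
        return pairwise(v, lo, mid) + pairwise(v, mid, hi);
    }

    int main() {
        std::vector<double> v(1 << 20, 0.1);  // exact sum would be 104857.6
        std::printf("sequential: %.17g\npairwise:   %.17g\n",
                    sequential(v), pairwise(v, 0, v.size()));
    }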

C++: I've just read that floats are inexact and do not store exact integer values. What does this mean?

Submitted by 柔情痞子 on 2019-12-06 12:32:24
I am thinking of this at the binary level. Would a float of value 1 and an integer of value 1 not both compile down to (omitting lots of zeros here) 0001? If they both compile down to this, then where does the inexactness come in? The resource I'm using is http://www.cprogramming.com/tutorial/lesson1.html Thanks. Answer by Maciej Stachowski: It's possible. Floating point numbers are represented in an exponential notation (a*2^n), where some bits represent a (the significand) and some bits represent n (the exponent). You can't uniquely represent all the integers in the range of a floating point value, due to…
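Small integers are in fact stored exactly: 1.0f really holds the value 1. The inexactness appears once an integer needs more bits than the significand provides, which for a 32-bit float means beyond 2^24. A quick demonstration (assuming IEEE 754 single precision):

    #include <cstdio>

    int main() {
        // float has a 24-bit significand: every integer up to 2^24 is
        // exact; 2^24 + 1 cannot be represented and rounds to a neighbour.
        float exact   = 16777216.0f;  // 2^24
        float rounded = 16777217.0f;  // 2^24 + 1, rounds to 16777216
        std::printf("%.1f %.1f equal=%d\n", exact, rounded, exact == rounded);
    }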

How to avoid floating point round off error in unit tests?

Submitted by 爱⌒轻易说出口 on 2019-12-06 12:18:45
Question: I'm trying to write unit tests for some simple vector math functions that operate on arrays of single precision floating point numbers. The functions use SSE intrinsics, and I'm getting false positives (at least I think so) when running the tests on a 32-bit system (the tests pass on 64-bit). As the operation runs through the array, I accumulate more and more round-off error. Here is a snippet of unit test code and output (my actual question(s) follow): Test setup: static const int N = 1024; …
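The usual fix is to assert with a tolerance instead of exact equality, since a 32-bit x87 build may keep intermediates in 80-bit registers and round differently from the 64-bit SSE build. A minimal relative-tolerance comparison (almost_equal is a hypothetical helper; the tolerance constants are placeholders to tune per test):

    #include <cmath>
    #include <cstdio>
    #include <limits>

    // Compare floats by relative error, with a small absolute floor so
    // values near zero do not make the relative test blow up.
    bool almost_equal(float a, float b,
                      float rel_tol = 8 * std::numeric_limits<float>::epsilon(),
                      float abs_tol = 1e-30f) {
        float diff = std::fabs(a - b);
        return diff <= rel_tol * std::fmax(std::fabs(a), std::fabs(b))
            || diff <= abs_tol;
    }

    int main() {
        float sum = 0.0f;
        for (int i = 0; i < 10; ++i) sum += 0.1f;  // accumulates round-off
        std::printf("exact: %d  tolerant: %d\n",
                    sum == 1.0f, almost_equal(sum, 1.0f));  // 0 1
    }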