floating-accuracy

Decimal rounding errors upon division (C#)

Submitted by ∥☆過路亽.° on 2019-12-10 14:04:34
Question: I have basically four numbers (say 100, 200, 300, 400), and I need to calculate the probabilities 100/(100+200+300+400), 200/(100+200+300+400), and so on. When I use the decimal data type to store these probabilities, they don't add up to one due to rounding issues. What's the best way to get past this without making the probabilities too inaccurate? Basically I do this calculation many, many times, so I don't want to have to change all the divisions into Math.Round stuff. :| Answer 1: The solution is
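
The answer is cut off above, but one standard workaround (an assumption on my part, not necessarily the quoted solution) is to divide out all terms except the last and define the last probability as one minus the sum of the others, so any rounding residue lands in a single term. A minimal Python sketch of the idea (the question is C#, but it carries over):

    from decimal import Decimal

    def probabilities(weights):
        # Divide each weight by the total, then force the results to sum
        # to exactly 1 by assigning the rounding residual to the last entry.
        total = sum(weights)
        probs = [Decimal(w) / Decimal(total) for w in weights[:-1]]
        probs.append(Decimal(1) - sum(probs))  # absorb the rounding error here
        return probs

    print(probabilities([100, 200, 300, 400]))       # [0.1, 0.2, 0.3, 0.4]
    print(sum(probabilities([100, 200, 300, 400])))  # exactly 1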

Is it possible to force exponent or significand of a float to match another float (Python)?

Submitted by 筅森魡賤 on 2019-12-10 12:49:47
Question: This is an interesting question that I was trying to work through the other day. Is it possible to force the significand or exponent of one float to be the same as another float in Python? The question arises because I was trying to rescale some data so that the min and max match another data set. However, my rescaled data was slightly off (after about 6 decimal places), and it was enough to cause problems down the line. To give an idea, I have f1 and f2 (type(f1) == type(f2) == numpy.ndarray
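
The entry is truncated before any answer, but the bit-level operation it asks about can be sketched with numpy integer views. Assuming IEEE 754 doubles (1 sign bit, 11 exponent bits, 52 significand bits), copying the exponent field of one array onto another looks like this; copy_exponent is a hypothetical name, not a numpy function:

    import numpy as np

    EXPONENT_MASK = np.uint64(0x7FF0000000000000)  # the 11 exponent bits of a float64

    def copy_exponent(src, dst):
        # Return dst with its exponent bits replaced by those of src.
        # Both arguments are float64 arrays of the same shape.
        src_bits = src.view(np.uint64)
        dst_bits = dst.view(np.uint64)
        out_bits = (dst_bits & ~EXPONENT_MASK) | (src_bits & EXPONENT_MASK)
        return out_bits.view(np.float64)

    a = np.array([1.5])    # 1.5 * 2**0
    b = np.array([100.0])  # 1.5625 * 2**6
    print(copy_exponent(b, a))  # [96.0], i.e. 1.5 * 2**6

Whether forcing bits this way is the right fix for the rescaling problem is another matter; simply clamping the rescaled min and max to the target values is usually the easier cure.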

How to reduce C/C++ floating-point roundoff

Submitted by 限于喜欢 on 2019-12-10 11:52:34
Question: Are there any generally applicable tips to reduce the accumulation of floating-point roundoff errors in C or C++? I'm thinking mainly about how to write code that gets compiled into optimal assembly-language instructions, although strategies on overall algorithm design are also welcome. Answer 1: The only trick I know is that when you're summing a bunch of numbers, don't do them one at a time - group them so that the additions are on numbers of approximately the same magnitude. To sum a huge array
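
The answer is truncated, but the grouping it begins to describe is pairwise summation; the other standard tool is Kahan's compensated summation, which carries the rounding error of each addition in a separate term. A sketch in Python (not part of the quoted answer):

    import math

    def kahan_sum(values):
        # Compensated summation: recover the low-order bits that each
        # floating-point addition discards and feed them back in.
        total = 0.0
        compensation = 0.0
        for v in values:
            y = v - compensation
            t = total + y                    # low-order bits of y are lost here...
            compensation = (t - total) - y   # ...and recovered here
            total = t
        return total

    values = [0.1] * 10_000_000
    print(sum(values))        # naive accumulation drifts: about 999999.9998
    print(kahan_sum(values))  # matches the correctly rounded reference below
    print(math.fsum(values))  # exact-summation reference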

Understanding pandas.read_csv() float parsing

Submitted by 假如想象 on 2019-12-10 10:58:29
Question: I am having problems reading probabilities from a CSV using pandas.read_csv; some of the values are read as floats > 1.0. Specifically, I am confused about the following behavior:

    >>> pandas.read_csv(io.StringIO("column\n0.99999999999999998"))["column"][0]
    1.0
    >>> pandas.read_csv(io.StringIO("column\n0.99999999999999999"))["column"][0]
    1.0000000000000002
    >>> pandas.read_csv(io.StringIO("column\n1.00000000000000000"))["column"][0]
    1.0
    >>> pandas.read_csv(io.StringIO("column\n1
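
The entry is cut off before any answer, but the behavior shown comes from pandas' default float parser, which trades exact rounding for speed. read_csv takes a float_precision argument, and "round_trip" selects the slower, correctly rounded parser:

    import io
    import pandas as pd

    text = "column\n0.99999999999999999"

    fast = pd.read_csv(io.StringIO(text))["column"][0]
    exact = pd.read_csv(io.StringIO(text), float_precision="round_trip")["column"][0]

    print(fast)   # may overshoot to 1.0000000000000002 with the default parser
    print(exact)  # 1.0, the double nearest to the decimal literal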

sprintf(buf, "%.20g", x) // how large should buf be?

Submitted by 隐身守侯 on 2019-12-10 10:12:56
Question: I am converting double values to strings like this:

    std::string conv(double x) {
        char buf[30];
        sprintf(buf, "%.20g", x);
        return buf;
    }

I have hardcoded the buffer size to 30, but I am not sure if this is large enough for all cases. How can I find out the maximum buffer size I need? Does the precision get higher (and therefore the buffer need to grow) when switching from 32-bit to 64-bit? PS: I cannot use ostringstream or boost::lexical_cast for performance reasons (see this). Answer 1: I have hardcoded the
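
As a sanity check on the hardcoded 30: the longest %.20g rendering of an IEEE 754 double is a sign, 20 significant digits, a decimal point, and a five-character exponent field such as e-324 for subnormals, for 27 characters plus the terminating NUL. The size depends on the floating-point format, not on the pointer width of the platform. A quick probe in Python (this checks the printf-style format generically, not any particular C library):

    # Probe the length of "%.20g" on extreme double values.
    candidates = [
        -1.7976931348623157e308,   # most negative finite double
        -2.2250738585072014e-308,  # negative smallest normal
        -5e-324,                   # negative smallest subnormal
    ]
    for x in candidates:
        s = "%.20g" % x
        print(len(s), s)
    # The longest is 27 characters, so char buf[30] leaves room for the NUL.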

Does std::hash guarantee equal hashes for “equal” floating point numbers?

Submitted by 女生的网名这么多〃 on 2019-12-10 04:35:30
Question: Is the floating-point specialisation of std::hash (say, for doubles or floats) reliable regarding almost-equality? That is, if two values (such as (1./std::sqrt(5.)/std::sqrt(5.)) and .2) should compare equal but will not do so with the == operator, how will std::hash behave? So, can I rely on a double as an std::unordered_map key to work as expected? I have seen "Hashing floating point values", but that asks about Boost; I'm asking about the C++11 guarantees. Answer 1: std::hash has same
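
The answer is truncated, but the general contract can be stated safely: a hash function must give equal hashes for values that compare equal, and it promises nothing for values that are merely close. Python's built-in hash obeys the same contract, which makes for a quick illustration of why almost-equal keys do not collide:

    import math

    a = 1.0 / math.sqrt(5.0) / math.sqrt(5.0)  # mathematically 0.2
    b = 0.2

    print(a == b)              # False: the computed value is off by an ulp or so
    print(hash(a) == hash(b))  # almost certainly False: unequal doubles hash freely

So an unordered_map keyed on double works, but only for keys that are bit-for-bit equal; lookups with recomputed, slightly different values will miss.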

PI and accuracy of a floating-point number

Submitted by 倖福魔咒の on 2019-12-10 02:19:23
Question: A single/double/extended-precision floating-point representation of Pi is accurate up to how many decimal places? Answer 1:

    #include <stdio.h>

    #define E_PI 3.1415926535897932384626433832795028841971693993751058209749445923078164062

    int main(int argc, char** argv) {
        long double pild = E_PI;
        double pid = pild;
        float pif = pid;
        printf("%s\n%1.80f\n%1.80f\n%1.80Lf\n",
               "3.14159265358979323846264338327950288419716939937510582097494459230781640628620899",
               pif, pid, pild);
        return 0;
    }

Results: [quassnoi #
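
The quoted output is cut off, but the rule of thumb the program demonstrates is that an IEEE 754 single carries about 7 significant decimal digits (24-bit significand), a double about 15-16 (53 bits), and x87 extended precision about 18-19 (64 bits). A quick Python check of the first two against the digits of Pi (numpy assumed for the 32-bit type):

    import numpy as np

    PI_DIGITS = "3.14159265358979323846264338327950288419716939937510"

    def matching_chars(value, reference):
        # Count leading characters of the decimal expansion that agree.
        s = f"{value:.50f}"
        n = 0
        for a, b in zip(s, reference):
            if a != b:
                break
            n += 1
        return n

    print(matching_chars(np.float32(np.pi), PI_DIGITS))  # 8 chars: ~7 digits
    print(matching_chars(np.float64(np.pi), PI_DIGITS))  # 17 chars: ~16 digits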

C# wrong subtraction? 12.345 - 12 = 0.345000000000001 [closed]

Submitted by 会有一股神秘感。 on 2019-12-10 02:15:59
Question: I am a beginner in C# and I am working with floating-point numbers. I need to do subtraction between these two numbers, but it does not work. I know it is
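
The question text is truncated, but the behavior in the title is not a C# bug: 12.345 has no exact binary representation, so the stored double is slightly above 12.345, and the excess surfaces once the exactly representable 12 is subtracted. The same happens in any IEEE 754 language; a Python illustration with the usual decimal-type fix:

    from decimal import Decimal

    print(12.345 - 12)                        # roughly 0.34500000000000064
    print(f"{12.345:.60f}")                   # the double actually stored
    print(Decimal("12.345") - Decimal("12"))  # 0.345 exactly

In C# the analogous fix is the decimal type, which the first question in this collection also uses.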

Precision loss from float to double, and from double to float?

Submitted by 笑着哭i on 2019-12-10 01:54:46
Question:

    float fv = original_value; // original_value may be any float value
    ...
    double dv = (double)fv;
    ...
    fv = (float)dv;

SHOULD fv be equal to original_value exactly? Might any precision be lost? Answer 1: SHOULD fv be equal to original_value exactly? Might any precision be lost? Yes, provided the value of dv did not change in between. From section 6.3.1.5 (Real floating types) of the C99 spec: When a float is promoted to double or long double, or a double is promoted to long double, its value is unchanged
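
Since every float32 value is exactly representable as a float64, the round trip is lossless; this is easy to spot-check over random bit patterns with numpy (a sketch, separate from the quoted C99 answer):

    import numpy as np

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2**32, size=1_000_000, dtype=np.uint32)
    fv = bits.view(np.float32)   # arbitrary float32 bit patterns
    fv = fv[np.isfinite(fv)]     # drop NaNs, since NaN != NaN would foil ==

    dv = fv.astype(np.float64)   # widening conversion: exact by the standard
    back = dv.astype(np.float32) # narrowing back to the original width

    print(np.array_equal(fv, back))  # True: nothing was lost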

How to divide tiny double precision numbers correctly without precision errors?

Submitted by 被刻印的时光 ゝ on 2019-12-10 01:42:07
Question: I'm trying to diagnose and fix a bug which boils down to X/Y yielding an unstable result when X and Y are small: in this case, both cx and patharea increase smoothly. Their ratio is a smooth asymptote at high numbers, but erratic for "small" numbers. The obvious first thought is that we're reaching the limit of floating-point accuracy, but the actual numbers themselves are nowhere near it. ActionScript "Number" types are IEEE 754 double-precision floats, so they should have 15 decimal digits of
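
The entry ends before its answer, so the following is a general diagnosis rather than the accepted one: IEEE 754 division is itself correctly rounded, so an erratic ratio usually means the operands already carry error, and a fixed absolute error (for example, from subtracting nearly equal coordinates while accumulating patharea) becomes a large relative error as the operands shrink. A small numeric sketch of that amplification, with illustrative numbers not taken from the question:

    # x and y are "true" values contaminated by a fixed absolute error eps,
    # as happens when they are differences of nearly equal larger numbers.
    eps = 1e-12

    for scale in (1e-2, 1e-6, 1e-10):
        x, y = 3.0 * scale, 2.0 * scale
        noisy = (x + eps) / (y + eps)
        print(scale, noisy, abs(noisy - 1.5) / 1.5)
    # The ratio should always be 1.5; its relative error grows as the
    # operands approach the size of their own absolute error.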