precision

Testing for floating-point value equality: Is there a standard name for the “precision” constant?

Submitted by 别说谁变了你拦得住时间么 on 2019-12-25 02:44:27

Question: I just read this nice answer on how to compare floating-point values for equality. The following (slightly modified by me) is suggested instead of a straightforward comparison to 0:

```csharp
const double epsilon = 1e-5;
double d = ...;
if (Math.Abs(d) < epsilon) {
    // d is considered equal to 0.
}
```

My question is about the name of the variable epsilon. Is "epsilon" the generally agreed-upon name for specifying the precision of floating-point numbers? (…which is the smallest discriminating…
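In Python terms (a sketch, not the C# original), the same idea looks like this. Note that machine epsilon (`sys.float_info.epsilon`) is a different constant from a hand-picked tolerance, which is one reason "epsilon" can be a misleading variable name:

```python
import math
import sys

# Machine epsilon: the gap between 1.0 and the next representable double.
# This is usually NOT the right tolerance for comparisons against 0.
print(sys.float_info.epsilon)  # 2.220446049250313e-16

# An absolute tolerance, as in the question's snippet:
tolerance = 1e-5
d = 1e-7
is_zero = abs(d) < tolerance   # True: d is "close enough" to 0

# For comparing two nonzero values, a relative tolerance scales with magnitude:
a, b = 1e9, 1e9 + 1.0
close_abs = math.isclose(a, b, rel_tol=0.0, abs_tol=1e-5)  # False: |a-b| = 1.0
close_rel = math.isclose(a, b, rel_tol=1e-8)               # True: 1.0 <= 1e-8 * 1e9
```

So "epsilon" strictly names the machine constant, while a chosen comparison threshold is better called a tolerance.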

Why does this happen with a random matrix whose rows all sum to 1?

Submitted by 给你一囗甜甜゛ on 2019-12-25 02:25:49

Question: I would like to generate a random matrix whose rows sum up to one. I found this question and its answers, "How to create a random matrix such that all rows sum to 1", which do exactly what I want. The problem is that when I do it, sometimes the sum is not exactly one.

```matlab
mat = rand(2, 2);
rowsum = sum(mat, 2);
mat = bsxfun(@rdivide, mat, rowsum);
```

Sometimes I get something like this:

```matlab
sum_test = sum(mat, 2) - 1

   1.0e-15 *

        0
  -0.2220
```

I do not know why.

Answer 1: MATLAB uses double-precision numbers, which means there…
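The same effect is easy to reproduce outside MATLAB; a NumPy sketch (assuming NumPy is available) shows the rows summing to 1 only up to double-precision rounding:

```python
import numpy as np

rng = np.random.default_rng(0)
mat = rng.random((2, 2))
# Divide each row by its sum -- the NumPy equivalent of bsxfun(@rdivide, ...).
mat = mat / mat.sum(axis=1, keepdims=True)

# Each row sums to 1 only up to rounding: the residual is on the order of
# machine epsilon (~2.2e-16), not necessarily exactly zero.
residual = mat.sum(axis=1) - 1.0
print(residual)
```

The division rounds each entry to the nearest double, so the renormalized row sums can miss 1 by a few ulps, exactly as in the MATLAB output above.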

How to remove the .0 when reading CSV with Pandas

Submitted by 走远了吗. on 2019-12-25 01:27:24

Question: I have a CSV file I'm reading into a pandas DataFrame. None of the numbers has any decimal places, but as soon as I read the file into the DataFrame, a trailing zero and a decimal point are added: 1205 becomes 1205.0. How do I get rid of the .0 during pd.read_csv? I know I can drop the .0 after it has been read into the DataFrame, but I really need it not to happen at all. I have tried float_precision='round_trip', and I have tried to force the dtype during read_csv. Some of the code I tried: df = pd…
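One commonly suggested fix, sketched below under the assumption that the .0 comes from missing values forcing the column to float64, is pandas' nullable Int64 dtype:

```python
import io
import pandas as pd

# Hypothetical data: the "id" column has a missing entry in the second row.
csv = "id,value\n1205,7\n,8\n"

# Plain read: the missing value forces float64, so 1205 becomes 1205.0.
df_float = pd.read_csv(io.StringIO(csv))
print(df_float["id"].dtype)   # float64

# The nullable integer dtype keeps integers integral even with missing values.
df_int = pd.read_csv(io.StringIO(csv), dtype={"id": "Int64"})
print(df_int["id"].dtype)     # Int64
print(df_int["id"].iloc[0])   # 1205
```

If the column has no missing values at all, `dtype={"id": "int64"}` during read_csv also works; it only fails once a NaN appears, which is when "Int64" (capital I) earns its keep.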

R numeric to char precision loss

Submitted by 不打扰是莪最后的温柔 on 2019-12-25 01:11:39

Question: I want to convert my many-digit numeric vector to character. I tried the solutions from here, which work for one number but not for a vector. This is OK:

```r
options(digits=20)
options(scipen=99999)
x <- 129483.19999999999709
format(round(x, 12), nsmall = 12)
[1] "129483.199999999997"
```

But this is not. How can I keep numeric precision in characters for numeric vectors?

```r
y <- c(129483.19999999999709, 1.3546746874, 687676846.2546746464)
```

Especially problematic is 687676846.2546746464. Also tried: …
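Although the question is about R, the underlying limit is the double type itself, not the formatting functions; a short Python illustration of the same roughly 15-17 significant-digit ceiling:

```python
# A double carries about 15-17 significant decimal digits in total, so a value
# like 687676846.2546746464 (19 significant digits) cannot be stored exactly;
# converting to text only reveals what the double actually holds.
x = 687676846.2546746464
s = repr(x)                      # shortest string that round-trips to the same double
print(s)
print(float(s) == x)             # True: the string is faithful to the stored value
print(len(s.replace(".", "")))   # at most 17 significant digits survive
```

The "loss" therefore happens at the literal, before any as.character/format call runs, and no string conversion can recover digits the double never held.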

Large float and double numbers in Java printing/persisting incorrectly. Is this behavior due to the number of significant digits?

Submitted by 戏子无情 on 2019-12-25 00:37:18

Question: In an application I am working on, some numbers get converted and saved from long (18 digits) to float/double. These numbers are reference/ID values and are not used for calculations. Recently I noticed some discrepancies in the data being stored as float/double. I am trying to understand whether the behavior is due to what floating-point numbers call significant digits, and I'm hoping for a simple explanation. My questions, based on the program below, are: Output no. 5 shows a really big number (39 digits before…
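A minimal sketch of the cutoff, written in Python (whose float is the same IEEE-754 double as Java's; Java's 32-bit float is far worse, with only ~7 significant digits):

```python
# A double has a 53-bit mantissa, about 15-17 decimal digits. An 18-digit
# reference id therefore cannot survive a round trip through double.
ref_id = 123456789012345678          # 18-digit id (illustrative value)
as_double = float(ref_id)
print(int(as_double))                # no longer the original id

# 2**53 is the exact cutoff: every integer up to it is representable...
print(float(2**53) == 2**53)         # True
# ...but immediately above it, neighbouring integers collapse to one double.
print(float(2**53 + 1) == float(2**53))  # True
```

This is why ids should stay in long (or a string/decimal type): the corruption happens at the moment of conversion, not when printing or persisting.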

Transform float to int so that every piece of information is preserved, OR: how to get the length of a long float

Submitted by 不想你离开。 on 2019-12-24 18:02:47

Question: In Python, I receive two floats, e.g.

```python
a = 0.123456
b = 0.012340
```

and a precision p = 0.000001 that tells me there will be no digits beyond the 6th decimal place of the floats. I want to transform the floats into two integers, so that all the information they carry is represented in the integers:

```python
int_a = 123456
int_b = 12340
```

The solution in this case is obviously to multiply them by 1000000, but I can't figure out a smart way to get there. I tried the workaround of getting the number of digits in p via: len…
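One straightforward approach is to divide by p and round; a sketch, where round() absorbs the representation error of values like 0.123456, which are not exact doubles:

```python
# Given a precision p that bounds the number of decimals, each float maps to
# an integer by dividing by p and rounding. Dividing by p = 1e-6 is the same
# as multiplying by 1_000_000, but works for any p, not just powers of ten.
a, b = 0.123456, 0.012340
p = 0.000001

int_a = round(a / p)
int_b = round(b / p)
print(int_a, int_b)   # 123456 12340
```

Plain int(a / p) would occasionally be off by one, because a / p can land a hair below the true integer (e.g. 123455.99999999999); rounding instead of truncating is what makes the mapping exact under the stated precision guarantee.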

Coverity finding: Not restoring ostream format (STREAM_FORMAT_STATE)

Submitted by 試著忘記壹切 on 2019-12-24 16:42:52

Question: We are seeing a Coverity finding, CID 156014: Not restoring ostream format (STREAM_FORMAT_STATE) (text below; image at the end).

```
938   const std::streamsize oldp = cout.precision(6);
   5. format_changed: setf changes the format state of std::cout for category floatfield.
939   const std::ios::fmtflags oldf = cout.setf(std::ios::fixed, std::ios::floatfield);
940   cout << " Maurer Randomness Test returned value " << mv << endl;
   6. format_changed: precision changes the format state of std::cout for…
```
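Coverity is complaining that cout's precision and floatfield flags are changed but never restored; the usual C++ fix is to restore the saved oldp/oldf values (or use a scoped saver object). The same save-and-restore pattern exists in Python's decimal module, where a precision change can be scoped with localcontext — a cross-language analogy for the idea, not the C++ fix itself:

```python
from decimal import Decimal, getcontext, localcontext

default_prec = getcontext().prec       # 28 by default

with localcontext() as ctx:
    ctx.prec = 6                       # like cout.precision(6)...
    print(Decimal(1) / Decimal(7))     # printed with 6 significant digits

# ...but the change is scoped: leaving the block restores the old precision,
# which is exactly what Coverity wants done for std::cout's format state.
print(getcontext().prec == default_prec)  # True
```

In C++ the scoped equivalent is an RAII guard whose destructor reapplies the saved precision and flags, so every exit path (including exceptions) restores the stream.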

Actual long double precision does not agree with std::numeric_limits

Submitted by 情到浓时终转凉″ on 2019-12-24 16:01:37

Question: Working on Mac OS X 10.6.2 (Intel), with i686-apple-darwin10-g++-4.2.1 and compiling with the -arch x86_64 flag, I just noticed that while

```cpp
std::numeric_limits<long double>::max_exponent10  // = 4932
```

...is as expected, when a long double is actually set to a value with an exponent greater than 308 it becomes inf; i.e., in reality it only has 64-bit precision instead of 80-bit. Also, sizeof() shows long doubles to be 16 bytes, which they should be. Finally, using <limits.h> gives the same results…
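NumPy exposes the platform's C long double as np.longdouble, so the same mismatch can be probed from Python; a sketch whose printed values are platform-dependent (the comments assume x86 extended precision — on some platforms np.longdouble is just float64):

```python
import numpy as np

ld = np.finfo(np.longdouble)
d = np.finfo(np.float64)

# Where extended precision is real, longdouble's binary exponent range is
# 16384 vs float64's 1024 (matching max_exponent10 of 4932 vs 308).
print(ld.maxexp, d.maxexp)

# The storage size can still be 16 bytes due to padding, even though only
# 10 of those bytes (80 bits) carry the extended-precision value.
print(np.longdouble(0).itemsize)
```

A common cause of the symptom in the question is a literal like 1e400 being parsed as a plain double (hence inf) before it ever reaches the long double variable, so an L suffix on the literal matters in C/C++.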

Comparing a double and int, without casting or conversion

Submitted by 给你一囗甜甜゛ on 2019-12-24 14:16:12

Question: In one of our C++ modules we have an expression-evaluation language.

```cpp
#define DECLARE_REL_EVAL(NAME, OP) \
EVDataElement NAME::eval( const EvalContext &ec, \
                          const bool recursiveFlag, \
                          EVEvaluatorTraceFormatter * trace ) \
{ \
    /* EVTimer timer("(DECLARE_REL_EVAL)","eval","*", "", 1,1, 3); */ \
    EVDataElement val ( \
        (left->eval(ec, recursiveFlag, trace)) \
        OP (right->eval(ec, recursiveFlag, trace)) ); \
    return val; \
}

DECLARE_REL_EVAL(oLT,<)
DECLARE_REL_EVAL(oLE,<=)
DECLARE_REL_EVAL(oGT,>)
DECLARE_REL_EVAL(oGE,>=)
```
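The hazard behind the question's title: when C++ compares a long long with a double via these relational operators, the usual arithmetic conversions turn the integer into a double first, which silently loses precision above 2^53. Python, by contrast, compares int and float exactly without converting, which makes the failure mode easy to demonstrate:

```python
big = 2**53 + 1            # not exactly representable as a double

# Converting first (what C++'s usual arithmetic conversions do) loses the +1:
print(float(big) == float(2**53))   # True -- the two "doubles" collide

# Python's mixed int/float comparison is exact, so the values still differ:
print(big == 2.0**53)               # False
print(big > 2.0**53)                # True
```

Comparing a double and a large integer without precision loss in C++ therefore needs an exact strategy (e.g. splitting the double into integral and fractional parts, or widening both sides), not a plain cast.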

Approximation of arcsin in C

Submitted by 寵の児 on 2019-12-24 12:12:08

Question: I've got a program that calculates an approximation of arcsin based on its Taylor series. My friend and I have come up with an algorithm that returns almost the "right" values, but I don't think we've done it very crisply. Take a look:

```c
double my_asin(double x) {
    double a = 0;
    int i = 0;
    double sum = 0;
    a = x;
    for (i = 1; i < 23500; i++) {
        sum += a;
        a = next(a, x, i);
    }
}

double next(double a, double x, int i) {
    return a*((my_pow(2*i-1, 2)) / ((2*i)*(2*i+1)*my_pow(x, 2)…
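The series being approximated is asin(x) = Σ C(2n,n) x^(2n+1) / (4^n (2n+1)), and each term relates to the previous one by a simple ratio, so no pow-style helper is needed at all. A corrected sketch of that recurrence in Python (my_pow and the 23500-iteration count from the original are replaced; the key fixes are multiplying by x², not dividing, and returning the accumulated sum):

```python
import math

def my_asin(x, terms=200):
    """Taylor-series arcsin, valid for |x| <= 1 (converges slowly near 1)."""
    term = x          # the n = 0 term is x itself
    total = 0.0
    for n in range(terms):
        total += term
        # term_{n+1} = term_n * x^2 * (2n+1)^2 / ((2n+2)(2n+3))
        term *= x * x * (2 * n + 1) ** 2 / ((2 * n + 2) * (2 * n + 3))
    return total

print(abs(my_asin(0.5) - math.asin(0.5)) < 1e-12)  # True
```

Because the term ratio tends to x², a couple of hundred terms is ample for |x| well below 1; near |x| = 1 the ratio approaches 1 and an identity such as asin(x) = π/2 − asin(√(1−x²)) converges much faster.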