precision

Floating point exception when reading real values from an input file

Submitted by 六月ゝ 毕业季﹏ on 2019-12-20 02:56:23

Question: I am trying to read a float value from an input file in Fortran. To do so I use this code:

    ...
    INTEGER :: nf
    REAL :: re
    OPEN(newunit=nf, file='toto.txt')
    READ(unit=nf, fmt=*) re
    ...

with toto.txt a text file containing my real value:

    10.1001 ! this value is supposed to be read by the Fortran program

If I compile and execute like this, everything works well. But I run into trouble when I compile and execute with the fpe option: I get an error at the READ line that looks like "Program received …"
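The original question concerns gfortran's -ffpe-trap option. As a rough C++ analogue (a sketch, assuming glibc's non-standard feenableexcept(); the file name toto.txt is taken from the question), enabling traps turns otherwise-silent IEEE exception flags raised during input conversion into a fatal SIGFPE, which is one plausible reading of the crash:

```cpp
#include <cfenv>    // feenableexcept() is a glibc extension, not ISO C++
#include <cstdio>

int main() {
    // Roughly what gfortran's -ffpe-trap=invalid,zero,overflow does:
    // turn these IEEE exception flags into a fatal SIGFPE when raised.
    feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);

    std::FILE *f = std::fopen("toto.txt", "r");
    float re = 0.0f;
    if (f && std::fscanf(f, "%f", &re) == 1)  // parsing may set FP flags
        std::printf("read: %f\n", re);
    if (f) std::fclose(f);
    return 0;
}
```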

Mysterious behaviour of seq and == operator. A precision issue?

Submitted by 早过忘川 on 2019-12-20 01:37:31

Question: I've come across a somewhat weird (or just unexpected?) behaviour of the function seq. When creating a simple sequence, some values cannot be matched correctly with the == operator. See this minimal example:

    my.seq <- seq(0, 0.4, len = 5)
    table(my.seq)           # ok! returns
    # 0 0.1 0.2 0.3 0.4
    # 1   1   1   1   1
    which(my.seq == 0.2)    # ok! returns 3
    which(my.seq == 0.3)    # !!! returns integer(0)

When creating my sequence manually, it seems to work, though:

    my.seq2 <- c(0.00, 0.10, 0.20, 0.30, 0.40)
    which(my…
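The question itself is about R, but the underlying issue is language-independent: a 0.3 produced by stepping in increments of 0.1 is not bit-identical to the literal 0.3, so exact == fails and a tolerance test is needed. A minimal C++ sketch of the same effect:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double x = 0.0;
    for (int i = 0; i < 3; ++i) x += 0.1;   // mimics the 4th element of seq(0, 0.4, len = 5)

    std::printf("x == 0.3    -> %d\n", x == 0.3);               // 0 (false)
    std::printf("|x - 0.3|   -> %.17g\n", std::fabs(x - 0.3));  // ~5.6e-17
    // Compare with a tolerance instead, which is what R's all.equal() does:
    std::printf("within 1e-9 -> %d\n", std::fabs(x - 0.3) < 1e-9);  // 1 (true)
    return 0;
}
```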

Floating-point equality test and extra precision: can this code fail?

Submitted by 泪湿孤枕 on 2019-12-20 01:11:25

Question: The discussion started under my answer to another question. The following code determines the machine epsilon:

    float compute_eps() {
        float eps = 1.0f;
        while (1.0f + eps != 1.0f)
            eps /= 2.0f;
        return eps;
    }

In the comments it was proposed that the 1.0f + eps != 1.0f test might fail because the C++ standard permits the use of extra precision. Although I'm aware that floating-point operations are sometimes performed at a higher precision than the actual types specify, I happen to disagree with …
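A hedged variant of the question's loop, as a sketch: routing each intermediate through a volatile float forces a memory round-trip, discarding any extra (e.g. x87 extended) precision the compiler might otherwise keep in registers, so the comparison cannot be affected by it. The result can be cross-checked against std::numeric_limits:

```cpp
#include <cstdio>
#include <limits>

float compute_eps_strict() {
    volatile float eps = 1.0f;
    volatile float sum = 2.0f;
    while (sum != 1.0f) {         // sum holds a genuine float, never extended
        eps = eps / 2.0f;
        sum = 1.0f + eps;
    }
    return eps * 2.0f;            // last eps that still changed 1.0f
}

int main() {
    std::printf("loop   : %g\n", compute_eps_strict());                     // 1.19209e-07
    std::printf("limits : %g\n", std::numeric_limits<float>::epsilon());    // 1.19209e-07
    return 0;
}
```

Note that the question's original loop returns the first eps for which 1.0f + eps == 1.0f, i.e. half the conventional machine epsilon; the sketch above doubles it back to match the standard definition.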

What is the precision of floating point calculations in Scilab?

Submitted by 余生颓废 on 2019-12-19 11:36:17

Question: Note: I've used the Matlab tag just in case they maintain the same precision. (From what I can tell, both programs are very similar.) As a follow-up to a previous question of mine (here), I'm trying to determine the level of precision I need to set in a C++ program (which I'm currently converting from Scilab code) in order to match the accuracy of the Scilab program, essentially so that both programs will produce the same (or very similar) results. When computing a floating point calculation in Scilab, …
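Scilab stores its numbers as IEEE 754 double precision (its built-in %eps constant is about 2.22e-16), so a plain C++ double should reproduce its arithmetic. A small sketch of the relevant limits on the C++ side:

```cpp
#include <cstdio>
#include <limits>

int main() {
    std::printf("epsilon      : %.17g\n", std::numeric_limits<double>::epsilon());  // 2.22e-16, same as Scilab's %eps
    std::printf("digits10     : %d\n",    std::numeric_limits<double>::digits10);   // 15 guaranteed decimal digits
    std::printf("max_digits10 : %d\n",    std::numeric_limits<double>::max_digits10); // 17 digits round-trip exactly
    // Printing with %.17g round-trips a double exactly, which helps when
    // comparing C++ output against Scilab output digit by digit.
    return 0;
}
```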

Loss of precision on adding doubles?

Submitted by 亡梦爱人 on 2019-12-19 11:21:34

Question: Folks! I've encountered a little problem: I'm doing a simple addition of three double values, and the result appears to have less precision than the values used:

    double minutes = 3;
    minutes = minutes / (24.0*60.0);      // contains 0.00208333
    double hours = 3;
    hours = hours / 24.0;                 // contains 0.125
    double days = 3;                      // contains 3
    double age = days + hours + minutes;  // result is 3.12708

I found no way to avoid this behaviour.

Answer 1: Nothing seems to be wrong with the calculation, as what the comments on your …
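A sketch showing that no precision is actually lost here: the sum is stored to full double precision, and "3.12708" is only the default 6-significant-digit display rounding:

```cpp
#include <cstdio>

int main() {
    double minutes = 3.0 / (24.0 * 60.0);  // 0.00208333...
    double hours   = 3.0 / 24.0;           // 0.125
    double days    = 3.0;
    double age     = days + hours + minutes;

    std::printf("%g\n",    age);   // 3.12708           (default 6 significant digits)
    std::printf("%.17g\n", age);   // 3.1270833333333332 (the full stored value)
    return 0;
}
```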

Writing IEEE 754-1985 double as ASCII on a limited 16 bytes string

Submitted by 牧云@^-^@ on 2019-12-19 09:09:53

Question: This is a follow-up to my original post. But I'll repeat it for clarity: as per the DICOM standard, a type of floating point can be stored using a Value Representation of Decimal String. See Table 6.2-1. DICOM Value Representations:

    Decimal String: A string of characters representing either a fixed point
    number or a floating point number. A fixed point number shall contain only
    the characters 0-9 with an optional leading "+" or "-" and an optional "."
    to mark the decimal point. A floating point …
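A minimal sketch of one common approach (not a DICOM-mandated algorithm; format_ds16 is a hypothetical helper): print with %.*g and reduce the precision until the result fits the 16-character Decimal String budget:

```cpp
#include <cstdio>
#include <cstring>

// Format v into at most 16 characters; returns the length, or -1 if
// even 1 significant digit cannot fit (e.g. a very long exponent).
int format_ds16(double v, char out[17]) {
    for (int prec = 16; prec >= 1; --prec) {
        int n = std::snprintf(out, 17, "%.*g", prec, v);
        if (n > 0 && n <= 16)
            return n;          // fits in 16 bytes (plus terminating NUL)
    }
    return -1;
}

int main() {
    char buf[17];
    if (format_ds16(-1.2345678901234567e-100, buf) > 0)
        std::printf("%s (%zu chars)\n", buf, std::strlen(buf));  // -1.23456789e-100 (16 chars)
    return 0;
}
```

The trade-off is that shrinking the precision silently discards trailing digits; whether that loss is acceptable depends on how much accuracy the stored value actually needs to round-trip.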

How to actually avoid floating point errors when you need to use float?

Submitted by 牧云@^-^@ on 2019-12-19 08:18:17

Question: I am trying to affect the translation of a 3D model using some UI buttons to shift the position by 0.1 or -0.1. My model's position is a three-dimensional float, so simply adding 0.1f to one of the values causes obvious rounding errors. While I can use something like BigDecimal to retain precision, I still have to convert it from a float and back to a float at the end, and it always results in silly numbers that are making my UI look like a mess. I could just prettify the displayed values, but the …
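A hedged sketch of the usual fix (the Axis struct is illustrative, not from the question): never accumulate 0.1f repeatedly; keep the position as an integer count of tenths and derive both the float and the display string from that single source of truth:

```cpp
#include <cstdio>

struct Axis {
    int tenths = 0;                          // exact integer state
    void shift(int dir) { tenths += dir; }   // +1 or -1 per button press
    float value() const { return tenths / 10.0f; }  // one rounding, not an accumulation
};

int main() {
    Axis x;
    for (int i = 0; i < 7; ++i) x.shift(+1);
    std::printf("%.1f\n", x.value());        // prints 0.7: no 0.70000005-style drift
    return 0;
}
```

Each displayed value then carries at most one rounding error (the single division), instead of the growing error of many chained additions.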

How can an Oracle NUMBER have a Scale larger than the Precision?

Submitted by 守給你的承諾、 on 2019-12-19 08:17:09

Question: The documentation states: "Precision can range from 1 to 38. Scale can range from -84 to 127". How can the scale be larger than the precision? Shouldn't the scale range from -38 to 38?

Answer 1: The question could be: why not? Try the following SQL:

    select cast(0.0001 as number(2,5)) num,
           to_char(cast(0.0001 as number(2,5))) cnum,
           dump(cast(0.0001 as number(2,5))) dmp
    from dual;

What you see is that you can hold small numbers in that sort of structure. It might not be required very often, but I'm …
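A rough C++ model of what NUMBER(2,5) means (to_number here is a hypothetical helper, not Oracle's internal format): at most 2 significant digits, all of them sitting 5 places to the right of the decimal point, so scale > precision simply describes very small numbers:

```cpp
#include <cmath>
#include <cstdio>

// Round v to `scale` decimal places, then check that the result needs no
// more than `precision` significant digits; returns NAN if it won't fit.
double to_number(double v, int precision, int scale) {
    double scaled = std::round(v * std::pow(10.0, scale));   // integer mantissa
    if (std::fabs(scaled) >= std::pow(10.0, precision))
        return NAN;                                          // too many digits
    return scaled / std::pow(10.0, scale);
}

int main() {
    std::printf("%.5f\n", to_number(0.0001,  2, 5));  // 0.00010 -> fits (mantissa 10)
    std::printf("%.5f\n", to_number(0.00012, 2, 5));  // 0.00012 -> fits (mantissa 12)
    std::printf("%.5f\n", to_number(0.001,   2, 5));  // nan: mantissa 100 needs 3 digits
    return 0;
}
```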

C# loss of precision when dividing doubles

Submitted by 陌路散爱 on 2019-12-19 07:39:10

Question: I know this has been discussed time and time again, but I can't seem to get even the simplest example of a one-step division of doubles to produce the expected, unrounded outcome in C#, so I'm wondering whether perhaps there's e.g. some compiler flag or something else strange I'm not thinking of. Consider this example:

    double v1 = 0.7;
    double v2 = 0.025;
    double result = v1 / v2;

When I break after the last line and examine it in the VS debugger, the value of "result" is 27.999999999999996. I …
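The same division sketched in C++ (the question is C#, but IEEE 754 doubles behave identically in both): neither 0.7 nor 0.025 is exactly representable in binary, so the quotient lands just below 28 regardless of compiler flags:

```cpp
#include <cstdio>

int main() {
    double v1 = 0.7;
    double v2 = 0.025;
    double result = v1 / v2;

    std::printf("%.17g\n", v1);      // 0.69999999999999996  (nearest double to 0.7)
    std::printf("%.17g\n", v2);      // 0.025000000000000001 (nearest double to 0.025)
    std::printf("%.17g\n", result);  // 27.999999999999996
    // Round at display time if 28 is what the UI should show:
    std::printf("%.2f\n", result);   // 28.00
    return 0;
}
```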

tan 45 gives me 0.9999

Submitted by 荒凉一梦 on 2019-12-19 06:37:10

Question: Why does tan 45 (0.7853981633974483 in radians) give me 0.9999…? What's wrong with the following code?

    System.out.println(Math.tan(Math.toRadians(45.0)));

I don't think there's any typo in here. So what's the solution?

Answer 1: Floating point calculations will often lead to such inaccuracies. The problem is that numbers cannot be represented accurately within a fixed number of bits. To give you another example (in decimal), we all agree that 3 * (1/3) = 1. However, if your calculator only …
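The same computation sketched in C++ (the question uses Java's Math.tan; M_PI is a POSIX/GCC convenience constant): pi/4 cannot be stored exactly as a double, so tan() is evaluated at a slightly different angle and returns a value just under 1:

```cpp
#include <cmath>    // M_PI is POSIX, not ISO C++; define it yourself if missing
#include <cstdio>

int main() {
    double angle = 45.0 * M_PI / 180.0;       // like toRadians(45): pi is inexact
    std::printf("%.17g\n", std::tan(angle));  // 0.99999999999999989
    // If a clean 1 is wanted, round to the precision the output actually needs:
    std::printf("%.6f\n",  std::tan(angle));  // 1.000000
    return 0;
}
```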