precision

How accurate is “double-precision floating-point format”?

Submitted by 老子叫甜甜 on 2020-01-06 05:25:32
Question: Let's say, using Java, I declare double number; If I need to use very big or very small values, how accurate can they be? I tried to read how doubles and floats work, but I don't really get it. For my term project in intro to programming, I might need to use numbers spanning many orders of magnitude. Say I create a while loop: while (number[i-1] - number[i] > ERROR) { // does stuff } Does the limit on ERROR depend on the size of number[i]? If so, how can I…
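
The spacing between adjacent doubles grows with their magnitude, so a fixed absolute ERROR behaves very differently at different scales. A minimal Java sketch (class and variable names chosen here for illustration) that prints the spacing at a few magnitudes and then compares with a relative tolerance instead of an absolute one:

    public class RelativeTolerance {
        public static void main(String[] args) {
            // Math.ulp(x) is the gap between x and the next representable double:
            System.out.println(Math.ulp(1.0));     // ~2.2e-16
            System.out.println(Math.ulp(1.0e10));  // ~1.9e-6
            System.out.println(Math.ulp(1.0e-10)); // ~1.3e-26

            // A tolerance scaled to the operands works across magnitudes:
            double a = 1.0e10, b = 1.0e10 + 0.001;
            double relError = 1e-9;
            boolean nearlyEqual =
                Math.abs(a - b) <= relError * Math.max(Math.abs(a), Math.abs(b));
            System.out.println(nearlyEqual);       // true: 0.001 is tiny relative to 1e10
        }
    }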

Exact representation of integers in floating points

Submitted by こ雲淡風輕ζ on 2020-01-05 05:19:35
Question: I am trying to understand the representation of integers in floating-point format. Since the IEEE floating-point format has only 23 bits for the mantissa, I expect any integer greater than 1<<22 to have only an approximate representation. That is not what I observe with g++: both of the cout statements below print the same value, 33554432. Since the mantissa is what determines the precision, how can we represent (store) exactly a number that needs more than 23 bits to be…
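
The value 33554432 is 2^25: although it exceeds 2^24, it is a power of two, so its trailing bits are zero and the 24 significant bits of a float (23 stored plus one implicit) hold it exactly; 2^25 + 1, by contrast, cannot be stored. A small Java sketch of the same effect (class name is just for illustration):

    public class FloatIntegers {
        public static void main(String[] args) {
            // Powers of two stay exact far beyond 2^24 -- only one significant bit is needed:
            float exact = 1 << 25;                       // 33554432
            System.out.println((long) exact);            // 33554432

            // 2^25 + 1 needs 26 significant bits, so it rounds to the nearest float:
            float rounded = 33554433;
            System.out.println((long) rounded);          // 33554432

            // The gap between adjacent floats at this magnitude is already 4:
            System.out.println(Math.ulp(33554432.0f));   // 4.0
        }
    }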

Is IEEE 754 floating point representation wasting memory?

Submitted by 爷,独闯天下 on 2020-01-05 04:35:29
Question: I always thought that a variable of type double can store 2^64 different fractional values (each bit can be either 1 or 0, so 2^64 different values). Recently I came to know that NaN (not a number) has a representation in which the exponent part is 11111111111 and the significand part is any non-zero value. What if, instead, a value were NaN only when the exponent part is 11111111111 and the significand part is 111111… (52 ones)? Won't this allow us to…
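
IEEE 754 does reserve every pattern with an all-ones exponent and a non-zero significand for NaN, so about 2^53 bit patterns (counting both signs) all mean "not a number"; the significand bits are not simply wasted, though, since they distinguish quiet from signaling NaNs and can carry a diagnostic payload. A short Java sketch showing that many distinct bit patterns are all NaN (the payload value below is arbitrary):

    public class NanPatterns {
        public static void main(String[] args) {
            // The canonical quiet NaN used by Java:
            long canonical = Double.doubleToRawLongBits(Double.NaN);
            System.out.println(Long.toHexString(canonical));       // 7ff8000000000000

            // Any all-ones exponent with a non-zero fraction is also NaN:
            long arbitraryPayload = 0x7FF0000000000000L | 0x000123456789ABCDL;
            double otherNaN = Double.longBitsToDouble(arbitraryPayload);
            System.out.println(Double.isNaN(otherNaN));             // true
            System.out.println(otherNaN == Double.NaN);             // false: NaN never compares equal
        }
    }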

What is the best field definition to store a .NET decimal into MySQL?

Submitted by 蹲街弑〆低调 on 2020-01-04 06:21:13
Question: I need to store decimals into MySQL, which can have varying precision. Therefore I would be interested to know which MySQL field type is absolutely equivalent to .NET's decimal structure, if any. I plan to use Dapper as a lightweight ORM. Answer 1: The .NET decimal can be built from different data types under the hood:

    .NET format         MySQL
    ----------------------------
    Decimal(Double)     Float
    Decimal(Int32)      DECIMAL
    Decimal(Int32())    DECIMAL
    Decimal(Int64)      DECIMAL
    Decimal(Single)     DECIMAL…
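
On the precision side: a .NET decimal is a sign, a 96-bit integer, and a scale of 0-28, so any value it can hold has at most 29 significant decimal digits. A quick check of that bound, written in Java only to keep the examples in one language (it does not touch MySQL or Dapper):

    import java.math.BigInteger;

    public class DecimalRange {
        public static void main(String[] args) {
            // .NET's decimal stores a 96-bit unsigned integer plus a scale of 0..28,
            // so its largest coefficient is 2^96 - 1:
            BigInteger max = BigInteger.ONE.shiftLeft(96).subtract(BigInteger.ONE);
            System.out.println(max);                      // 79228162514264337593543950335
            System.out.println(max.toString().length());  // 29 -> size the DECIMAL(p, s) column
                                                          //       for 29 significant digits
        }
    }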

Haskell equation solving in the real numbers

Submitted by 岁酱吖の on 2020-01-04 05:29:31
Question: I've just started playing with GHCi. I see that list comprehensions can basically solve an equation within a given set: Prelude> [x | x <- [1..20], x^2 == 4] yields [2] (finds the only root in the range, as expected). Now, why can't I solve equations with results in ℝ, given that the solution is included in the specified range? [x | x <- [0.1,0.2..2.0], x*4 == 2] How can I solve such equations over the set of real numbers? Edit: Sorry, I meant 0.1, of course. Answer 1: As others have mentioned, this is not an efficient way to…
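
The second comprehension comes back empty because the elements of [0.1,0.2..2.0] are binary doubles built by repeated arithmetic on an inexact 0.1, so the element near 0.5 is in practice something like 0.5000000000000001 and x*4 == 2 never holds exactly. A sketch of the underlying issue and the usual workaround (written in Java to keep all the examples in one language; the tolerance value is an illustrative choice):

    public class FloatGrid {
        public static void main(String[] args) {
            // 0.1 has no exact binary representation, so sums of it drift:
            System.out.println(0.1 + 0.2);        // 0.30000000000000004
            System.out.println(0.1 + 0.2 == 0.3); // false

            // When filtering a floating-point grid, use a tolerance rather than ==:
            double eps = 1e-9;
            for (double x = 0.1; x <= 2.0 + eps; x += 0.1) {
                if (Math.abs(x * 4 - 2) < eps) {
                    System.out.println("root near " + x);
                }
            }
        }
    }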

How are these double precision values accurate to 20 decimals?

Submitted by 泪湿孤枕 on 2020-01-04 02:26:07
Question: I am testing some very simple equivalence errors where precision is an issue, and I was hoping to perform the operations in extended double precision (so that I would know the answer to ~19 digits) and then perform the same operations in double precision (where there would be round-off error in the 16th digit), but somehow my double-precision arithmetic is maintaining 19 digits of accuracy. When I perform the operations in extended double, then hardcode the numbers into another Fortran…
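
A plain IEEE double carries 53 significand bits, roughly 15-17 significant decimal digits; any digits printed beyond that are just the decimal expansion of the stored binary value, not extra accuracy, and a common source of apparent 19-20-digit agreement is the compiler evaluating constants or keeping intermediates in 80-bit extended precision. A quick Java illustration of where the printed digits stop tracking the true value:

    public class DoubleDigits {
        public static void main(String[] args) {
            // Print the double closest to pi with 20 decimal places:
            System.out.printf("%.20f%n", Math.PI);   // 3.14159265358979311600
            // True value of pi to the same length:      3.14159265358979323846...
            // The two agree to about 16 significant digits; the rest merely
            // reflects the exact binary number that happens to be stored.
        }
    }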

Unprecise rendering of huge WPF visuals - any solutions?

Submitted by 左心房为你撑大大i on 2020-01-03 17:19:32
Question: When rendering huge visuals in WPF, the visual gets more and more distorted as the coordinates increase. I assume it has something to do with the floating-point data types used in the render pipeline, but I'm not completely sure. Either way, I'm searching for a practical solution to the problem. To demonstrate what I'm talking about, I created a sample application which just contains a custom control, embedded in a ScrollViewer, that draws a sine curve. You can see here that the…
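
If the render pipeline really does work in single precision, as the question suspects, the loss of sub-pixel resolution at large coordinates is easy to quantify: the gap between adjacent floats grows with magnitude, so small offsets simply vanish. A Java sketch of the effect (WPF itself is not involved here; this only illustrates float spacing):

    public class BigCoordinates {
        public static void main(String[] args) {
            // Gap between adjacent floats at typical "huge canvas" coordinates:
            System.out.println(Math.ulp(1_000f));       // ~6.1e-5 -> fine sub-pixel detail
            System.out.println(Math.ulp(5_000_000f));   // 0.5     -> half-pixel steps at best
            System.out.println(Math.ulp(20_000_000f));  // 2.0     -> whole pixels get lost

            // A 0.1-pixel offset disappears entirely at x = 5,000,000:
            System.out.println(5_000_000f + 0.1f == 5_000_000f);  // true
        }
    }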

MS Access Rounding Precision With Group By

Submitted by 两盒软妹~` on 2020-01-03 15:56:15
Question: Why doesn't the average of an employee's score for each month, when combined, equal the employee's overall average score?

Average:

    SELECT Avg(r.score) AS rawScore
    FROM (ET INNER JOIN Employee AS e ON ET.employeeId = e.id)
    INNER JOIN (Employee AS a INNER JOIN Review AS r ON a.id = r.employeeId) ON ET.id = r.ETId
    WHERE (((e.id)=@employeeId))

Returns 80.737

Average By Month:

    SELECT Avg(r.score) AS rawScore, Format(submitDate, 'mmm yy') AS MonthText, month(r.submitDate) as mm, year…
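
A likely explanation, assuming the months contain different numbers of reviews, is weighting: averaging the monthly averages counts every month equally, while Avg over all rows counts every review equally, and the two only coincide when each month has the same number of reviews (rounding the monthly figures then adds a further small drift). A tiny Java sketch with made-up numbers:

    public class AverageOfAverages {
        public static void main(String[] args) {
            // Hypothetical data: month 1 has ten reviews of 90, month 2 has two reviews of 30.
            double overall = (10 * 90 + 2 * 30) / 12.0;   // 80.0 (every review counts once)
            double monthly = (90.0 + 30.0) / 2.0;         // 60.0 (every month counts once)
            System.out.println(overall + " vs " + monthly);
        }
    }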

Why is -freciprocal-math unsafe in GCC?

Submitted by 吃可爱长大的小学妹 on 2020-01-03 07:32:14
Question: -freciprocal-math in GCC changes the following code double a = b / c; to double tmp = 1/c; double a = b * tmp; The GCC manual says that such an optimization is unsafe and does not adhere to the IEEE standard, but I cannot think of an example. Could you give an example of this? Answer 1: Dividing by 10 and multiplying by 0.1000000000000000055511151231257827021181583404541015625 are not the same thing. Answer 2: Perhaps I am thinking of a different compiler flag, but ... Some processors have…
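
To make Answer 1 concrete: the reciprocal 1/c is itself rounded, and multiplying by that rounded value can differ from a single correctly rounded division. A Java sketch of a classic case (49 is simply a convenient divisor for which the two results differ):

    public class ReciprocalMath {
        public static void main(String[] args) {
            double b = 49.0, c = 49.0;

            // One correctly rounded division:
            System.out.println(b / c);        // 1.0

            // What -freciprocal-math would effectively compute:
            double tmp = 1.0 / c;             // already rounded
            System.out.println(b * tmp);      // 0.9999999999999999
        }
    }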

Dividing a double with integer

Submitted by ↘锁芯ラ on 2020-01-03 05:43:05
Question: I am facing an issue while dividing a double by an int. The code snippet is: double db = 10; int fac = 100; double res = db / fac; The value of res is 0.10000000000000001 instead of 0.10. Does anyone know the reason for this? I am using cc to compile the code. Answer 1: You need to read the classic paper What Every Computer Scientist Should Know About Floating-Point Arithmetic. Answer 2: The CPU uses a binary representation of numbers. Your result cannot be represented exactly in binary. 0.1 in…
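
The division itself is correct; 0.1 simply has no finite binary representation, so res holds the nearest double and a 17-digit printout exposes that. The same thing shown in Java, printing the exact value that is actually stored:

    import java.math.BigDecimal;

    public class TenthStored {
        public static void main(String[] args) {
            double db = 10;
            int fac = 100;
            double res = db / fac;

            System.out.println(res);                  // 0.1 (shortest string that round-trips)
            System.out.printf("%.17f%n", res);        // 0.10000000000000001
            System.out.println(new BigDecimal(res));  // the exact stored value:
            // 0.1000000000000000055511151231257827021181583404541015625
        }
    }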