floating-accuracy

Why are these numbers not equal?

╄→尐↘猪︶ㄣ submitted on 2019-12-25 01:34:50
Question: The following code is obviously wrong. What's the problem?

    i <- 0.1
    i <- i + 0.05
    i
    ## [1] 0.15
    if (i == 0.15) cat("i equals 0.15") else cat("i does not equal 0.15")
    ## i does not equal 0.15

Answer 1: General (language-agnostic) reason: since not all numbers can be represented exactly in IEEE floating-point arithmetic (the standard that almost all computers use to represent decimal numbers and do math with them), you will not always get what you expected. This is especially true because some values
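The same mismatch is easy to reproduce outside R. A minimal Python sketch (not from the original answer, just an illustration of the same IEEE 754 behaviour) showing the stored binary values and the usual tolerance-based comparison:

```python
import math
from decimal import Decimal

i = 0.1 + 0.05
print(i == 0.15)              # False: the two doubles differ in their last bits
print(Decimal(i))             # slightly above 0.15
print(Decimal(0.15))          # slightly below 0.15

# The usual fix: compare with a tolerance rather than exact equality
print(math.isclose(i, 0.15))  # True
```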

Newton Raphson iteration - unable to iterate

谁说胖子不能爱 submitted on 2019-12-24 10:35:19
Question: I am not sure whether this question is on topic here or elsewhere (or not on topic at all anywhere). I have inherited Fortran 90 code that does Newton-Raphson interpolation, where the logarithm of temperature is interpolated against the logarithm of pressure. The interpolation is of the form ln t = a ln(p) + b, where a and b are defined as a = ln(tup/tdwn)/(alogpu - alogpd) and b = ln T - a * ln P. Here is the test program. It is shown only for a single iteration, but the actual program runs over three FOR loops
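As a sketch of the interpolation itself (this is not the inherited Fortran; the variable names tup, tdwn, alogpu, alogpd are simply borrowed from the excerpt), fitting ln t = a·ln p + b through two pressure levels and evaluating it at an intermediate pressure might look like this in Python:

```python
import math

def interp_log_log(p, p_up, p_dn, t_up, t_dn):
    """Interpolate temperature between two levels in ln(t)-vs-ln(p) space."""
    alogpu = math.log(p_up)
    alogpd = math.log(p_dn)
    a = math.log(t_up / t_dn) / (alogpu - alogpd)  # slope of ln t = a*ln p + b
    b = math.log(t_up) - a * alogpu                # intercept from the upper level
    return math.exp(a * math.log(p) + b)

# Hypothetical example: a pressure between the 850 hPa and 700 hPa levels
print(interp_log_log(770.0, 850.0, 700.0, 280.0, 272.0))
```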

Is casting to float destructive?

孤者浪人 submitted on 2019-12-24 09:29:25
Question: In PHP, I know we shouldn't do math on floats without things like bcmath, but is the mere act of casting a string to float destructive? Will expressions like (float)'5.111' == '5.111' always be true? Or will the cast itself change the value to something like 5.1110000000000199837 as the number is converted? The main reason is that, just as I use (int) to escape integer values going into a database, I would like to use (float) in the same way, without having to rely on quotes and my escape function.
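A Python sketch of the same situation (PHP's (float) cast goes through the same IEEE 754 double conversion, so the behaviour is analogous; this is an illustration, not PHP code):

```python
from decimal import Decimal

s = '5.111'
f = float(s)            # nearest IEEE 754 double to 5.111
print(Decimal(f))       # not exactly 5.111: the stored value is a nearby binary fraction
print(str(f) == s)      # True: converting back to a string recovers '5.111'
print(f == 5.111)       # True: the literal 5.111 maps to the same double
```

In Python 3 the round trip works because str() prints the shortest decimal that maps back to the same double; whether a particular PHP build prints the same text depends on its precision setting.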

Columnwise sum of array - two methods, two different results

谁说我不能喝 submitted on 2019-12-24 00:42:30
Question: In this example, the column-wise sum of an array pr is computed in two different ways: (a) take the sum over the first axis using np.sum's axis parameter, and (b) slice the array along the second axis and take the sum of each slice.

    import matplotlib.pyplot as plt
    import numpy as np
    m = 100
    n = 2000
    x = np.random.random_sample((m, n))
    X = np.abs(np.fft.rfft(x)).T
    frq = np.fft.rfftfreq(n)
    total = X.sum(axis=0)
    c = frq @ X / total
    df = frq[:, None] - c
    pr = df * X
    a = np.sum(pr, axis=0)
    b = [np
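A self-contained sketch of the underlying point (hypothetical data, not the FFT example above): floating-point addition is not associative, so two mathematically identical column sums that accumulate in different orders can disagree in the last bits while still agreeing to within rounding error.

```python
import numpy as np

rng = np.random.default_rng(0)
pr = rng.standard_normal((100, 2000))

a = np.sum(pr, axis=0)                                        # vectorized reduction
b = np.array([np.sum(pr[:, j]) for j in range(pr.shape[1])])  # one slice at a time

print(np.array_equal(a, b))   # may be False: the summation order differs
print(np.allclose(a, b))      # True: the differences are at rounding level
print(np.max(np.abs(a - b)))  # typically a few ULPs of the column magnitudes
```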

Checking that floating points arguments are correct

隐身守侯 submitted on 2019-12-23 17:16:37
Question: I want to write a class representing a Markov chain (let's name it MC). It has a constructor which takes the state transition matrix (that is, a vector<vector<double>>). I suppose it is a good idea to check that it is really a matrix (has the same number of rows and columns) and is really a transition matrix: all the numbers in it are probabilities, that is, no less than 0.0 and no greater than 1.0, and for every row the sum of its elements is 1.0. However, there is a problem which arises from
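The row-sum check is where exact floating-point equality bites: a row such as {0.1, 0.2, 0.7} rarely sums to exactly 1.0 in doubles. A sketch of a tolerance-based validator (in Python rather than C++, purely to illustrate the shape of the check):

```python
import math

def is_transition_matrix(m, tol=1e-9):
    """Square, every entry in [0, 1], and every row sums to 1 within a tolerance."""
    n = len(m)
    for row in m:
        if len(row) != n:
            return False
        if any(not (0.0 <= x <= 1.0) for x in row):
            return False
        # sum(row) is itself a rounded result, so compare against 1.0 with a tolerance
        if not math.isclose(sum(row), 1.0, rel_tol=0.0, abs_tol=tol):
            return False
    return True

print(is_transition_matrix([[0.1, 0.2, 0.7],
                            [0.3, 0.3, 0.4],
                            [0.25, 0.25, 0.5]]))  # True, even though 0.1 + 0.2 + 0.7 may not be exactly 1.0
```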

How does Rounding in Python work?

浪子不回头ぞ submitted on 2019-12-23 10:28:11
Question: I am a bit confused about how rounding in Python works. Could someone please explain why Python behaves like this? Example:

    >>> round(0.05, 1)   # this makes sense
    0.1
    >>> round(0.15, 1)   # this doesn't make sense! Why is the result not 0.2?
    0.1

And the same for:

    >>> round(0.25, 1)   # this makes sense
    0.3
    >>> round(0.35, 1)   # in my opinion, should be 0.4 but evaluates to 0.3
    0.3

Edit: So in general, there is a possibility that Python rounds down instead of rounding up. So am I to understand that the only
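The surprise usually disappears once the stored binary values are printed. A small sketch (illustrative, not from the original thread) showing that the double closest to 0.15 is slightly below 0.15, and how to get true decimal half-up rounding with the decimal module:

```python
from decimal import Decimal, ROUND_HALF_UP

# The nearest double to 0.15 is a little below 0.15, so rounding to one
# decimal place is genuinely a round-down, not a broken half-up rule.
print(Decimal(0.15))    # 0.1499999999999999944...
print(round(0.15, 1))   # 0.1

# For exact decimal semantics, keep the value as a Decimal from the start
print(Decimal('0.15').quantize(Decimal('0.1'), rounding=ROUND_HALF_UP))  # 0.2
```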

What does standard say about cmath functions like std::pow, std::log etc?

大憨熊 submitted on 2019-12-23 09:46:46
Question: Does the standard guarantee that these functions return the exact same result across all implementations? Take, for example, pow(float, float) for 32-bit IEEE floats. Is the result identical across all implementations if the same two floats are passed in? Or is there some flexibility the standard allows with regard to tiny differences, depending on the algorithm used to implement pow? Answer 1: No, the C++ standard doesn't require the results of cmath functions to be the same across all implementations
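One way to see that transcendental functions are not pinned down to a single correctly-rounded answer is to compute the same quantity two ways and compare at the ULP level. A Python illustration (standing in for two different pow implementations; not C++):

```python
import math

x, y = 1.1, 2.3
a = math.pow(x, y)               # the platform libm's pow
b = math.exp(y * math.log(x))    # the same value computed another way
print(a, b)
print(a == b)                    # may be False: each route rounds differently
print(abs(a - b) / math.ulp(a))  # the difference measured in ULPs (0, 1, or a few)
```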

Why can't I multiply a float? [duplicate]

给你一囗甜甜゛ submitted on 2019-12-23 08:49:16
Question: This question already has answers here (closed 9 years ago). Possible duplicate: Dealing with accuracy problems in floating-point numbers. I was quite surprised when I tried to multiply a float in C (with GCC 3.2) and it did not do what I expected. As a sample:

    int main() {
        float nb = 3.11f;
        nb *= 10;
        printf("%f\n", nb);
    }

This displays 31.099998. I am curious about the way floats are implemented and why it produces this unexpected behavior. Answer 1: First off, you can multiply floats. The
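The 31.099998 comes from the single-precision storage of 3.11, not from the multiplication. A Python sketch using NumPy's float32 to mimic the C program (assuming NumPy is available; purely illustrative):

```python
import numpy as np

nb = np.float32(3.11)    # nearest 32-bit float to 3.11, a touch below it
print(float(nb))         # roughly 3.1099999, not exactly 3.11
nb = nb * np.float32(10)
print(nb)                # ~31.099998, matching the C output
```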

Ruby number precision with simple arithmetic

时光总嘲笑我的痴心妄想 submitted on 2019-12-23 04:43:59
Question: I'm learning Ruby for fun, and also for creating websites (but that's irrelevant). While playing with it, I noticed something "weird". When I compute 4.21 + 5 with irb, it answers 9.21 (weird, right?); when I compute 4.23 + 5, it gives 9.23 (wow, that's definitely weird); and when I type 4.22 + 5, it answers 9.21999... (w... wait! that's really weird). Hence my question: what's going on? I'd understand this behavior with division or really big numbers, but in this simple case...? Does it mean
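The asymmetry comes from which decimals happen to land close to a representable double. A quick Python sketch (doubles behave the same way as Ruby's Float here) showing the stored values:

```python
from decimal import Decimal

print(4.21 + 5)        # 9.21
print(4.23 + 5)        # 9.23
print(4.22 + 5)        # 9.219999999999999

# 4.22 is stored slightly below 4.22, and the shortfall survives the addition
print(Decimal(4.22))
print(Decimal(4.22 + 5))
```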

Does a floating-point reciprocal always round-trip?

五迷三道 submitted on 2019-12-22 09:23:35
Question: For IEEE 754 arithmetic, is there a guarantee of 0 or 1 units in the last place of accuracy for reciprocals? And from that, is there a guaranteed error bound on the reciprocal of a reciprocal? Answer 1: [Everything below assumes a fixed IEEE 754 binary format, with some form of round-to-nearest as the rounding mode.] Since the reciprocal (computed as 1/x) is a basic arithmetic operation, 1 is exactly representable, and the arithmetic operations are guaranteed correctly rounded by the standard, the
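An empirical check of the round trip is straightforward, even though it does not replace the analysis in the answer. A Python sketch that samples doubles and counts how often 1/(1/x) fails to reproduce x exactly:

```python
import random

random.seed(1)
trials = 1_000_000
mismatches = 0
for _ in range(trials):
    x = random.uniform(1.0, 2.0)
    if 1.0 / (1.0 / x) != x:
        mismatches += 1

# Each division is correctly rounded (error <= 0.5 ulp), yet the two-division
# round trip can still miss by a bit; this counts how often that happens.
print(mismatches, "of", trials, "values did not round-trip exactly")
```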