floating-accuracy

What exactly is the “resolution” parameter of numpy float

╄→гoц情女王★ submitted on 2019-12-18 13:39:10

Question: I am seeking some more understanding about the "resolution" parameter of a numpy float (I guess any computer-defined float, for that matter). Consider the following script:

    import numpy as np
    a = np.finfo(10.1)
    print a

I get an output which, among other things, prints:

    precision=15  resolution= 1.0000000000000001e-15
    max= 1.797(...)e+308  min= -max

The numpy documentation specifies: "resolution: (floating point number of the appropriate type) The approximate decimal resolution of this type, i…
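The values the question quotes can be checked directly; a minimal sketch in Python (assuming numpy's default 64-bit float, which is what `np.finfo(10.1)` resolves to):

```python
import numpy as np

# finfo accepts a dtype or an instance such as 10.1; both give float64 here.
info = np.finfo(np.float64)

# precision is the number of decimal digits the type reliably carries;
# resolution is 10**(-precision), itself stored as the nearest double.
assert info.precision == 15
assert float(info.resolution) == 1e-15

# Printed with 17 significant digits, the stored value of 1e-15 shows the
# rounding behind the "1.0000000000000001e-15" in the output above.
print("%.17g" % float(info.resolution))  # 1.0000000000000001e-15
```

In other words, resolution is not a hard limit on representable magnitudes (subnormals go far smaller); it is simply 10**(-precision), reported in the type's own inexact representation.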

Accuracy of floating point arithmetic

我的未来我决定 submitted on 2019-12-18 07:48:05

Question: I'm having trouble understanding the output of this program:

    int main() {
        double x = 1.8939201459282359e-308;
        double y = 4.9406564584124654e-324;
        printf("%23.16e\n", 1.6*y);
        printf("%23.16e\n", 1.7*y);
        printf("%23.16e\n", 1.8*y);
        printf("%23.16e\n", 1.9*y);
        printf("%23.16e\n", 2.0*y);
        printf("%23.16e\n", x + 1.6*y);
        printf("%23.16e\n", x + 1.7*y);
        printf("%23.16e\n", x + 1.8*y);
        printf("%23.16e\n", x + 1.9*y);
        printf("%23.16e\n", x + 2.0*y);
    }

The output is:

    9.8813129168249309e-324
    9…
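The constants here are special: y is the smallest positive subnormal double (2**-1074), so no double exists between y and 2*y, and every product k*y must round to an integer multiple of y. The same effect, sketched in Python (whose floats are IEEE 754 binary64, like C's double):

```python
# y is the smallest positive subnormal double; products can only round
# to integer multiples of it.
y = 4.9406564584124654e-324

assert 1.6 * y == 2 * y   # 1.6*y is nearer to 2y than to y -> rounds up
assert 1.7 * y == 2 * y   # likewise for 1.7, 1.8, 1.9: all become 2y
assert 1.4 * y == y       # 1.4*y rounds back down to y
```

That is why several of the printf lines in the question print the same value: at this scale the representable numbers are y apart, and distinct real products collapse onto the same double.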

Why don't operations on double-precision values give expected results?

旧巷老猫 submitted on 2019-12-18 06:49:07

Question:

    System.out.println(2.14656);    // 2.14656
    System.out.println(2.14656%2);  // 0.14656000000000002

WTF?

Answer 1: They do give the expected results. Your expectations are incorrect. When you type the double-precision literal 2.14656, what you actually get is the closest double-precision value, which is:

    2.14656000000000002359001882723532617092132568359375

println happens to round this when it prints it out (to 17 significant digits), so you see the nice value that you expect. After the modulus operation…
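The exact decimal the answer quotes can be reproduced; a short check in Python, where Decimal(float) displays the exact value a double stores:

```python
from decimal import Decimal

# The double nearest the literal 2.14656 is slightly above it:
assert str(Decimal(2.14656)) == (
    "2.14656000000000002359001882723532617092132568359375"
)

# Subtracting the exactly representable 2 exposes the low-order error,
# which the shortest round-trip printout no longer hides:
assert repr(2.14656 % 2) == "0.14656000000000002"
```

The modulus itself introduces no error here: it subtracts an exact 2, so the result carries the literal's original rounding error, now visible because the leading digits are gone.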

C fundamentals: double variable not equal to double expression?

孤者浪人 submitted on 2019-12-18 05:55:07

Question: I am working with an array of doubles called indata (on the heap, allocated with malloc), and a local double called sum. I wrote two different functions to compare values in indata, and obtained different results. Eventually I determined that the discrepancy was due to one function using an expression in a conditional test, and the other function using a local variable in the same conditional test. I expected these to be equivalent. My function A uses:

    if (indata[i]+indata[j] > max) hi++;
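A likely explanation (assumption: a build, such as 32-bit x86 with the x87 FPU, that evaluates expressions in 80-bit extended precision) is that the in-expression sum is compared at wider precision, while storing it in a double rounds it to 64 bits first. Python floats are always 64-bit, so the sketch below shows the same principle one level down, with float64 standing in for the wide register and float32 for the stored variable:

```python
import numpy as np

# "Wide" value, standing in for the extended-precision expression result.
wide = np.float64(1.0) / np.float64(3.0)

# Storing it in a narrower variable rounds it, standing in for writing
# the sum into a 64-bit double before the comparison.
stored = np.float32(wide)

# The two code paths then compare different values:
assert np.float64(stored) != wide
```

In C, forcing the store (or compiling with options like gcc's -ffloat-store, or on a target that keeps everything in 64-bit SSE registers) makes the two functions agree.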

Is hardcode float precise if it can be represented by binary format in IEEE 754?

混江龙づ霸主 submitted on 2019-12-18 04:44:12

Question: For example, 0, 0.5, 0.15625, 1, 2, 3… are values converted from IEEE 754. Are their hardcoded versions precise? For example, does

    float a=0;
    if(a==0){ return true; }

always return true? Another example:

    float a=0.5;
    float b=0.25;
    float c=0.125;

Is a * b always equal to 0.125, and a * b == c always true? And one more example:

    int a=123;
    float b=0.5;

Is a * b always 61.5? Or, in general, is an integer multiplied by an IEEE 754 binary float precise? Or a more general question: if the value is hardcode…
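The answer to each case is yes, and it can be confirmed directly; a quick check in Python (IEEE 754 binary64, but the same conclusions hold for C's float as long as every operand and result fits the format):

```python
# Literals whose values fit the binary format exactly stay exact, and so
# do operations whose exact results also fit:
assert 0.5 * 0.25 == 0.125   # products of small powers of two are exact
assert 123 * 0.5 == 61.5     # small int times exactly representable float
a = 0.0
assert a == 0                # 0 is exactly representable

# But most decimal literals are NOT exactly representable, so this holds:
assert 0.1 + 0.2 != 0.3
```

The general rule: a value is exact iff it equals m * 2**e with m and e in range for the format, and an operation is exact iff its mathematical result is again such a value.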

php intval() and floor() return value that is too low?

淺唱寂寞╮ submitted on 2019-12-18 04:39:06

Question: Because the float data type in PHP is inaccurate, and a FLOAT in MySQL takes up more space than an INT (and is inaccurate), I always store prices as INTs, multiplying by 100 before storing to ensure we have exactly 2 decimal places of precision. However I believe PHP is misbehaving. Example code:

    echo "<pre>";
    $price = "1.15";
    echo "Price = ";
    var_dump($price);
    $price_corrected = $price*100;
    echo "Corrected price = ";
    var_dump($price_corrected);
    $price_int = intval(floor($price_corrected));
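The culprit is not intval() or floor() but the multiplication: the double nearest 1.15 is slightly below 1.15, so the product lands just under 115 and flooring truncates it. The same behaviour, sketched in Python (same IEEE 754 doubles as PHP):

```python
import math

price = float("1.15")       # nearest double is a hair below 1.15
corrected = price * 100     # lands just under 115
assert corrected < 115
assert math.floor(corrected) == 114   # floor then truncates to 114

# One robust fix: round to the nearest integer instead of flooring.
assert round(corrected) == 115
```

Rounding (or parsing the decimal string into cents directly, never going through a float) is the standard fix for money values.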

Integer exponentiation in OCaml

僤鯓⒐⒋嵵緔 submitted on 2019-12-17 19:48:30

Question: Is there a function for integer exponentiation in OCaml? ** is only for floats. Although it seems to be mostly accurate, isn't there a possibility of precision errors, something like 2. ** 3. = 8. returning false sometimes? Is there a library function for integer exponentiation? I could write my own, but efficiency concerns come into that, and I'd also be surprised if there isn't such a function already.

Answer 1: Regarding the floating-point part of your question: OCaml calls the underlying…
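The usual hand-rolled replacement is exponentiation by squaring, which needs only O(log n) multiplications; a sketch in Python (an OCaml version is structurally identical):

```python
def ipow(base: int, exp: int) -> int:
    """Integer exponentiation by squaring: O(log exp) multiplications."""
    assert exp >= 0
    result = 1
    while exp:
        if exp & 1:          # odd exponent: fold one factor into the result
            result *= base
        base *= base         # square the base, halve the exponent
        exp >>= 1
    return result

assert ipow(2, 10) == 1024
assert ipow(3, 0) == 1
assert ipow(7, 5) == 16807
```

On the floating-point worry: 2., 3. and 8. are all exactly representable, and any reasonable pow implementation returns the exact 8.0 for such inputs, though IEEE 754 does not strictly require pow to be correctly rounded in general.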

printing float, preserving precision

北城以北 submitted on 2019-12-17 18:35:04

Question: I am writing a program that prints floating-point literals to be used inside another program. How many digits do I need to print in order to preserve the precision of the original float? Since a float has 24 * (log(2) / log(10)) = 7.2247199 decimal digits of precision, my initial thought was that printing 8 digits should be enough. But if I'm unlucky, those 0.2247199 get distributed to the left and to the right of the 7 significant digits, so I should probably print 9 decimal digits. Is my…
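The reasoning matches the standard constants: binary32 needs at most 9 significant decimal digits to round-trip (C calls this FLT_DECIMAL_DIG; binary64 needs 17). A brute-force spot check in Python, simulating binary32 by packing through struct:

```python
import random
import struct

def to_f32(x: float) -> float:
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

random.seed(0)
for _ in range(10000):
    x = to_f32(random.uniform(-1e30, 1e30))
    # 9 significant digits always recover the exact binary32 value:
    assert to_f32(float("%.9g" % x)) == x
```

In C or C++, "%.9g" (or printing with std::numeric_limits<float>::max_digits10 digits) is the usual way to emit such round-trippable literals.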

Numpy dot too clever about symmetric multiplications

倾然丶 夕夏残阳落幕 submitted on 2019-12-17 18:34:32

Question: Does anybody know of documentation for this behaviour?

    import numpy as np
    A = np.random.uniform(0,1,(10,5))
    w = np.ones(5)
    Aw = A*w
    Sym1 = Aw.dot(Aw.T)
    Sym2 = (A*w).dot((A*w).T)
    diff = Sym1 - Sym2

diff.max() is near machine-precision non-zero, e.g. 4.4e-16. This (the discrepancy from 0) is usually fine… in a finite-precision world we should not be surprised. Moreover, I would guess that numpy is being smart about symmetric products, to save flops and ensure symmetric output… But I deal…
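The guess is right in spirit: when NumPy sees the same buffer on both sides of dot, it can dispatch the symmetric-rank-k BLAS kernel (?syrk) instead of the general ?gemm, and the two kernels round differently. A sketch (the exact maximum difference, possibly zero, depends on the BLAS build NumPy is linked against):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, (10, 5))
w = np.ones(5)

Aw = A * w
sym1 = Aw.dot(Aw.T)            # one shared buffer: eligible for syrk
sym2 = (A * w).dot((A * w).T)  # two distinct temporaries: plain gemm

# Mathematically identical, numerically equal only to machine precision:
print(np.abs(sym1 - sym2).max())  # 0.0 or ~1e-16, BLAS-dependent
```

If bit-identical results matter, computing both sides through the same expression form (or symmetrizing explicitly, e.g. (S + S.T) / 2) avoids the kernel-dependent discrepancy.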

double arithmetic and equality in Java

ⅰ亾dé卋堺 submitted on 2019-12-17 16:56:08

Question: Here's an oddity (to me, at least). This routine prints true:

    double x = 11.0;
    double y = 10.0;
    if (x-y == 1.0) {
        // print true
    } else {
        // print false
    }

But this routine prints false:

    double x = 1.1;
    double y = 1.0;
    if (x-y == 0.1) {
        // print true
    } else {
        // print false
    }

Anyone care to explain what's going on here? I'm guessing it has something to do with integer arithmetic for ints posing as floats. Also, are there other bases (other than 10) that have this property?

Answer 1: 1.0 has an…
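Both cases can be seen compactly in Python, which uses the same IEEE 754 doubles as Java:

```python
# Integer values up to 2**53 are exact doubles, so this subtraction is exact:
assert 11.0 - 10.0 == 1.0

# 1.1, 1.0 and 0.1 are all rounded literals, and the error left over from
# the subtraction differs from the error in the literal 0.1:
assert 1.1 - 1.0 != 0.1
print(repr(1.1 - 1.0))  # 0.10000000000000009
```

On the "other bases" question: any base with a prime factor other than 2 (base 10 has the factor 5, base 3, base 6, …) has finite fractions that binary floats cannot represent exactly; bases that are powers of two do not.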