precision

How to perform unittest for floating point outputs? - python

Submitted by 我只是一个虾纸丫 on 2019-12-19 05:07:30
Question: Let's say I am writing a unit test for a function that returns a floating point number. I can do it as such, in full precision as per my machine:

    >>> import unittest
    >>> def div(x,y): return x/float(y)
    ...
    >>> class Testdiv(unittest.TestCase):
    ...     def testdiv(self):
    ...         assert div(1,9) == 0.1111111111111111
    ...
    >>> unittest.main()
    .
    ----------------------------------------------------------------------
    Ran 1 test in 0.000s

    OK

Will the same full floating point precision be the same across…
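A more robust approach is to compare with a tolerance rather than asserting exact equality. A minimal sketch reusing the question's div function; assertAlmostEqual and math.isclose are standard library, and the tolerances shown are only examples:

    import math
    import unittest

    def div(x, y):
        return x / float(y)

    class TestDiv(unittest.TestCase):
        def test_div_places(self):
            # assertAlmostEqual rounds the difference to the given number of places
            self.assertAlmostEqual(div(1, 9), 0.1111111111111111, places=15)

        def test_div_isclose(self):
            # math.isclose uses a relative tolerance (default 1e-09)
            self.assertTrue(math.isclose(div(1, 9), 1.0 / 9.0, rel_tol=1e-12))

    if __name__ == "__main__":
        unittest.main()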

How to quadruple an unsigned number using bit-wise and logic operator in C

Submitted by 懵懂的女人 on 2019-12-19 05:02:11
Question: Goal: 4x ( 4.400000095 ) = 17.60000038
Legal ops: Any integer/unsigned operations incl. ||, &&. Also if, while.
Max ops: 30
Return the bit-level equivalent of the expression x + x + x + x for floating point argument f.
My code:

    unsigned 4x(unsigned uf) {
        unsigned expn = (uf >> 23) & 0xFF;
        unsigned sign = uf & 0x80000000;
        unsigned frac = uf & 0x007FFFFF;
        if (expn == 255 || (expn == 0 && frac == 0))
            return uf;
        if (expn) {
            expn << 2;
        } else if (frac == 0x7FFFFF) {
            frac >> 2;
            expn << 2;
        } else {
            frac <<=…
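For a normalized single-precision value, multiplying by 4 means adding 2 to the biased exponent field; shifting expn left by 2 multiplies the field itself, which is one reason the code above goes wrong. A rough sketch of the intended bit manipulation in Python, using struct to reinterpret the bits (helper names are made up for this illustration; denormals and overflow to infinity are not handled):

    import struct

    def float_to_bits(f):
        # Reinterpret a Python float as the bit pattern of a 32-bit IEEE-754 single
        return struct.unpack(">I", struct.pack(">f", f))[0]

    def bits_to_float(u):
        return struct.unpack(">f", struct.pack(">I", u))[0]

    def quadruple_bits(uf):
        sign = uf & 0x80000000
        expn = (uf >> 23) & 0xFF
        frac = uf & 0x007FFFFF
        if expn == 255 or (expn == 0 and frac == 0):
            return uf              # NaN, infinity and +/-0 pass through unchanged
        expn += 2                  # adding 2 to the exponent scales the value by 2**2
        return sign | (expn << 23) | frac

    print(bits_to_float(quadruple_bits(float_to_bits(4.400000095))))  # ~17.60000038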

PHP bitwise left shifting 32 spaces problem and bad results with large numbers arithmetic operations

Submitted by ∥☆過路亽.° on 2019-12-19 04:14:49
Question: I have the following problems:
First: I am trying to do a 32-space bitwise left shift on a large number, and for some reason the number is always returned as-is. For example:

    echo(516103988<<32); // echoes 516103988

Because shifting the bits to the left one space is the equivalent of multiplying by 2, I tried multiplying the number by 2^32, and it works: it returns 2216649749795176448.
Second: I have to add 9379 to the number from the above point:

    printf('%0.0f', 2216649749795176448 + 9379);
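The second problem is a double-precision limit rather than a PHP quirk: once the sum is treated as a float, integers of this size can no longer all be represented exactly (the spacing between adjacent doubles near 2.2e18 is 256). A small Python sketch of the same effect, used here only because Python's integers are arbitrary precision and make the exact value easy to see:

    n = 516103988 << 32          # 2216649749795176448, exact as a Python int
    exact = n + 9379             # 2216649749795185827, still exact
    as_double = float(n) + 9379  # forces IEEE-754 double arithmetic

    print(exact)                 # 2216649749795185827
    print("%.0f" % as_double)    # 2216649749795185920: rounded to the nearest representable double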

What defines floating point precision in python?

Submitted by 独自空忆成欢 on 2019-12-19 03:24:23
Question: I learnt of the "exactly equal to" operator in Erlang, which compares not only values but also data types of numbers, and I was curious about how things work in Python and its lone "equals to" operator. So after making sure that

    >>> 1 == 1.0
    True

I wondered about the floating point precision, and got to this:

    >>> 0.9999999999999999 == 1
    False
    >>> 0.99999999999999999 == 1
    True
    >>>

Could someone explain how floating point precision is determined here? It works the same in both 2.7.1 and 3.1.2…
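Both literals are rounded to the nearest representable double before the comparison happens: with 17 nines the nearest double is exactly 1.0, with 16 nines it is the double one step below it. A short check using only the standard library (the long expansions in the comments are the exact values of those doubles):

    from decimal import Decimal

    a = 0.9999999999999999   # 16 nines
    b = 0.99999999999999999  # 17 nines

    print(Decimal(a))      # 0.99999999999999988897769753748434595763683319091796875
    print(Decimal(b))      # 1
    print(a == 1, b == 1)  # False True
    print(1.0 - a)         # 1.1102230246251565e-16, i.e. 2**-53, one step below 1.0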

Precision of multiplication by 1.0 and int to float conversion

Submitted by 梦想与她 on 2019-12-18 13:52:32
Question: Is it safe to assume that the condition (int)(i * 1.0f) == i is true for any integer i?

Answer 1: No. If i is sufficiently large that int(float(i)) != i (assuming float is IEEE-754 single precision, i = 0x1000001 suffices to exhibit this), then this is false, because multiplication by 1.0f forces a conversion to float, which changes the value even though the subsequent multiplication does not. However, if i is a 32-bit integer and double is IEEE-754 double, then it is true that int(i*1.0) == i.
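The 0x1000001 counterexample is easy to verify; numpy's float32 and float64 stand in for C's float and double here (numpy is assumed to be available):

    import numpy as np

    i = 0x1000001                          # 16777217 = 2**24 + 1, just past float32's exact-integer range
    print(int(np.float32(i)) == i)         # False: the conversion rounds to 16777216
    print(int(np.float64(i) * 1.0) == i)   # True: every 32-bit int is exact in a double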

What exactly is the “resolution” parameter of numpy float

Submitted by ╄→гoц情女王★ on 2019-12-18 13:39:10
Question: I am seeking some more understanding about the "resolution" parameter of a numpy float (I guess any computer-defined float for that matter). Consider the following script:

    import numpy as np
    a = np.finfo(10.1)
    print a

I get an output which among other things prints out:

    precision=15  resolution= 1.0000000000000001e-15
    max= 1.797(...)e+308
    min= -max

The numpy documentation specifies: "resolution: (floating point number of the appropriate type) The approximate decimal resolution of this type, i…"
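resolution is simply 10**-precision, a decimal counterpart of the binary machine epsilon. A quick look at the related finfo attributes (output shown for a typical IEEE-754 double; the exact repr may vary by numpy version):

    import numpy as np

    info = np.finfo(np.float64)
    print(info.precision)   # 15: approximate number of trustworthy decimal digits
    print(info.resolution)  # 1e-15, i.e. 10 ** -precision
    print(info.eps)         # 2.220446049250313e-16: spacing between 1.0 and the next double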

Haskell: Force floats to have two decimals

Submitted by 删除回忆录丶 on 2019-12-18 11:44:15
Question: Using the following code snippet:

    (fromIntegral 100)/10.00

Using the Haskell '98 standard prelude, how do I represent the result with two decimals? Thanks.

Answer 1: Just for the record:

    import Numeric
    formatFloatN floatNum numOfDecimals = showFFloat (Just numOfDecimals) floatNum ""

Answer 2: You can use printf :: PrintfType r => String -> r from Text.Printf:

    Prelude> import Text.Printf
    Prelude Text.Printf> printf "%.2f\n" (100 :: Float)
    100.00
    Prelude Text.Printf> printf "%.2f\n" $ fromIntegral 100 /…

Convert Java Number to BigDecimal : best way

Submitted by て烟熏妆下的殇ゞ on 2019-12-18 11:38:19
Question: I am looking for the best way to convert a Number to a BigDecimal. Is this good enough?

    Number number;
    BigDecimal big = new BigDecimal(number.toString());

Can we lose precision with the toString() method?

Answer 1: This is fine. Remember that using the constructor of BigDecimal to declare a value can be dangerous when it's not of type String. Consider the below...

    BigDecimal valDouble = new BigDecimal(0.35);
    System.out.println(valDouble);

This will not print 0.35, it will in fact be... 0…
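Python's decimal module has the same constructor pitfall, which makes the behaviour easy to reproduce and shows why going through a string (as number.toString() does above) is the safe route:

    from decimal import Decimal

    print(Decimal(0.35))       # 0.34999999999999997779553950749686919152736663818359375
    print(Decimal("0.35"))     # 0.35: a string preserves the intended decimal value
    print(Decimal(str(0.35)))  # 0.35: same idea as new BigDecimal(number.toString())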

Why does 5/2 result in '2' even when I use a float? [duplicate]

Submitted by 最后都变了- on 2019-12-18 09:43:47
Question: This question already has answers here: What is the behavior of integer division? (5 answers). Closed 3 years ago.
I entered the following code (and had no compiling problems or anything):

    float y = 5/2;
    printf("%f\n", y);

The output was simply:

    2.00000

My math isn't wrong, is it? Or am I wrong about the / operator? It means divide, doesn't it? And 5/2 should equal 2.5? Any help is greatly appreciated!

Answer 1: 5 is an int and 2 is an int. Therefore, 5/2 will use integer division. If you replace 5…
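The same distinction can be seen in Python 3, which gives the two divisions separate operators (they agree with C for non-negative operands; C truncates toward zero while // floors):

    print(5 / 2)    # 2.5 - true division, the result the question expected
    print(5 // 2)   # 2   - integer (floor) division, what C's int/int performs
    print(5 / 2.0)  # 2.5 - promoting one operand, like writing 5/2.0f in the C snippet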