precision

Why different result? float vs double [duplicate]

Submitted by 大憨熊 on 2021-02-11 16:48:42
Question: This question already has answers here: Float and double datatype in Java (9 answers); Is floating point math broken? (31 answers). Closed 12 months ago.

System.out.println(0.1F + 0.2F); // 0.3
System.out.println(0.1D + 0.2D); // 0.30000000000000004

I understand that 0.1D + 0.2D ≈ 0.30000000000000004, and I expected the two results to be the same, but they are not. Why do the results differ?

Answer 1: Why are the results different? In a general sense: because the binary representations for float and double are…
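The truncated answer points at the two representation widths. A minimal Python sketch (standing in for the Java snippet, since Python's float is the same IEEE-754 double as Java's double) can reproduce both results by explicitly rounding through 32-bit single precision with struct:

```python
import struct

def to_f32(x):
    # Round an IEEE-754 double to the nearest 32-bit single, then widen it back.
    return struct.unpack('f', struct.pack('f', x))[0]

# Double arithmetic, as in 0.1D + 0.2D:
print(0.1 + 0.2)  # 0.30000000000000004

# Single-precision arithmetic, as in 0.1F + 0.2F:
f_sum = to_f32(to_f32(0.1) + to_f32(0.2))

# The float sum lands on exactly the same 32-bit value that 0.3 rounds to,
# which is why Java's Float.toString prints the shortest form "0.3".
print(f_sum == to_f32(0.3))  # True
```

The double sum misses the nearest double to 0.3, so its shortest round-tripping decimal needs all those digits; the coarser float grid happens to absorb the error, so the shortest decimal identifying the float result is simply 0.3.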

How to safely import timestamps with Nanosecond precision

Submitted by ぐ巨炮叔叔 on 2021-02-11 06:10:41
Question: I discovered this morning that the bulk of timestamp formats in R seem to be based on the POSIXct class, which seems risky for nanosecond timestamps due to rounding and accumulation errors. Is this true? If so, what packages and processing steps are needed to safely import timestamps with nanosecond precision, probably from CSV files? (Preferably staying with packages within the tidyverse.) The output/visual tools currently used are ggplot2, plotly, and d3.

Answer 1: We wrote a package for that:…
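The concern about double-backed classes like POSIXct is easy to demonstrate in any language. Here is a small Python sketch (the timestamp value is made up) showing that a 64-bit float cannot round-trip a present-day epoch timestamp at nanosecond resolution, because adjacent doubles near such values are hundreds of nanoseconds apart:

```python
import math

t_ns = 1_612_345_678_123_456_789  # hypothetical epoch timestamp, in nanoseconds

# POSIXct-style storage keeps seconds in a double (53-bit significand);
# round-tripping through it loses the low-order nanosecond digits:
t_s = t_ns / 1e9
print(int(round(t_s * 1e9)) == t_ns)  # False

# The gap between adjacent doubles near a 2021-era epoch value, in seconds:
print(math.ulp(t_s))  # roughly 2.4e-07, i.e. ~240 ns between representable values
```

This is why nanosecond-safe classes store an integer count of nanoseconds (a 64-bit integer holds such values exactly) rather than a floating-point count of seconds.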

Python “decimal” package gives wrong results

Submitted by 爷，独闯天下 on 2021-02-10 23:39:57
Question: I tried to compute the following after setting getcontext().prec = 800:

>>> from decimal import *
>>> getcontext().prec = 800
>>> Decimal(22.0) / Decimal(10.0) - Decimal(0.2)
Decimal('1.999999999999999988897769753748434595763683319091796875')

But the expected result is 2. What am I doing wrong?

Answer 1: When you construct a Decimal from a floating-point number, you get the exact value of the floating-point number, which may not precisely match the decimal value, because that's how…
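The truncated answer contains the key point: Decimal(0.2) captures the exact binary double that the literal 0.2 produces, not the decimal value "0.2". A short sketch of the fix, constructing from strings instead of floats:

```python
from decimal import Decimal, getcontext

getcontext().prec = 800  # controls arithmetic precision, not how arguments are read

# A float argument hands Decimal the exact binary value of the double:
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125

# String arguments carry the intended decimal values, so the arithmetic is exact:
print(Decimal("22.0") / Decimal("10.0") - Decimal("0.2"))  # 2.0
```

No context precision, however large, can undo the float-to-Decimal conversion: the error is baked in before any Decimal arithmetic runs.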

BigDecimal/double Precision - number rounds up higher

Submitted by 不想你离开。 on 2021-02-10 06:19:36
Question: The second of the method calls below, to setYCoordinate(), receives the incorrect value -89.99999435599995 instead of -89.99999435599994. The first call, to setXCoordinate(), receives the correct value 29.99993874900002.

setXCoordinate(BigDecimal.valueOf(29.99993874900002))
setYCoordinate(BigDecimal.valueOf(-89.99999435599994))

I put a breakpoint inside BigDecimal.valueOf(); the method's code looks like this:

public static BigDecimal valueOf(double val) {
    // Reminder: a zero double returns '0.0', so we cannot…
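BigDecimal.valueOf(double) goes through Double.toString, which prints the shortest decimal that round-trips to the double. The behavior reported above suggests that -89.99999435599994 and -89.99999435599995 are in fact the same double, so the literal in the source code was already lost at parse time. That is easy to check with a Python sketch, since Python's float is the same IEEE-754 double and its repr uses the same shortest-round-trip rule as Double.toString:

```python
x = float("-89.99999435599994")  # the literal written in the source code
y = float("-89.99999435599995")  # the value valueOf() reports

# Both decimal strings round to the identical 64-bit pattern, so the exact
# literal is unrecoverable once it has been stored in a double:
print(x == y)   # True
print(repr(x))  # the shortest round-tripping decimal, as Double.toString prints
```

The usual remedy is to construct the value from the string directly, e.g. new BigDecimal("-89.99999435599994"), so no double ever intervenes.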

What is the precision of the UITouch timestamp in iOS?

Submitted by 删除回忆录丶 on 2021-02-08 16:57:03
Question: How precise is the timestamp property of the UITouch class in iOS? Milliseconds? Tens of milliseconds? I'm comparing an iPad's internal measurements with a custom touch-detection circuit taped to the screen, and there is quite a bit of variability between the two (standard deviation ≈ 15 ms). I've seen it suggested that the timestamp is discretized according to the frame-refresh interval, but the distribution I'm getting looks continuous.

Answer 1: Prior to the iPad Air 2, the touch detection…
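One way to test the question's "discretized to the frame interval" hypothesis is to look at the collected timestamps modulo the 60 Hz frame period: if they cluster tightly, the clock is frame-aligned. A small Python sketch with made-up, deliberately frame-aligned sample data (the variable names and values are hypothetical, not an iOS API):

```python
FRAME = 1 / 60  # 60 Hz display refresh period, in seconds

# Hypothetical touch timestamps in seconds; real data would come from the device.
stamps = [3 * FRAME + 0.0002, 7 * FRAME + 0.0003, 12 * FRAME + 0.0001]

residues = [t % FRAME for t in stamps]
spread = max(residues) - min(residues)

# If the spread of residues is much smaller than one frame period, the
# timestamps are effectively quantized to the refresh interval.
print(spread < FRAME / 4)  # True for this frame-aligned sample
```

For real measurements one would need circular statistics to handle residues that straddle the period boundary, and the panel's actual touch-scan rate, which may differ from the display's 60 Hz.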