Question
From the R FAQ, section 7.31: "http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-the"
We already know that large numbers (over 2^53) can cause errors in the modulo operation. However, I cannot understand why every large number is treated as even: I have never seen an "odd" result for an integer over 2^53, even allowing for some approximation error.
(2^53+1)%%2
(2^100-1)%%2
(The warning "probable complete loss of accuracy in modulus" can be ignored.)
These, and similar expressions, all evaluate to 0, not 1. Why? (I know there is some approximation involved, but I need to know the concrete reason.)
> print(2^54,22)
[1] 18014398509481984.00000
> print(2^54+1,22)
[1] 18014398509481984.00000
> print(2^54+2,22)
[1] 18014398509481984.00000
> print(2^54+3,22)
[1] 18014398509481988.00000
Answer 1:
An IEEE double-precision value has a 53-bit significand. Any number requiring more than 53 binary digits of precision is rounded, i.e. the bits from the 54th onward are implicitly set to zero. Thus any number with magnitude greater than 2^53 is necessarily even, since the least-significant bit of its integer representation lies beyond the floating-point precision and is therefore zero.
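The effect is easy to reproduce outside R as well. The sketch below uses Python, whose `float` is the same IEEE-754 double that R uses for "numeric" values, so the arithmetic behaves identically to the question's R expressions:

```python
# Python floats are IEEE-754 doubles, the same representation R's
# "numeric" type uses, so the question's behavior reproduces exactly.

x = 2.0**53            # the last power of two below which all integers are exact

# 2^53 + 1 cannot be represented: it rounds back down to 2^53,
# so the "odd" low-order bit is already gone before %% runs.
print(x + 1 == x)          # True
print((2.0**53 + 1) % 2)   # 0.0

# Far above 2^53 the gap between adjacent doubles is a large power
# of two, so every representable value is even.
print((2.0**100 - 1) % 2)  # 0.0
```

This is why the question's `(2^53+1)%%2` and `(2^100-1)%%2` both return 0: the rounding happens when the literal is converted to a double, before the modulo operator ever sees it.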
Answer 2:
There is no such thing as an "integer" in versions of R at or earlier than v2.15.3 whose magnitude is greater than 2^31-1. You are working with "numeric" ("double") entities, and you are probably rounding down or truncating your values.
?`%%`
The soon-to-be but as yet unreleased version 3.0 of R will have 8-byte integers, and this problem will then not arise until you go out beyond 2^((8*8)-1)-1. At the moment, coercion to integer fails at that level:
> as.integer(2^((8*4)-1)-1)
[1] 2147483647
> as.integer(2^((8*8)-1)-1)
[1] NA
Warning message:
NAs introduced by coercion
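The limits in the `as.integer` transcript above come from fixed-width signed integers: R's integer type is 32 bits, so it tops out at 2^31-1. A minimal sketch of the same fixed-width ceilings, using Python's `ctypes` (the choice of `ctypes` here is ours, purely to get C-style 32- and 64-bit integers; it is not anything R itself uses):

```python
import ctypes

# R's integer type is a 32-bit signed int, so as.integer() maxes out at
# 2^31 - 1 (R exposes this as .Machine$integer.max). A hypothetical
# 8-byte integer type would extend the ceiling to 2^63 - 1.
print(ctypes.c_int32(2**31 - 1).value)  # 2147483647
print(ctypes.c_int64(2**63 - 1).value)  # 9223372036854775807
```

This mirrors the transcript: `2^((8*4)-1)-1` (the 32-bit ceiling) coerces fine, while anything past it cannot be represented as an R integer and becomes `NA`.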
So your first example may return the proper result, but your second example may still fail.
Source: https://stackoverflow.com/questions/15369961/why-does-r-regard-large-number-as-even