internal-representation

int((0.1+0.7)*10) = 7 in several languages. How to prevent this?

本小妞迷上赌 submitted on 2019-12-17 04:01:53
Question: Recently I came across a bug/feature in several languages. I have only a basic understanding of how it's caused (and I'd like a detailed explanation), but when I think of all the bugs I must have made over the years, the question is: how can I tell "hey, this might cause a ridiculous bug, I'd better use arbitrary-precision functions"? Which other languages have this bug (and which don't, and why)? Also, why does 0.1+0.7 do this while, e.g., 0.1+0.3 doesn't? Are there any other well-known…
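
A minimal C++ sketch of the effect and one common fix (assuming IEEE 754 doubles): neither 0.1 nor 0.7 is exactly representable in binary, so the product lands just below 8; the cast to int truncates toward zero, while rounding to the nearest integer first gives the intended 8.

    #include <cmath>
    #include <cstdio>

    int main() {
        // 0.1 and 0.7 have no exact binary representation, so the sum is
        // slightly below 0.8 and the product slightly below 8.
        double x = (0.1 + 0.7) * 10.0;
        std::printf("%.17g\n", x);                     // prints something like 7.9999999999999991
        std::printf("truncated: %d\n", (int)x);        // 7: the int cast truncates toward zero
        std::printf("rounded: %ld\n", std::lround(x)); // 8: round to nearest before converting
        return 0;
    }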

Boolean true - positive 1 or negative 1?

£可爱£侵袭症+ submitted on 2019-11-30 09:02:16
Question: I'm designing a language and trying to decide whether true should be 0x01 or 0xFF. Obviously, all non-zero values will be converted to true, but I'm trying to decide on the exact internal representation. What are the pros and cons of each choice? Answer 1: 0 is false because the processor has a zero flag that is set when a register holds zero and cleared when it holds any non-zero value (0x01, 0xFF, etc.). So the answers here advocating defining 0 as false and anything else as true are correct. If you want to "define" a…
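
A small C++ sketch of the trade-off (the values are illustrative, not any particular language's choice): with true as 0x01, comparisons produce it directly; with true as 0xFF (all bits set), the bitwise operators double as the logical ones.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Convention A: true == 0x01. Comparisons and ! already yield 0 or 1.
        uint8_t t1 = (3 > 2);             // 0x01

        // Convention B: true == 0xFF (all bits set). Bitwise ops then act as
        // logical ops, with no comparison or branch needed:
        uint8_t t = 0xFF, f = 0x00;
        uint8_t and_tf = t & f;           // 0x00: bitwise AND acts as logical AND
        uint8_t not_t  = ~t;              // 0x00: bitwise NOT acts as logical NOT

        std::printf("%02X %02X %02X\n", t1, and_tf, not_t);  // 01 00 00
        return 0;
    }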

How does R represent NA internally?

寵の児 submitted on 2019-11-29 08:11:09
Question: R seems to support an efficient NA value in floating-point arrays. How does it represent it internally? My (perhaps flawed) understanding is that modern CPUs can carry out floating-point calculations in hardware, including efficient handling of Inf, -Inf and NaN values. How does NA fit into this, and how is it implemented without compromising performance? Answer 1: R uses NaN values as defined for IEEE floats to represent NA_real_, Inf and NA. We can use a simple C++ function to make this explicit: Rcpp::cppFunction('void print_hex(double x) { uint64_t y; static_assert(sizeof x == sizeof y, "Size does…
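
For readers without R at hand, a standalone C++ sketch of the same hex-dump idea; the low-word payload 1954 (0x7a2) used for NA_real_ comes from R's sources (arithmetic.c), so treat the exact bit pattern below as an assumption rather than a guaranteed API.

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Print a double's bit pattern, like the Rcpp print_hex helper in the answer.
    static void print_hex(double x) {
        uint64_t y;
        static_assert(sizeof x == sizeof y, "sizes must match");
        std::memcpy(&y, &x, sizeof y);   // type-pun safely via memcpy
        std::printf("%016llx\n", (unsigned long long)y);
    }

    int main() {
        print_hex(INFINITY);   // 7ff0000000000000: positive infinity
        print_hex(NAN);        // 7ff8000000000000: an ordinary quiet NaN
        // NA_real_ is a NaN whose mantissa carries the payload 1954 (0x7a2),
        // so the FPU treats it as just another NaN at full hardware speed:
        uint64_t na_bits = 0x7ff00000000007a2ULL;
        double na;
        std::memcpy(&na, &na_bits, sizeof na);
        print_hex(na);                               // 7ff00000000007a2
        std::printf("isnan: %d\n", std::isnan(na));  // 1
        return 0;
    }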

What is the internal representation of datetime in sql server?

随声附和 submitted on 2019-11-27 06:48:11
Question: What is the underlying data structure of datetime values stored in SQL Server (2000 and 2005, if different)? I.e., down to the byte representation? Presumably the default representation you get when you select a datetime column is a culture-specific value, subject to change; that is, some underlying structure that we don't see is getting formatted as YYYY-MM-DD HH:MM:SS.mmm. The reason I ask is that there's a generally held view in my department that it's stored in memory literally as YYYY-MM-DD HH…
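
For context, the legacy DATETIME type is commonly documented as two 32-bit integers rather than text: days since 1900-01-01 and 1/300-second "ticks" since midnight. A hedged C++ sketch decoding that assumed layout (the sample values are hypothetical):

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Assumed 8-byte DATETIME layout: first 4 bytes = signed days since
        // 1900-01-01, last 4 bytes = 1/300-second ticks since midnight.
        int32_t days  = 45000;      // hypothetical day count
        int32_t ticks = 10800000;   // hypothetical tick count (10:00:00)

        int secs = ticks / 300;
        int ms   = (ticks % 300) * 10 / 3;   // one tick is 3.33... ms
        std::printf("days since 1900-01-01: %d\n", days);
        std::printf("time of day: %02d:%02d:%02d.%03d\n",
                    secs / 3600, (secs / 60) % 60, secs % 60, ms);
        return 0;
    }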

Why prefer two's complement over sign-and-magnitude for signed numbers?

家住魔仙堡 submitted on 2019-11-26 04:02:01
Question: I'm just curious whether there's a reason why, in order to represent -1 in binary, two's complement is used: flipping the bits and adding 1? -1 is represented by 11111111 (two's complement) rather than the (to me more intuitive) 10000001, which is binary 1 with the first bit as a negative flag. Disclaimer: I don't rely on binary arithmetic for my job! Answer 1: It's done so that addition doesn't need any special logic for dealing with negative numbers. Check out the article on Wikipedia. Say you have…
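
A short C++ sketch of the answer's point: with two's complement, the same unsigned adder handles negative operands, so -1 + 1 wraps to 0 with no sign-handling logic.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Two's complement of 1 in 8 bits: flip the bits (11111110), add 1 -> 11111111.
        int8_t minus_one = -1;
        std::printf("-1 as bits: 0x%02X\n", (uint8_t)minus_one);   // 0xFF

        // The plain unsigned adder gives the right signed result:
        // 0xFF + 0x01 = 0x100, which wraps to 0x00 in 8 bits, i.e. -1 + 1 == 0.
        uint8_t sum = (uint8_t)((uint8_t)minus_one + 1u);
        std::printf("0xFF + 0x01 (mod 256) = 0x%02X\n", sum);      // 0x00

        // A sign-and-magnitude machine would need extra logic: 10000001 (-1)
        // plus 00000001 (+1) through a plain adder gives 10000010 (-2).
        return 0;
    }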