Unsigned versus signed numbers as indexes


Question


What's the rationale for using signed numbers as indexes in .NET?

In Python, you can index from the end of an array by passing negative numbers, but this is not the case in .NET. It would not be easy for .NET to add such a feature later, as it could break other code that perhaps relies on special rules (yes, a bad idea, but I suppose it happens) for indexing.

Not that I have ever needed to index arrays over 2,147,483,647 elements in size, but I really cannot understand why they chose signed numbers.

Could it be because it's simply more common to use signed numbers in code?

Edit: I just found these links:

The perils of unsigned iteration in C/C++

Signed word lengths and indexes

Edit2: OK, a couple of other good reasons from the thread Matthew Flaschen posted:

  • Historical reasons, as it's a C-like language
  • Interop with C

Answer 1:


For simplicity, of course. Do you like the trouble of doing size arithmetic with unsigned ints?
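A concrete instance of that trouble, sketched in C (the helper name is my own): counting down over an array with an unsigned index never terminates, because the index wraps instead of going negative.

```c
#include <stddef.h>

/* Sums a[len-1] .. a[0]. With a signed index the loop terminates
   naturally, even when len == 0. The unsigned formulation
       for (size_t i = len - 1; i >= 0; i--)
   never exits: i >= 0 is always true for an unsigned i, and when i
   reaches 0, i-- wraps around to SIZE_MAX. */
static long sum_reverse(const int *a, int len) {
    long s = 0;
    for (int i = len - 1; i >= 0; i--)
        s += a[i];
    return s;
}
```

The same wraparound bites size arithmetic directly: `size_a - size_b` silently becomes a huge value whenever `size_b > size_a`.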




Answer 2:


It may be due to the long tradition of using a value below 0 as an invalid index. Methods like String.IndexOf return -1 if the element is not found, so the return value must be signed. If index consumers required unsigned values, you would have to a) check and b) cast the value before using it. With signed indices, you only need the check.
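The -1 sentinel convention predates .NET; a minimal sketch in C (the helper `index_of` is hypothetical, not a standard API) shows why a signed return type is convenient:

```c
#include <string.h>

/* Returns the index of c in s, or -1 if c does not occur. A signed
   return type lets one value carry both "found at index i" (>= 0)
   and "not found" (-1). */
static int index_of(const char *s, char c) {
    const char *p = strchr(s, c);
    return p ? (int)(p - s) : -1;
}
```

A caller just tests `index_of(s, c) >= 0`; with an unsigned return type, "not found" would need a magic in-range value or a separate out-of-band flag.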




Answer 3:


Unsigned types aren't CLS-compliant.




Answer 4:


The primary usefulness of unsigned numbers arises when composing larger numbers from smaller ones and vice versa. For example, if one receives four unsigned bytes from a connection and wishes to regard their value, taken as a whole, as a 32-bit integer, using unsigned types means one can simply say:

  value = byte0 | (byte1*256) | (byte2*65536) | (byte3*16777216);

By contrast, if the bytes were signed, an expression like the above would be more complicated.
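The contrast can be made concrete in C (function names are mine; shifts are used in place of the equivalent multiplications by 256, 65536, and 16777216):

```c
#include <stdint.h>

/* With unsigned bytes, the shifts compose directly. */
static uint32_t from_bytes(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3) {
    return (uint32_t)b0 | ((uint32_t)b1 << 8)
         | ((uint32_t)b2 << 16) | ((uint32_t)b3 << 24);
}

/* With signed bytes, each byte must first be masked back to 0..255;
   otherwise sign extension of any negative byte corrupts the upper
   bits of the result. */
static uint32_t from_signed_bytes(int8_t b0, int8_t b1, int8_t b2, int8_t b3) {
    return (b0 & 0xFFu) | ((b1 & 0xFFu) << 8)
         | ((b2 & 0xFFu) << 16) | ((b3 & 0xFFu) << 24);
}
```

The signed version works, but every operand needs the extra `& 0xFF` step, which is exactly the complication the answer alludes to.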

I'm not sure I really see any reason for a language designed nowadays not to include unsigned versions of all types shorter than the longest signed integer type, with the semantics that all integer operations (meaning discrete-quantity numerics, rather than any particular type) which fit entirely within the largest signed type are by default performed as though they operated upon that type.

Including an unsigned version of the largest signed type would complicate the language specification (since one would have to specify which operations must fit within the range of the signed type, and which within the range of the unsigned type), but otherwise there should be no problem designing a language so that if (unsigned1 - unsigned2 > unsigned3) would yield a "numerically correct" result even when unsigned2 is greater than unsigned1 [if one wants unsigned wraparound, one would explicitly write if ((UInt32)(unsigned1 - unsigned2) > unsigned3)]. A language which specified such behavior would certainly be a big improvement over the mess that exists in C (justifiable, given its history), C#, or VB.NET.



Source: https://stackoverflow.com/questions/3060057/unsigned-versus-signed-numbers-as-indexes
