ASN.1 Basic Encoding Rules of an integer

Submitted by 三世轮回 on 2020-01-15 10:37:32

Question


I'm currently studying the Abstract Syntax Notation One and reading the ITU-T Recommendation X.690.

On page 15 in paragraph 8.3.2, there is written:

If the contents octets of an integer value encoding consist of more than one octet, then the bits of the first octet and bit 8 of the second octet:

  1. shall not all be ones; and
  2. shall not all be zero.

NOTE – These rules ensure that an integer value is always encoded in the smallest possible number of octets.

I understand that, for the integer to always be encoded in the smallest possible number of octets, the first octet shall not be zero.

But what about ones? If I want to encode the value 65408 (binary 1111 1111 1000 0000) using the Basic Encoding Rules, how should I do it?


Answer 1:


I understand that, for the integer to always be encoded in the smallest possible number of octets, the first octet shall not be zero.

Not necessarily. BER encodes an INTEGER in two's complement, so if the highest bit of the first contents octet is 1, the value is interpreted as negative. To mark such a value as positive, a leading zero (0x00) octet is prepended. This is the general rule.
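A small sketch of this point (illustrative, not part of the original answer): the same contents octets decode to different values depending on whether the high bit is treated as a sign bit, which is why a leading zero octet is needed to keep a value positive.

```python
# A single octet 0x80: high bit set, so as a signed (two's complement)
# value it is negative, while as an unsigned value it would be 128.
octets = bytes([0x80])
print(int.from_bytes(octets, "big", signed=True))   # -128
print(int.from_bytes(octets, "big", signed=False))  # 128

# Prepending a zero octet clears the sign bit, so the signed
# interpretation used by BER yields the positive value 128.
octets = bytes([0x00, 0x80])
print(int.from_bytes(octets, "big", signed=True))   # 128
```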

Here is a good article about integer encoding: http://msdn.microsoft.com/en-us/library/windows/desktop/bb540806(v=vs.85).aspx




Answer 2:


The encoding is two's complement. You need a leading octet of 0000 0000, so 65408 is encoded with the three contents octets 00 FF 80. Note that this does not violate the rule you quote, because bit 8 of the second octet (0xFF) is a 1, so the first nine bits are not all zero.
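The minimal-length rule can be sketched as a small encoder (a hypothetical helper, not from the answer) that produces the shortest two's-complement contents octets for a given integer:

```python
def ber_int_contents(value: int) -> bytes:
    """Return the minimal two's-complement contents octets for a
    BER/DER INTEGER (contents only, no tag or length octets)."""
    if value >= 0:
        # One extra bit is reserved for the sign, so e.g. 128 (bit_length 8)
        # needs two octets: 00 80.
        n = value.bit_length() // 8 + 1
    else:
        # For negatives, -2**(8n-1) .. -1 fits in n octets.
        n = (-value - 1).bit_length() // 8 + 1
    return value.to_bytes(n, "big", signed=True)

print(ber_int_contents(65408).hex())  # 00ff80
print(ber_int_contents(127).hex())    # 7f
print(ber_int_contents(-128).hex())   # 80
```

For 65408 the encoder emits a leading zero octet, and since the next octet is 0xFF, bit 8 of the second octet is 1, satisfying the rule quoted from X.690 8.3.2.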



Source: https://stackoverflow.com/questions/25617796/asn-basic-encoding-rule-of-an-integer
