Why is network-byte-order defined to be big-endian? [closed]
Question: As written in the heading, my question is: why does TCP/IP use big-endian encoding when transmitting data, and not the alternative little-endian scheme?

Answer 1: RFC 1700 stated it must be so (and defined network byte order as big-endian):

    The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
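As a small illustration (not part of the original answer), the sketch below uses the standard C htonl()/ntohl() conversions to show what "network byte order" means on the wire: the most significant octet is transmitted first, regardless of the host's native endianness.

    /* Minimal sketch: serializing a 32-bit value in network byte order
     * (big-endian) and inspecting the resulting byte layout. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl(), ntohl() */

    int main(void)
    {
        uint32_t host_value = 0x0A0B0C0D;          /* value in host byte order */
        uint32_t net_value  = htonl(host_value);   /* convert to network (big-endian) order */

        unsigned char bytes[4];
        memcpy(bytes, &net_value, sizeof bytes);   /* view the wire representation */

        /* On any host, the network-order bytes print most significant first:
         * 0a 0b 0c 0d */
        printf("network byte order: %02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);

        /* ntohl() converts back to host byte order for local arithmetic. */
        printf("round trip: 0x%08x\n", ntohl(net_value));
        return 0;
    }

On a big-endian host these conversions are no-ops; on a little-endian host they swap the bytes so that the value placed in a packet header matches the order the RFC describes.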