Why is network byte order defined to be big-endian?
As the title says: why does TCP/IP use big-endian encoding when transmitting data, rather than the alternative little-endian scheme? RFC 1700 defined network byte order as big-endian and stated that it must be so:

"The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right."

The reference it makes is to Cohen, D., "On Holy Wars and a Plea for Peace", Computer.
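For concreteness, here is a minimal sketch (my own illustration, not from the RFC) of what that convention means in practice, using the standard htonl()/ntohl() conversions from <arpa/inet.h>: the most significant octet ends up first in memory, and therefore first on the wire, regardless of the host's native endianness.

    /* Illustration of network (big-endian) byte order. */
    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>   /* htonl, ntohl */

    int main(void) {
        uint32_t host = 0x0A0B0C0D;   /* value in host byte order */
        uint32_t net  = htonl(host);  /* same value in network byte order */

        /* The in-memory bytes of 'net' start with the most significant
         * octet (0x0A), which is what gets transmitted first. */
        const unsigned char *p = (const unsigned char *)&net;
        printf("wire order: %02X %02X %02X %02X\n", p[0], p[1], p[2], p[3]);

        /* ntohl() converts back to host order on the receiving side. */
        printf("round trip: 0x%08X\n", ntohl(net));
        return 0;
    }

On any host this prints "wire order: 0A 0B 0C 0D"; my question is why this big-endian ordering was chosen as the standard in the first place.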