Question
Can we say that our 'traditional' way of writing in binary is Big Endian?
e.g., number 1 in binary:
0b00000001 // let's assume it's possible to write numbers like that in code, where the b prefix means binary
Also, when I write a constant 0b00000001 in my code, it will always refer to the integer 1, regardless of whether the machine is big endian or little endian, right?
In this notation the LSB is always the rightmost digit and the MSB is always the leftmost digit, right?
Answer 1:
Yes, humans generally write numerals in big-endian order (meaning that the digits written first have the most significant value), and common programming languages that accept numerals interpret them in the same way.
Thus, the numeral “00000001” means one; it never means one hundred million (in decimal) or 128 (in binary) or the corresponding values in other bases.
Much of C semantics is written in terms of the value of a number. Once a numeral is converted to a value, the C standard describes how that value is added, multiplied, and even represented as bits (with some latitude regarding signed values). Generally, the standard does not specify how those bits are stored in memory, which is where endianness in machine representations comes into play. When the bits representing a value are grouped into bytes and those bytes are stored in memory, we may see those bytes written in different orders on different machines.
However, the C standard specifies a common way of interpreting numerals in source code, and that interpretation is always big-endian in the sense that the most significant digits appear first.
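As a small illustration of this value-based view (a minimal sketch; the constant and variable name are just examples), arithmetic, comparisons, and shifts in C operate on values, so they give the same result on big- and little-endian machines:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t n = 0x0102;                  // the numeral is interpreted as the value 258 on every machine
    printf("%d\n", n == 258);             // prints 1 regardless of endianness
    printf("0x%x\n", (unsigned)(n >> 8)); // prints 0x1: the shift acts on the value, not on stored bytes
    return 0;
}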
Answer 2:
If you want to put it that way, then yes, we humans write numerals in big-endian order. But I think you have a misunderstanding about what it means for your target to run big or little endian.
In your actual C code, it does not matter which endianness your target machine uses. For example, these lines will always print the same output, no matter the endianness of your system:
uint32_t x = 0x0102;
printf("Output: %x\n", x); // Output: 102
or to take your example:
uint32_t y = 0b0001; // binary literals are a compiler extension in older C, standard in C23
printf("Output: %u\n", y); // Output: 1
However, the storage of the data in memory differs between little and big endian.
Big Endian:
Actual value:    0x01020304
Memory address:  0x00  0x01  0x02  0x03
Stored byte:     0x01  0x02  0x03  0x04

Little Endian:
Actual value:    0x01020304
Memory address:  0x00  0x01  0x02  0x03
Stored byte:     0x04  0x03  0x02  0x01
Both times the actual value is 0x01020304 (and this is what you assign in your C code).
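To see which of the two layouts your own machine uses, you can inspect the bytes of a stored value through an unsigned char pointer (a minimal, self-contained sketch; the names are just for illustration):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x01020304;
    const unsigned char *bytes = (const unsigned char *)&value; // view the object representation byte by byte
    for (size_t i = 0; i < sizeof value; i++)
        printf("offset %zu: 0x%02x\n", i, bytes[i]);
    // a big-endian machine prints 0x01 0x02 0x03 0x04, a little-endian one prints 0x04 0x03 0x02 0x01
    return 0;
}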
You only have to worry about endianness when you do raw memory operations. For example, if you have a 4-byte (uint8_t) array that represents a 32-bit integer and you want to copy it into a uint32_t variable, the byte order matters:
uint8_t arr[4] = {0x01, 0x02, 0x03, 0x04}; // needs <stdint.h>, <string.h>, <stdio.h>
uint32_t var;
memcpy(&var, arr, sizeof var); // copies the raw bytes, no conversion
printf("Output: 0x%08x\n", var);
// Big Endian:    Output: 0x01020304
// Little Endian: Output: 0x04030201
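If you need the same numeric result on every machine, a common approach (a sketch, assuming arr holds the bytes in big-endian order as above) is to assemble the value from the individual bytes with shifts instead of copying raw memory, because shifts operate on values:

uint32_t portable = ((uint32_t)arr[0] << 24) |
                    ((uint32_t)arr[1] << 16) |
                    ((uint32_t)arr[2] <<  8) |
                     (uint32_t)arr[3];
printf("Output: 0x%08x\n", portable); // always Output: 0x01020304, on both endiannesses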
Source: https://stackoverflow.com/questions/21259787/binary-notation-and-endianness