Consider the following C code:
#include <stdio.h>
int main(int argc, char* argv[])
{
    const long double ld = 0.12345678901234567890123456789012345L;
    printf("%.21Lg\n", ld);   /* print with long double precision */
    return 0;
}
The format on your computer is indeed the Intel double extended-precision format: 80 bits wide, with a sign bit, a 15-bit exponent, and a 64-bit significand (mantissa).
Only 10 consecutive bytes of that storage actually hold the value. The Intel manuals (Intel® 64 and IA-32 Architectures Software Developer’s Manual Combined Volumes: 1, 2A, 2B, 2C, 2D, 3A, 3B, 3C, 3D and 4) say the following:
When storing floating-point values in memory, half-precision values are stored in 2 consecutive bytes in memory; single-precision values are stored in 4 consecutive bytes in memory; double-precision values are stored in 8 consecutive bytes; and double extended-precision values are stored in 10 consecutive bytes.
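Those 10 bytes hold exactly the fields described above. Here is a minimal sketch (assuming a little-endian x86 target, where the low 8 bytes are the significand and the high 2 bytes pack the sign and biased exponent) that pulls them back apart:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const long double ld = 0.12345678901234567890123456789012345L;
    unsigned char b[10];
    memcpy(b, &ld, 10);              /* only the first 10 bytes carry data */

    uint64_t significand;
    uint16_t sign_exp;
    memcpy(&significand, b, 8);      /* 64-bit significand, explicit integer bit included */
    memcpy(&sign_exp, b + 8, 2);     /* 1-bit sign + 15-bit exponent */

    printf("sign=%d exponent=%d significand=0x%016llx\n",
           sign_exp >> 15,
           (sign_exp & 0x7fff) - 16383,   /* exponent bias is 16383 */
           (unsigned long long)significand);
    return 0;
}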
However, the x86-64 Linux ABI specifies that a full 16 bytes are actually consumed (the 32-bit i386 ABI uses 12). This is most likely because a 10-byte array element could have an alignment of at most 2, since an element's alignment must divide its size, and such weak alignment causes peculiar performance issues.
Also, a 16-byte element size makes array indexing cheaper, because the offset can be computed with a shift instead of a multiply.
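You can see both facts at once (assuming x86-64 Linux with GCC or Clang) by printing sizeof and dumping the object representation; the bytes past the first 10 are padding with unspecified contents:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const long double ld = 0.12345678901234567890123456789012345L;
    unsigned char bytes[sizeof ld];
    memcpy(bytes, &ld, sizeof ld);                       /* copy the whole object representation */

    printf("sizeof(long double) = %zu\n", sizeof ld);    /* 16 on x86-64 Linux */
    for (size_t i = 0; i < sizeof ld; ++i)
        printf("%02x ", bytes[i]);                       /* bytes 10..15 are padding */
    printf("\n");
    return 0;
}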
Most of the time this is a non-issue, as long doubles are usually used to minimize error in intermediate calculations, with the result then truncated to a double.
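For instance, here is a minimal sketch of that pattern (not from the original question): the accumulator is kept in long double and only the final result is narrowed to double. On x86, the 64-bit significand of the extended format absorbs the intermediate rounding error, so the small addend survives:

#include <stdio.h>

/* Sum an array in extended precision, then narrow the result to double. */
static double sum(const double *a, size_t n)
{
    long double acc = 0.0L;          /* intermediate sums keep the extra precision */
    for (size_t i = 0; i < n; ++i)
        acc += a[i];
    return (double)acc;              /* final result truncated (rounded) to double */
}

int main(void)
{
    const double a[] = { 1e16, 1.0, -1e16 };
    printf("%g\n", sum(a, 3));       /* 1 with a long double accumulator, 0 with a double one */
    return 0;
}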