Consider the following C code:
#include <stdio.h>
int main(int argc, char* argv[])
{
    const long double ld = 0.12345678901234567890123456789012345L;
    printf("%.35Lf\n", ld);  /* requests 35 digits, but only about 19 of them are meaningful */
    return 0;
}
The long double format in your C implementation uses the Intel 80-bit extended format, with a one-bit sign, a 15-bit exponent, and a 64-bit significand (ten bytes total). The compiler allocates 16 bytes for it, which is wasteful of space but useful for some things such as alignment. However, the 64-bit significand provides only log10(2^64) decimal digits of significance, which is just over 19 digits.
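As a rough check of these numbers, here is a minimal sketch, assuming an x86 compiler that maps long double to the x87 80-bit extended format and provides the standard <float.h> macros; the sizes and digit counts will differ on other platforms.

/* Sketch: inspect how this implementation lays out long double.
   Assumes an x87-style 80-bit extended format; results vary elsewhere. */
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));    /* often 16 due to padding/alignment */
    printf("LDBL_MANT_DIG       = %d bits\n", LDBL_MANT_DIG);            /* 64 for x87 extended precision */
    printf("decimal digits      = %.2f\n", LDBL_MANT_DIG * log10(2.0));  /* 64 * log10(2) is about 19.27 */
    printf("LDBL_DIG            = %d\n", LDBL_DIG);                      /* guaranteed round-trip digits: 18 */
    return 0;
}

On a typical x86-64 system this reports 16 bytes of storage, a 64-bit significand, and roughly 19.27 decimal digits of precision, matching the figures above.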