What is the difference between a single precision floating point operation and a double precision floating point operation?
I'm especially interested in the practical differences.
According to IEEE 754:

- Standard for floating point storage
- 32- and 64-bit formats (single precision and double precision)
- 8- and 11-bit exponents, respectively
- Extended formats (both mantissa and exponent) for intermediate results
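As a minimal sketch (not from the answer above), the following C program shows the practical consequence of the narrower single precision mantissa (23 bits vs. 52 bits in double precision): an increment of 1e-8 is preserved in a `double` but rounded away in a `float`.

```c
#include <stdio.h>

int main(void) {
    /* 1e-8 is smaller than FLT_EPSILON (~1.19e-7), so it is lost
       when added to 1.0f; DBL_EPSILON (~2.2e-16) is far smaller,
       so the double keeps it. */
    float  f = 1.0f + 1e-8f;
    double d = 1.0  + 1e-8;

    printf("float:  %.10f\n", f);   /* 1.0000000000 */
    printf("double: %.10f\n", d);   /* 1.0000000100 */

    /* Storage sizes match the 32- and 64-bit formats listed above. */
    printf("sizeof(float)  = %zu bytes\n", sizeof(float));
    printf("sizeof(double) = %zu bytes\n", sizeof(double));
    return 0;
}
```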