What is the difference between a single precision floating point operation and a double precision floating point operation?
I'm especially interested in practical terms.
A single precision number uses 32 bits, with the most significant bit (MSB) being the sign bit, whereas a double precision number uses 64 bits, again with the MSB as the sign bit.
Single precision:
SEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFF (sign + exponent + significand: 1 + 8 + 23 = 32 bits)
Double precision:
SEEEEEEEEEEEFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF (sign + exponent + significand: 1 + 11 + 52 = 64 bits)
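If it helps to see those fields in action, here is a minimal C sketch (assuming an IEEE 754 platform, which covers essentially all current hardware; the value -6.25 is just an arbitrary example) that copies a float and a double into integers via memcpy and prints the sign, exponent, and fraction fields, plus the practical precision difference:

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* Extract the three fields of a float: 1 sign, 8 exponent, 23 fraction bits. */
    float f = -6.25f;
    uint32_t b32;
    memcpy(&b32, &f, sizeof b32);   /* bit-for-bit copy, no aliasing issues */
    printf("float  %g: sign=%" PRIu32 " exp=%" PRIu32 " frac=0x%06" PRIX32 "\n",
           f, b32 >> 31, (b32 >> 23) & 0xFF, b32 & 0x7FFFFF);

    /* Same for a double: 1 sign, 11 exponent, 52 fraction bits. */
    double d = -6.25;
    uint64_t b64;
    memcpy(&b64, &d, sizeof b64);
    printf("double %g: sign=%" PRIu64 " exp=%" PRIu64 " frac=0x%013" PRIX64 "\n",
           d, b64 >> 63, (b64 >> 52) & 0x7FF, b64 & 0xFFFFFFFFFFFFFULL);

    /* Practical upshot: ~7 vs ~15-16 significant decimal digits. */
    printf("0.1 as float:  %.20f\n", (double)0.1f);
    printf("0.1 as double: %.20f\n", 0.1);
    return 0;
}
```

The last two lines are where the difference shows up in practice: a float carries about 7 significant decimal digits, a double about 15 to 16, so the rounding error in 0.1 becomes visible much sooner in the float version.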