What is the difference between a single precision floating point operation and a double precision floating point operation?
I'm especially interested in practical terms.
First of all, float and double are both used to represent fractional numbers. So, the difference between the two comes down to how much precision they can store a number with.
For example, suppose I have to store 123.456789. One type may be able to store only 123.4567, while the other may be able to store the exact 123.456789.
So, basically, we want to know how accurately a number can be stored, and that is what we call precision.
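Here is a minimal C sketch of that example, storing the same literal in a float and a double and printing both. The exact digits printed can vary slightly by platform, but on typical IEEE 754 systems the float version will already be off after about the 7th significant digit:

```c
#include <stdio.h>

int main(void) {
    /* The same value stored at two different precisions. */
    float  f = 123.456789f;
    double d = 123.456789;

    /* The float prints something like 123.456787... (inexact),
       while the double prints 123.456789000 as expected. */
    printf("float : %.9f\n", f);
    printf("double: %.9f\n", d);
    return 0;
}
```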
Quoting @Alessandro here:
The precision indicates the number of decimal digits that are correct, i.e. without any kind of representation error or approximation. In other words, it indicates how many decimal digits one can safely use.
A float can accurately store about 7 significant decimal digits, while a double can accurately store about 15-16 significant decimal digits. Note that this is significant digits overall, not just digits in the fractional part.
So, a double can hold roughly twice as many significant digits as a float. The name itself comes from storage, though: a double occupies twice as many bits as a float (64 vs. 32), which is why it is called double precision.
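You can check both claims on your own machine with a short sketch like the one below. The sizes and digit counts shown in the comments assume a typical IEEE 754 platform; the C standard only guarantees minimums:

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* "Double" the storage: typically 8 bytes vs. 4 bytes. */
    printf("sizeof(float)  = %zu bytes\n", sizeof(float));
    printf("sizeof(double) = %zu bytes\n", sizeof(double));

    /* Decimal digits guaranteed to round-trip safely:
       typically FLT_DIG = 6 and DBL_DIG = 15. */
    printf("FLT_DIG = %d\n", FLT_DIG);
    printf("DBL_DIG = %d\n", DBL_DIG);
    return 0;
}
```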