What is the difference between a single precision floating point operation and double precision floating operation?
I'm especially interested in the practical differences.
Others have already explained this in great detail, and there is nothing I could add. Still, I'd like to explain it in layman's terms, or plain English:
1.9 is less precise than 1.99
1.99 is less precise than 1.999
1.999 is less precise than 1.9999
.....
A variable able to store or represent 1.9 provides less precision than one able to hold 1.9999. In large calculations, these small fractions can add up to a huge difference.
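To tie this back to the question: a single-precision `float` keeps roughly 7 significant decimal digits, while a double-precision `double` keeps roughly 15-16. Here is a minimal C sketch of what that means in practice (the exact printed values depend on your compiler and platform):

```c
#include <stdio.h>

int main(void) {
    float  f = 1.9999999999999f;  /* single precision: ~7 significant digits     */
    double d = 1.9999999999999;   /* double precision: ~15-16 significant digits */

    printf("float : %.13f\n", f); /* the extra digits are lost (rounds to 2.0)   */
    printf("double: %.13f\n", d); /* all the digits survive                      */

    /* Tiny rounding errors compound over many operations. */
    float  fsum = 0.0f;
    double dsum = 0.0;
    for (int i = 0; i < 10000000; i++) {
        fsum += 0.1f;
        dsum += 0.1;
    }
    printf("float  sum: %f\n", fsum); /* drifts visibly from 1000000 */
    printf("double sum: %f\n", dsum); /* much closer to 1000000      */
    return 0;
}
```

The loop is the "large calculation" from above: each individual addition is off by only a minuscule fraction, but after ten million of them the single-precision total is visibly wrong while the double-precision total is still accurate.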