I have the value 1555.4899999999998 stored in a float column with default precision (53). When I do a simple select, SSMS rounds the value it displays instead of showing what is actually stored.
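For reference, here is roughly how to reproduce what I'm seeing (the temp table name is made up, and the style-3 conversion assumes SQL Server 2016 or later):

```sql
-- Rough reproduction of the setup described above.
CREATE TABLE #t (amount float);                      -- float with no argument is float(53)
INSERT INTO #t (amount) VALUES (1555.4899999999998);

SELECT amount FROM #t;                               -- SSMS shows a rounded value in the grid

-- Style 3 (SQL Server 2016+) renders the full 17 significant digits of the stored double:
SELECT CONVERT(varchar(30), amount, 3) AS amount_full_precision FROM #t;

DROP TABLE #t;
```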
Is there a reason you would rather use a float type than a decimal type? Floats are stored as binary fractions, which means many decimal values can't be represented exactly and operations on them are often slightly inaccurate. This is okay in something like a graphics application, where the inaccuracy is much less significant than the size of a pixel, but it's a huge issue in something like an accounting application where you're dealing with money.
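As a quick illustration of the kind of drift involved (a minimal sketch, not tied to your table; float(53) is IEEE double precision):

```sql
-- Classic binary rounding error shows up directly with float, but not with decimal:
SELECT CAST(0.1 AS float) + CAST(0.2 AS float) AS float_sum,                      -- not exactly 0.3
       CAST(0.1 AS decimal(10, 1)) + CAST(0.2 AS decimal(10, 1)) AS decimal_sum;  -- exactly 0.3

-- The same drift makes equality checks on float unreliable:
SELECT CASE WHEN CAST(0.1 AS float) + CAST(0.2 AS float) = CAST(0.3 AS float)
            THEN 'equal' ELSE 'not equal'
       END AS float_equality_check;                                               -- 'not equal'
```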
I would venture to say that the exactness of a decimal is more important to most applications than any benefit in speed or storage size they would get from using a float.