Years ago I learned the hard way about precision problems with floats, so I quit using them. However, I still run into code using floats, and it makes me cringe because I know some precision problem is lurking. Are there cases where float is actually the right choice, and if not, what should be used instead?
There are many cases where you would want to use a float. What I don't understand, however, is what you would use instead. If you mean using double instead of float, then yes, in most cases that is what you want. However, double also has precision issues. You should use decimal whenever accuracy is important.
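To see why double doesn't solve the problem, here is a minimal sketch (assuming C#, since you mention decimal): double inherits the same binary rounding behavior as float, while decimal does its arithmetic in base 10.

```csharp
using System;

class Program
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation,
        // so the double sum is not exactly 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);        // False
        Console.WriteLine(d.ToString("R")); // 0.30000000000000004

        // decimal stores base-10 digits exactly,
        // so the same arithmetic comes out exact.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);       // True
    }
}
```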
float and double are very useful in many applications. decimal is an expensive data type, and its range (the magnitude of the largest number it can represent) is smaller than double's. Computers usually have hardware-level support for float and double, and they are used heavily in scientific computing. Basically, they are the primary fractional data types you want to use. However, in monetary calculations, where exactness is extremely important, decimal is the way to go.
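As a quick sketch of the monetary case (again in C#; the iteration count of one million is arbitrary, just enough to make the drift visible): repeatedly adding ten cents drifts with double but stays exact with decimal.

```csharp
using System;

class Program
{
    static void Main()
    {
        double dTotal = 0.0;
        decimal mTotal = 0.0m;

        // Add ten cents a million times, as a ledger might.
        for (int i = 0; i < 1_000_000; i++)
        {
            dTotal += 0.10;
            mTotal += 0.10m;
        }

        // The binary rounding error in 0.10 accumulates in the double total,
        // while the decimal total is exactly 100000.00.
        Console.WriteLine(dTotal.ToString("R")); // something like 100000.00000133288
        Console.WriteLine(mTotal);               // 100000.00
    }
}
```

The trade-off is speed and range: the hardware-supported double loop runs much faster, which is why float and double remain the default outside of money.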