While \"we all know\" that x == y can be problematic, where x and y are floating point values, this question is a bit more specific:>
My understanding of floating point arithmetic is that it is handled by the CPU, which alone determines your precision. If that's the case, there is no definite value above which floats lose precision.
I had thought that the x86 architecture, for instance, guaranteed a minimum, but I've been proven wrong.
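For concreteness, here is a small C probe I put together (my own sketch, assuming an IEEE 754 single-precision `float` with a 24-bit significand, as used on x86). It scans upward for the first integer a `float` cannot hold exactly:

```c
#include <stdio.h>

int main(void) {
    /* Assuming IEEE 754 single precision, integers up to 2^24 are
       representable exactly; this loop finds the first one that isn't. */
    for (long i = 0; i <= (1L << 25); i++) {
        float f = (float)i;
        if ((long)f != i) {
            /* Expected output: 16777217 (i.e. 2^24 + 1) */
            printf("first inexact integer: %ld\n", i);
            return 0;
        }
    }
    return 0;
}
```

If the threshold really were CPU-dependent rather than fixed by the format, I'd expect this number to vary across machines.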