Consider the following code:
    double v1 = double.MaxValue;
    double r = Math.Sqrt(v1 * v1);
On a 32-bit machine, r = double.MaxValue; on a 64-bit machine, r = Infinity. Why the difference, and how can I prevent it?
This is a near-duplicate of:
Why does this floating-point calculation give different results on different machines?
My answer to that question also answers this one. In short: different hardware is allowed to compute floating-point operations with more or less precision, so the same expression can legitimately produce different results on different chips.
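To make that concrete, here is a minimal sketch (the class name and the static-field trick are illustrative, not part of the original question). The product double.MaxValue * double.MaxValue overflows a 64-bit double, so a chip that truly computes in 64 bits produces Infinity, while a chip that keeps the intermediate in an 80-bit x87 register does not overflow, and the square root comes back as double.MaxValue. Storing the intermediate into a fixed 64-bit location forces it back to true double precision:

    using System;

    class ExtendedPrecisionDemo
    {
        // A fixed 64-bit storage location. Per the CLI rules, writing an
        // extended-precision intermediate into a static field coerces the
        // value down to a true 64-bit double.
        static double product;

        static void Main()
        {
            double v1 = double.MaxValue;

            // The intermediate v1 * v1 may be held in an 80-bit x87 register
            // (no overflow) or in a 64-bit SSE register (overflow to Infinity),
            // so this line can print different results on different machines.
            Console.WriteLine(Math.Sqrt(v1 * v1));

            // Round-tripping the product through the 64-bit field forces the
            // overflow to happen everywhere, so this line should report
            // Infinity regardless of which registers the chip used.
            product = v1 * v1;
            Console.WriteLine(Math.Sqrt(product));
        }
    }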
How do you prevent it from happening? Since the problem is on the chip, you have two choices. (1) Don't do any math in floating-point; do all your math in integers. Integer math is 100% consistent from chip to chip (see the sketch below). Or (2) require all your customers to use the same hardware you develop on.
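If you go with option (1), one common approach is fixed-point arithmetic: pick a scale factor and keep every value in integers. Here is a minimal sketch with an assumed scale of four decimal digits and hypothetical helper names; a real implementation would need a rounding and overflow policy suited to your domain:

    using System;

    class FixedPointSketch
    {
        // Hypothetical scale: values are longs holding ten-thousandths,
        // so 19.99 is represented as 199_900.
        const long Scale = 10_000;

        // Build a fixed-point value from integer parts only.
        static long Make(long whole, long tenThousandths = 0) =>
            checked(whole * Scale + tenThousandths);

        // Multiply two fixed-point values; checked() traps overflow
        // instead of silently wrapping.
        static long Multiply(long a, long b) => checked(a * b) / Scale;

        static void Main()
        {
            long price = Make(19, 9900);  // 19.9900
            long qty   = Make(3);         //  3.0000
            long total = Multiply(price, qty);

            // Integer arithmetic is bit-for-bit identical on every chip,
            // so this prints 59.9700 everywhere.
            Console.WriteLine($"{total / Scale}.{total % Scale:D4}");
        }
    }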
Note that if you choose (2) then you might still have problems; small details, like whether a program was compiled debug or retail, can change whether floating-point calculations are done in extra precision or not. This can cause inconsistent results between debug and retail builds, which is also unexpected and confusing. If your requirement of consistency is more important than your requirement of speed, then you'll have to implement your own floating-point library that does all its calculations in integers.