I understand that floating point calculations have accuracy issues and there are plenty of questions explaining why. My question is: if I run the same calculation twice, can I always rely on it producing the same result?
Sorry, but I can't help thinking that everybody is missing the point.
If the inaccuracy is significant to what you are doing then you should look for a different algorithm.
You say that if the calculations are not accurate, errors at the start may have huge implications by the end of the simulation.
That, my friend, is not a simulation. If you are getting hugely different results from tiny differences in rounding and precision, then the chances are that none of the results has any validity. Just because you can repeat the result does not make it any more valid.
On any non-trivial real-world problem that includes measurements or non-integer calculation, it is always a good idea to introduce minor errors to test how stable your algorithm is.
Also, while Goldberg is a great reference, the original text is also wrong: IEEE 754 is not guaranteed to be portable. I can't emphasize this enough, given how often this claim is made after skimming the text. Later versions of the document include a section that discusses this specifically:
Many programmers may not realize that even a program that uses only the numeric formats and operations prescribed by the IEEE standard can compute different results on different systems. In fact, the authors of the standard intended to allow different implementations to obtain different results.
Since your question is tagged C#, it's worth emphasising the issues faced on .NET:
(a + b) + c is not guaranteed to equal a + (b + c). This means that you shouldn't rely on your .NET application producing the same floating point calculation results when run on different versions of the .NET CLR.
For example, in your case, if you record the initial state and inputs to your simulation, then install a service pack that updates the CLR, your simulation may not replay identically the next time you run it.
See Shawn Hargreaves's blog post Is floating point math deterministic? for further discussion relevant to .NET.
This answer in the C++ FAQ probably describes it the best:
http://www.parashift.com/c++-faq-lite/newbie.html#faq-29.18
It is not only that different architectures or compilers might give you trouble: floating point numbers already behave in weird ways within the same program. As the FAQ points out, even if y == x
is true, cos(y) == cos(x)
can still be false. This is because the x86 CPU calculates the value with 80-bit precision, while the value is stored as 64 bits in memory, so you end up comparing a truncated 64-bit value with a full 80-bit value.
The calculations are still deterministic, in the sense that running the same compiled binary will give you the same result each time, but the moment you adjust the source a bit, change the optimization flags, or compile it with a different compiler, all bets are off and anything can happen.
Practically speaking, it is not quite that bad. I could reproduce simple floating point math with different versions of GCC on 32-bit Linux bit for bit, but the moment I switched to 64-bit Linux the results were no longer the same. Demo recordings created on 32-bit wouldn't work on 64-bit and vice versa, but would work fine when run on the same arch.
From what I understand, you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.
Specific gotchas that I'm aware of:
some operating systems allow you to set the mode of the floating point processor in ways that break compatibility.
floating point intermediate results often use 80-bit precision in registers, but only 64 bits in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms will give you a way to force all results to be truncated to the in-memory precision.
standard library functions may change between versions. I gather that there are some commonly encountered examples of this in gcc 3 vs 4.
the IEEE standard itself allows some binary representations to differ... specifically NaN values, but I can't recall the details.
The short answer is that FP calculations are entirely deterministic, as per the IEEE Floating Point Standard, but that doesn't mean they're entirely reproducible across machines, compilers, OSes, etc.
The long answer to these questions and more can be found in what is probably the best reference on floating point, David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic. Skip to the section on the IEEE standard for the key details.
To answer your bullet points briefly:
Time between calculations and state of the CPU have little to do with this.
Hardware can affect things (e.g. some GPUs are not IEEE floating point compliant).
Language, platform, and OS can also affect things. For a better description of this than I can offer, see Jason Watkins's answer. If you are using Java, take a look at Kahan's rant on Java's floating point inadequacies.
Solar flares might matter, hopefully infrequently. I wouldn't worry too much, because if they do matter, then everything else is screwed up too. I would put this in the same category as worrying about EMP.
Finally, if you are doing the same sequence of floating point calculations on the same initial inputs, then things should be exactly replayable. The exact sequence can change depending on your compiler/OS/standard library, so you might get some small errors this way.
Where you usually run into problems with floating point is when you have a numerically unstable method and you start with FP inputs that are approximately, but not exactly, the same. If your method is stable, you should be able to guarantee reproducibility within some tolerance. If you want more detail than this, then take a look at Goldberg's FP article linked above or pick up an intro text on numerical analysis.