I understand that floating point calculations have accuracy issues, and there are plenty of questions explaining why. My question is: if I run the same calculation twice, can I always rely on it to produce the same result?
The short answer is that FP calculations are entirely deterministic, as specified by the IEEE floating point standard (IEEE 754), but that doesn't mean they're entirely reproducible across machines, compilers, OSes, etc.
The long answer to these questions and more can be found in what is probably the best reference on floating point, David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic. Skip to the section on the IEEE standard for the key details.
To answer your bullet points briefly:
Time between calculations and state of the CPU have little to do with this.
Hardware can affect things (e.g. some GPUs are not IEEE floating point compliant).
Language, platform, and OS can also affect things. For a better description of this than I can offer, see Jason Watkins's answer. If you are using Java, take a look at Kahan's rant on Java's floating point inadequacies.
Solar flares might matter, hopefully infrequently. I wouldn't worry too much, because if they do matter, then everything else is screwed up too. I would put this in the same category as worrying about EMP.
Finally, if you are doing the same sequence of floating point calculations on the same initial inputs, then the results are replayable exactly. However, the exact sequence of operations can change depending on your compiler/OS/standard library, so you might get small differences that way.
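To make that concrete, here is a minimal sketch in C (mine, not code from this answer), assuming IEEE-754 doubles evaluated in double precision (e.g. SSE2 on x86-64): repeating the identical sequence of operations reproduces the identical result, while a reassociated sum is a different sequence of roundings and can give a slightly different one.

```c
#include <stdio.h>

int main(void) {
    double a = 0.1, b = 0.2, c = 0.3;

    double left  = (a + b) + c;   /* one order of operations */
    double right = a + (b + c);   /* a reassociated order    */

    /* Repeating the identical sequence reproduces the identical result. */
    printf("same sequence, same result: %d\n", ((a + b) + c) == left);

    /* A different order is a different sequence of roundings. */
    printf("(a + b) + c = %.17g\n", left);
    printf("a + (b + c) = %.17g\n", right);
    printf("orders agree: %d\n", left == right);
    return 0;
}
```

This is exactly the kind of change an optimizing compiler or a different standard library can make for you, which is why the same source code can be deterministic per build yet not reproducible across toolchains.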
Where you usually run into trouble with floating point is when you have a numerically unstable method and you start with FP inputs that are approximately but not exactly the same. If your method is stable, you should be able to guarantee reproducibility within some tolerance. If you want more detail than this, then take a look at Goldberg's FP article linked above or pick up an intro text on numerical analysis.
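As a small illustration of that instability (my own example, not from Goldberg's paper): subtracting two nearly equal numbers, a classic source of trouble, turns a one-ulp perturbation of the input into a relative change in the output that is many orders of magnitude larger. The particular values below are arbitrary.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double x  = 1.0e8;
    double y1 = 1.0e8 + 1.0e-3;        /* input A                          */
    double y2 = nextafter(y1, 2.0e8);  /* input B: A perturbed by one ulp  */

    double d1 = y1 - x;                /* nearly equal operands cancel     */
    double d2 = y2 - x;

    printf("relative change in input:  %.3g\n", (y2 - y1) / y1);   /* ~1e-16 */
    printf("relative change in output: %.3g\n", (d2 - d1) / d1);   /* ~1e-5  */
    return 0;
}
```

A stable method keeps that amplification bounded, which is what lets you state a reproducibility tolerance at all.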