I understand that floating point calculations have accuracy issues and there are plenty of questions explaining why. My question is: if I run the same calculation twice, can I always rely on it producing the same result?
I think your confusion lies in the type of inaccuracy around floating point. Most languages implement the IEEE 754 floating point standard, which lays out how the individual bits within a float/double are used to produce a number. Typically a float consists of four bytes and a double of eight bytes.
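You can check the sizes on your own platform with a minimal C sketch (the exact sizes are implementation-defined, but on mainstream platforms they match the IEEE 754 binary32 and binary64 formats):

```c
#include <stdio.h>

int main(void) {
    /* On typical platforms float is 4 bytes (IEEE 754 binary32)
       and double is 8 bytes (IEEE 754 binary64). */
    printf("sizeof(float)  = %zu\n", sizeof(float));
    printf("sizeof(double) = %zu\n", sizeof(double));
    return 0;
}
```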
A mathematical operation between two floating point numbers will have the same value every single time (as specified within the standard).
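To illustrate, here is a small C sketch: the sum 0.1 + 0.2 is not exactly 0.3 (neither operand is exactly representable), but repeating the same addition with the same operands produces the exact same result every time.

```c
#include <stdio.h>

int main(void) {
    double a = 0.1, b = 0.2;
    double first  = a + b;   /* run the same operation twice */
    double second = a + b;

    /* Both results carry the usual rounding error, but the error is
       identical on every run, so the two results compare equal. */
    printf("first  = %.17g\n", first);
    printf("second = %.17g\n", second);
    printf("equal: %s\n", first == second ? "yes" : "no");
    return 0;
}
```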
The inaccuracy comes in the precision. Consider an int vs. a float. Both typically take up the same number of bytes (4), yet the maximum value each can store is wildly different.
The difference is in the middle. An int can represent every integer between 0 and roughly 2 billion. A float, however, cannot. It can represent about 2 billion values between 0 and 3.40282347E38, which leaves whole ranges of values that cannot be represented. If a calculation hits one of these values, it has to be rounded to a representable value and is hence considered "inaccurate". Your definition of inaccurate may vary :).
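For example (a small C sketch): a 32-bit float has only 24 bits of significand, so the integer 2^24 + 1 = 16,777,217 falls into one of those gaps and gets rounded to the nearest representable value.

```c
#include <stdio.h>

int main(void) {
    int i = 16777217;      /* 2^24 + 1: exactly representable as an int */
    float f = (float)i;    /* but not as a float... */

    /* ...so it is rounded to the nearest representable value, 16777216. */
    printf("int   : %d\n", i);
    printf("float : %.1f\n", f);
    return 0;
}
```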