How does the actual behaviour of this program differ from its expected behaviour?
-The program actually adds up the IEEE 754 representation of 0.0001 ten thousand times, and the IEEE representation of 0.0001 != the exact decimal 0.0001, so the accumulated sum drifts away from the expected 1.0.
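A minimal sketch of the kind of program in question (the original isn't reproduced here, so the variable names and loop bounds are assumptions):

    #include <stdio.h>

    int main(void) {
        float x = 0.0001f; /* stored as the nearest binary float, not exactly 0.0001 */
        float y = 0.0f;

        /* Each addition accumulates the representation error. */
        for (int i = 0; i < 10000; i++) {
            y += x;
        }

        /* Expected 1.000000000; the actual output is close to, but not, 1.0. */
        printf("%.9f\n", y);
        return 0;
    }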
Why is the expected behaviour not seen?
-We assume 0.0001 is represented exactly as 0.0001; in reality it isn't, because IEEE floating-point numbers are stored in base 2 rather than base 10, and 0.0001 has no finite binary expansion (just as 1/3 has no finite decimal one).
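A quick way to see the representation error directly; this sketch prints the stored value with more digits than a float honestly carries:

    #include <stdio.h>

    int main(void) {
        float x = 0.0001f;

        /* Extra precision exposes the nearest binary float,
           which is slightly below the exact decimal 0.0001. */
        printf("%.12f\n", x);

        /* Comparing against the (more precise) double literal 0.0001
           confirms the mismatch. */
        printf("%s\n", (x == 0.0001) ? "equal" : "not equal");
        return 0;
    }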
While ensuring that the program semantics stay the same, what changes would you make to this program to ensure that the expected and the actual behaviour do match?
-Changing float to double works in this case, because double carries roughly 15-16 significant decimal digits versus float's roughly 7, so the per-addition error becomes too small to show up in the result (see the first sketch after this list).
-An alternative is to keep float and, instead of doing the summation, assign y = 10000*x; a single multiplication incurs one rounding step instead of 10000 accumulated ones, which is the better choice when you're looking to avoid roundoff and approximation errors (see the second sketch after this list).
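A sketch of the double fix, assuming the same loop structure as the program above; only the types and literals change:

    #include <stdio.h>

    int main(void) {
        double x = 0.0001; /* still not exact in binary, but the error per step is tiny */
        double y = 0.0;

        for (int i = 0; i < 10000; i++) {
            y += x;
        }

        /* The accumulated error stays far below the printed precision,
           so this rounds to 1.000000000. */
        printf("%.9f\n", y);
        return 0;
    }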
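A sketch of the alternative fix: the multiplication performs a single rounding step instead of 10000 accumulated ones:

    #include <stdio.h>

    int main(void) {
        float x = 0.0001f;

        /* One multiply means one rounding; here the product happens to
           round back to exactly 1.0f. */
        float y = 10000.0f * x;

        printf("%.9f\n", y);
        return 0;
    }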