Summary of the problem:
For some decimal values, when we convert from decimal to double, a small fraction is added to the result.
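Assuming the conversion in question is the C# cast from .NET's decimal to double (the type names above match those), a minimal sketch of the behavior might look like the following; the value 4.2 is only an illustrative pick, and the printed digits are what one would typically observe:

```csharp
using System;

class DecimalToDoubleDemo
{
    static void Main()
    {
        // 4.2 is exact as a base-10 decimal, but it has no finite
        // base-2 expansion, so the cast to double must round to the
        // nearest representable binary value.
        decimal d = 4.2m;
        double x = (double)d;

        Console.WriteLine(d);                 // 4.2
        Console.WriteLine(x.ToString("G17")); // 4.2000000000000002
    }
}
```

The default double formatting often hides the discrepancy; the "G17" format shows the full stored value, which is where the "small fraction" becomes visible.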
What makes it worse:
The article What Every Computer Scientist Should Know About Floating-Point Arithmetic would be an excellent place to start.
The short answer is that binary floating-point arithmetic is necessarily an approximation, and it's not always the approximation you would guess. This is because CPUs do arithmetic in base 2, while humans (usually) do arithmetic in base 10. A wide variety of unexpected effects stem from this.
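As a concrete illustration of the base-2 versus base-10 mismatch, here is a small C#-style sketch (assuming the same .NET decimal and double types as in the question): 0.1, 0.2, and 0.3 are all repeating fractions in binary, so each double holds its own rounding error, while base-10 decimal arithmetic is exact for these particular values.

```csharp
using System;

class BaseTwoVsBaseTen
{
    static void Main()
    {
        // In base 2, 0.1, 0.2 and 0.3 are repeating fractions, so
        // each double carries a rounding error and the errors do
        // not cancel in the sum.
        double sum = 0.1 + 0.2;
        Console.WriteLine(sum == 0.3);          // False
        Console.WriteLine(sum.ToString("G17")); // 0.30000000000000004

        // decimal stores base-10 digits directly, so the same
        // arithmetic is exact for these values.
        decimal dsum = 0.1m + 0.2m;
        Console.WriteLine(dsum == 0.3m);        // True
    }
}
```

This is the same effect as in the conversion example above: the surprise is not that the arithmetic is wrong, but that the binary representation of a "simple" base-10 value is already inexact before any arithmetic happens.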