I know the decimal number 0.1 cannot be represented exactly as a finite binary number (explanation), so double n = 0.1 will lose some precision.
Floating point systems do various magic, including carrying a few extra bits of precision for rounding. Thus the very small error due to the inexact representation of 0.1 often ends up getting rounded away in subsequent arithmetic or when the value is converted back to decimal for display.
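You can see the inexactness directly in Java: the BigDecimal(double) constructor exposes the exact binary value a double actually holds, and combining two such values makes the error visible. A quick sketch:

    import java.math.BigDecimal;

    public class InexactDemo {
        public static void main(String[] args) {
            // Exposes the exact binary value stored for the literal 0.1
            System.out.println(new BigDecimal(0.1));
            // 0.1000000000000000055511151231257827021181583404541015625

            // The tiny per-value errors surface once arithmetic combines them
            System.out.println(0.1 + 0.2);        // 0.30000000000000004
            System.out.println(0.1 + 0.2 == 0.3); // false
        }
    }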
Think of floating point as being a great but INEXACT way to represent numbers. Not all possible numbers are easily represented in a computer. Irrational numbers like PI. Or like SQRT(2). (Symbolic math systems can represent them, but I did say "easily".)
The floating point value may be extremely close, but not exact. It may be so close that you could navigate to Pluto and be off by millimeters. But still not exact in a mathematical sense.
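Because values are only extremely close, comparing doubles with == is fragile; compare within a tolerance instead. A minimal sketch (the EPSILON value here is an arbitrary choice for illustration; pick one that fits the scale of your problem):

    public class ApproxEquals {
        // Hypothetical tolerance chosen for illustration only
        private static final double EPSILON = 1e-9;

        static boolean approxEquals(double a, double b) {
            return Math.abs(a - b) < EPSILON;
        }

        public static void main(String[] args) {
            double sum = 0.1 + 0.2;
            System.out.println(sum == 0.3);              // false
            System.out.println(approxEquals(sum, 0.3));  // true
        }
    }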
Don't use floating point when you need to be exact rather than approximate. For example, accounting applications want to keep exact track of a certain number of pennies in an account. Integers are good for that because they are exact. The primary issue you need to watch for with integers is overflow.
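For example, holding an account balance as whole pennies in a long stays exact, and Math.addExact makes the overflow failure mode explicit instead of silently wrapping. A sketch:

    public class PennyLedger {
        public static void main(String[] args) {
            long balancePennies = 10_000;                           // $100.00 as pennies
            balancePennies = Math.addExact(balancePennies, 2_534);  // deposit $25.34
            System.out.printf("Balance: $%d.%02d%n",
                    balancePennies / 100, balancePennies % 100);

            // Plain + would silently wrap on overflow; addExact throws instead
            try {
                Math.addExact(Long.MAX_VALUE, 1);
            } catch (ArithmeticException e) {
                System.out.println("Overflow detected: " + e.getMessage());
            }
        }
    }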
Using BigDecimal for currency works well because the underlying representation is an integer, albeit a big one (an unscaled BigInteger paired with an int scale).
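A minimal sketch: construct BigDecimal from strings (not doubles, which would bake the binary error in) and fix the scale to two decimal places when rounding is needed. The tax rate here is made up for illustration:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class CurrencyDemo {
        public static void main(String[] args) {
            // String constructor keeps the decimal value exact;
            // new BigDecimal(0.1) would capture the binary approximation instead
            BigDecimal price = new BigDecimal("19.99");
            BigDecimal tax   = price.multiply(new BigDecimal("0.0825"))
                                    .setScale(2, RoundingMode.HALF_UP);
            System.out.println(price.add(tax)); // 21.64
        }
    }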
Even though floating point numbers are inexact, they still have a great many uses: coordinate systems for navigation, coordinates in graphics systems, astronomical values, scientific values. (You probably cannot know the mass of a baseball to within the mass of an electron anyway, so the inexactness doesn't really matter.)
For counting applications (including accounting), use integer types. For counting the number of people who pass through a gate, use int or long.