How is the decimal type implemented?
Update
From "CLR via C#" 3rd Edition by J.Richter:
A 128-bit high-precision floating-point value commonly used for financial calculations in which rounding errors can’t be tolerated. Of the 128 bits, 1 bit represents the sign of the value, 96 bits represent the value itself, and 8 bits represent the power of 10 to divide the 96-bit value by (can be anywhere from 0 to 28). The remaining bits are unused.
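You can observe this layout yourself with decimal.GetBits, which exposes the four 32-bit integers that make up a decimal. Below is a minimal sketch (the variable names and the sample value 123.45m are just for illustration) that pulls out the sign, the scale (the power of 10), and the 96-bit integer value described in the quote:

```csharp
using System;
using System.Numerics;

class DecimalLayoutDemo
{
    static void Main()
    {
        decimal price = 123.45m;

        // GetBits returns four 32-bit ints:
        // bits[0..2] = low/mid/high 32-bit chunks of the 96-bit integer value
        // bits[3]    = flags: scale (power of 10) in bits 16-23, sign in bit 31,
        //              all other bits unused
        int[] bits = decimal.GetBits(price);

        int sign  = bits[3] < 0 ? -1 : 1;       // bit 31 set means negative
        int scale = (bits[3] >> 16) & 0xFF;     // 0 to 28

        // Reassemble the 96-bit integer from its three 32-bit chunks
        BigInteger value = (uint)bits[2];
        value = (value << 32) | (uint)bits[1];
        value = (value << 32) | (uint)bits[0];

        Console.WriteLine($"sign:  {sign}");    // 1
        Console.WriteLine($"scale: {scale}");   // 2  -> divide by 10^2
        Console.WriteLine($"value: {value}");   // 12345
        // So 123.45m is stored as +12345 scaled by 10^-2.
    }
}
```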