Simple question - why does the Decimal type define these constants? Why bother?
I'm looking for a reason why this is defined by the language, not possible uses or effects.
Some .NET languages do not support decimal literals, so in those cases it is more convenient (and faster) to write Decimal.One instead of new Decimal(1).
Java's BigInteger class has ZERO and ONE as well, for the same reason.
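For illustration, here is a minimal C# sketch contrasting the predefined constants (Decimal.Zero, Decimal.One, Decimal.MinusOne are static readonly fields on System.Decimal) with an equivalent constructor call:

```csharp
using System;

class DecimalConstantsDemo
{
    static void Main()
    {
        // Uses the predefined constant: no constructor invocation needed.
        decimal viaConstant = Decimal.One;

        // Equivalent value, but runs the Decimal(int) constructor.
        decimal viaConstructor = new Decimal(1);

        Console.WriteLine(viaConstant == viaConstructor); // True

        // The other predefined constants on System.Decimal.
        Console.WriteLine(Decimal.Zero);     // 0
        Console.WriteLine(Decimal.MinusOne); // -1
    }
}
```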