I have the following column specified in a database: decimal(5,2)
How does one interpret this?
According to the properties on the column as viewed in SQL Server Management Studio, it is defined with a numeric precision of 5 and a numeric scale of 2. What do those two values actually mean?
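For reference, a column like that might be declared as follows (the table and column names here are hypothetical, just to show where the precision and scale appear):

    CREATE TABLE dbo.ExampleTable (    -- hypothetical table name
        Amount decimal(5, 2) NOT NULL  -- precision = 5, scale = 2
    );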
Precision of a number is the total number of digits it contains.
Scale of a number is the number of digits after the decimal point.
When precision and scale are set on a field definition, they are generally understood to represent maximum values.
For example, a decimal field defined with precision = 5 and scale = 2 would allow the following values:
- 123.45 (p=5, s=2)
- 12.34 (p=4, s=2)
- 12345 (p=5, s=0)
- 123.4 (p=4, s=1)
- 0 (p=0, s=0)

The following values are not allowed, or would cause data loss:
- 12.345 (p=5, s=3) => could be truncated into 12.35 (p=4, s=2)
- 1234.56 (p=6, s=2) => could be truncated into 1234.6 (p=5, s=1)
- 123.456 (p=6, s=3) => could be truncated into 123.46 (p=5, s=2)
- 123450 (p=6, s=0) => out of range

Note that the range is generally defined by the precision: |value| < 10^p ...
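If you want to see how SQL Server itself treats such a column, here is a small T-SQL sketch using a table variable of my own (not part of the original definition). Note that when the scale overflows, SQL Server rounds rather than truncates, and it rejects values whose integer part does not fit:

    DECLARE @t TABLE (val decimal(5, 2));

    INSERT INTO @t VALUES (123.45);   -- stored as 123.45 (fits exactly)
    INSERT INTO @t VALUES (12.345);   -- stored as 12.35 (extra scale digit is rounded away)
    INSERT INTO @t VALUES (1234.56);  -- fails: arithmetic overflow converting numeric to decimal(5,2)

    SELECT val FROM @t;               -- returns 123.45 and 12.35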