Question
https://msdn.microsoft.com/en-us/library/ms173773%28v=sql.110%29.aspx
After I insert a value, for example 0.12346789123456789123456789, into a table column of type float, I query it back and get 0.1234567891234568, which contains 17 digits. I have 3 questions:
1. How can I back-track the binary representation of the input and the output? The document says it uses 53 bits by default. I am using SQL Server Management Studio and I don't know how to specify the n value when declaring my column type.
2. The number 17 isn't mentioned in the document; I would like to know where it comes from.
3. On big-endian or little-endian systems, how is such an input treated and translated into the output at the low-level byte system?

If anyone can explain this, I would be thankful.
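For reference, a minimal repro of the scenario described above (a sketch; the temporary table name #t is only illustrative):

-- a float column declared without n defaults to float(53)
CREATE TABLE #t (val float);
INSERT INTO #t (val) VALUES (0.12346789123456789123456789);
SELECT val FROM #t;   -- the value comes back rounded to float's precision
DROP TABLE #t;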
Answer 1:
DECLARE @i float = 0.1234567890123456789123456789
SELECT @i
0.123456789012346
That has 15 digits; note that the leading 0 is not counted.
DECLARE @i float = 123456.1234567890123456789123456789
SELECT @i
123456.123456789
That also has 15 digits.
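As a side note (a sketch, not part of the original answer): the optional n from the question controls this precision. Per the documentation, n between 1 and 24 is stored in 4 bytes with about 7 digits of precision, while n between 25 and 53 is stored in 8 bytes with 15 digits:

DECLARE @lo float(24) = 123456.1234567890123456789123456789   -- 4-byte storage
DECLARE @hi float(53) = 123456.1234567890123456789123456789   -- 8-byte storage, same as plain float
SELECT @lo   -- only about 7 significant digits survive
SELECT @hi   -- 15 significant digits, matching the results above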
That 15-digit precision is documented in the float and real reference linked in the question.
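As for back-tracking the binary representation (a sketch, not part of the original answer): converting a float to varbinary exposes the 8 stored bytes of the IEEE 754 double, which contain the sign bit, 11 exponent bits, and the 53-bit significand the documentation refers to. The hex shown by SSMS lists the most significant byte first, independent of the machine's in-memory byte order:

DECLARE @i float = 0.1234567890123456789123456789
SELECT CONVERT(varbinary(8), @i)                   -- 8 raw bytes: sign, exponent, significand
SELECT CONVERT(varbinary(8), CAST(1.0 AS float))   -- 0x3FF0000000000000: sign 0, biased exponent 0x3FF, all-zero fraction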
Source: https://stackoverflow.com/questions/31098903/sql-server-float-data-type-understanding