SQL Server float data type understanding

Submitted by 允我心安 on 2019-12-23 04:21:46

Question


https://msdn.microsoft.com/en-us/library/ms173773%28v=sql.110%29.aspx

After I insert a value such as 0.12346789123456789123456789 into a table with a float column, querying it back returns 0.1234567891234568, which contains 17 digits. I have three questions:

  1. How can I trace back the binary representation of the input and output? The documentation says it uses 53 bits by default. I am using SQL Server Management Studio, and I don't know how to specify the n value when declaring my column type (a sketch follows this list).

  2. The number 17 does not appear in the documentation; I would like to know where it comes from.

  3. On big-endian or little-endian systems, how is such an input handled and translated into the output at the byte level? If anyone can explain, I would be thankful.
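A minimal sketch addressing questions 1 and 3, assuming SQL Server 2012 or later (the table name dbo.FloatDemo is hypothetical): n is given in parentheses in the column declaration, and casting to varbinary(8) exposes the stored IEEE 754 bit pattern.

-- n in float(n) is the mantissa width; SQL Server keeps only two storage formats:
-- n = 1..24 is stored as float(24) (4 bytes, i.e. real), n = 25..53 as float(53) (8 bytes).
CREATE TABLE dbo.FloatDemo (
    f53 float(53),  -- double precision; same as plain float
    f24 float(24)   -- single precision; same as real
);

-- Inspect the raw IEEE 754 bits of a value; SSMS displays the bytes
-- most-significant first, independent of the machine's endianness.
DECLARE @f float = 1;
SELECT CAST(@f AS varbinary(8));  -- 0x3FF0000000000000 for 1.0

Because only the 24-bit and 53-bit formats exist on disk, declaring float(30), for example, silently gives you float(53).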


Answer 1:


DECLARE @i float = 0.1234567890123456789123456789
SELECT @i

0.123456789012346

That has 15 significant digits; note that the leading 0 is not counted.

DECLARE @i float = 123456.1234567890123456789123456789
SELECT @i

123456.123456789

This also has 15 significant digits.

This matches the documentation, which states that float has a precision of 15 digits.
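As for the 17 in the question: 15 is the guaranteed decimal precision (any decimal with up to 15 significant digits survives a round trip through float unchanged), while 17 significant digits are needed to uniquely represent every distinct 53-bit double, which is why client tools sometimes print up to 17. On SQL Server 2016 and later this can be seen directly with the documented CONVERT styles for float (a sketch reusing the variable from above):

DECLARE @i float = 0.1234567890123456789123456789;
SELECT CONVERT(varchar(30), @i, 2);  -- style 2: always 16 digits, scientific notation
SELECT CONVERT(varchar(30), @i, 3);  -- style 3: always 17 digits; lossless round trip

Style 3 is the one the documentation recommends for lossless string conversion of float values.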



Source: https://stackoverflow.com/questions/31098903/sql-server-float-data-type-understanding
