Does casting double to float always produce the same result, or can there be some "rounding differences"?
For example, is x in …
If you downcast a double to a float, you lose precision and data. Upcasting a float to a double is a widening conversion: no data is lost, so the value survives the round trip back to float exactly, unless you do something to the value while it is a double before downcasting it back to a float.
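A minimal sketch of that round trip (0.1f and Math.PI are just illustrative values):

```csharp
using System;

class RoundTripDemo
{
    static void Main()
    {
        float x = 0.1f;

        // Widening float -> double adds zero information, so narrowing
        // straight back gives exactly the bits we started with.
        float roundTripped = (float)(double)x;
        Console.WriteLine(roundTripped == x); // True

        // Going the other way loses data: a double narrowed to float
        // generally cannot be recovered by widening it again.
        double pi = Math.PI;
        double back = (float)pi;              // low-order bits are rounded away here
        Console.WriteLine(back == pi);        // False
    }
}
```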
Floating-point numbers sacrifice precision and accuracy for range. A single-precision float occupies 32 bits but gives you only 24 bits of significand precision; a double occupies 64 bits and gives you 53 bits. Yet both can represent magnitudes far beyond what that precision alone would suggest.
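A rough illustration of the trade-off; the literals below are arbitrary examples chosen to sit just past each type's precision:

```csharp
using System;

class RangeVsPrecisionDemo
{
    static void Main()
    {
        // Enormous range in both types...
        Console.WriteLine(float.MaxValue);   // about 3.4e38
        Console.WriteLine(double.MaxValue);  // about 1.8e308

        // ...but float keeps only ~7 significant decimal digits:
        // both literals below round to the same 32-bit value.
        Console.WriteLine(123456789f == 123456792f);                     // True

        // double keeps ~15-16 significant decimal digits:
        // these two literals also collapse to one 64-bit value.
        Console.WriteLine(1234567890123456789d == 1234567890123456768d); // True
    }
}
```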
C# float and double are IEEE 754 floating point values.
float is a single-precision IEEE 754 value (32 bits) and consists of a 1-bit sign, an 8-bit exponent, and a 23-bit mantissa.
double is a double-precision IEEE 754 value (64 bits) and consists of a 1-bit sign, an 11-bit exponent, and a 52-bit mantissa.
The effective precision of the mantissa is one bit more than its stored size (24 and 53 bits, respectively), because normalized values carry an implicit leading 1 bit that is never stored (floating point magick).
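A small sketch of that implicit bit in action: 2^24 and 2^53 are where float and double respectively stop being able to represent every integer exactly.

```csharp
using System;

class SignificandDemo
{
    static void Main()
    {
        // float stores 23 mantissa bits but resolves 24: every integer up to
        // 2^24 (16777216) is exact, while 2^24 + 1 rounds back down to 2^24.
        float f = 16777217f;                        // 2^24 + 1
        Console.WriteLine(f == 16777216f);          // True

        // double stores 52 mantissa bits but resolves 53: the same thing
        // happens one past 2^53 (9007199254740992).
        double d = 9007199254740993d;               // 2^53 + 1
        Console.WriteLine(d == 9007199254740992d);  // True
    }
}
```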
Some CLR floating point resources for you:
This paper, David Goldberg's What Every Computer Scientist Should Know About Floating-Point Arithmetic, is probably the canonical treatment of the perils and pitfalls of floating-point arithmetic. If you're not a member of the ACM, public copies of the article are easy to find by searching for the title:
Abstract
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.