Do C#/.NET floating point operations differ in precision between debug mode and release mode?
They can indeed differ. According to the ECMA CLI specification (ECMA-335):
Storage locations for floating-point numbers (statics, array elements, and fields of classes) are of fixed size. The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either R4 or R8, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented. An implicit widening conversion to the internal representation from float32 or float64 is performed when those types are loaded from storage. The internal representation is typically the native size for the hardware, or as required for efficient implementation of an operation.
What this essentially means is that the following comparison may or may not evaluate to true:
class Foo
{
    double _v = ...;

    void Bar()
    {
        double v = _v;

        if (v == _v)
        {
            // Code may or may not execute here.
            // _v is 64-bit.
            // v could be either 64-bit (debug) or 80-bit (release) or something else (future?).
        }
    }
}
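The quoted text also suggests a workaround: fixed-size storage locations are always float32 or float64, and a narrowing conversion (in C#, an explicit cast to double) brings a value back to its nominal precision. A minimal sketch of that idea, assuming the redundant cast is emitted as a narrowing conversion as the C# specification describes:

class Foo
{
    double _v = 0.1;

    void Bar()
    {
        double v = _v;

        // The explicit cast is assumed to narrow v back to 64-bit precision,
        // so both sides of the comparison use the same representation.
        if ((double)v == _v)
        {
            // With the narrowing cast, this branch is expected to be taken
            // regardless of debug/release mode.
        }
    }
}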
Take-home message: never compare floating-point values for equality.
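Instead, compare against a tolerance. A minimal sketch, using a hypothetical NearlyEqual helper and an arbitrarily chosen absolute epsilon (pick one appropriate for your domain):

using System;

static class FloatingPointComparison
{
    // Hypothetical helper: treats two doubles as equal when they differ
    // by less than an application-chosen absolute tolerance.
    public static bool NearlyEqual(double a, double b, double epsilon = 1e-9)
    {
        return Math.Abs(a - b) < epsilon;
    }
}

// Usage: if (FloatingPointComparison.NearlyEqual(v, _v)) { ... }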