Can somebody please explain the following behavior:
static void Main(string[] args)
{
    checked
    {
        double d = -1d + long.MinValue;
        long l = (long)d;   // no OverflowException here, but the 1d + long.MaxValue variant throws -- why?
    }
}
You are calculating with double values (-1d). Floating-point arithmetic does not throw on .NET; checked has no influence on it in any way.
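For illustration, a minimal sketch of that point (the saturation-to-Infinity behavior is standard IEEE 754, nothing specific to your snippet):

double x = double.MaxValue;
checked
{
    // checked only governs integral arithmetic and integral conversions;
    // double overflow silently saturates to Infinity instead of throwing
    double y = x * 2;
    Console.WriteLine(double.IsInfinity(y)); // True
}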
But the conversion back to long is influenced by checked. 1d + long.MaxValue, evaluated in double arithmetic, rounds to 2^63, which does not fit into the range of long. -1d + long.MinValue rounds to -2^63, which does fit into that range. The reason is that signed integers have more negative numbers than positive numbers: long.MinValue is -2^63, while long.MaxValue is only 2^63 - 1, so long.MinValue has no positive equivalent. That's why the negative version of your code happens to fit and the positive version does not.
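You can watch both conversions side by side with runtime values (runtime rather than constants, so the failing case throws instead of refusing to compile; see the constant version further down):

double dMin = -1d + long.MinValue; // rounds to -2^63, which is exactly long.MinValue
double dMax = 1d + long.MaxValue;  // rounds to 2^63, one past long.MaxValue
checked
{
    Console.WriteLine((long)dMin == long.MinValue); // True, no exception
    try
    {
        long l = (long)dMax; // 2^63 does not fit into long
    }
    catch (OverflowException)
    {
        Console.WriteLine("the positive version overflows");
    }
}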
The addition operation does not change anything:
Debug.Assert((double)(1d + long.MaxValue) == (double)(0d + long.MaxValue));
Debug.Assert((double)(-1d + long.MinValue) == (double)(-0d + long.MinValue));
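That's because the gap between adjacent doubles at this magnitude is much larger than 1. You can inspect the spacing directly with Math.BitIncrement/Math.BitDecrement (available since .NET Core 3.0):

double top = 9223372036854775808d;               // 2^63, what long.MaxValue rounds to as a double
Console.WriteLine(top - Math.BitDecrement(top)); // 1024: spacing just below 2^63
Console.WriteLine(Math.BitIncrement(top) - top); // 2048: spacing just above 2^63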
The numbers we are calculating with are outside of the range where double is exact. A double can represent every integer up to 2^53 exactly; beyond that we get rounding errors, and adding one is the same as adding zero. Essentially, you are computing:
var min = (long)(double)long.MinValue; // -2^63 round-trips exactly: does not overflow
var max = (long)(double)long.MaxValue; // rounds up to 2^63: overflows (constant operands, so the compiler reports it as error CS0220)
The add operation is a red herring. It does not change anything.
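If you want to pin down where the exactness ends, here is a quick check of the 2^53 boundary and of the "adding one is adding zero" effect:

Console.WriteLine((double)((1L << 53) - 1) == (double)(1L << 53)); // False: below 2^53 every integer is exact
Console.WriteLine((double)((1L << 53) + 1) == (double)(1L << 53)); // True: 2^53 + 1 rounds back to 2^53
Console.WriteLine(1d + long.MaxValue == 0d + long.MaxValue);       // True: the +1 is absorbed by rounding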