Why don't languages raise errors on integer overflow by default?
In several modern programming languages (including C++, Java, and C#), integer overflow can occur at runtime without raising any kind of error condition: in Java and C# the value silently wraps around, and in C++ signed overflow is undefined behavior (though in practice it typically wraps as well). For example, consider this (contrived) C# method, which does not account for the possibility of overflow/underflow. (For brevity, the method also doesn't handle the case where the specified list is a null reference.)

    // Returns the sum of the values in the specified list.
    private static int sumList(List<int> list)
    {
        int sum = 0;
        foreach (int listItem in list)
        {
            sum += listItem;
        }
        return sum;
    }

If this method is called as