Looking at this C# code:
byte x = 1;
byte y = 2;
byte z = x + y; // ERROR: Cannot implicitly convert type 'int' to 'byte'
The result of adding two bytes is an int, so assigning it back to a byte requires an explicit cast.
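A minimal sketch of the usual fix, reusing the variable names from the snippet above:

byte x = 1;
byte y = 2;
byte z = (byte)(x + y); // the int result of x + y is explicitly narrowed back to byte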
In addition to all the other great comments, I thought I would add one little tidbit. A lot of comments have wondered why int, long, and pretty much every other numeric type doesn't also follow this rule and return a "bigger" type in response to arithmetic.
A lot of answers have had to do with performance (well, 32 bits is faster than 8 bits). In reality, an 8-bit number is still a 32-bit number to a 32-bit CPU. Even if you add two bytes, the chunk of data the CPU operates on is going to be 32 bits wide regardless, so adding ints is not going to be any "faster" than adding two bytes; it's all the same to the CPU. Adding two ints WILL be faster than adding two longs on a 32-bit processor, though, because adding two longs requires more micro-ops, since you're working with numbers wider than the processor's word.
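As a rough sketch of that point (the method names are mine): both of these compile to the same IL body, ldarg / ldarg / add / ret, operating on 32-bit evaluation-stack slots, since ldarg widens the byte parameters to int32 on load.

static int AddBytes(byte a, byte b) => a + b; // byte operands are promoted before the add
static int AddInts(int a, int b) => a + b;    // identical IL to AddBytes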
I think the fundamental reason byte arithmetic results in ints is pretty clear and straightforward: 8 bits just doesn't go very far! :D With 8 bits, you have an unsigned range of 0-255. That's not a whole lot of room to work with; the likelihood that you are going to run into a byte's limitations is VERY high when using them in arithmetic. However, the chance that you're going to run out of bits when working with ints, longs, doubles, etc. is significantly lower; low enough that we very rarely encounter the need for more. A short illustration of how quickly byte-sized arithmetic would overflow follows (the values are arbitrary):

byte a = 200;
byte b = 100;
int sum = a + b;                         // 300: promotion to int preserves the true result
byte wrapped = unchecked((byte)(a + b)); // 44: what byte-wide arithmetic would have produced
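Note that 300 doesn't fit in 8 bits, so narrowing back to byte wraps it modulo 256, which is exactly the silent data loss the promotion rule protects against.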
Automatically widening byte arithmetic to int is logical because the range of a byte is so small. Automatically widening int arithmetic to long, or float arithmetic to double, is not, because those types already have enough range that overflow is rare.
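A quick sketch of the consequence for int arithmetic, assuming the default unchecked context (int.MaxValue is used just as a convenient boundary value):

int big = int.MaxValue;
long overflowed = big + 1;    // int + int stays int: wraps to int.MinValue, then widens to long
long correct = (long)big + 1; // widen an operand first to get true 64-bit arithmetic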