Given the following snippet:
#include <stdio.h>   /* header name assumed; it is missing from the original snippet */
typedef signed long long int64;
typedef signed int int32;
typedef signed char int8;
int main()
{
    /* declarations reconstructed from the answer below; the types and values are assumed */
    int32 a = 1000, b = 2000, c = a * b;    /* the multiply is done in int and stored in an int */
    int32 d = 100000, e = 100000;
    int64 f = d * e;                        /* the multiply is still done in int, then widened to int64 */
    printf("c = %d, f = %lld\n", c, f);
}
a * b
is calculated as an int, and the result is then converted to the type of the receiving variable (which here happens to be int).
d * e
is likewise calculated as an int, and the result is then converted to the type of the receiving variable (which here happens to be int64).
If either of the operands had a type larger than int (or a floating-point type), then that type would have been used for the arithmetic. But since all the types used in these multiplies were int or smaller, the multiplications were done in int.
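If the goal is to get the full 64-bit product into f, the usual remedy is to convert one operand before multiplying, so the arithmetic itself is done in int64 rather than int. A small sketch, assuming a typical platform with 32-bit int and reusing the typedefs from the snippet (the values are only illustrative):

#include <stdio.h>

typedef signed long long int64;
typedef signed int int32;

int main()
{
    int32 d = 100000, e = 100000;   /* illustrative values: the product does not fit in 32 bits */

    int64 f1 = d * e;               /* int * int: overflows (formally undefined), then the wrong value is widened */
    int64 f2 = (int64)d * e;        /* d is converted first, so the multiply itself is done in 64 bits */

    printf("d * e        = %lld\n", f1);
    printf("(int64)d * e = %lld\n", f2);
}

Only one operand needs the conversion; the other is brought to the common type int64 by the usual arithmetic conversions, so the multiply no longer overflows before the assignment.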