I have a pretty basic question, but I am not sure whether I understand the concept. Suppose we have:
int a = 1000000;
int b = 1000000;
long long c = a * b;
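The multiplication a * b is performed in int, so it overflows before the result is ever converted to long long (and signed overflow is undefined behavior in C). A minimal sketch of the usual fix, widening one operand so the whole multiplication happens in 64 bits:

#include <stdio.h>

int main(void) {
    int a = 1000000;
    int b = 1000000;

    /* casting one operand promotes the other, so the multiply is 64-bit */
    long long c = (long long)a * b;

    printf("%lld\n", c); /* prints 1000000000000 */
    return 0;
}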
It's kind of absurd, because the x86 one-operand multiply instruction always computes

int * int -> 64-bit result

so if you look at the machine code, you see an imul that stores the 64-bit product in edx:eax, then a cdq that overwrites edx with copies of the sign bit of eax (thus losing the full 64-bit result), and then edx:eax is stored into the 64-bit variable.
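To make the loss concrete, the same truncation can be replayed with well-defined unsigned arithmetic (a sketch only, assuming 32-bit int; the conversion of low32 back to int is implementation-defined before C23, but gives the two's-complement value on mainstream compilers):

#include <stdio.h>

int main(void) {
    /* replay the wrap-around without invoking signed-overflow UB */
    unsigned long long full = 1000000ULL * 1000000ULL; /* 1000000000000 */
    unsigned int low32 = (unsigned int)full; /* low half kept in eax: 3567587328 */
    int as_int = (int)low32;  /* reinterpreted as signed: -727379968 */
    long long c = as_int;     /* cdq-style sign extension to 64 bits */
    printf("%lld\n", c);      /* prints -727379968 */
    return 0;
}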
And if you convert the 32-bit values to 64 bits before the multiplication, you get a call to a 64-bit multiplication helper function for no reason
(I checked: this does not happen when the code is optimized).
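For reference, the explicitly widened form described above; whether an unoptimized 32-bit build really routes this through a runtime helper depends on the toolchain (libgcc's generic 64-bit multiply is called __muldi3), and, per the observation above, optimized builds reduce it to the single widening imul:

#include <stdio.h>

int main(void) {
    int a = 1000000;
    int b = 1000000;

    /* both operands converted to 64 bits before the multiplication;
       on a 32-bit target an unoptimized build may call a 64-bit
       multiply helper, while optimized builds emit one widening imul */
    long long c = (long long)a * (long long)b;

    printf("%lld\n", c); /* prints 1000000000000 */
    return 0;
}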