Given the following snippet:
#include <stdio.h>
typedef signed long long int64;
typedef signed int int32;
typedef signed char int8;
int main()
{
    int32 a = 100000;        /* illustrative values */
    int32 b = 100000;
    int64 c = a * b;         /* product is computed in int32 and overflows before the assignment */
    printf("%lld\n", c);
    return 0;
}
The problem is that the multiplication is int32 * int32, which is carried out in int32, and only then is the result assigned to an int64. You'd get much the same effect with double d = 3 / 2;, which divides 3 by 2 using integer division and assigns 1.0 to d.
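For instance, here is a minimal sketch of that analogy (my own example, not from the question); it prints 1.000000, not 1.500000:

#include <stdio.h>
int main()
{
    double d = 3 / 2;     /* 3 / 2 is integer division, yielding 1, which is then widened to 1.0 */
    printf("%f\n", d);    /* prints 1.000000 */
    return 0;
}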
You have to pay attention to the type of an expression or subexpression whenever it may matter. That means making sure the operation is carried out in the appropriate type, e.g. by casting one of the multiplicands to int64, or (in my example) writing 3.0 / 2 or (float)3 / 2.
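Applied to the snippet above, a minimal sketch of the cast fix (the variable names and values are just placeholders):

#include <stdio.h>
typedef signed long long int64;
typedef signed int int32;
int main()
{
    int32 a = 100000;
    int32 b = 100000;
    int64 c = (int64)a * b;   /* the cast makes the multiplication happen in int64 */
    printf("%lld\n", c);      /* prints 10000000000 */
    return 0;
}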