With the following code:
#include <stdio.h>

int main() {
    printf("%f\n", multiply(2));
    return 0;
}

float multiply(float n) {
    return n * 2;
}
When I compile it, the compiler complains about multiply. So the invented prototype would be "int multiply(int)", and hence the errors. Is this correct?
Absolutely. This is done for backward compatibility with pre-ANSI C, which lacked function prototypes; everything declared without a type was implicitly int. The compiler compiles your main, creates an implicit declaration of int multiply(int), but when it finds the real definition, it discovers the lie and tells you about it.
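To make that concrete, here is a minimal sketch of the same program with an explicit prototype placed before main, so the compiler never has to invent one (the prototype line is the only addition):

#include <stdio.h>

float multiply(float n);            /* explicit prototype: no implicit int multiply(int) */

int main() {
    printf("%f\n", multiply(2));    /* 2 is converted to 2.0F; prints 4.000000 */
    return 0;
}

float multiply(float n) {
    return n * 2;
}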
How come when I break the code into 2 files it compiles?
The compiler never discovers the lie about the prototype, because it compiles one file at a time: it assumes that multiply takes an int and returns an int in your main, and it finds no contradiction in multiply.c. Running this program produces undefined behavior, though.
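The usual way to let the compiler catch this across translation units is a shared header containing the prototype; a rough sketch, with file names of my own choosing:

/* multiply.h -- shared prototype seen by both translation units */
float multiply(float n);

/* multiply.c */
#include "multiply.h"
float multiply(float n) {
    return n * 2;
}

/* main.c */
#include <stdio.h>
#include "multiply.h"
int main() {
    printf("%f\n", multiply(2));    /* argument is converted to 2.0F; prints 4.000000 */
    return 0;
}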
When I run the program above (the version split into 2 files), the result is that 0.0000 is printed on the screen.
That's the result of the undefined behavior described above. The program will compile and link, but because the compiler thinks that multiply takes an int, it will never convert 2 to 2.0F, and multiply will never find out. Similarly, the incorrect value computed by doubling an int reinterpreted as a float inside your multiply function will be treated as an int again.
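To get a feel for what reinterpreting those bits does, here is a small, well-defined sketch that uses memcpy to view the bit pattern of the int 2 as a float (assuming int and float are both 32 bits wide). The original program's actual behavior is undefined and depends on the calling convention, so this only illustrates the kind of value that can come out:

#include <stdio.h>
#include <string.h>

int main() {
    int i = 2;
    float f;

    memcpy(&f, &i, sizeof f);   /* view the bits of int 2 as a float (assumes 32-bit int and float) */
    printf("%g\n", f);          /* ~2.8e-45, a tiny subnormal */
    printf("%f\n", f * 2);      /* doubling it is still effectively zero: prints 0.000000 */
    return 0;
}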