Exactly what quantdev said. The people who designed the C language thought something along the lines of "hey, let's make the quotient of any two integers an integer, because integers are super useful and getting a float back would mess with your style when you're trying to index an array". So the compiler tosses the remainder into the garbage and you're left with 0 for
1 / 2
To declare that you want your darn double (or float), you'd better make one of the two operands in the division a floating-point value. Thus,
1 / 2.0
and, in context...
not what you want (in fact, passing the int result of 1 / 2 where %f expects a double is undefined behavior):
printf("%f", ( 1 / 2 ) );
what you want:
printf("%f", ( 1 / 2.0 ) );