Why do I need to write 3.14f instead of 3.14 to silence all those warnings? Is there a coherent reason for this?
This is not peculiar to MSVC; it is required by the language standard.
I would suggest that it made sense not to reduce precision unless explicitly requested, so the default type of a floating-point literal is double.

The six significant digits of precision that a single-precision float provides are seldom sufficient for general use. On a modern desktop processor, float is typically a hand-coded optimisation, used where the writer has determined that it is both sufficient and necessary, so it makes sense to require an explicit, visible marker to specify a single-precision literal.