Considering that an `int` will be 4 bytes on a 32-bit system and 8 bytes on a 64-bit system, why isn't `float` treated the same way? Why is `sizeof(double) != sizeof(float)` on a 64-bit system? Considering that the best native integer type is selected when I declare an `int` (which results in higher performance), shouldn't the same happen for `float` (which would also result in a performance increase)?
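For reference, this is the kind of check I ran to compare the sizes on my toolchain (the reported values depend on the compiler and its data model):

```c
#include <stdio.h>

int main(void)
{
    /* Print the sizes the current compiler/data model actually uses. */
    printf("int:    %zu bytes\n", sizeof(int));
    printf("float:  %zu bytes\n", sizeof(float));
    printf("double: %zu bytes\n", sizeof(double));
    return 0;
}
```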
Related question: Is it a bad idea to declare a type `my_float` (pardon the name!) that is `float` on 32-bit systems and `double` on 64-bit systems?
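Something like the following sketch is what I have in mind; the pointer-width check is just one possible way to detect the target, not necessarily the right one:

```c
#include <stdint.h>

/* Hypothetical "native" float type: float on 32-bit targets,
   double on 64-bit targets, using pointer width as a rough
   proxy for the target's "bitness".                          */
#if UINTPTR_MAX > 0xFFFFFFFFu
typedef double my_float;
#else
typedef float my_float;
#endif
```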