I've been told that C types are machine dependent. Today I wanted to verify it.
void legacyTypes()
{
    /* character types */
    char k_char = 'a';
}
Real compilers don't usually take advantage of all the variation allowed by the standard. The requirements in the standard just give a minimum range for each type -- 8 bits for char, 16 bits for short and int, 32 bits for long, and (in C99) 64 bits for long long (and every type in that list must have at least as large a range as the preceding type).
For a real compiler, however, backward compatibility is almost always a major goal. That means they have a strong motivation to change as little as they can get away with. As a result, in practice, there's a great deal more commonality between compilers than the standard requires.
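As a quick illustration of those minimums, they can be written down as compile-time checks (a minimal sketch, assuming a C11 compiler for _Static_assert; the limit macros come from <limits.h>, and LLONG_MAX requires C99):

#include <limits.h>

/* The standard's minimum guarantees, stated as compile-time assertions.
   On a conforming implementation none of these can ever fire. */
_Static_assert(CHAR_BIT >= 8, "char has at least 8 bits");
_Static_assert(SHRT_MAX >= 32767, "short covers at least 16 bits");
_Static_assert(INT_MAX >= 32767, "int covers at least 16 bits");
_Static_assert(LONG_MAX >= 2147483647L, "long covers at least 32 bits");
_Static_assert(LLONG_MAX >= 9223372036854775807LL, "long long covers at least 64 bits");

Any implementation is free to exceed these limits, which is exactly why the sizes differ between compilers.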
What exactly does "variable types are machine dependent" mean?
It means exactly what it says: The sizes of most integral C types are machine-dependent (not really machine so much as architecture and compiler). When I was doing a lot of C in the early 90s, int was mostly 16 bits; now it's mostly 32 bits. Earlier than my C career, it may have been 8 bits. Etc.
Apparently the designers of the C compiler you're using for 64-bit compilation decided int should remain 32 bits. Designers of a different C compiler might make a different choice.
There are a lot more platforms out there, and some of them are 16 or even 8 bit! On these, you would observe much bigger differences in the sizes of all the above types.
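For instance, because the limit macros are compile-time constants, the preprocessor itself can tell you which int width a given compiler uses (a minimal sketch; the constants compared against are the exact maxima of 16- and 32-bit signed int):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* INT_MAX is known to the preprocessor, so the check happens
       before the program even runs. */
#if INT_MAX == 32767
    puts("int is 16 bits on this implementation");
#elif INT_MAX == 2147483647
    puts("int is 32 bits on this implementation");
#else
    puts("int has some other width on this implementation");
#endif
    return 0;
}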
Signed and unsigned versions of the same basic type occupy the same number of bytes on any platform. Their ranges differ, though: a signed type splits its bit patterns between negative and non-negative values, while the unsigned type uses all of them for non-negative values.
E.g. a 16-bit signed int can hold values from -32767 (or -32768 on many platforms) to 32767, while an unsigned int of the same size has the range 0 to 65535.
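The actual ranges on whatever implementation you are compiling for can be read straight from <limits.h> (a minimal sketch):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Ranges of short and int, signed and unsigned, on this implementation. */
    printf("short:          %d .. %d\n", SHRT_MIN, SHRT_MAX);
    printf("unsigned short: 0 .. %u\n", (unsigned)USHRT_MAX);
    printf("int:            %d .. %d\n", INT_MIN, INT_MAX);
    printf("unsigned int:   0 .. %u\n", UINT_MAX);
    return 0;
}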
After this, hopefully you understand the point of the referred question better. If you write a program assuming that, say, your signed int variables can hold the value 2*10^9 (2 billion), your program is not portable: on some platforms (16 bits and below) this value causes an overflow, resulting in silent and hard-to-find bugs. So on a 16-bit platform you would need to #define (or typedef) your ints to be long in order to avoid overflow. This is a simple example, which may not work across all platforms, but I hope it gives you a basic idea.
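A more future-proof way to express the same intent, assuming a C99-or-later toolchain, is to use the fixed-width types from <stdint.h> rather than redefining int yourself:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    /* int32_t is exactly 32 bits wherever the implementation provides it,
       so it can hold 2 * 10^9 even on a 16-bit platform. */
    int32_t big = 2000000000;
    printf("big = %" PRId32 "\n", big);
    return 0;
}

If you only need a minimum width rather than an exact one, int_least32_t and int_fast32_t are always available in C99.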
The reason for all these differences between platforms is that by the time C got standardized, there were already many C compilers in use on a plethora of different platforms, so for backward compatibility all these varieties had to be accepted as valid.
"Machine dependent" is not quite exact. Strictly speaking, these sizes are implementation-defined: they may depend on the compiler, the machine, compiler options and so on.
For example, with Visual C++, long is 32 bits even on 64-bit machines.
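A quick way to see which data model your compiler uses is to compare the sizes of long, long long and pointers (a minimal sketch; a typical LP64 Unix compiler makes long 64 bits, while 64-bit Visual C++ uses LLP64 and keeps long at 32 bits):

#include <stdio.h>

int main(void)
{
    /* LP64  (typical 64-bit Unix):  long = 8, long long = 8, void * = 8
       LLP64 (64-bit Visual C++):    long = 4, long long = 8, void * = 8 */
    printf("sizeof(long)      = %u\n", (unsigned)sizeof(long));
    printf("sizeof(long long) = %u\n", (unsigned)sizeof(long long));
    printf("sizeof(void *)    = %u\n", (unsigned)sizeof(void *));
    return 0;
}

The casts to unsigned keep the printf calls well-defined even on older compilers that do not support the %zu conversion.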
Here is the same kind of test on another implementation -- quite different from what you are used to, but one that is still reachable on the Internet today, even if it is no longer used for general-purpose computing except by retro-computing hobbyists. None of the sizes are the same as yours:
@type sizes.c
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* sizeof yields a size_t, so cast it for the %d conversions */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("sizeof(char) = %d\n", (int)sizeof(char));
    printf("sizeof(short) = %d\n", (int)sizeof(short));
    printf("sizeof(int) = %d\n", (int)sizeof(int));
    printf("sizeof(long) = %d\n", (int)sizeof(long));
    printf("sizeof(float) = %d\n", (int)sizeof(float));
    printf("sizeof(double) = %d\n", (int)sizeof(double));
    return 0;
}
@run sizes.exe
CHAR_BIT = 9
sizeof(char) = 1
sizeof(short) = 2
sizeof(int) = 4
sizeof(long) = 4
sizeof(float) = 4
sizeof(double) = 8
If you were to repeat your test on, say, a Motorola 68000 processor, you'd find you get different results: a word is 16 bits and a long is 32 -- typically an int is one word.