What is the difference between signed and unsigned int?
Sometimes we know in advance that the value stored in a given integer variable will always be positive, for example, when it is used only to count things. In such a case we can declare the variable to be unsigned, as in, unsigned int num_students;. With such a declaration, the range of permissible integer values (for a compiler where int occupies 32 bits) shifts from -2147483648 to +2147483647 to 0 to 4294967295. Thus, declaring an integer as unsigned roughly doubles the largest value it can hold, at the cost of no longer being able to store negative numbers.
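A minimal sketch of the difference, assuming a compiler with 32-bit int; the variable names temperature and num_students are only illustrative, and the actual limits are taken from the standard header <limits.h>:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int temperature = -40;            /* signed: may hold negative values      */
    unsigned int num_students = 2500; /* unsigned: non-negative counts only    */

    /* Ranges reported by <limits.h> for this compiler */
    printf("signed int   : %d to %d\n", INT_MIN, INT_MAX);
    printf("unsigned int : 0 to %u\n", UINT_MAX);

    printf("temperature  = %d\n", temperature);   /* printed with %d */
    printf("num_students = %u\n", num_students);  /* printed with %u */

    return 0;
}

On a typical 32-bit-int compiler this prints the signed range -2147483648 to 2147483647 and the unsigned range 0 to 4294967295, matching the figures mentioned above.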