I am reading a C book that discusses the ranges of floating-point types, and the author gives this table:
Type     Smallest Positive Value   Largest Value    Precision
==============================================================
float    1.17549435E-38            3.40282347E38    6 digits
double   2.22507386E-308           1.79769313E308   15 digits
The values for the float data type come from having 32 bits in total to represent the number, allocated like this:
1 bit: sign bit
8 bits: exponent p
23 bits: mantissa
The exponent is stored as p + BIAS, where BIAS is 127. The mantissa has 23 stored bits plus a hidden 24th bit that is assumed to be 1. This hidden bit is the most significant bit (MSB) of the mantissa, and for normalized numbers the exponent is chosen so that this leading bit is 1, which is why it never needs to be stored.
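To make the layout concrete, here is a minimal sketch (assuming IEEE-754 single precision and that unsigned is at least 32 bits, which holds on all mainstream platforms) that copies a float's bits into an integer and splits out the three fields:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 1.0f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);              /* safe way to reinterpret the bits */

    unsigned sign     = bits >> 31;              /* 1 bit                          */
    unsigned exponent = (bits >> 23) & 0xFFu;    /* 8 bits, stored as p + 127      */
    unsigned mantissa = bits & 0x7FFFFFu;        /* 23 bits, hidden bit not stored */

    printf("sign=%u exponent=%u (p=%d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    /* for 1.0f this prints: sign=0 exponent=127 (p=0) mantissa=0x000000 */
    return 0;
}
```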
This means that the smallest positive normalized number you can represent is 0 00000001 00000000000000000000000 (sign 0, exponent field 1, mantissa 0), which is 1 x 2^(1-127) = 1 x 2^-126 = 1.17549435E-38.
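You can check this by building the bit pattern directly; a quick sketch using the same memcpy trick (the pattern above is 0x00800000) should print the same value as FLT_MIN from <float.h>:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <float.h>

int main(void)
{
    uint32_t bits = 0x00800000u;   /* 0 00000001 000...0: exponent field 1, mantissa 0 */
    float f;
    memcpy(&f, &bits, sizeof f);
    printf("%.8e\n", f);           /* prints 1.17549435e-38 */
    printf("%.8e\n", FLT_MIN);     /* the same value, straight from <float.h> */
    return 0;
}
```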
The largest finite value is 0 11111110 11111111111111111111111 (exponent field 254, all mantissa bits set): the mantissa is 2 x (1 - 2^-24) = 2 - 2^-23 and the exponent is p = 254 - 127 = 127, which gives (2 - 2^-23) x 2^127 = (1 - 2^-24) x 2^128 = 3.40282347E38. (The all-ones exponent field 255 is reserved for infinities and NaNs, which is why the largest exponent field is 254.)
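The same check works for the largest finite value, whose bit pattern is 0x7F7FFFFF; it should match FLT_MAX:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <float.h>

int main(void)
{
    uint32_t bits = 0x7F7FFFFFu;   /* 0 11111110 111...1: exponent field 254, mantissa all ones */
    float f;
    memcpy(&f, &bits, sizeof f);
    printf("%.8e\n", f);           /* prints 3.40282347e+38 */
    printf("%.8e\n", FLT_MAX);     /* matches <float.h> */
    return 0;
}
```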
The same principles apply to double precision, except the bits are (a quick numeric check follows the list):
1 bit: sign bit
11 bits: exponent bits
52 bits: mantissa bits
BIAS: 1023
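As a sketch of the same arithmetic for double (assuming IEEE-754 binary64, so the normalized exponent p runs from -1022 to 1023), ldexp from <math.h> applies the powers of two directly, and both results should match the <float.h> constants:

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void)
{
    double smallest = ldexp(1.0, -1022);                   /* 1 * 2^-1022 */
    double largest  = ldexp(2.0 - ldexp(1.0, -52), 1023);  /* (2 - 2^-52) * 2^1023 */

    printf("%.17e vs DBL_MIN %.17e\n", smallest, DBL_MIN);
    printf("%.17e vs DBL_MAX %.17e\n", largest,  DBL_MAX);
    return 0;
}
```

(On some systems you may need to link with -lm.)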
So, technically, the limits come from the IEEE-754 standard for representing floating-point numbers, and the above is how those limits come about.
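In practice you don't have to derive the limits by hand: <float.h> exposes them directly, and its FLT_DIG and DBL_DIG constants (6 and 15) are where the "Precision" column of such tables comes from:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("float : %e .. %e, %d decimal digits\n", FLT_MIN, FLT_MAX, FLT_DIG);
    printf("double: %e .. %e, %d decimal digits\n", DBL_MIN, DBL_MAX, DBL_DIG);
    return 0;
}
```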