I'm reading "Understanding the Linux Kernel".
Paging for 64-bit Architectures
As we have seen in the previous sections,
"If we now decide to use only 48 of the 64 bits for addressing". Why? & Why only 48bits? Why not some other number?
System architects make tradeoffs. 256TB seems like more than enough room for 1 process's address space. Remember virtual address != physical address, and generally speaking, each process has its own address space.
As long as pointers are 64 bits wide, this is more of a capability issue than anything else. If and when 48 bits becomes a limitation, the OS can be tweaked to use more bits of the 64-bit address space without breaking application compatibility. For now, the architects are just buying themselves a very comfortable amount of time.
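(A quick sanity check on that 256TB figure; a throwaway C snippet, assuming nothing beyond a 48-bit virtual address space:)

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t va_space = 1ULL << 48;   /* bytes addressable with 48 bits */
        printf("%llu bytes = %llu TiB\n",
               (unsigned long long)va_space,
               (unsigned long long)(va_space >> 40));  /* 2^48 / 2^40 = 256 TiB */
        return 0;
    }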
It may have to do with processor-side virtual addressing capabilities, as many processors now have memory management units to handle the virtual -> physical memory mapping.
How can I find the number of address pins (i.e. the size of the address bus) for a processor? The specifications on http://ark.intel.com don't include this.
This is for the most part irrelevant. It's a way for a processor to implement various physical addressing schemes. A 64-bit processor could achieve external address/data buses for its complete address space with 64, 32, 16, 8, 4, 2, or 1 address pin if the bus is synchronous and the address bits get multiplexed in time. Again, virtual address != physical address; 64-bit virtual addressing could be implemented with 48-bit or 32-bit physical addresses (just that you would be limited to 2^48 or 2^32 words of memory).
update: if you really want to know, you have to look at the datasheet of each processor in question. E.g. for the Intel Core 2 Duo, section 4.2 of the datasheet describes the signals: the address bus is 36 bits wide (but really only 33 signal lines; since the data width is 64 bits = 8 bytes, the low 3 address lines are probably unnecessary with proper data alignment).
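If what you're actually after is the address widths the core supports (rather than the pin count), CPUID leaf 0x80000008 reports them directly. A minimal sketch using GCC/Clang's <cpuid.h> (x86 only):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned eax, ebx, ecx, edx;
        /* Leaf 0x80000008: EAX[7:0] = physical address bits,
         *                  EAX[15:8] = virtual (linear) address bits */
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
            return 1;
        }
        printf("physical: %u bits, virtual: %u bits\n",
               eax & 0xff, (eax >> 8) & 0xff);
        return 0;
    }

On Linux you can also just grep "address sizes" in /proc/cpuinfo, which prints the same two numbers.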
Well, I'm just a regular PC user and programmer. It's just hard for me to believe that 32-bit addressing, i.e. a 4GB (2GB/3GB to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.
Two words: memory-mapped files.
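To make that concrete, here's a hedged sketch of mapping one large file into memory (the path and size are made up). With a 32-bit address space a mapping of this size simply cannot fit, regardless of how much RAM is installed, while a 48-bit virtual address space has room to spare:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void)
    {
        const char *path = "/data/huge-dataset.bin";   /* hypothetical ~6 GB file */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file at once. In a 32-bit process a ~6 GB mapping
         * fails because it exceeds the address space, not the RAM. */
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* ... random-access the file as ordinary memory ... */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }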
I think the simplest answer is Moore's law.
Moore's law basically says that ICs halve in cost every 18 months. There are a couple of ways of interpreting this: the amount of memory installed in a PC tends to double every 18 months, and the effective speed doubles (at least if you take cores * MHz rather than just MHz).
Anyway, we've just about run out of 32-bit address space, so a jump from 32 to 48 bits means that, on the hardware side, we've allocated expansion room for about 16 iterations of Moore's law (16 extra bits = 16 doublings at roughly 18 months each), which works out to over 20 years.
I'm pretty sure that while some PCs might be pushed to the 10-year mark, 20 years of expansion headroom seems a decent tradeoff: computers in 20 years' time are going to be different; they won't be using the same CPUs and RAM buses, just as they were different 20 years ago. Designing more than 20 years' worth of headroom into an interface is just over-engineering that's never going to see use anyway.
And it's not so short that existing hardware runs a real risk of being obsoleted too soon.
It's just hard for me to believe that 32-bit addressing, i.e. a 4GB (2GB/3GB to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.
It's more efficient (quicker) to get data from RAM than to get it from disk.
The speed of SQL server depends partly on how much data (e.g. how many of its index and data pages) it's able to keep in RAM instead of on disk.
So, SQL databases (for example) may be faster on machines with more than 4GB of RAM.
The same is true for other types of server (e.g. file servers, HTTP proxies, etc.), which can be faster if they can have larger RAM caches.
None of these answers are quite right. The reason OSs don't use the full 64 bits is that the page tables would be far larger (48-bit x86-64 already needs four levels of page tables), and there's no reason to pay for the extra levels of indirection when 48 bits is enough. 48 bits is also convenient because it leaves spare upper bits to store flags in (pointer tagging).
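A hedged sketch of that pointer-tagging trick (assuming 48-bit canonical addresses on x86-64; the masks and sign-extension step are illustrative, not from any particular library):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_SHIFT 48
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

    /* Pack a small tag into the otherwise-unused upper 16 bits. */
    static uint64_t tag_ptr(void *p, uint16_t tag)
    {
        return ((uint64_t)tag << TAG_SHIFT) | ((uint64_t)(uintptr_t)p & ADDR_MASK);
    }

    static uint16_t get_tag(uint64_t tagged)
    {
        return (uint16_t)(tagged >> TAG_SHIFT);
    }

    /* Strip the tag and sign-extend bit 47 to restore a canonical address. */
    static void *untag_ptr(uint64_t tagged)
    {
        int64_t addr = (int64_t)(tagged << 16) >> 16;
        return (void *)(uintptr_t)addr;
    }

    int main(void)
    {
        int *x = malloc(sizeof *x);
        *x = 42;
        uint64_t t = tag_ptr(x, 0x7);
        printf("tag=%u value=%d\n", get_tag(t), *(int *)untag_ptr(t));
        free(x);
        return 0;
    }

The point is that the tag must be masked off (and the address made canonical again) before every dereference; if the OS ever started handing out addresses above 2^48, code like this would break, which is another reason the cut-off is chosen conservatively.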
It's just hard for me to believe that 32-bit addressing, i.e. a 4GB (2GB/3GB to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.
It doesn't exist any more (except on some old employees' personal machines), but I worked on a suite of software called RealiMation back in the late 1990s/early 2000s. It was a real-time 3D engine for visualisation and simulation. One of our customers regularly created highly detailed models that hit the 2GB memory limit. We would load textures on the fly as needed, and had to add code to check for memory-allocation failure so we could continue displaying the model, albeit untextured.
No current x86-64 design uses more than 48 bits for this -- so it's a convenient number to pick, and it's automatically the same limit on Windows, too.