Why can't OS use entire 64-bits for addressing? Why only the 48-bits?

Submitted on 2019-12-18 02:15:29

Question


I'm reading "Understanding the Linux Kernel".

Paging for 64-bit Architectures

As we have seen in the previous sections, two-level paging is commonly used by 32-bit microprocessors. Two-level paging, however, is not suitable for computers that adopt a 64-bit architecture. Let's use a thought experiment to explain why:

Start by assuming a standard page size of 4 KB. Because 1 KB covers a range of 2^10 addresses, 4 KB covers 2^12 addresses, so the Offset field is 12 bits. This leaves up to 52 bits of the linear address to be distributed between the Table and the Directory fields. If we now decide to use only 48 of the 64 bits for addressing (this restriction leaves us with a comfortable 256 TB address space!), the remaining 48 − 12 = 36 bits will have to be split among the Table and the Directory fields. If we now decide to reserve 18 bits for each of these two fields, both the Page Directory and the Page Tables of each process should include 2^18 entries, that is, more than 256,000 entries.
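
To make the excerpt's arithmetic concrete, here is a minimal C sketch (my own illustration, not from the book; the 18/18/12 field split is the one the excerpt proposes) that decomposes a 48-bit linear address into Directory, Table, and Offset fields:

    #include <stdint.h>
    #include <stdio.h>

    /* Field split from the excerpt's thought experiment:
     * 48-bit linear address = 18-bit Directory | 18-bit Table | 12-bit Offset */
    #define OFFSET_BITS 12
    #define TABLE_BITS  18
    #define DIR_BITS    18

    int main(void) {
        uint64_t addr = 0x0000123456789ABCULL;  /* an arbitrary 48-bit address */

        uint64_t offset = addr & ((1ULL << OFFSET_BITS) - 1);
        uint64_t table  = (addr >> OFFSET_BITS) & ((1ULL << TABLE_BITS) - 1);
        uint64_t dir    = (addr >> (OFFSET_BITS + TABLE_BITS)) & ((1ULL << DIR_BITS) - 1);

        /* Each upper field indexes 2^18 = 262,144 entries -- the
         * "more than 256,000 entries" the excerpt complains about. */
        printf("dir=%llu table=%llu offset=%llu entries/level=%llu\n",
               (unsigned long long)dir, (unsigned long long)table,
               (unsigned long long)offset, 1ULL << DIR_BITS);
        return 0;
    }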

  1. "If we now decide to use only 48 of the 64 bits for addressing". Why? & Why only 48 bits? Why not some other number?

  2. Well, I'm just a regular PC user and programmer. It's just hard for me to believe that 32-bit addressing, i.e. a 4 GB (2 GB/3 GB, to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.

  3. What is this limit for Windows?

  4. I know that virtual memory != physical memory, and that a processor's address pins have nothing to do with virtual memory. This is a completely different question: how do I find the number of address pins (= the width of the address bus) for a processor? The specifications on http://ark.intel.com don't include this.

Answer:

See Paul Betts's answer (Answer 1 below) for a reasonable answer to the first question.


Answer 1:


"If we now decide to use only 48 of the 64 bits for addressing". Why? & Why only 48bits? Why not some other number?

System architects make tradeoffs. 256 TB seems like more than enough room for one process's address space. Remember that virtual address != physical address and, generally speaking, each process has its own address space.

As long as pointers are 64 bits, this is more a performance/capacity issue than anything else. If and when 48 bits becomes a limitation, the OS can be tweaked to use more bits of the 64-bit address space without breaking application compatibility. For now, the architects are just buying themselves a very comfortable amount of time.
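
As a concrete aside on "tweaking the OS later": x86-64 enforces the 48-bit limit today through canonical addresses, where bits 63..48 must be copies of bit 47, so programs cannot come to depend on the unused upper bits. A minimal sketch of the check (my own helper, not from the answer; it assumes signed right shift is arithmetic, as on GCC, Clang, and MSVC):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* x86-64 "canonical address" rule on 48-bit implementations: bits
     * 63..48 must be copies of bit 47. Non-canonical addresses fault,
     * which keeps programs from stashing data in the upper bits and
     * lets a later OS/CPU widen the usable address space. */
    static bool is_canonical_48(uint64_t va) {
        /* Sign-extend bit 47 upward; assumes arithmetic right shift. */
        int64_t s = (int64_t)(va << 16) >> 16;
        return (uint64_t)s == va;
    }

    int main(void) {
        assert(is_canonical_48(0x00007FFFFFFFFFFFULL));  /* top of user half    */
        assert(is_canonical_48(0xFFFF800000000000ULL));  /* base of kernel half */
        assert(!is_canonical_48(0x0000800000000000ULL)); /* inside the hole     */
        return 0;
    }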

It may have to do with processor-side virtual addressing capabilities, as many processors now have memory management units to handle the virtual -> physical memory mapping.

How do I find the number of address pins (= the width of the address bus) for a processor? The specifications on http://ark.intel.com don't include this.

This is for the most part irrelevant; it's up to the processor how it implements its physical addressing scheme. A 64-bit processor could serve its complete address space over external address/data buses with 64, 32, 16, 8, 4, 2, or even 1 address pin, if the bus is synchronous and the address bits are multiplexed in time. Again, virtual address != physical address: 64-bit virtual addressing could be implemented with 48-bit or 32-bit physical addresses (you would just be limited to 2^48 or 2^32 words of physical memory).

Update: if you really want to know, you have to look at the datasheet of each processor in question. E.g. for the Intel Core 2 Duo, section 4.2 of the datasheet describes the signals: the address bus is 36 bits wide (but is really 33 signal lines; the data width is 64 bits = 8 bytes, so the other 3 lines are probably unnecessary given proper data alignment).
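
On x86 there is also a software-visible way to ask the CPU how many address bits it implements, without digging through datasheets: CPUID leaf 0x80000008 reports the physical and linear (virtual) address widths. A sketch using GCC/Clang's <cpuid.h> wrapper (note this reports addressable bits, not the number of pins on the package):

    #include <stdio.h>
    #include <cpuid.h>  /* GCC/Clang wrapper for the CPUID instruction */

    /* CPUID leaf 0x80000008: EAX[7:0] = physical address bits,
     * EAX[15:8] = linear (virtual) address bits. Reports what the core
     * can address, not how many pins are on the package. */
    int main(void) {
        unsigned eax, ebx, ecx, edx;
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 0x80000008 not supported\n");
            return 1;
        }
        printf("physical address bits: %u\n", eax & 0xFF);
        printf("virtual  address bits: %u\n", (eax >> 8) & 0xFF);
        return 0;
    }

On a Core 2 class part this should report 36 physical / 48 virtual bits, consistent with the datasheet figure above.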

Well, I'm just a regular PC user and programmer. It's just hard for me to believe that 32-bit addressing, i.e. a 4 GB (2 GB/3 GB, to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.

Two words: memory-mapped files.
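
To unpack that: mapping a file claims a contiguous range of the process's virtual address space, so a 32-bit process cannot map a file larger than its usable 2-3 GB in one piece, however much RAM the machine has. A minimal POSIX sketch (my own illustration):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a whole file into the address space. A 32-bit process cannot
     * map a file larger than its ~2-3 GB of usable virtual address
     * space in one piece, regardless of installed RAM. */
    int main(int argc, char **argv) {
        if (argc != 2) return 1;
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }  /* fails on 32-bit for huge files */

        printf("mapped %lld bytes at %p\n", (long long)st.st_size, p);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }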




Answer 2:


None of these answers is right. The reason OSes don't use the full 64 bits is that the page tables would be far larger (48-bit x86-64 already needs four levels of page tables), and there's no reason to pay for the extra levels of indirection; 48 bits is enough. 48 bits is also convenient because it leaves some spare bits to store flags in (pointer tagging; see the sketch below).
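
A minimal sketch of the pointer-tagging trick mentioned here (my own helpers, not from the answer; it assumes user-space pointers with bit 47 clear, which holds on mainstream x86-64 OSes, so stripping the tag restores a canonical pointer):

    #include <assert.h>
    #include <stdint.h>

    /* Pointer tagging: with only 48 significant address bits, the top
     * 16 bits of a user-space pointer are free to carry a tag -- as
     * long as the tag is stripped before dereferencing. Assumes user
     * pointers have bit 47 clear (true on mainstream x86-64 OSes). */
    #define TAG_SHIFT 48

    static inline uint64_t tag_ptr(void *p, uint16_t tag) {
        return (uint64_t)(uintptr_t)p | ((uint64_t)tag << TAG_SHIFT);
    }

    static inline void *untag_ptr(uint64_t tagged) {
        return (void *)(uintptr_t)(tagged & ((1ULL << TAG_SHIFT) - 1));
    }

    static inline uint16_t ptr_tag(uint64_t tagged) {
        return (uint16_t)(tagged >> TAG_SHIFT);
    }

    int main(void) {
        int x = 42;
        uint64_t t = tag_ptr(&x, 0xBEEF);
        assert(untag_ptr(t) == (void *)&x);
        assert(ptr_tag(t) == 0xBEEF);
        return 0;
    }

Dereferencing a still-tagged pointer faults (it's non-canonical), which is also why the OS can't silently start handing out wider addresses to programs that play this trick.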




Answer 3:


No current x86-64 design uses more than 48 bits for this -- so it's a convenient number to pick, and it's automatically the same limit on Windows, too.




Answer 4:


It's just hard for me to believe that 32-bit addressing, i.e. a 4 GB (2 GB/3 GB, to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.

It's more efficient (quicker) to get data from RAM than to get it from disk.

The speed of SQL Server depends partly on how much data (e.g. how many of its index and data pages) it's able to keep in RAM instead of on disk.

So, SQL databases (for example) may be faster on machines with more than 4GB of RAM.

The same is true for other types of server (e.g. file servers, HTTP proxies, etc.), which can be faster if they can have larger RAM caches.




Answer 5:


I think the simplest answer is Moore's law.

Moore's law basically says that ICs halve in cost every 18 months. There are a couple of ways of interpreting this: the amount of memory installed in a PC tends to double every 18 months, and the effective speed doubles (at least if you count cores × MHz rather than just MHz).

Anyway, we've just really run out of 32-bit address space, so the jump from 32 to 48 bits means that, on the hardware side, we've allocated expansion space for about 16 iterations of Moore's law (16 extra address bits = 16 doublings, at roughly 18 months each), which works out to about 20 years.

I'm pretty sure that while some PCs might be pushed to the 10-year mark, 20 years of expansion headroom seems a decent tradeoff: computers in 20 years' time are going to be different -- they won't be using the same CPUs and RAM buses, just as they were different 20 years ago. Designing more than 20 years' worth of headroom into an interface is just silly over-engineering that's never going to see use anyway.

And it's not so short that existing hardware runs a real risk of being obsoleted too soon.




Answer 6:


It's just hard for me to believe that 32-bit addressing, i.e. a 4 GB (2 GB/3 GB, to be more precise) address space per process, is a limit. If you have really encountered this limit, please give me an example.

It doesn't exist any more (except on some old employees' personal machines), but I worked on a suite of software called RealiMation back in the late 1990s/early 2000s. It was a real-time 3D engine for visualisation and simulation. One of our customers regularly created highly detailed models that hit the 2 GB memory limit. We would load textures on the fly as and when needed, and had to add code to check for memory allocation failure so we could continue displaying the model, albeit untextured.




Answer 7:


From a hardware perspective, another consideration is alignment.

Once you need a data type of more than 4 bytes, say 6, you need to put it on an 8-byte boundary to retrieve it in a single instruction. If you don't align, you need to do bit masking and shifting, and add checks for this in the (assembly) code.

Many people were annoyed, when switching to 64-bit, that their programs consumed so much more memory. They would have preferred 48-bit pointers, and if the alignment restrictions weren't there, the CPU makers probably would have built a 48-bit architecture.

Note that if you are so starved for memory that you want your pointers to be 6 bytes, there are ways to do that (see the sketch below), but there is a penalty in execution time.
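
For illustration, here is one such way, as a sketch only (it assumes a little-endian machine and user-space pointers whose top 16 bits are zero): pack the 48 significant bits into a 6-byte struct and pay for extra byte copies on every access.

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical 6-byte pointer: keeps only the 48 significant bits
     * of an x86-64 user pointer. Saves 2 bytes per pointer, but every
     * access costs extra byte copies -- the execution-time penalty
     * mentioned above. Assumes little-endian, top 16 bits zero. */
    typedef struct { uint8_t b[6]; } ptr48;

    static inline ptr48 pack48(void *p) {
        ptr48 r;
        uint64_t v = (uint64_t)(uintptr_t)p;
        memcpy(r.b, &v, 6);        /* little-endian: low 6 bytes = bits 0..47 */
        return r;
    }

    static inline void *unpack48(ptr48 r) {
        uint64_t v = 0;
        memcpy(&v, r.b, 6);        /* upper 16 bits stay zero */
        return (void *)(uintptr_t)v;
    }

    int main(void) {
        int x = 7;
        assert(unpack48(pack48(&x)) == (void *)&x);
        return 0;
    }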



Source: https://stackoverflow.com/questions/3219562/why-cant-os-use-entire-64-bits-for-addressing-why-only-the-48-bits
