virtual-memory

In what circumstances can large pages produce a speedup?

ⅰ亾dé卋堺 submitted on 2019-11-27 02:40:37
Question: Modern x86 CPUs have the ability to support larger page sizes than the legacy 4K (i.e. 2 MB or 4 MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where…
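One of the Linux facilities alluded to is the MAP_HUGETLB flag to mmap(). Below is a minimal sketch, assuming a kernel with hugetlbfs support and huge pages already reserved (e.g. via /proc/sys/vm/nr_hugepages); the 2 MB size and the surrounding code are illustrative only, not taken from the question:

```c
/* Minimal sketch: request a 2 MB huge-page-backed mapping on Linux.
 * Assumes huge pages have been reserved beforehand, e.g.
 *   echo 64 > /proc/sys/vm/nr_hugepages
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LEN (2UL * 1024 * 1024)   /* one 2 MB huge page */

int main(void) {
    void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");   /* e.g. no huge pages reserved */
        return 1;
    }
    memset(p, 0, LEN);                 /* touch the region so it is faulted in */
    printf("huge-page mapping at %p\n", p);
    munmap(p, LEN);
    return 0;
}
```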

What are the differences between virtual memory and physical memory?

99封情书 submitted on 2019-11-27 02:34:53
I am often confused by the concept of virtualization in operating systems. Considering RAM as the physical memory, why do we need virtual memory to execute a process? Where does this virtual memory stand when the process (program) is brought from the external hard drive into main memory (physical memory) for execution? Who takes care of the virtual memory, and what is the size of the virtual memory? Suppose the size of the RAM is 4 GB (i.e. 2^32 addresses); what is the size of the virtual memory? Virtual memory is, among other things, an abstraction to give the programmer…
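As a small illustration of the abstraction being described (a sketch, not part of the original answer): after fork(), two processes can use the very same virtual address while holding different values, because each address space is backed by different physical pages.

```c
/* Sketch: the same virtual address in two processes maps to different
 * physical memory. Parent and child print the same &x but different x. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int x = 1;
    pid_t pid = fork();
    if (pid == 0) {            /* child: its own copy of the address space */
        x = 42;
        printf("child : &x = %p, x = %d\n", (void *)&x, x);
    } else {                   /* parent: unchanged copy */
        wait(NULL);
        printf("parent: &x = %p, x = %d\n", (void *)&x, x);
    }
    return 0;
}
```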

How do x86 page tables work?

為{幸葍}努か submitted on 2019-11-27 00:23:42
Question: I'm familiar with the MIPS architecture, which has a software-managed TLB, so how and where you (the operating system) store the page tables and page table entries is completely up to you. For example, I did a project with a single inverted page table; I saw others using 2-level page tables per process. But what's the story with x86? From what I know, the TLB is hardware-managed. Does x86 basically tell you, "Hey, this is where the page table entries you're currently using…
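To make the "hardware-managed" part concrete, here is a small sketch (not from the original question) of how the CPU's page walker splits a 32-bit virtual address under classic non-PAE two-level x86 paging; the example address is arbitrary:

```c
/* Sketch: classic 32-bit x86 paging splits a virtual address as
 *   bits 31-22  page directory index (1024 entries)
 *   bits 21-12  page table index     (1024 entries)
 *   bits 11-0   offset within the 4 KB page
 * CR3 holds the physical address of the page directory; the table format
 * is fixed by the CPU, unlike the software-managed MIPS TLB. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t vaddr = 0x08048123;                 /* example virtual address */
    uint32_t pde   = (vaddr >> 22) & 0x3FF;      /* page directory index */
    uint32_t pte   = (vaddr >> 12) & 0x3FF;      /* page table index */
    uint32_t off   =  vaddr        & 0xFFF;      /* byte offset in page */
    printf("vaddr 0x%08x -> PDE %u, PTE %u, offset 0x%03x\n",
           vaddr, pde, pte, off);
    return 0;
}
```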

Why do x86-64 systems have only a 48 bit virtual address space?

走远了吗. submitted on 2019-11-26 23:55:16
In a book I read the following: "32-bit processors have 2^32 possible addresses, while current 64-bit processors have a 48-bit address space." My expectation was that if it's a 64-bit processor, the address space should also be 2^64. So I was wondering, what is the reason for this limitation? Because that's all that's needed. 48 bits give you an address space of 256 terabytes. That's a lot; you're not going to see a system which needs more than that any time soon. So CPU manufacturers took a shortcut: they use an instruction set which allows a full 64-bit address space, but current CPUs only…
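A quick sketch of what the 48-bit limit means in practice: 2^48 bytes is 256 TiB, and an address is only usable ("canonical") when its upper 16 bits are copies of bit 47. The helper below is illustrative, not an OS or CPU API:

```c
/* Sketch: x86-64 canonical-address check. Bits 48..63 must be a sign
 * extension of bit 47, giving two usable 128 TiB halves (256 TiB total). */
#include <stdint.h>
#include <stdio.h>

static int is_canonical(uint64_t va) {
    /* Sign-extend from bit 47 and compare with the original value. */
    int64_t extended = (int64_t)(va << 16) >> 16;
    return (uint64_t)extended == va;
}

int main(void) {
    printf("2^48 bytes = %llu TiB\n",
           (unsigned long long)((1ULL << 48) >> 40));               /* 256 */
    printf("0x00007fffffffffff canonical? %d\n", is_canonical(0x00007fffffffffffULL));
    printf("0xffff800000000000 canonical? %d\n", is_canonical(0xffff800000000000ULL));
    printf("0x0000800000000000 canonical? %d\n", is_canonical(0x0000800000000000ULL));
    return 0;
}
```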

64 bit large mallocs

回眸只為那壹抹淺笑 submitted on 2019-11-26 22:51:05
What are the reasons a malloc() would fail, especially in 64-bit? My specific problem is trying to malloc a huge 10 GB chunk of RAM on a 64-bit system. The machine has 12 GB of RAM and 32 GB of swap. Yes, the malloc is extreme, but why would it be a problem? This is on Windows XP64 with both the Intel and MSFT compilers. The malloc sometimes succeeds and sometimes doesn't, about 50% of the time. 8 GB mallocs always work; 20 GB mallocs always fail. If a malloc fails, repeated requests won't work unless I quit the process and start a fresh process (which will then have the 50% shot at success). No other big…
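Whatever the underlying cause (commit limit, fragmentation of the address space, etc.), the failure is only visible if the return value is checked. A minimal sketch of making and verifying such a request; the 10 GB figure mirrors the question, and support for the %zu format is assumed:

```c
/* Sketch: a huge allocation can fail even on a 64-bit OS, e.g. when the
 * system commit limit (RAM + swap) is exhausted. Always check malloc(). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t want = 10ULL * 1024 * 1024 * 1024;   /* 10 GB */
    char *p = malloc(want);
    if (p == NULL) {
        fprintf(stderr, "malloc(%zu bytes) failed\n", want);
        return 1;
    }
    memset(p, 0, want);   /* touching the memory forces it to be backed */
    puts("allocation succeeded");
    free(p);
    return 0;
}
```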

Unexpected page handling (also, VirtualLock = no op?)

青春壹個敷衍的年華 submitted on 2019-11-26 18:27:05
Question: This morning I stumbled across a surprising number of page faults where I did not expect them. Yes, I probably should not worry, but it still strikes me as odd, because in my understanding they should not happen, and I'd like it better if they didn't. The application (under WinXP Pro 32-bit) reserves a larger section (1 GB) of address space with VirtualAlloc(MEM_RESERVE) and later allocates moderately large blocks (20-50 MB) of memory with VirtualAlloc(MEM_COMMIT). This is done in a worker ahead of…
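A minimal sketch of the reserve-then-commit pattern the question describes (the 1 GB / 32 MB sizes mirror the question; error handling is abbreviated). Reserving consumes no physical memory; committing charges against the commit limit, but pages are still only faulted in, zero-filled, on first access:

```c
/* Sketch: reserve a large region of address space up front, then commit
 * pieces of it on demand. First touch of a committed page still causes a
 * (soft) page fault. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    SIZE_T reserveSize = (SIZE_T)1 << 30;        /* 1 GB of address space */
    SIZE_T commitSize  = 32 * 1024 * 1024;       /* 32 MB block */

    char *base = VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) { printf("reserve failed: %lu\n", GetLastError()); return 1; }

    char *block = VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE);
    if (!block) { printf("commit failed: %lu\n", GetLastError()); return 1; }

    block[0] = 1;                                /* first touch -> demand-zero fault */

    VirtualFree(base, 0, MEM_RELEASE);           /* releases the whole reservation */
    return 0;
}
```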

Retrieving the memory map of its own process in OS X 10.5/10.6

流过昼夜 submitted on 2019-11-26 16:18:25
Question: In Linux, the easiest way to look at a process' memory map is to look at /proc/PID/maps, which gives something like this:
08048000-08056000 r-xp 00000000 03:0c 64593 /usr/sbin/gpm
08056000-08058000 rw-p 0000d000 03:0c 64593 /usr/sbin/gpm
08058000-0805b000 rwxp 00000000 00:00 0
40000000-40013000 r-xp 00000000 03:0c 4165 /lib/ld-2.2.4.so
40013000-40015000 rw-p 00012000 03:0c 4165 /lib/ld-2.2.4.so
4001f000-40135000 r-xp 00000000 03:0c 45494 /lib/libc-2.2.4.so
40135000-4013e000 rw-p 00115000 03:0c …
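For the Linux half of the comparison, a process can read its own map simply by opening /proc/self/maps; on OS X there is no procfs, so the usual route is the Mach VM API or the vmmap command-line tool. A small sketch of the Linux side:

```c
/* Sketch: print the current process's own memory map on Linux. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, f))   /* one mapping per line */
        fputs(line, stdout);

    fclose(f);
    return 0;
}
```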

Windows - Commit Size vs Virtual Size

人走茶凉 submitted on 2019-11-26 15:39:42
Question: I would like to know the exact difference between Commit Size (visible in the Task Manager) and Virtual Size (visible in SysInternals' Process Explorer). The Virtual Size parameter in Process Explorer looks like a more accurate indicator of total virtual memory usage by a process. However, the Commit Size is always smaller than the Virtual Size, and I guess it does not include all virtual memory in use by the process. I would like somebody to explain what exactly is included in these…
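One way to watch the two numbers diverge is to query the process's own commit charge (PrivateUsage from GetProcessMemoryInfo) while reserving and then committing memory. This sketch assumes the rough mapping "Commit Size ≈ private committed bytes", which is an interpretation, not an official definition; reserved-but-uncommitted regions and shared mappings count toward virtual size only:

```c
/* Sketch: MEM_RESERVE grows virtual size but barely touches the commit
 * charge; MEM_COMMIT grows the commit charge too.
 * Link with psapi.lib; PrivateUsage needs PROCESS_MEMORY_COUNTERS_EX. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static SIZE_T private_usage(void) {
    PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof pmc };
    GetProcessMemoryInfo(GetCurrentProcess(),
                         (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof pmc);
    return pmc.PrivateUsage;
}

int main(void) {
    printf("commit charge at start : %zu KB\n", private_usage() / 1024);

    void *r = VirtualAlloc(NULL, 256 << 20, MEM_RESERVE, PAGE_NOACCESS);
    printf("after reserving 256 MB : %zu KB (barely changes)\n",
           private_usage() / 1024);

    VirtualAlloc(r, 256 << 20, MEM_COMMIT, PAGE_READWRITE);
    printf("after committing 256 MB: %zu KB (jumps by ~256 MB)\n",
           private_usage() / 1024);

    VirtualFree(r, 0, MEM_RELEASE);
    return 0;
}
```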

Difference between physical/logical/virtual memory address

纵然是瞬间 submitted on 2019-11-26 15:17:31
Question: I am a little confused about the terms physical/logical/virtual addresses in an operating system (I use Linux - openSUSE). Here is what I understand: Physical address - when the processor is in system mode, the address used by the processor is a physical address. Logical address - when the processor is in user mode, the address used is a logical address; these are anyway mapped to some physical address by adding a base register to the offset value. It in a way provides a sort of memory…
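As a small sketch of what the question's logical/virtual addresses look like from user mode: every pointer a program can print is a virtual address, and only the kernel/MMU knows which physical frame backs it. The particular layout (and any ASLR-induced variation between runs) shown below is illustrative:

```c
/* Sketch: all addresses visible to a user-mode program are virtual. */
#include <stdio.h>
#include <stdlib.h>

int global_var;

int main(void) {
    int   stack_var;
    void *heap_var = malloc(16);

    printf("code  (main)       at %p\n", (void *)main);  /* non-portable cast, fine as a demo */
    printf("data  (global_var) at %p\n", (void *)&global_var);
    printf("stack (stack_var)  at %p\n", (void *)&stack_var);
    printf("heap  (heap_var)   at %p\n", heap_var);

    free(heap_var);
    return 0;
}
```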

Why does the stack address grow towards decreasing memory addresses?

夙愿已清 submitted on 2019-11-26 12:11:13
Question: I read in textbooks that the stack grows by decreasing memory address; that is, from a higher address to a lower address. It may be a bad question, but I didn't get the concept right. Can you explain? Answer 1: First, it's platform dependent. In some architectures, the stack is allocated from the bottom of the address space and grows upwards. Assuming an architecture like x86, where the stack grows downwards from the top of the address space, the idea is pretty simple:
=============== Highest Address (e.g. 0xFFFF…
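A quick way to observe the direction on a typical x86/x86-64 system is to compare the addresses of locals in nested call frames. This sketch is an observation aid only; comparing pointers to different objects is not strictly defined C, and compilers may reorder or elide frames:

```c
/* Sketch: a local in a deeper call frame normally sits at a *lower*
 * address than one in its caller when the stack grows downwards. */
#include <stdio.h>

static void callee(int *caller_local) {
    int callee_local;
    printf("caller local at %p\n", (void *)caller_local);
    printf("callee local at %p\n", (void *)&callee_local);
    printf("stack grows %s\n",
           &callee_local < caller_local ? "downwards" : "upwards");
}

int main(void) {
    int caller_local;
    callee(&caller_local);
    return 0;
}
```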