virtual-memory

How does the kernel know which pages in the virtual address space correspond to a swapped-out physical page frame?

感情迁移 submitted on 2019-11-29 20:10:40
Consider the following situation: the kernel has exhausted physical RAM and needs to swap out a page. It picks the least recently used page frame, wants to swap its contents out to disk, and then allocate that frame to another process. What bothers me is that this page frame was already mapped, generally speaking, to several (identical) pages of several processes. The kernel has to somehow find all of those processes and mark the page as swapped out. How does it carry that out? Thank you. EDIT: Illustrations to the question: before the swapping, processes 1 and 2 had a shared Page 1, which …
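Linux answers this with reverse mapping ("rmap"): each page keeps enough information to walk back to every page-table entry that maps it, so one swap-out can update all sharers. A minimal sketch of the idea in Python (the names `Frame`, `map_page`, and `swap_out` are illustrative, not real kernel structures):

```python
# Toy model of Linux-style reverse mapping ("rmap"): each physical
# frame records which (page table, virtual page number) slots map it,
# so a swap-out can find and update every page table referencing it.

class Frame:
    def __init__(self):
        self.mappers = []            # reverse map: (page_table, vpn) pairs

def map_page(frame, page_table, vpn):
    page_table[vpn] = ("present", frame)
    frame.mappers.append((page_table, vpn))

def swap_out(frame, swap_slot):
    """Walk the reverse map and mark every mapping as swapped out."""
    for page_table, vpn in frame.mappers:
        page_table[vpn] = ("swapped", swap_slot)
    frame.mappers.clear()            # the frame can now be reused

# Two processes share one frame; a single swap-out updates both.
pt1, pt2, f = {}, {}, Frame()
map_page(f, pt1, 0x10)
map_page(f, pt2, 0x20)
swap_out(f, swap_slot=7)
print(pt1[0x10], pt2[0x20])          # both now point at swap slot 7
```

The real kernel keeps this bookkeeping per `struct page` (via `anon_vma` chains and address-space mappings) rather than as plain lists, but the shape of the walk is the same.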

Linux (Ubuntu), C language: Virtual to Physical Address Translation

眉间皱痕 submitted on 2019-11-29 06:54:47
As the title suggests, I have a problem obtaining the physical address from a virtual one. Let me explain: given a variable declaration in process space, how can I derive its physical address as mapped by the OS? I've stumbled upon /asm/io.h, where the virt_to_phys() function is defined; however, it seems this header is outdated and I can't find a workaround. io.h is available at /usr/src/linux-headers-2.6.35-28-generic/arch/x86/include/asm/ . My current kernel is 2.6.35-28, but io.h isn't included in /usr/include/asm/ ? So, to reiterate: I need a way to get the …
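For completeness: virt_to_phys() is kernel-internal, and the supported userspace route on modern kernels is /proc/self/pagemap. A sketch, assuming Linux; note that since kernel 4.0 the PFN field reads as zero unless the process has CAP_SYS_ADMIN, though the "present" bit is still visible:

```python
# Translate a virtual address to physical-frame info via
# /proc/self/pagemap (Linux only). One 64-bit entry per virtual page:
# bit 63 = present in RAM, bits 0-54 = page frame number (PFN).
import ctypes, os, struct

PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

def virt_to_phys_info(vaddr):
    with open("/proc/self/pagemap", "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)     # entry for this virtual page
        entry, = struct.unpack("<Q", f.read(8))
    present = bool(entry & (1 << 63))        # bit 63: page present in RAM
    pfn = entry & ((1 << 55) - 1)            # bits 0-54: PFN (0 w/o privilege)
    return present, pfn * PAGE_SIZE + (vaddr % PAGE_SIZE)

buf = ctypes.create_string_buffer(b"x" * PAGE_SIZE)   # written, so resident
present, phys = virt_to_phys_info(ctypes.addressof(buf))
print("present:", present, "phys (0 without CAP_SYS_ADMIN):", hex(phys))
```

The same interface works from C by `pread()`ing 8 bytes at `(vaddr / pagesize) * 8`.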

Get the JVM to grow its memory demand as needed, up to the VM size limit?

生来就可爱ヽ(ⅴ<●) submitted on 2019-11-29 02:57:06
We ship a Java application whose memory demand can vary quite a lot depending on the size of the data it is processing. If you don't set the max VM (virtual memory) size, quite often the JVM quits with a GC failure on big data. What we'd like to see is the JVM requesting more memory as GC fails to provide enough, until the total available VM is exhausted, e.g., start with 128 MB and increase geometrically (or by some other step) whenever GC fails. The JVM ("java") command line allows …
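For reference, HotSpot already grows the heap on demand between the initial and maximum sizes; the usual approach is a small `-Xms` and a generous `-Xmx` (the values and jar name below are placeholders):

```
java -Xms128m -Xmx4g -jar app.jar
```

The heap starts at 128 MB and is expanded by the collector as needed, up to the 4 GB cap; only the cap has to be chosen in advance.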

What can my 32-bit app be doing that consumes gigabytes of physical RAM?

寵の児 submitted on 2019-11-28 23:18:42
A co-worker mentioned to me a few months ago that one of our internal Delphi applications seemed to be taking up 8 GB of RAM. I told him: that's not possible. A 32-bit application only has a 32-bit virtual address space. Even if there were a memory leak, the most memory it could consume is 2 GB; after that, allocations would fail (as there would be no empty space left in the virtual address space). And in the case of a memory leak, the virtual pages would be swapped out to the pagefile, freeing up physical RAM. But he noted that Windows Resource Monitor indicated that less than 1 GB of RAM was available …
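The arithmetic behind "that's not possible": a 32-bit pointer can distinguish at most 2^32 byte addresses, and 32-bit Windows by default gives user mode only half of that. A quick illustration:

```python
# Why a 32-bit process cannot itself commit 8 GB of virtual memory:
# a 32-bit pointer can name at most 2**32 distinct byte addresses.
GiB = 1024 ** 3
address_space = 2 ** 32                  # 4 GiB of distinct addresses
user_space_default = address_space // 2  # default Windows user/kernel split
print(address_space // GiB, "GiB total,",
      user_space_default // GiB, "GiB user-mode")
```

Physical RAM attributed *on behalf of* such a process can still exceed this, e.g. via kernel/driver allocations or AWE, which is usually where surprising Resource Monitor numbers come from.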

How to get a struct page from any address in the Linux kernel

ぐ巨炮叔叔 submitted on 2019-11-28 17:32:41
I have existing code that takes a list of struct page * and builds a descriptor table to share memory with a device. The upper layer of that code currently expects a buffer allocated with vmalloc or from user space, and uses vmalloc_to_page to obtain the corresponding struct page * . Now the upper layer needs to cope with all kinds of memory, not just memory obtained through vmalloc . This could be a buffer obtained with kmalloc , a pointer inside the stack of a kernel thread, or other cases that I'm not aware of. The only guarantee I have is that the caller of this upper layer must ensure …

Is physical or virtual addressing used for L1, L2, and L3 caching in x86/x86_64 processors?

非 Y 不嫁゛ submitted on 2019-11-28 16:00:29
Which addressing do x86/x86_64 processors use for caching in the L1, L2, and L3 (LLC): physical or virtual (via PT/PTE and the TLB)? And does the PAT (page attribute table) somehow affect it? Is there a difference between drivers (kernel space) and applications (user space) in this case? Short answer: Intel uses virtually indexed, physically tagged (VIPT) L1 caches. What will be used for data exchange between threads executing on one core with HT? L1 - virtual addressing (in an 8-way cache …
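A VIPT cache behaves like a physically indexed one precisely when all the set-index bits fall inside the page offset, since those bits are identical in the virtual and physical address. A quick check with typical L1 parameters (32 KiB, 8-way, 64-byte lines; illustrative numbers, not tied to any specific CPU):

```python
# A VIPT cache is alias-free when the highest set-index bit still lies
# within the page offset (the bits unchanged by address translation).
import math

def index_bits_within_page(size, ways, line, page=4096):
    sets = size // (ways * line)
    # highest bit used by (line offset + set index), exclusive
    index_top_bit = int(math.log2(line)) + int(math.log2(sets))
    return index_top_bit <= int(math.log2(page))

# 32 KiB / (8 ways * 64 B) = 64 sets -> bits 0..11, all inside the
# 12-bit page offset, so the virtual index equals the physical index.
print(index_bits_within_page(32 * 1024, 8, 64))   # True
```

This is why a 32 KiB L1 is usually 8-way: fewer ways would push the index above bit 11 and reintroduce aliasing concerns.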

Difference between sequential write and random write

自古美人都是妖i submitted on 2019-11-28 15:48:35
What is the difference between a sequential write and a random write in the case of: 1) disk-based systems, 2) SSD (flash device) based systems? When the application writes something and the information/data needs to be modified on disk, how do we know whether it is a sequential write or a random write? Up to that point a write cannot be distinguished as "sequential" or "random"; the write is just buffered and then applied to the disk when the buffer is flushed. Please correct me if I am wrong. When people talk about sequential vs. random writes to a file, they're generally drawing a …
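At the syscall level the distinction really is just the pattern of offsets the storage layer eventually sees, as the question suspects. A small sketch writing the same blocks sequentially and in shuffled order (block size and count are arbitrary):

```python
# The same total bytes written sequentially vs at shuffled offsets.
# The resulting file contents are identical; what differs is the
# offset pattern the device sees (and, on spinning disks, seek cost).
import os, random, tempfile

BLOCK, NBLOCKS = 4096, 64

def write_blocks(path, order):
    with open(path, "wb") as f:
        for i in order:
            f.seek(i * BLOCK)            # sequential order => no real seeking
            f.write(bytes([i % 256]) * BLOCK)
        f.flush()
        os.fsync(f.fileno())             # force it out of the page cache

seq = tempfile.NamedTemporaryFile(delete=False).name
rnd = tempfile.NamedTemporaryFile(delete=False).name
order = list(range(NBLOCKS))
write_blocks(seq, order)                 # sequential pattern
random.shuffle(order)
write_blocks(rnd, order)                 # random pattern
print("identical contents:", open(seq, "rb").read() == open(rnd, "rb").read())
```

Until the fsync, the page cache can reorder and coalesce either pattern, which is why "sequential vs random" is a property of what reaches the device, not of the write() calls themselves.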

How does the compiler lay out code in memory?

冷暖自知 submitted on 2019-11-28 11:19:43
OK, I have a bit of a noob student question. I'm familiar with the fact that stacks contain subroutine calls, heaps contain variable-length data structures, and global static variables are assigned to permanent memory locations. But how does it all work on a less theoretical level? Does the compiler just assume it's got an entire memory region to itself, from address 0 to address infinity, and then just start assigning stuff? And where does it lay out the instructions, stack, and heap? At the top of the memory region, or the end of it? And how does this then work with virtual memory?
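On Linux you can watch the answer directly: the compiler emits segments (text, data), and the loader and kernel place them, together with the heap and stack, inside the process's private virtual address space, so every process can indeed pretend it owns the whole range. A small Linux-only peek at a live layout:

```python
# Inspect this process's actual memory layout via /proc/self/maps
# (Linux only). Each line is: address-range perms offset dev inode path.
regions = []
with open("/proc/self/maps") as f:
    for line in f:
        parts = line.split()
        name = parts[-1] if len(parts) >= 6 else "(anonymous)"
        regions.append((parts[0], parts[1], name))

for addr, perms, name in regions:
    # code segments are mapped r-x; the heap and stack are rw-
    if name in ("[heap]", "[stack]") or perms.startswith("r-x"):
        print(addr, perms, name)
```

Virtual memory is exactly what makes this work: every process sees the same kind of layout at overlapping virtual addresses, and the MMU maps each one to different physical frames.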

In what circumstances can large pages produce a speedup?

旧城冷巷雨未停 submitted on 2019-11-28 09:02:50
Modern x86 CPUs can support page sizes larger than the legacy 4 KB (i.e., 2 MB or 4 MB), and there are OS facilities (Linux, Windows) to access this functionality. The Microsoft link above states that large pages "increase the efficiency of the translation buffer, which can increase performance for frequently accessed memory", which isn't very helpful in predicting whether large pages will improve any given situation. I'm interested in concrete, preferably quantified, examples of where moving some program logic (or a whole application) to use huge pages has resulted in some performance …
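One concrete mechanism on Linux is asking for transparent huge pages on a specific mapping with madvise(MADV_HUGEPAGE); the speedup, when there is one, comes from covering a large working set with far fewer TLB entries. A sketch (assumes Linux and Python 3.8+; the hint may be refused if THP is disabled):

```python
# Hint the kernel to back a large anonymous mapping with transparent
# huge pages. Whether this helps is workload-dependent: the win, when
# there is one, is fewer TLB misses on large, hot working sets.
import mmap

SIZE = 16 * 1024 * 1024                      # 16 MiB: several 2 MiB pages
buf = mmap.mmap(-1, SIZE)                    # anonymous private mapping
try:
    buf.madvise(mmap.MADV_HUGEPAGE)          # request THP if available
    hinted = True
except (OSError, AttributeError):            # THP disabled or non-Linux
    hinted = False
buf[:] = b"\x00" * SIZE                      # touching the pages faults them in
print("huge-page hint applied:", hinted)
```

With 4 KB pages this region needs 4096 TLB entries to cover; with 2 MB pages, only 8, which is the mechanism behind the quoted "efficiency of the translation buffer" claim.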