virtual-memory

Why does high-memory not exist for 64-bit cpu?

Submitted by 妖精的绣舞 on 2019-12-07 18:08:15

Question: While I am trying to understand the high-memory problem for 32-bit CPUs on Linux, why is there no high-memory problem for 64-bit CPUs? In particular, how does the division of virtual memory into kernel space and user space change so that the need for high memory doesn't exist on a 64-bit CPU? Thanks.

Answer 1: A 32-bit system can only address 4GB of memory. In Linux this is divided into 3GB of user space and 1GB of kernel space. This 1GB is sometimes not enough, so the kernel might need to map and unmap areas of memory, which incurs a fairly significant performance penalty.
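As a rough illustration of that mapping cost, here is a minimal, hypothetical kernel-side sketch (not from the question) of how 32-bit kernel code has to borrow a temporary mapping via kmap()/kunmap() to touch a high-memory page; on a 64-bit kernel all physical memory is permanently mapped, so no such dance is needed:

```c
/* Illustrative sketch: on a 32-bit kernel a highmem page has no permanent
 * kernel mapping, so it must be temporarily mapped into the kernel's 1GB
 * window and unmapped again. On 64-bit kernels kmap() is trivial because
 * all physical memory is already mapped. */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static void zero_high_page(struct page *page)
{
        void *vaddr = kmap(page);       /* borrow a slot in the kernel window */

        memset(vaddr, 0, PAGE_SIZE);    /* the page is now addressable */
        kunmap(page);                   /* give the slot back for reuse */
}
```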

linking and paging in the system without virtual memory support

Submitted by 萝らか妹 on 2019-12-07 15:00:12

Question: First of all, is virtual memory a hardware feature of the system, or is it implemented solely by the OS? During link-time relocation, the linker assigns run-time addresses to each section and each symbol in the generated executable. Do those run-time addresses correspond to virtual addresses? What if the system for which the executable is generated does not use virtual memory? Next, if virtual memory is not used, then the application's address space is limited to the physical address space
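To make the link-time vs. run-time address question concrete, here is a small, hedged C example (a hypothetical file, not from the question): the addresses it prints are the virtual addresses the linker/loader assigned to the symbols on a system with virtual memory; on a bare-metal target without an MMU, the same link-time addresses would simply be physical addresses.

```c
/* Demo: print the run-time addresses of two symbols. With a PIE binary and
 * ASLR these are link-time offsets plus a randomized load base; without
 * virtual memory they would be the actual physical locations. */
#include <stdio.h>

int global_counter = 42;            /* placed in .data by the linker */

int main(void)
{
    printf("main           at %p\n", (void *)&main);
    printf("global_counter at %p\n", (void *)&global_counter);
    return 0;
}
```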

virtual v. physical memory in assessing C/C++ memory leak

Submitted by 故事扮演 on 2019-12-07 07:30:45

Question: I have a C++ application that I am trying to iron the memory leaks out of, and I realized I don't fully understand the difference between virtual and physical memory. Results from top (so 16.8g = virtual, 111m = physical):

4406 um 20 0 16.8g 111m 4928 S 64.7 22.8 36:53.65 client

My process holds 500 connections, one for each user, and at these numbers that means there is about 30 MB of virtual overhead per user. Without going into the details of my application, the only way this could sound
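One way to see the same two numbers from inside the process is to read them from /proc/self/status. This is a Linux-specific sketch; the field names VmSize and VmRSS are the kernel's, everything else is illustrative. Reserved-but-untouched mappings inflate VmSize only, while leaked allocations that were actually written to grow VmRSS as well.

```c
/* Print the process's virtual size (VmSize, top's 16.8g column) and resident
 * set size (VmRSS, top's 111m column) from /proc/self/status. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmSize:", 7) == 0 || strncmp(line, "VmRSS:", 6) == 0)
            fputs(line, stdout);    /* values are reported in kB */
    }
    fclose(f);
    return 0;
}
```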

How do I recover from EXC_BAD_ACCESS?

Submitted by £可爱£侵袭症+ on 2019-12-07 03:54:14

Question: I'm intentionally causing an EXC_BAD_ACCESS by triggering a write to an NSObject in a read-only virtual memory page. Ideally, I'd like to catch EXC_BAD_ACCESS, mark the virtual memory page as read-write, and have execution continue as it normally would. Is this even possible? The code I've written to cause the EXC_BAD_ACCESS is below.

WeakTargetObject.h (ARC):

```objc
@interface WeakTargetObject : NSObject
@property (nonatomic, weak) NSObject *target;
@end
```

WeakTargetObject.m (ARC)
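For the recovery part of the question, a commonly used POSIX-signal approximation is sketched below: when no Mach handler claims it, macOS delivers EXC_BAD_ACCESS to the process as SIGSEGV or SIGBUS, so a handler can mprotect() the faulting page to read-write and return, letting the faulting write retry. This is an illustrative C sketch, not the Mach-exception route the question asks about, and a real handler must first verify the address is one it deliberately protected.

```c
/* Catch the fault, make the page writable, and let the write retry. */
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static void fault_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    uintptr_t page = (uintptr_t)info->si_addr & ~(uintptr_t)(getpagesize() - 1);

    /* Flip the faulting page to read-write; returning retries the write. */
    mprotect((void *)page, getpagesize(), PROT_READ | PROT_WRITE);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGBUS, &sa, NULL);   /* macOS reports some faults as SIGBUS */

    char *p = mmap(NULL, getpagesize(), PROT_READ,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    p[0] = 'x';                     /* faults; handler unprotects; write retries */
    printf("recovered, p[0] = %c\n", p[0]);
    return 0;
}
```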

What happens when the RAM is over in C#?

Submitted by 帅比萌擦擦* on 2019-12-07 03:01:45

Question: I'm no computer expert, so let me try to put this question a little more specifically: I do some scientific computations, and the calculations sometimes require a lot of memory to store their results. A few days ago I had an output file that took 4 GB on the hard disk, but I only have that much RAM. So: how does the CLR (or is it something else?) deal with memory when the program you're running allocates more memory than is available in the computer? Does it create some swap on the HD?
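The limit a program actually runs into here is the system commit limit (physical RAM plus page file), not the installed RAM: when allocations exceed RAM, the OS pages cold memory out to the page file on disk, and the CLR simply sits on top of that. A small, hedged sketch (Windows API in plain C rather than C#) that prints those two limits:

```c
/* Show physical RAM vs. the commit limit (RAM + page file). Allocations
 * succeed up to roughly the commit limit; beyond RAM they are backed by
 * the page file, at a large speed cost. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = {0};
    ms.dwLength = sizeof ms;
    GlobalMemoryStatusEx(&ms);

    printf("physical RAM : %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("commit limit : %llu MB (RAM + page file)\n",
           ms.ullTotalPageFile / (1024 * 1024));
    printf("memory load  : %lu%%\n", ms.dwMemoryLoad);
    return 0;
}
```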

I want an arbitrarily-large buffer in Linux/C/C++

Submitted by 爱⌒轻易说出口 on 2019-12-07 01:54:03

Question: Basically, I want an arbitrarily large stack. I know that's not possible, but could I set aside a few terabytes of my virtual address space for it? I'd like to be able to start at the beginning and walk up the buffer as far as I need, with Linux bringing in pages from physical memory on an as-needed basis. Is something like that possible? Would it have the same performance as just malloc-ing a buffer? Would there be a way to signal to Linux that you're done with the memory once you pop the
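One concrete shape this usually takes on Linux is sketched below: reserve a huge anonymous mapping up front (MAP_NORESERVE), let pages fault in lazily as the "stack" grows, and use madvise(MADV_DONTNEED) to hand memory back when it shrinks. The 1 TiB figure is an arbitrary illustration, not a recommendation.

```c
/* Reserve a large region of address space, fault pages in on first touch,
 * and release them again on "pop". */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define RESERVE (1ULL << 40)        /* 1 TiB of address space, no RAM yet */

int main(void)
{
    char *base = mmap(NULL, RESERVE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(base, 0xAB, 64 * 1024 * 1024);       /* faults in ~64 MB of pages */

    /* "Pop": tell the kernel the first 64 MB is no longer needed. */
    madvise(base, 64 * 1024 * 1024, MADV_DONTNEED);

    munmap(base, RESERVE);
    return 0;
}
```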

How do I get the information shown in vmmap programmatically?

Submitted by 二次信任 on 2019-12-06 09:01:52

As anyone who has watched the Mark Russinovich talk "Mysteries of Memory Management Revealed" knows, the vmmap tool can show you things that count against your process limit (2GB on vanilla 32-bit Windows) that few other tools seem to know about. I would like to be able to programmatically monitor my real total memory size (the one that's germane to the process limit) so I can at least log what's going on when I approach the process limit. Is there any information publicly available on how vmmap does this? ... Also, why is this information so darn hard to get?? Things I know about that (AFAIK)
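The core of what such tools do can be approximated by walking the process address space with VirtualQuery and tallying region states. The sketch below is a rough, hedged approximation of that idea, not a reconstruction of vmmap's actual categories:

```c
/* Walk our own address space and total committed vs. reserved bytes,
 * i.e. roughly what counts against the address-space limit. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    SIZE_T committed = 0, reserved = 0;

    while (VirtualQuery(addr, &mbi, sizeof mbi) == sizeof mbi) {
        if (mbi.State == MEM_COMMIT)
            committed += mbi.RegionSize;
        else if (mbi.State == MEM_RESERVE)
            reserved += mbi.RegionSize;
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
    printf("committed: %zu MB, reserved: %zu MB\n",
           committed / (1024 * 1024), reserved / (1024 * 1024));
    return 0;
}
```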

Direct stack and heap access; Virtual- or hardware- level?

Submitted by 守給你的承諾、 on 2019-12-06 05:51:44

When I'm on SO I read a lot of comments claiming (especially for C) that "dynamic allocation always goes to the heap, automatic allocation on the stack." But regarding plain C in particular, I disagree with that, since ISO/IEC 9899 doesn't even mention the words heap or stack. It just defines three storage durations (static, automatic, and allocated) and specifies how each of them has to be treated, which would leave a compiler free to do it the other way around if it wanted to. So my question is: do the heap and the stack physically exist, such that (even if not in C) a standardized language can say "...
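For reference, here is a small C example of the three storage durations the standard does name (C11 adds thread storage duration as a fourth); the stack/heap comments describe how typical implementations realize them, not anything the standard requires:

```c
/* Static, automatic, and allocated storage durations in one file. */
#include <stdlib.h>

static int counter;                 /* static: exists for the whole program run */

void demo(void)
{
    int local = 0;                  /* automatic: lives until demo() returns,
                                       usually placed on the call stack */
    int *buf = malloc(100 * sizeof *buf);   /* allocated: lives until free(),
                                               usually carved from the heap */
    (void)local;
    free(buf);
}

int main(void)
{
    demo();
    return 0;
}
```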

calculating page size and segment size

Submitted by 天大地大妈咪最大 on 2019-12-06 05:28:12

In a paged-segmented system we have a 32-bit virtual address, with 12 bits for the offset, 11 bits for the segment number and 9 bits for the page number. How can we calculate the page size, the maximum segment size and the maximum number of segments?

12 bits are reserved for the offset, so the page size is 2^12 = 4KB. 9 bits are reserved for the page number, so each segment can contain 2^9 = 512 pages. Each segment can grow up to (# of pages) * (page size), so the maximum segment size is 512 * 4K = 2MB. The 11 segment bits allow at most 2^11 = 2048 segments. For more information see http://www.cs.umass.edu/~weems/CmpSci535/Discussion21.html

Source: https://stackoverflow
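The same arithmetic, spelled out as a tiny C sketch (the bit widths are the ones given in the question):

```c
/* Derive page size, pages per segment, and segment count from bit widths. */
#include <stdio.h>

int main(void)
{
    const unsigned offset_bits  = 12;
    const unsigned page_bits    = 9;
    const unsigned segment_bits = 11;

    unsigned long page_size        = 1UL << offset_bits;          /* 4096 B */
    unsigned long pages_per_seg    = 1UL << page_bits;            /* 512    */
    unsigned long max_segment_size = page_size * pages_per_seg;   /* 2 MiB  */
    unsigned long max_segments     = 1UL << segment_bits;         /* 2048   */

    printf("page size        : %lu bytes\n", page_size);
    printf("pages per segment: %lu\n", pages_per_seg);
    printf("max segment size : %lu bytes\n", max_segment_size);
    printf("max segments     : %lu\n", max_segments);
    return 0;
}
```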
