memory-management

How does PAE (Physical Address Extension) enable an address space larger than 4GB?

时光怂恿深爱的人放手 submitted 2021-02-05 20:09:35

Question: An excerpt of Wikipedia's article on Physical Address Extension: "x86 processor hardware architecture is augmented with additional address lines used to select the additional memory, so physical address size increases from 32 bits to 36 bits. This, theoretically, increases maximum physical memory size from 4 GB to 64 GB." Along with an image explaining the mechanism. But I can't see how the address space is expanded from 4 GB to 64 GB: 4 * 512 * 512 * 4K still equals 4 GB, doesn't it? Answer 1: x86

Can the compiler optimize from heap to stack allocation?

断了今生、忘了曾经 submitted 2021-02-05 14:24:01

Question: As far as compiler optimizations go, is it legal and/or possible to change a heap allocation to a stack allocation? Or would that break the as-if rule? For example, say this is the original version of the code: { Foo* f = new Foo(); f->do_something(); delete f; } Would a compiler be able to change this to the following? { Foo f{}; f.do_something(); } I wouldn't think so, because that would have implications if the original version was relying on things like custom allocators. Does the standard

Custom heap/memory allocation ranges

坚强是说给别人听的谎言 submitted 2021-02-05 12:21:04

Question: I am writing a 64-bit application in C (with GCC) and NASM under Linux. Is there a way to specify where I want my heap and stack to be located? Specifically, I want all my malloc'ed data to be anywhere in the range 0x00000000-0x7FFFFFFF. This can be done at compile time, link time, or runtime, via C code or otherwise; it doesn't matter. If this is not possible, please explain why. P.S. For those interested in what the heck I am doing: the program I am working on is written in C. During

Memory leak: does Windows have a safeguard to prevent max memory being reached?

淺唱寂寞╮ submitted 2021-02-05 08:22:27

Question: I have an application that uses a 3rd-party API, and I think it has a memory leak issue. I wrote a small test program (below) to test this; please note that both VMIListener and VMI come from the API whose virtual interface methods I'm implementing. I don't see any memory-leak behavior if I comment out the VMI vmi; under my VMITest class. With my limited knowledge of C++, I assume this is because the virtual VMI class does not have a virtual destructor. However, my question

Allocate writable memory in the .text section

跟風遠走 submitted 2021-02-05 08:12:10

Question: Is it possible to allocate memory in sections of a NASM program other than .data and .bss? Say I want to write to a location in the .text section and get a segmentation fault; I'm interested in ways to avoid this and access memory legally. I'm running Ubuntu Linux. Answer 1: If you want to allocate memory at runtime, reserve some space on the stack with sub rsp, 4096 or something. Or run an mmap system call, or call malloc from libc if you linked against libc. If you want to test shellcode /

Why is deallocating heap memory much slower than allocating it?

两盒软妹~` submitted 2021-02-05 04:55:46

Question: This is an empirical observation (that allocating is faster than de-allocating). It is also, I guess, one of the reasons why heap-based storage (like the STL containers) chooses not to return currently unused memory to the system (which is why the shrink-to-fit idiom was born). And we shouldn't confuse, of course, 'heap' memory with the 'heap'-like data structures. So why is de-allocation slower? Is it Windows-specific (I see it on Win 8.1) or OS-independent? Is there some C++-specific

How are C# const members allocated in memory? [duplicate]

二次信任 submitted 2021-02-04 23:11:30

Question: This question already has an answer here: Memory allocation for const in C# (1 answer). Closed 6 years ago. The title of the question is self-explanatory. I wonder whether a member declared const is a singleton for all instances of the class, or whether each instance has its own copy. I've read some questions about const, but most of them refer to const variables inside a method. Answer 1: Constants are usually something that can be evaluated at compile time, and the compiler is likely to replace it with the
