Maximum memory that can be allocated dynamically and at compile time in C++


Memory pages aren't actually mapped to your program until you use them. All malloc does is reserve a range of the virtual address space. No physical RAM is mapped to those virtual pages until you try to read or write them.

Even when you allocate global or stack ("automatic") memory, there's no mapping of physical pages until you touch them.

Finally, sizeof() is evaluated at compile time, when the compiler has no idea what the OS will do later. So it will just tell you the expected size of the object.
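To make that concrete, here's a minimal sketch (assuming a C++11 compiler for static_assert; the array name and size are my own choices): the global array below is never written, yet sizeof reports its full declared size, fixed at compile time.

#include <cstddef>
#include <iostream>

// A large zero-initialized global lands in the BSS segment: the executable
// stays small and no physical pages are mapped until the array is touched.
char big[1u << 30]; // 1 GiB, declared but never written

int main() {
    // sizeof is computed at compile time from the declared type alone;
    // it says nothing about what the OS will map later.
    static_assert(sizeof(big) == (1u << 30), "known at compile time");
    std::cout << "sizeof(big) = " << sizeof(big) << std::endl;
    return 0;
}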

You'll find that things will behave very differently if you try to memset the memory to 0 in each of your cases. Also, you might want to try calloc, which zeroes its memory.
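As an illustrative sketch (the 1 GiB size is my own choice): malloc followed by memset forces physical pages to be committed, while a large calloc is often satisfied with copy-on-write zero pages, so watching the process in top while it pauses should show the difference.

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    const std::size_t n = (std::size_t)1 << 30; // 1 GiB; shrink on small machines

    // malloc only reserves virtual address space; no physical pages yet.
    char *a = (char *)std::malloc(n);
    if (!a) { std::perror("malloc"); return 1; }

    // Writing every byte forces the OS to commit physical pages.
    std::memset(a, 0, n);

    // calloc returns zeroed memory; large requests are often satisfied with
    // copy-on-write zero pages, so this may commit little or nothing up front.
    char *b = (char *)std::calloc(n, 1);
    if (!b) { std::perror("calloc"); return 1; }

    std::getchar(); // pause: inspect the process's resident memory in top/ps

    std::free(a);
    std::free(b);
    return 0;
}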

Interesting.... one thing to note: when you write

char p[100];

you allocate (well, reserve) 100 bytes on the stack.

When you write

char* p = malloc(100);

you allocate 100 bytes on the heap. Big difference. Now I don't know why the stack allocations in your question are working, unless the value between the [] is read by the compiler as an int and thus wraps around, allocating a much smaller block.

Most OSs don't allocate physical memory up front anyway; they hand you pages of a virtual address space that remain unused (and therefore unbacked) until you touch them, at which point the CPU's memory-management unit steps in and physical memory is mapped in to back the pages you asked for. Try writing to those bytes you allocated and see what happens.
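Here's a sketch of that experiment for Linux (it relies on /proc/self/statm, so it won't work elsewhere; the 256 MiB size is arbitrary): the resident page count barely moves after malloc, then jumps once every byte is written.

#include <cstdlib>
#include <cstring>
#include <fstream>
#include <iostream>

// Resident page count for this process, read from /proc/self/statm.
// (Linux-specific; the second field is resident pages.)
static long resident_pages() {
    std::ifstream statm("/proc/self/statm");
    long total = 0, resident = 0;
    statm >> total >> resident;
    return resident;
}

int main() {
    const std::size_t n = 256u * 1024u * 1024u; // 256 MiB
    char *p = (char *)std::malloc(n);
    if (!p) return 1;

    std::cout << "after malloc:  " << resident_pages() << " resident pages\n";
    std::memset(p, 1, n); // touch every byte, forcing pages to be mapped
    std::cout << "after writing: " << resident_pages() << " resident pages\n";

    std::free(p);
    return 0;
}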

Also, on Windows at least, when you allocate a block of memory you can only reserve the largest contiguous block the OS has available - so as the address space gets fragmented by repeated allocations, the largest single block you can malloc shrinks. I don't know if Linux has this problem too.
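One way to observe this is to binary-search for the largest single malloc that currently succeeds. This is only a sketch: it measures contiguous address space rather than physical RAM, and on systems that overcommit the result can be far larger than installed memory.

#include <cstdlib>
#include <iostream>
#include <limits>

int main() {
    // Invariant: a malloc of lo bytes succeeds, a malloc of hi bytes fails.
    std::size_t lo = 0;
    std::size_t hi = std::numeric_limits<std::size_t>::max() / 2;

    while (lo + 1 < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        void *p = std::malloc(mid);
        if (p) { std::free(p); lo = mid; } else { hi = mid; }
    }

    std::cout << "largest single malloc: " << lo << " bytes" << std::endl;
    return 0;
}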

There's a huge difference between these two programs:

program1.cpp

#include <iostream>

int main () {
   char p1[3072606208];
   char p2[4072606208];
   char p3[5072606208];

   std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
   std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
   std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}

program2.cpp:

#include <iostream>

char p1[3072606208];
char p2[4072606208];
char p3[5072606208];

int main () {

   std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
   std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
   std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}

The first allocates memory on the stack; it's going to get a segmentation fault due to stack overflow. The second doesn't do much at all. That memory doesn't quite exist yet. It's in the form of data segments that aren't touched. Let's modify the second program so that the data are touched:

#include <iostream>

char p1[3072606208];
char p2[4072606208];
char p3[5072606208];

int main () {

   p1[3072606207] = 0;
   p2[4072606207] = 0;
   p3[5072606207] = 0;

   std::cout << "Size of array p1 = " << sizeof(p1) << std::endl;
   std::cout << "Size of array p2 = " << sizeof(p2) << std::endl;
   std::cout << "Size of array p3 = " << sizeof(p3) << std::endl;
}

This doesn't allocate memory for p1, p2, or p3 on the heap or the stack. That memory lives in data segments. It's a part of the application itself. There's one big problem with this: On my machine, this version won't even link.

The first thing to note is that on modern computers, processes do not get direct access to RAM (at the application level). Rather, the OS provides each process with a "virtual address space"; it intercepts accesses to virtual memory and reserves real memory as and when needed.

So when malloc or new says it has found enough memory for you, it just means that it has found enough memory for you in the virtual address space. You can check this by running the following program with the memset line present and with it commented out. (Careful: this program uses a busy loop.)

#include <iostream>
#include <new>
#include <string.h>

using namespace std;

int main(int argc, char** argv) {

    size_t bytes = 0x7FFFFFFF;
    size_t len = sizeof(char) * bytes;
    cout << "len = " << len << endl;

    char* arr = new char[len];
    cout << "done new char[len]" << endl;

    memset(arr, 0, len); // set all values in array to 0
    cout << "done setting values" << endl;

    while(1) {
        // stops program exiting immediately
        // press Ctrl-C to exit
    }

    return 0;
}

When memset is part of the program you will notice the memory used by your computer jumps massively, and without it you should barely notice any difference, if any. When memset is called it accesses all the elements of the array, forcing the OS to make the space available in physical memory. Since the argument to new[] is a size_t, the largest request you can make on a 32-bit system is 2^32 - 1 bytes, though even that isn't guaranteed to succeed (it certainly doesn't on my machine).
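If you want to check what size_t can represent on your own platform, a one-liner sketch:

#include <cstddef>
#include <iostream>
#include <limits>

int main() {
    // Prints 4294967295 (2^32 - 1) on a 32-bit target;
    // 18446744073709551615 (2^64 - 1) on a typical 64-bit target.
    std::cout << "max size_t = "
              << std::numeric_limits<std::size_t>::max() << std::endl;
    return 0;
}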

As for your stack allocations: David Hammem's answer says it better than I could. I am surprised you were able to compile those programs. Using the same setup as you (Ubuntu 12.04 and gcc 4.6) I get compile errors like:

test.cpp: In function ‘int main(int, char**)’:
test.cpp:14:6: error: size of variable ‘arr’ is too large

Try the following code:

#include <cstdio>
#include <new>

int main() {
    bool bExit = false;
    unsigned long long iAlloc = 0;

    do {
        char *test = NULL;
        try {
            test = new char[1]();   // deliberately leaked: keep allocating
            iAlloc++;
        } catch (std::bad_alloc &) {
            bExit = true;
        }
    } while (!bExit);

    char chBytes[130] = {0};
    sprintf(chBytes, "%llu", iAlloc);
    printf("%s\n", chBytes);
    return 0;
}

For one run, don't open any other programs; for the other, load a few large files in an application that uses memory-mapped files.

This may help you to understand.
