c++ dynamic memory allocation limit

Submitted by 痞子三分冷 on 2019-12-06 00:24:41

There is no way to tell you that other than just trying to run the code.

The "bitness" just indicate the OS and the architecture that you are targeting, i also want to stress the fact that every OS that support C++ programs has is own implementation of the standard C++ library ( if you are using the std library ) and as coder you are just using headers and namespaces that belongs to the std library and you are relying on the C/C++ library that usually comes with the OS to actually run your code.

I also suggest relying on testing and keeping memory use to a minimum. Some OSes have anti-overflow protections or similar mechanisms, so a massive allocation can be seen as a threat to system stability. Heavy use of RAM also puts a big load on the memory controller, as is normal on an x86 architecture. Usually what you are trying to do is not a good thing: it either ends badly, or it ends up with one very specific machine and OS as the only viable target for the application you are trying to create.

Finally, you are trying to write C code, not C++ code!

malloc() is a function from the C world; it involves direct memory management, i.e. explicit allocation and de-allocation, and your hardware also has to perform a lot, and I mean a lot, of indirections with ~800 million structs.
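For reference, a minimal sketch of the C-style allocation being described, with the error check that malloc() requires. The Node struct here is hypothetical, just to make the example compile; the real struct layout is not shown in the question.

```cpp
// Minimal sketch: C-style allocation of ~800 million structs with malloc().
// 'Node' is a hypothetical placeholder struct.
#include <cstdio>
#include <cstdlib>

struct Node {
    int value;   // hypothetical payload
};

int main() {
    const size_t count = 800'000'000;                // ~800 million elements
    Node* nodes = static_cast<Node*>(std::malloc(count * sizeof(Node)));
    if (nodes == nullptr) {                          // malloc() signals failure with NULL
        std::fprintf(stderr, "allocation of %zu bytes failed\n", count * sizeof(Node));
        return 1;
    }
    // ... use nodes[0] .. nodes[count - 1] ...
    std::free(nodes);                                // manual de-allocation is on you
    return 0;
}
```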

I suggest switching to a real C++ structure like std::vector (better than std::list for performance), or switching to a language with its own garbage collector and no direct memory-management phase, like C# or Java.
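A minimal sketch of the std::vector alternative, using the same hypothetical Node struct as above; here allocation failure shows up as a std::bad_alloc exception instead of a NULL return.

```cpp
// Minimal sketch: the same ~800 million elements held in a std::vector.
// Memory is released automatically when the vector goes out of scope.
#include <cstdio>
#include <new>
#include <vector>

struct Node {
    int value;   // hypothetical payload
};

int main() {
    try {
        std::vector<Node> nodes(800'000'000);        // contiguous, value-initialized storage
        nodes[0].value = 42;
    } catch (const std::bad_alloc&) {                // operator new throws on failure
        std::fprintf(stderr, "not enough memory for the vector\n");
        return 1;
    }
    return 0;
}
```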

The answer to your question is no. From a pragmatic point of view you will also face a big problem optimizing your code, and probably, I say probably, your life will be easier with a different approach: idiomatic C++, or C# or Java. But keep in mind that garbage collectors are usually memory-hungry, so the best solution in your case is probably C++ with a little extra effort and a testing phase on your part.

Martin

The limit is approximately your free RAM plus the space allowed for swapping to disk. For the record, 800 million bytes = 800 MB, so you might sit well on the safe side with small structs; even swapping might not be required (and should be avoided). Just try it out and see where it crashes ;-)
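A minimal sketch of that "just try it out" approach: allocate in fixed-size chunks until allocation fails, then report how far you got. The 1 GiB chunk size is an arbitrary choice for the example, and note that operating systems that overcommit memory may report success well beyond physical RAM.

```cpp
// Probe roughly how much memory can be allocated before allocation fails.
// Failure shows up as std::bad_alloc; the chunk size is arbitrary.
#include <cstdio>
#include <new>
#include <vector>

int main() {
    const size_t chunk = 1ull << 30;                 // 1 GiB per chunk
    std::vector<char*> chunks;
    try {
        for (;;) {
            chunks.push_back(new char[chunk]);
            std::printf("allocated %zu GiB so far\n", chunks.size());
        }
    } catch (const std::bad_alloc&) {
        std::printf("allocation failed after %zu GiB\n", chunks.size());
    }
    for (char* p : chunks) delete[] p;               // release everything again
    return 0;
}
```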

64 bit: 2^64 / 2^30 ≈ 17 × 10⁹ gigabytes (for a byte-addressable architecture, 1 GB = 2^30 bytes), so no worries here.

32 bit: 2^32 bytes ≈ 4 gigabytes, so even here you could be on the safe side.

Divide by two for signed values; you still have plenty of room left, at least on a 64-bit system.

For dynamic allocation the same restrictions as for static allocation apply, i.e. you are only restricted by the amount of memory available (which is limited by the size of pointers). The main difference between 32-bit and 64-bit systems is the pointer size: on a 32-bit system you are restricted to 32-bit pointers, so 4294967296 bytes (4 GB) can be addressed. The system reserves some of it, so in the end about 2.5 GB is left for you. On a 64-bit system it is way more, 2^64 bytes = 16 exabytes in theory; in practice it is about 256 terabytes to 4 petabytes. Way more than you will need. If you don't have enough memory (and not enough swap space) it might still crash, though.
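A small sketch to see these limits on your own build: it prints the pointer width and the largest value the implementation can represent for an allocation size.

```cpp
// Print the pointer width and the maximum representable allocation size
// on the current build (32-bit vs 64-bit).
#include <cstddef>
#include <cstdio>
#include <limits>

int main() {
    std::printf("pointer size: %zu bits\n", sizeof(void*) * 8);
    std::printf("max size_t:   %zu bytes\n", std::numeric_limits<std::size_t>::max());
    return 0;
}
```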
