Question
I have a program that implements several heuristic search algorithms and several domains, designed to experimentally evaluate the various algorithms. The program is written in C++, built using the GNU toolchain, and run on a 64-bit Ubuntu system. When I run my experiments, I use bash's ulimit
command to limit the amount of virtual memory the process can use, so that my test system does not start swapping.
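(For reference, the same cap that ulimit -v imposes from the shell can be set from inside the process with setrlimit(RLIMIT_AS, ...). A minimal sketch, with an illustrative 2 GiB figure that is not taken from the question:

```cpp
#include <sys/resource.h>
#include <cstdio>

int main() {
    rlimit lim{};
    lim.rlim_cur = 2ULL * 1024 * 1024 * 1024;  // soft cap on address space: 2 GiB (illustrative)
    lim.rlim_max = lim.rlim_cur;               // hard cap
    if (setrlimit(RLIMIT_AS, &lim) != 0) {     // equivalent of `ulimit -v` for this process
        std::perror("setrlimit");
        return 1;
    }
    // ... run the experiment; allocations beyond the cap will now fail ...
    return 0;
}
```
)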
Certain algorithm/test instance combinations hit the memory limit I have defined. Most of the time, the program throws an std::bad_alloc exception, which is printed by the default handler, at which point the program terminates. Occasionally, rather than this happening, the program simply segfaults.
Why does my program occasionally segfault when out of memory, rather than reporting an unhandled std::bad_alloc and terminating?
Answer 1:
One reason might be that, by default, Linux overcommits memory. Requesting memory from the kernel appears to succeed, but later, when you actually start using the memory, the kernel notices "Oh crap, I'm running out of memory", and invokes the out-of-memory (OOM) killer, which selects a victim process and kills it.
For a description of this behavior, see http://lwn.net/Articles/104185/
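A minimal sketch of that sequence (the size is illustrative; the exact behaviour depends on the vm.overcommit_memory setting and on whether an address-space ulimit is in force):

```cpp
#include <cstddef>
#include <cstring>
#include <new>

int main() {
    const std::size_t n = std::size_t(16) * 1024 * 1024 * 1024;  // 16 GiB, illustrative
    char* p = new (std::nothrow) char[n];  // under overcommit this can "succeed"
    if (p == nullptr) return 1;            // an honest failure would show up here
    std::memset(p, 1, n);                  // touching the pages forces the kernel to back them;
                                           // this is where the OOM killer can step in and
                                           // kill the process (SIGKILL), with no bad_alloc
    delete[] p;
    return 0;
}
```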
Answer 2:
It could be some code that uses nothrow new (new (std::nothrow)) without checking the return value.
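A minimal sketch of that failure mode (Node and make_node are hypothetical names, not from the asker's program):

```cpp
#include <new>

struct Node { int cost; Node* next; };   // hypothetical search-node type

Node* make_node() {
    Node* n = new (std::nothrow) Node;   // returns nullptr on failure instead of throwing
    n->cost = 0;                         // BUG: missing null check -- dereferencing nullptr
    n->next = nullptr;                   // here segfaults when memory runs out
    return n;
}
```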
Or some code could be catching the exception and neither handling it nor rethrowing it.
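And a sketch of that swallowed-exception variant (grow_table is a hypothetical name):

```cpp
#include <cstddef>
#include <cstdio>
#include <new>

int* grow_table(std::size_t n) {
    int* table = nullptr;
    try {
        table = new int[n];
    } catch (const std::bad_alloc&) {
        std::fputs("allocation failed\n", stderr);  // logged, but neither handled nor rethrown
    }
    return table;  // may still be nullptr; a caller that writes table[0] without checking
                   // segfaults instead of dying with an unhandled std::bad_alloc
}
```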
Answer 3:
What janneb said. In fact, with the default overcommit settings, Linux in practice never throws std::bad_alloc (or returns NULL from malloc()).
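Whether that holds on a given machine depends on the overcommit policy; a small sketch for checking it (the /proc path is standard on Linux):

```cpp
#include <fstream>
#include <iostream>

int main() {
    // 0 = heuristic overcommit (default), 1 = always overcommit, 2 = don't overcommit
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    int mode = -1;
    if (f >> mode)
        std::cout << "vm.overcommit_memory = " << mode << '\n';
    else
        std::cerr << "could not read /proc/sys/vm/overcommit_memory\n";
    return 0;
}
```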
Source: https://stackoverflow.com/questions/2567683/why-does-my-program-occasionally-segfault-when-out-of-memory-rather-than-throwin