Question
Recently I've been noticing an increase in the size of the core dumps generated by my application. Initially they were only around 5 MB and contained around 5 stack frames; now I have core dumps of more than 2 GB, and the information contained within them is no different from the smaller dumps.
Is there any way I can control the size of core dumps generated? Shouldn't they be at least smaller than the application binary itself?
The binaries are built as follows:
- Compiled in release mode with debug symbols (i.e., the -g compiler option in GCC).
- Debug symbols are copied into a separate file and stripped from the binary.
- A GNU debuglink is added to the binary (the commands below sketch these steps).
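For context, that build pipeline typically looks like this with GNU binutils (a sketch; app, app.c, and app.debug are placeholder names, not taken from the question):

    # Build with debug info, then split it out and link it back in
    gcc -g -O2 -o app app.c
    objcopy --only-keep-debug app app.debug   # copy debug info to a separate file
    strip --strip-debug app                   # remove debug info from the binary
    objcopy --add-gnu-debuglink=app.debug app # record the debuglink in the binary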
At the beginning of the application, there's a call to setrlimit which sets the core limit to infinity. Is this the problem?
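Presumably the call in question looks roughly like this (a sketch; the actual code isn't shown in the question):

    #include <sys/resource.h>

    /* Sketch of the call described above: remove any cap on core file size,
       so the kernel dumps the process's entire address space on a crash. */
    static int enable_full_core_dumps(void)
    {
        struct rlimit rl;
        rl.rlim_cur = RLIM_INFINITY;  /* soft limit */
        rl.rlim_max = RLIM_INFINITY;  /* hard limit */
        return setrlimit(RLIMIT_CORE, &rl);
    }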
Answer 1:
Yes - don't allocate so much memory :-)
The core dump contains the full image of your application's address space, including code, stack, and heap (malloc'd objects, etc.).
If your core dumps are >2GB, that implies that at some point you allocated that much memory.
You can use setrlimit to set a lower limit on core dump size, at the risk of ending up with a core dump that you can't decode (because it's incomplete).
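A minimal sketch of setting such a lower limit, assuming a 100 MB cap (the figure is an arbitrary example, not from the answer):

    #include <sys/resource.h>

    /* Sketch: cap core files at 100 MB. Dumps that would exceed this are
       truncated by the kernel, which may leave them unusable in a debugger. */
    static int cap_core_dumps(void)
    {
        struct rlimit rl;
        rl.rlim_cur = 100UL * 1024 * 1024;  /* soft limit: 100 MB */
        rl.rlim_max = 100UL * 1024 * 1024;  /* hard limit: 100 MB */
        return setrlimit(RLIMIT_CORE, &rl);
    }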
Answer 2:
Yes, setrlimit is why you're getting large core files. You can set the limit on the core size in most shells, e.g. in bash you can do ulimit -c 5000000. Your setrlimit call will override that, however.
/etc/security/limits.conf can be used to set upper bounds on the core size as well.
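For illustration, entries in /etc/security/limits.conf might look like this (the values are examples only; for the core item the unit is KB):

    # <domain>  <type>  <item>  <value in KB>
    *           soft    core    102400
    *           hard    core    102400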
Source: https://stackoverflow.com/questions/2762879/linux-core-dumps-are-too-large