Core dump file is not generated


Make sure your current directory (at the time of crash -- server may change directories) is writable. If the server calls setuid, the directory has to be writable by that user.

Also check /proc/sys/kernel/core_pattern. That may redirect core dumps to another directory, and that directory must be writable.
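
For example, to inspect the pattern and temporarily point dumps at a world-writable directory (the pattern below is just an illustration, not a recommendation):

cat /proc/sys/kernel/core_pattern
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern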

This checklist covers the common reasons why core dumps are not generated (a quick set of commands covering several of these checks follows the list):

  • The core would have been larger than the current limit.
  • You don't have the necessary permissions to dump core (directory and file). Note that core dumps are placed in the dumping process' current directory, which may differ from that of the parent process.
  • Verify that the file system is writable and has sufficient free space.
  • If a subdirectory named core exists in the working directory, no core will be dumped.
  • If a file named core already exists but has multiple hard links, the kernel will not dump core.
  • Verify the permissions on the executable: if the executable has the suid or sgid bit set, core dumps are disabled by default. The same applies if you have execute permission but no read permission on the file.
  • Verify that the process has not changed its working directory, core size limit, or dumpable flag.
  • Some kernel versions cannot dump processes with a shared address space (i.e. threads). Newer kernel versions can dump such processes but append the pid to the file name.
  • The executable could be in a non-standard format that does not support core dumps. Each executable format must implement a core dump routine.
  • The segmentation fault could actually be a kernel Oops; check the system logs for any Oops messages.
  • The application called exit() instead of using the core dump handler.
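
A rough sketch covering several of the checks above in one place (run these as the user the crashing process runs as):

ulimit -c                           # 0 means core dumps are disabled in this shell
cat /proc/sys/kernel/core_pattern   # where (or to which handler) cores are sent
cat /proc/sys/fs/suid_dumpable      # 0 disables dumps for suid/sgid binaries
ls -ld .                            # the dumping process' working directory must be writable
ls -ld core 2>/dev/null             # an existing core entry can block the dump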

Check:

$ sysctl kernel.core_pattern

to see how your dumps are created (%e expands to the executable name and %t to the time of the dump).

If you're on Ubuntu, your dumps are created by apport in /var/crash, but in a different format (open the file to see it).
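
On Ubuntu you can typically unpack such a report with apport-unpack (the report file name below is hypothetical); the extracted CoreDump entry is a regular core file:

apport-unpack /var/crash/_usr_bin_myprog.1000.crash /tmp/unpacked
ls /tmp/unpacked/CoreDump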

You can test it by:

sleep 10 &
killall -SIGSEGV sleep

If core dumping is successful, you will see “(core dumped)” after the segmentation fault indication.
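
If the dump succeeded, the core file appears wherever kernel.core_pattern points; a quick way to look for it (the paths below are just the common defaults):

ls -l core*          # plain "core" pattern: current directory of the crashed process
ls -l /var/crash/    # Ubuntu with apport enabled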

Read more:

  • How to generate core dump file in Ubuntu
  • https://wiki.ubuntu.com/Apport

Remember that if you are starting the server from a service, it runs in a different session, so a ulimit set in your login shell won't be effective there. Put this in the startup script itself:

ulimit -c unlimited
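
If the server runs as a systemd service, an alternative sketch is to raise the limit in the unit file rather than the script (the unit name myserver.service is hypothetical):

# /etc/systemd/system/myserver.service (excerpt)
[Service]
LimitCORE=infinity

sudo systemctl daemon-reload
sudo systemctl restart myserver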

For the record, on Debian 9 Stretch (systemd), I had to install the package systemd-coredump. Afterwards, core dumps were generated in the folder /var/lib/systemd/coredump.

Furthermore, these coredumps are compressed in the lz4 format. To decompress, you can use the package liblz4-tool like this: lz4 -d FILE.

To be able to debug the decompressed coredump using gdb, I also had to rename the utterly long filename into something shorter...
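
With systemd-coredump installed, the coredumpctl tool usually saves you the manual decompressing and renaming; for example (1234 stands for the PID of the crashed process):

coredumpctl list                    # list recorded crashes
coredumpctl info 1234               # details for the crash of PID 1234
coredumpctl gdb 1234                # open that core directly in gdb
coredumpctl dump 1234 -o core.1234  # or extract it to a plain core file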

Also, check that you have enough disk space on /var/core or wherever your core dumps get written. If the partition is almost full or at 100% disk usage, that is the problem. My core dumps average a few gigs, so you should be sure to have at least 5-10 GB available on the partition.
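
A quick way to check free space in the dump directory (adjust the path to wherever your cores are written):

df -h /var/core    # or the directory named in kernel.core_pattern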

The answers given here cover pretty well most scenarios for which core dump is not created. However, in my instance, none of these applied. I'm posting this answer as an addition to the other answers.

If your core file is not being created for whatever reason, I recommend looking at /var/log/messages. There might be a hint in there as to why the core file is not created. In my case there was a line stating the root cause:

Executable '/path/to/executable' doesn't belong to any package

To work around this issue edit /etc/abrt/abrt-action-save-package-data.conf and change ProcessUnpackaged from 'no' to 'yes'.

ProcessUnpackaged = yes

This setting specifies whether to create cores for binaries not installed by the package manager.
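
A sketch of making that change and restarting abrt (the exact service name may vary by distribution):

sudo sed -i 's/^ProcessUnpackaged = no/ProcessUnpackaged = yes/' /etc/abrt/abrt-action-save-package-data.conf
sudo systemctl restart abrtd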

If you daemonize a process by calling daemon(), the current working directory changes to / by default. So if your program is a daemon, you should look for the core in the / directory, not in the directory of the binary.
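
You can confirm a daemon's working directory (and therefore where its core would land) through /proc; mydaemon is a placeholder for your daemon's name:

ls -l /proc/$(pidof mydaemon)/cwd   # usually points to / after daemon()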

If one is on a Linux distro (e.g. CentOS, Debian) then perhaps the most accessible way to find out about core files and related conditions is in the man page. Just run the following command from a terminal:

man 5 core

Although this isn't going to be a problem for the person who asked the question (since they ran the program that was to produce the core file in a script with the ulimit command), I'd like to document that the ulimit command is specific to the shell in which you run it, like environment variables. I spent way too much time running ulimit and sysctl in one shell while running the command that I wanted to dump core in another shell, wondering why the core file was not produced.

I will be adding it to my bashrc. The sysctl setting applies to all processes once it is issued, but ulimit only applies to the shell in which it is issued (and its descendants), not to other shells that happen to be running.
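
A sketch of making both settings persistent (the sysctl file name and the pattern are just examples):

echo 'ulimit -c unlimited' >> ~/.bashrc                                   # future interactive shells
echo 'kernel.core_pattern=/tmp/core.%e.%p' | sudo tee /etc/sysctl.d/60-core.conf
sudo sysctl --system                                                      # reload sysctl configuration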

Note: If you have written a crash handler yourself, then the core might not get generated. So search the code for something along the lines of:

signal(SIGSEGV, <handler> );

With that in place, SIGSEGV is handled by handler and you will not get the core dump.

Just in case someone else stumbles on this: I was running someone else's code that handled the signal so it could exit gracefully. I commented out the handler and got the core dump.
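
If you only have a running binary and no source, a rough way to check whether it catches SIGSEGV is the SigCgt mask in /proc (myapp is a placeholder; SIGSEGV is signal 11, i.e. bit 0x400):

pid=$(pidof myapp)
mask=$(awk '/^SigCgt:/ {print $2}' /proc/$pid/status)
if (( (0x$mask >> 10) & 1 )); then
    echo "SIGSEGV is caught by a handler; the default core dump will not happen"
fi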

On CentOS, if core files are not being generated for a non-root account, you need root privileges (or to log in as root) to raise that account's core limit in /etc/security/limits.conf (here account stands for the user name):

vim /etc/security/limits.conf

account soft core unlimited
account hard core unlimited

Then, if you are in a login shell (via SecureCRT or similar), log out and log back in.
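
After logging back in, you can confirm that the new limit took effect:

ulimit -c    # should now print "unlimited"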
