ulimit

On generating and viewing core dump files

独自空忆成欢 submitted at 2019-12-21 23:26:47
An introduction to Linux core files

1. A brief introduction to core files
When a program crashes, it generally produces a core file in a specified directory. A core file is simply a memory image of the process (plus debugging information) and is used mainly for debugging.

2. Enabling or disabling core file generation
The following command prevents the system from generating core files:
ulimit -c 0
The following command checks whether the core file option is enabled:
ulimit -a
It displays all of the user's limits; the -a option stands for "all". Core settings can also be adjusted in system files. /etc/profile usually contains a line like the following to disable core files, and this default is usually reasonable:
# No core files by default
ulimit -S -c 0 > /dev/null 2>&1
During development, however, you sometimes need core files to debug a problem, so the setting is enabled for a particular user's environment: add ulimit -c unlimited to that user's ~/.bash_profile to let that user generate core files. ulimit -c 0 likewise disables core files, while ulimit -c 1024 limits any generated core file to at most 1024 KB. 3. Setting core
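The `ulimit -c` settings above are per shell session; a process can also inspect and change the same limit for itself. A minimal sketch (not from the original article) using Python's `resource` module, which wraps the same getrlimit/setrlimit calls:

```python
import resource

# Read the current core-file size limit: a (soft, hard) pair, in bytes.
# Soft limit 0 here means "no core files", matching `ulimit -c 0`.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
print("core limit before:", soft, hard)

# Raise the soft limit up to the hard limit so this process may dump core,
# the in-process equivalent of `ulimit -c unlimited` (bounded by the hard limit).
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
print("core limit after:", resource.getrlimit(resource.RLIMIT_CORE))
```

An unprivileged process may move its soft limit anywhere up to the hard limit, but can only lower the hard limit, never raise it.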

ulimit -t under Ubuntu

断了今生、忘了曾经 submitted at 2019-12-21 09:14:16
Question: I am running Ubuntu Linux (2.6.28-11-generic #42-Ubuntu SMP Fri Apr 17 01:57:59 UTC 2009 i686 GNU/Linux) and it seems that the command "ulimit -t" does not work properly. I ran:
ulimit -t 1; myprogram
where 'myprogram' is an endless loop. I expected the program to be interrupted after 1 second, but it did not stop. I tried the same thing on a Fedora installation and it worked as expected. Is there some configuration that has to be set for it to work properly? -- tsf
Answer 1: As Tsf pointed
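What `ulimit -t 1; myprogram` is supposed to do can be reproduced portably in-process: when a process exceeds its soft CPU-time limit, the kernel delivers SIGXCPU, whose default action terminates it. A hedged sketch (the 1-second/2-second limits are arbitrary values for the demo):

```python
import signal
import subprocess
import sys

# Run a busy-looping child that first caps its own CPU time via RLIMIT_CPU.
# At the 1-second soft limit the kernel sends SIGXCPU; since the child installs
# no handler, the default action kills it (SIGKILL follows at the hard limit).
child = subprocess.run(
    [sys.executable, "-c",
     "import resource\n"
     "resource.setrlimit(resource.RLIMIT_CPU, (1, 2))\n"
     "while True: pass\n"])

# A negative returncode means "terminated by that signal number".
print("killed by SIGXCPU:", -child.returncode == signal.SIGXCPU)
```

If the shell's `ulimit -t` appears to be ignored, checking whether the limit actually reached the child (e.g. via getrlimit inside it) separates a shell problem from a kernel one.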

Supervisor open file limit won't change when using Chef

倾然丶 夕夏残阳落幕 submitted at 2019-12-21 05:15:14
Question: I am modifying /etc/security/limits.conf on the machine and then installing Supervisor in a Chef recipe. After the recipe run finishes, if I run cat /proc/<process id>/limits I see:

Limit           Soft Limit  Hard Limit  Units
Max cpu time    unlimited   unlimited   seconds
Max open files  1024        4096        files

If I log into the machine and run service supervisor restart, the max open files is then set correctly. However, if I run this command in the recipe (right after installing Supervisor, at the very end of the
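Reading /proc/<pid>/limits, as the question does, is the reliable way to see what a service actually inherited, since limits.conf only applies to PAM-mediated logins, not to already-running daemons. A Linux-only sketch of the same check from inside a process:

```python
# Linux-only: read this process's effective limits exactly as
# `cat /proc/<pid>/limits` does, to verify what was really inherited.
with open("/proc/self/limits") as f:
    limits = f.read()

for line in limits.splitlines():
    if line.startswith("Limit") or "Max open files" in line:
        print(line)
```

A daemon started during the same provisioning run keeps the limits of the process that spawned it; restarting the service after the limits change (as the questioner observed) is what picks up the new values.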

Finding hard and soft open file limits from within jvm in linux (ulimit -n and ulimit -Hn)

拈花ヽ惹草 submitted at 2019-12-19 03:57:14
Question: I have a problem where I need to find out the hard and soft open file limits for the process in Linux from within a Java/Groovy program. When I execute ulimit from the terminal it gives separate values for the hard and soft open file limits:

$ ulimit -n
1024
$ ulimit -Hn
4096

But if I execute it from Groovy, it ignores the soft limit and always returns the hard limit value:

groovy> ['bash', '-c', 'ulimit -n'].execute().text
Result: 4096
groovy> ['bash', '-c', 'ulimit -Hn'].execute().text
Result: 4096
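Note that the spawned `bash -c 'ulimit -n'` is a child of the JVM, so it reports the JVM's own limits, not the terminal's; some JVMs also raise their soft file-descriptor limit toward the hard limit at startup, which would explain both commands printing 4096. The general in-process alternative is a single getrlimit call, which returns both values at once. A sketch in Python (the same call is reachable from Java via JNA or similar):

```python
import resource

# One getrlimit() call returns both limits; no need to shell out to `ulimit`.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft open-file limit:", soft)
print("hard open-file limit:", hard)
```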

How to increase the limit of “maximum open files” in C on Mac OS X

这一生的挚爱 submitted at 2019-12-18 02:54:08
Question: The default limit for max open files on Mac OS X is 256 (ulimit -n) and my application needs about 400 file handles. I tried to change the limit with setrlimit(), but even though the function executes correctly, I'm still limited to 256. Here is the test program I use:

#include <stdio.h>
#include <sys/resource.h>

main() {
    struct rlimit rlp;
    FILE *fp[10000];
    int i;

    getrlimit(RLIMIT_NOFILE, &rlp);
    printf("before %d %d\n", rlp.rlim_cur, rlp.rlim_max);
    rlp.rlim_cur = 10000;
    setrlimit(RLIMIT
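A likely culprit in code like the above is requesting a soft limit above the hard limit (or above the platform's OPEN_MAX on macOS), which makes setrlimit() fail; clamping the request to the hard limit first avoids that. A hedged, portable sketch of the fix (the target of 400 comes from the question):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
wanted = 400  # the question's requirement

# Never request more than the hard limit: setrlimit() rejects
# a soft limit above rlim_max (and macOS additionally caps at OPEN_MAX).
if hard == resource.RLIM_INFINITY:
    new_soft = wanted
else:
    new_soft = min(wanted, hard)

resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
print("open-file soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

In C the equivalent check is comparing the desired rlim_cur against rlim_max after getrlimit(), and also checking setrlimit()'s return value rather than assuming it succeeded.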

Resident Set Size (RSS) limit has no effect

北战南征 submitted at 2019-12-17 23:24:02
Question: The following problem occurs on a machine running Ubuntu 10.04 with the 2.6.32-22-generic kernel: setting a limit on the Resident Set Size (RSS) of a process does not seem to have any effect. I currently set the limit in Python with the following code:

import resource
# (100, 100) is the (soft, hard) limit. ~100kb.
resource.setrlimit(resource.RLIMIT_RSS, (100, 100))
memory_sink = ['a']*10000000  # this should fail

The list, memory_sink, succeeds every time. When I check RSS usage with top, I
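The behavior is expected: modern Linux kernels accept RLIMIT_RSS in setrlimit() but do not enforce it, so the allocation goes through regardless. A sketch demonstrating this, with the enforced alternatives noted in comments:

```python
import resource

# RLIMIT_RSS is accepted but ignored by modern Linux kernels,
# so this allocation succeeds despite the tiny limit.
# For an enforced cap, use RLIMIT_AS (address space) or a cgroup memory limit.
resource.setrlimit(resource.RLIMIT_RSS, (100, 100))
memory_sink = ["a"] * 10_000_000  # tens of MB of RSS, far past "100"
print("allocation succeeded despite RSS limit:", len(memory_sink))
```

RLIMIT_AS counts virtual address space rather than resident pages, so it is a coarser bound, but it is the one the kernel actually checks at allocation time.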

What does “ulimit -s unlimited” do?

余生颓废 submitted at 2019-12-17 10:32:40
Question: There are understandably many related questions on stack allocation:
What and where are the stack and heap?
Why is there a limit on the stack size?
Size of stack and heap memory
However, on various *nix machines I can issue the bash command
ulimit -s unlimited
or the csh command
set stacksize unlimited
How does this change how programs are executed? Are there any impacts on program or system performance (e.g., why wouldn't this be the default)? In case more system details are relevant, I'm
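`ulimit -s` controls RLIMIT_STACK, the maximum size the main thread's stack may grow to; "unlimited" maps to the RLIM_INFINITY sentinel. A small sketch to see what the current shell handed this process:

```python
import resource

# Inspect the stack-size limit that `ulimit -s` sets for child processes.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

def fmt(v):
    # RLIM_INFINITY is how "unlimited" appears at the getrlimit() level.
    return "unlimited" if v == resource.RLIM_INFINITY else f"{v // 1024} KiB"

print("stack soft limit:", fmt(soft))
print("stack hard limit:", fmt(hard))
```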

How to limit memory of a OS X program? ulimit -v neither -m are working

风流意气都作罢 submitted at 2019-12-17 06:35:21
Question: My programs run out of memory about half of the time I run them. Under Linux I can set a hard limit on the available memory using ulimit -v mem-in-kbytes. Actually, I use ulimit -S -v mem-in-kbytes, so I get a proper memory allocation failure in the program and I can abort. But ulimit is not working in OS X 10.6. I've tried the -s and -m options, and they are not working. In 2008 there was some discussion about the same issue on MacRumors, but nobody proposed a good alternative. The should
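On Linux, what `ulimit -v` configures is RLIMIT_AS, and the "proper memory allocation failure" the questioner wants shows up in Python as MemoryError. A hedged sketch (Linux semantics; the 512 MiB cap and 1 GiB allocation are arbitrary demo values):

```python
import subprocess
import sys

# Child process: cap its own address space via RLIMIT_AS (the limit behind
# `ulimit -v`), then attempt an allocation that exceeds the cap. The expected
# outcome is a clean MemoryError rather than a runaway process.
child_code = """
import resource
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024,) * 2)
try:
    block = bytearray(1024 * 1024 * 1024)  # 1 GiB request, over the 512 MiB cap
    print("allocated")
except MemoryError:
    print("MemoryError")
"""

out = subprocess.run([sys.executable, "-c", child_code],
                     capture_output=True, text=True)
print(out.stdout.strip())
```

On macOS, RLIMIT_AS has historically been unenforced (the heart of this question), which is why approaches built on `ulimit -v` do not carry over.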

How to set ulimit / file descriptor limits on a Docker container (image: phusion/baseimage-docker)

蓝咒 submitted at 2019-12-17 06:34:30
Question: I need to set the file descriptor limit correctly on a Docker container. I connect to the container with ssh (https://github.com/phusion/baseimage-docker). Already tried:
editing limits.conf (the container ignores this file);
the upstart procedure found at https://coderwall.com/p/myodcq, but this Docker image has a different kind of init process (runit);
modifying the configuration of the pam library in /etc/pam.d;
enabling pam for ssh in sshd_config.
The output is always the same: bash: ulimit: open files

How do I get a core dump on OS X Lion?

拥有回忆 submitted at 2019-12-11 07:59:58
Question: I am working on a PostgreSQL extension in C that segfaults, so I want to look at the core dump file on my OS X Lion box. However, there are no core files in /cores or anywhere else that I can find. It appears that they are enabled in the system but are limited to a size of 0:

> sysctl kern.coredump
kern.coredump: 1
> ulimit -c
0

I tried setting ulimit -c unlimited in the shell session I'm using to start and stop PostgreSQL, and it seems to stick:

> ulimit -c unlimited

And yet no matter what I