ulimit

[Repost] How to fix "Too many open files" on Linux

拟墨画扇 submitted on 2019-12-01 04:10:23
Answer 1

System-wide maximum number of open file handles:

```
[root@lxadmin nginx]# cat /proc/sys/fs/file-max
8192
```

Per-process maximum number of open file handles:

```
[root@lxadmin nginx]# ulimit -n
1024
```

Before starting a process you can raise its limit with `[root@lxadmin nginx]# ulimit -n 8192`. To raise the system-wide limit permanently, append `fs.file-max=xxx` to the end of /etc/sysctl.conf.

Answer 2

The Linux kernel sometimes reports "Too many open files" because the default value of file-max (e.g. 8096) is too small. To fix this, run the following commands as root:

```
# echo "65536"  > /proc/sys/fs/file-max   # for 2.2 and 2.4 kernels
# echo "131072" > /proc/sys/fs/inode-max  # 2.2 kernels only
```

or add them to an init script under /etc/rcS.d/*.

Answer 3

The fix is to raise the operating system's open-file limits, as follows:

1. Append fs.file-max=xxx to the end of /etc/sysctl.conf.
2. Set the per-process maximum number of open file handles in /etc/security/limits.conf: * - nofile
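Not part of the original answers, but a quick way to read both limits side by side from Python, to confirm that the changes above took effect:

```python
import resource

# System-wide ceiling on open file handles (what Answer 1 reads with cat).
with open("/proc/sys/fs/file-max") as f:
    print("fs.file-max: " + f.read().strip())

# Per-process soft/hard limits (what "ulimit -n" and limits.conf control).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process nofile: %d / %d" % (soft, hard))
```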

How do I close the files from tempfile.mkstemp?

风格不统一 submitted on 2019-12-01 02:44:18
On my Linux machine, ulimit -n gives 1024. This code:

```python
from tempfile import mkstemp

for n in xrange(1024 + 1):
    f, path = mkstemp()
```

fails on the last loop iteration with:

```
Traceback (most recent call last):
  File "utest.py", line 4, in <module>
  File "/usr/lib/python2.7/tempfile.py", line 300, in mkstemp
  File "/usr/lib/python2.7/tempfile.py", line 235, in _mkstemp_inner
OSError: [Errno 24] Too many open files: '/tmp/tmpc5W3CF'
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
ImportError: No
```
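The usual fix is to close the OS-level file descriptor that mkstemp() returns. A minimal sketch (not the original answer verbatim):

```python
import os
from tempfile import mkstemp

# mkstemp() hands back a raw OS file descriptor, so it must be released with
# os.close(); it is not a file object with a close() method of its own.
for n in range(1024 + 1):
    fd, path = mkstemp()
    os.close(fd)      # release the descriptor
    os.remove(path)   # optionally delete the temporary file as well
```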

How do I set a ulimit from inside a Perl script that applies to its children?

我是研究僧i submitted on 2019-11-30 15:21:41
Question: I have a Perl script that does various installation steps to set up a development box for our company. It runs various shell scripts, some of which crash due to lower-than-required ulimits (specifically, stack size -s in my case). Therefore, I'd like to set a ulimit that would apply to all scripts (children) started from within my main Perl one, but I am not sure how to achieve that - any attempts at calling ulimit from within the script only set it on that specific child shell, which

How do I set a ulimit from inside a Perl script that applies to its children?

泪湿孤枕 submitted on 2019-11-30 13:47:38
I have a Perl script that does various installation steps to set up a development box for our company. It runs various shell scripts, some of which crash due to lower-than-required ulimits (specifically, stack size -s in my case). Therefore, I'd like to set a ulimit that would apply to all scripts (children) started from within my main Perl one, but I am not sure how to achieve that - any attempts at calling ulimit from within the script only set it on that specific child shell, which immediately exits. I am aware that I can call ulimit before I run the Perl script or use /etc/security
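For reference, resource limits set in a parent process are inherited across fork/exec by every child it starts. A sketch of that mechanism in Python rather than Perl (the child script name is made up); the same idea applies if the limit is raised inside the Perl parent before the shell scripts are launched:

```python
import resource
import subprocess

# Raise the soft stack limit up to the hard limit in the parent; every child
# process spawned afterwards inherits this limit automatically.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))

subprocess.call(["./install_step.sh"])  # hypothetical child; runs with the raised limit
```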

How to limit memory usage within a python process

孤人 submitted on 2019-11-30 11:03:32
I run Python 2.7 on a Linux machine with 16 GB of RAM and a 64-bit OS. A Python script I wrote can load too much data into memory, which slows the machine down to the point where I cannot even kill the process any more. While I can limit memory by calling:

```
ulimit -v 12000000
```

in my shell before running the script, I'd like to include a limiting option in the script itself. Everywhere I looked, the resource module is cited as having the same power as ulimit. But calling:

```python
import resource
_, hard = resource.getrlimit(resource.RLIMIT_DATA)
resource.setrlimit(resource.RLIMIT_DATA, (12000, hard))
```

at the
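A sketch of the fix that is usually suggested for this (an assumption, not the asker's confirmed solution): ulimit -v corresponds to RLIMIT_AS (total address space) rather than RLIMIT_DATA, and resource.setrlimit() takes bytes, whereas ulimit -v takes kilobytes:

```python
import resource

# 12000000 kB as used with "ulimit -v" becomes 12000000 * 1024 bytes here.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (12000000 * 1024, hard))
```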

How to limit memory usage within a python process

心已入冬 submitted on 2019-11-29 16:33:00
Question: I run Python 2.7 on a Linux machine with 16 GB of RAM and a 64-bit OS. A Python script I wrote can load too much data into memory, which slows the machine down to the point where I cannot even kill the process any more. While I can limit memory by calling:

```
ulimit -v 12000000
```

in my shell before running the script, I'd like to include a limiting option in the script itself. Everywhere I looked, the resource module is cited as having the same power as ulimit. But calling: import resource _, hard =

How to increase the limit of “maximum open files” in C on Mac OS X

ⅰ亾dé卋堺 submitted on 2019-11-29 00:11:44
The default limit for the maximum number of open files on Mac OS X is 256 (ulimit -n), and my application needs about 400 file handles. I tried to change the limit with setrlimit(), but even though the function appears to execute correctly, I'm still limited to 256. Here is the test program I use:

```c
#include <stdio.h>
#include <sys/resource.h>

main() {
    struct rlimit rlp;
    FILE *fp[10000];
    int i;

    getrlimit(RLIMIT_NOFILE, &rlp);
    printf("before %d %d\n", rlp.rlim_cur, rlp.rlim_max);

    rlp.rlim_cur = 10000;
    setrlimit(RLIMIT_NOFILE, &rlp);

    getrlimit(RLIMIT_NOFILE, &rlp);
    printf("after %d %d\n", rlp.rlim_cur, rlp.rlim_max);

    for(i=0
```
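For comparison, the same experiment can be run from Python's resource module. This is only a sketch of the commonly recommended pattern (clamp the request to the hard limit and read the value back to verify), not a verified fix for the C program above; the setrlimit man page on OS X additionally recommends capping rlim_cur at OPEN_MAX for RLIMIT_NOFILE:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
wanted = 10000

# An unprivileged process cannot raise the soft limit past the hard limit,
# so clamp the request instead of asking for 10000 unconditionally.
new_soft = wanted if hard == resource.RLIM_INFINITY else min(wanted, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE))  # read back to verify
```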

Too many open files ( ulimit already changed )

 ̄綄美尐妖づ submitted on 2019-11-28 21:29:36
I'm working on a Debian server with Tomcat 7 and Java 1.7. The application receives several TCP connections, and each TCP connection is an open file for the Java process. Looking at /proc/<pid of java>/fd I found that, sometimes, the number of open files exceeds 1024; when this happens, I find the stack trace _SocketException: Too many open files_ in the catalina.out log. Everything I find about this error refers to the ulimit; I have already changed that and the error keeps happening. Here is the config: at /etc/security/limits.conf

```
root soft nofile 8192
root hard nofile 8192
```

at
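One thing worth checking in this situation (a suggestion, not the confirmed resolution of this question): /etc/security/limits.conf is applied by PAM at login, so a Tomcat instance started as a service may never pick those values up. The limits the running JVM actually has can be read from /proc; a small sketch, with the PID value made up:

```python
# Print the open-file limit the Java process is really running with.
pid = 12345  # hypothetical Tomcat/Java PID; substitute the real one
with open("/proc/%d/limits" % pid) as f:
    for line in f:
        if line.startswith("Max open files"):
            print(line.rstrip())
```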

Resident Set Size (RSS) limit has no effect

自古美人都是妖i submitted on 2019-11-28 19:34:49
The following problem occurs on a machine running Ubuntu 10.04 with the 2.6.32-22-generic kernel: setting a limit for the Resident Set Size (RSS) of a process does not seem to have any effect. I currently set the limit in Python with the following code:

```python
import resource

# (100, 100) is the (soft, hard) limit. ~100kb.
resource.setrlimit(resource.RLIMIT_RSS, (100, 100))

memory_sink = ['a']*10000000  # this should fail
```

The allocation of the list, memory_sink, succeeds every time. When I check RSS usage with top, I can easily get the process to use 1 GB of RAM, which means that the limit is not working. Do RSS limits
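The usual explanation (offered here as background, not as this thread's accepted answer): Linux kernels from the 2.6 line onward do not enforce RLIMIT_RSS at all, so the call above is accepted but silently ignored. Limiting the total address space with RLIMIT_AS does take effect; a minimal sketch:

```python
import resource

# Cap the address space at roughly 512 MB; limits are given in bytes.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, hard))

try:
    memory_sink = ['a'] * 100000000  # ~800 MB of pointers; exceeds the cap
except MemoryError:
    print("allocation blocked by RLIMIT_AS")
```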

Python: ulimit and nice for subprocess.call / subprocess.Popen?

て烟熏妆下的殇ゞ submitted on 2019-11-28 15:39:47
I need to limit the amount of time and CPU taken by external command-line apps I spawn from a Python process using subprocess.call, mainly because sometimes the spawned process gets stuck and pins the CPU at 99%. nice and ulimit seem like reasonable ways to do this, but I'm not sure how they'd interact with subprocess. The limits look something like:

- Kill the process if it's taking more than 60 seconds
- Limit it to 20% of CPU

I want to apply the resource limiting to the subprocess, not to the Python process that's spawning the subprocesses. Is there a way to apply nice and ulimit to the
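A common way to do this (a sketch, under the assumption that CPU time rather than wall-clock time is what actually needs capping, since the symptom is a process pinning the CPU): pass a preexec_fn to subprocess.call/Popen so the priority and resource limits are changed in the child, after the fork but before the external program starts, leaving the parent Python process untouched:

```python
import os
import resource
import subprocess

def demote_child():
    # Runs in the child only, just before exec of the external program.
    os.nice(19)                                        # drop to the lowest scheduling priority
    resource.setrlimit(resource.RLIMIT_CPU, (60, 60))  # at most ~60 s of CPU time

# "some_external_tool" is a placeholder for the real command line.
subprocess.call(["some_external_tool", "input.dat"], preexec_fn=demote_child)
```

Note that RLIMIT_CPU counts CPU seconds, not elapsed time, and neither nice nor ulimit gives a hard "20% of one CPU" cap; that kind of throttling needs something like cpulimit or cgroups.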