ulimit

Node.js fs.open() hangs after trying to open more than 4 named pipes (FIFOs)

Submitted by 旧街凉风 on 2019-12-06 10:57:41
I have a Node.js process that needs to read from multiple named pipes (FIFOs), fed by different other processes, as an IPC mechanism. I realized that after opening and creating read streams from more than four FIFOs, fs seems unable to open any further FIFOs and just hangs. That number seems low, considering that it is possible to open thousands of regular files concurrently without trouble (for instance by replacing mkfifo with touch in the following script). I tested with Node.js v10.1.0 on macOS 10.13 and with Node.js v8.9.3 on Ubuntu 16.04, with the same result. The faulty script…
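The hang can be reproduced outside Node: on POSIX systems, opening a FIFO for reading blocks until some process opens it for writing. A minimal Python sketch of that behavior (the FIFO path is made up for the demo; `O_NONBLOCK` is used so the sketch itself does not hang):

```python
import os
import tempfile

# Create a FIFO in a throwaway directory.
fifo = os.path.join(tempfile.mkdtemp(), "demo.fifo")
os.mkfifo(fifo)

# A plain blocking os.open(fifo, os.O_RDONLY) would hang right here until
# another process opened the FIFO for writing -- the same wait that can tie
# up a small worker pool when several FIFOs are opened at once.
# O_NONBLOCK makes the read-side open return immediately instead.
fd = os.open(fifo, os.O_RDONLY | os.O_NONBLOCK)
opened_without_blocking = fd >= 0
print(opened_without_blocking)   # True

os.close(fd)
os.unlink(fifo)
```

If each blocking open occupies one worker in a fixed-size pool, four pending FIFO opens are enough to stall every later filesystem operation, which matches the symptom described above.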

How to set the size of the C stack in R?

Submitted by 佐手、 on 2019-12-06 00:58:35
Question: I'm trying to use the spread() function from the tidyr package in R on a dataframe that has about three million observations. It's returning the following error message:

Error: C stack usage 26498106 is too close to the limit

When I run Cstack_info(), it tells me:

> Cstack_info()
      size    current  direction eval_depth
   7969177      15272          1          2

Following the advice in the answer to this question, I've tried increasing the stack size by running ulimit -s 32768 in a terminal window and opening RStudio from…
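`ulimit -s` adjusts the process resource limit RLIMIT_STACK, which children inherit; whether R honors a raised limit depends on how R records its C stack size at startup. A hedged Python sketch of inspecting and raising the same limit programmatically (note `ulimit -s` counts kilobytes while the `resource` module counts bytes):

```python
import resource

# Read the current C stack limits; values here are in bytes, while
# `ulimit -s` works in kilobytes.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# Attempt the equivalent of `ulimit -s 32768` (32 MiB): an unprivileged
# process may raise its soft limit up to, but not past, the hard limit.
target = 32768 * 1024
if hard == resource.RLIM_INFINITY or target <= hard:
    resource.setrlimit(resource.RLIMIT_STACK, (target, hard))

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("soft stack limit (KiB):", soft // 1024)
```

Any program launched after this point inherits the raised limit, which is why the ulimit command must run in the same shell session that then starts RStudio.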

Supervisord and ulimit for a Java app

Submitted by 蓝咒 on 2019-12-05 18:44:23
I am using supervisord to start my Java app. The application works OK, but my ulimit nofile value is not being applied. I got it working on one machine, running Debian, but on a second machine the same configuration does not work. Basically, I start my app with a script:

#!/bin/sh
iscsiJar="/mnt/cache/jscsi/udrive.jar"
ulimit -SHn 32768
# function to start the application
java -XX:MaxHeapFreeRatio=70 -Xmx2048M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dump.hprof -jar $iscsiJar

But cat /proc/4171/limits keeps saying:

Max open files  4096  4096  files

Any hint? I already…
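Since supervisord does not start its children through a login shell, values from /etc/security/limits.conf may never be applied to them; one workaround is to raise the limit inside the wrapper itself before exec'ing Java. A sketch of such a wrapper in Python (the jar path comes from the question; the exec line is left commented so the sketch runs standalone, and only the soft limit is changed, unlike `ulimit -SHn` which sets both):

```python
import os
import resource

# Raise the soft open-files limit toward 32768, mirroring the script's
# `ulimit -SHn 32768`.  An unprivileged process may only lift its soft
# limit as far as the current hard limit, so clamp to that.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
wanted = 32768 if hard == resource.RLIM_INFINITY else min(32768, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (wanted, hard))

print("nofile soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])

# The exec'd Java process would inherit the raised limit
# (path from the question; commented out for the standalone sketch):
# os.execvp("java", ["java", "-Xmx2048M", "-jar", "/mnt/cache/jscsi/udrive.jar"])
```

If the wrapper cannot raise the limit past 4096, the hard limit handed down by supervisord is the ceiling, and supervisord's own configuration (it has a `minfds` setting) is the place to look.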

File descriptors revisited: initscript and systemd

Submitted by 南笙酒味 on 2019-12-05 07:46:49
The service's ulimit settings have no effect. File descriptor limits were configured as follows. /etc/security/limits.conf contents:

* soft nproc 10000
* hard nproc 10000
* soft nofile 4194304
* hard nofile 4194304

/etc/sysctl.conf snippet:

fs.nr_open = 5242880
fs.file-max = 4194304

/etc/profile.d/ulimit.sh contents:

#!/bin/bash
[ "$(id -u)" == "0" ] && ulimit -n 4194304
ulimit -u 80000

Relevant snippet from /etc/pam.d/login:

session required pam_limits.so

Yet for a service started via service, the runtime file descriptor limit is still 1024?

grep 'open files' /proc/$(pgrep cron)/limits
# Max open files 1024 4096 files

SysV setup (CentOS 6 / Debian 7): initscript. Adding an /etc/initscript file changes the environment for all services. Example:

# cat /etc/initscript
ulimit -n…
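The check above greps /proc/<pid>/limits from the outside; the same verification can be done from inside a process, which is a quick way to confirm what limit a service actually inherited. A small Python sketch (Linux only, since it reads /proc):

```python
import resource

# What the process itself sees -- the counterpart of
# `grep 'open files' /proc/<pid>/limits` run from outside.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("Max open files", soft, hard)

# Cross-check against the /proc view of this same process (Linux only).
with open("/proc/self/limits") as f:
    line = next(l for l in f if l.startswith("Max open files"))
print(line.strip())
```

The two views must agree; when a service reports 1024 here despite limits.conf, the limit was inherited from whatever started the service, not from PAM.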

React Native + Jest EMFILE: too many open files error

Submitted by 感情迁移 on 2019-12-05 05:15:44
I am trying to run Jest tests, but I'm getting the following error:

Error reading file: /Users/mike/dev/react/TestTest/node_modules/react-native/node_modules/yeoman-environment/node_modules/globby/node_modules/glob/node_modules/path-is-absolute/package.json
/Users/mike/dev/react/TestTest/node_modules/jest-cli/node_modules/node-haste/lib/loader/ResourceLoader.js:88
throw err;
^
Error: EMFILE: too many open files, open '/Users/mike/dev/react/TestTest/node_modules/react-native/node_modules/yeoman-environment/node_modules/globby/node_modules/glob/node_modules/path-is-absolute/package.json' at…
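EMFILE (errno 24) is what a process gets when it exhausts its own per-process descriptor limit, which is why the usual fix is raising `ulimit -n` before running the test suite. A Python sketch that provokes the same error deliberately by lowering the soft limit first:

```python
import errno
import resource
import tempfile

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Temporarily lower the soft limit so the error is cheap to trigger.
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

files, caught = [], None
try:
    while True:
        files.append(tempfile.TemporaryFile())
except OSError as e:
    caught = e.errno          # EMFILE: ran out of descriptors
finally:
    for f in files:
        f.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(caught == errno.EMFILE)   # True
```

Tools that scan thousands of package.json files, like the Jest run above, hit exactly this wall when the default limit is low.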

How to set the size of the C stack in R?

Submitted by 梦想的初衷 on 2019-12-04 06:14:55
I'm trying to use the spread() function from the tidyr package in R on a dataframe that has about three million observations. It's returning the following error message:

Error: C stack usage 26498106 is too close to the limit

When I run Cstack_info(), it tells me:

> Cstack_info()
      size    current  direction eval_depth
   7969177      15272          1          2

Following the advice in the answer to this question, I've tried increasing the stack size by running ulimit -s 32768 in a terminal window and opening RStudio from the terminal. When I try this, however, the output of Cstack_info() is unchanged, and when I run my…

How do I close the files from tempfile.mkstemp?

Submitted by 淺唱寂寞╮ on 2019-12-03 23:21:11
Question: On my Linux machine, ulimit -n gives 1024. This code:

from tempfile import mkstemp
for n in xrange(1024 + 1):
    f, path = mkstemp()

fails on the last loop iteration with:

Traceback (most recent call last):
  File "utest.py", line 4, in <module>
  File "/usr/lib/python2.7/tempfile.py", line 300, in mkstemp
  File "/usr/lib/python2.7/tempfile.py", line 235, in _mkstemp_inner
OSError: [Errno 24] Too many open files: '/tmp/tmpc5W3CF'
Error in sys.excepthook:
Traceback (most recent call last):
  File "…
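mkstemp returns a raw OS-level file descriptor as its first value, and closing it is the caller's responsibility; with `os.close` added, the same loop stays under the `ulimit -n` ceiling. A sketch of the fixed loop (written for Python 3, hence `range` instead of `xrange`):

```python
import os
from tempfile import mkstemp

# mkstemp returns (fd, path); closing the descriptor after use is what
# keeps the loop under the `ulimit -n` ceiling.
created = []
for n in range(1024 + 1):
    fd, path = mkstemp()
    os.close(fd)          # release the descriptor immediately
    created.append(path)

# The files still exist on disk; remove them too.
for path in created:
    os.unlink(path)

print(len(created))       # 1025: more files than the 1024-fd limit
```

Closing the descriptor and deleting the file are separate steps: `os.close` frees the fd slot, while `os.unlink` removes the pathname.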

Supervisor open file limit won't change when using Chef

Submitted by 独自空忆成欢 on 2019-12-03 16:53:08
I am modifying /etc/security/limits.conf on the machine and then installing Supervisor in a Chef recipe. After the recipe run finishes, if I run cat /proc/<process id>/limits I see:

Limit            Soft Limit  Hard Limit  Units
Max cpu time     unlimited   unlimited   seconds
Max open files   1024        4096        files

If I log into the machine and run service supervisor restart, the max open files value is then set correctly. However, if I run this command in the recipe (right after installing Supervisor, at the very end of the recipe, anywhere), the limit does not change. It is not until I log in and manually run that command…
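Resource limits are inherited from the parent at process creation, so Supervisor (and everything it spawns) keeps whatever limits were in force when it was started; restarting it from a fresh login shell, where PAM has applied the new limits.conf, is what picks up the change. A small Python demonstration of that inheritance (Linux/macOS):

```python
import resource
import subprocess
import sys

# Lower this process's soft open-files limit...
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (256, hard))

# ...and observe that a child spawned afterwards inherits it, exactly as
# a service keeps the limits of the supervisor that launched it.
child = subprocess.run(
    [sys.executable, "-c",
     "import resource; print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())   # 256

resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

A Chef run that restarts the service inside the same converge inherits the converge's own limits, which explains why the restart only "works" when issued from a new login session.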

Max open files per process

Submitted by ぃ、小莉子 on 2019-12-03 13:54:33
Question: What is the maximum open files count per process in Mac OS X (10.6)? ulimit says 256, sysctl says 10240, but my test program can create 9469 (under gdb) or 10252 (without gdb) files…
Answer 1: It is clear now. The ulimit command is built into the shell. You can set maxfiles using the ulimit -n command for the current shell (and every program started from that shell). 10252 files: that was my mistake. It was 253 max open files when I started my test program from the shell (253 + stdin + stdout + stderr…
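The 256 reported by ulimit is the shell's soft limit, while the sysctl values are system-wide ceilings; a process may raise its own soft limit up to its hard limit without any privileges, which reconciles the two numbers. A hedged Python sketch (values differ per system, and three descriptors are already taken by stdin, stdout, and stderr, matching the 253 observed):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft:", soft, "hard:", hard)

# An unprivileged process may move its soft limit anywhere up to the hard
# limit -- the same adjustment `ulimit -n <value>` makes in the shell.
if hard != resource.RLIM_INFINITY:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print(resource.getrlimit(resource.RLIMIT_NOFILE)[0] == hard)   # True
```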

Why can't ulimit limit resident memory successfully, and how can it be done?

Submitted by ☆樱花仙子☆ on 2019-12-03 11:30:56
I start a new bash shell and execute:

ulimit -m 102400
ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 102400
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

and then I compile…
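On modern Linux kernels the value behind `ulimit -m` (RLIMIT_RSS) is recorded but not enforced, which is why resident memory sails past it; the address-space limit (`ulimit -v`, RLIMIT_AS) is enforced. A Python sketch showing RLIMIT_AS actually biting (the 1 GiB cap is an arbitrary demo value):

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)

# Cap the address space at 1 GiB -- the enforced analogue of
# `ulimit -v 1048576` -- then try to allocate 2 GiB.
resource.setrlimit(resource.RLIMIT_AS, (1024 * 1024 * 1024, hard))

try:
    block = bytearray(2 * 1024 * 1024 * 1024)
    hit_limit = False
except MemoryError:
    hit_limit = True

resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
print(hit_limit)   # True: the allocation was refused
```

The trade-off is that RLIMIT_AS counts all virtual mappings, not just resident pages, so programs that reserve large sparse mappings may need a generous value.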