ulimit

Dockerfile privileged flag for Docker container (needed because of an Apache ulimit error) on AWS

独自空忆成欢 submitted on 2020-01-06 21:00:25
Question: I would like to start a container with privileges. Manually I can do that directly by typing: sudo docker run --privileged name/image. But how can I generate a container with privileges from a Dockerfile? Is there any command to do that in the Dockerfile? In my case I am deploying on Amazon; if it cannot be done from a Dockerfile, can it be done from the Dockerrun.aws.json? PS. To give some context to the question, I need privileges in the Docker container to be able to change …
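A minimal sketch of the distinction the question is asking about, assuming a plain Docker host (the image name comes from the question): --privileged is a flag of docker run, so it cannot be baked into the image by any Dockerfile instruction.

```sh
# Build the image as usual; no Dockerfile instruction can grant --privileged,
# because privilege is a property of the run, not of the image.
sudo docker build -t name/image .

# Privilege is granted only at launch time:
sudo docker run --privileged name/image
```

The same rule carries over to AWS: it is a launch-time setting, so it has to be expressed in the deployment configuration that defines how the container is run, not in the image itself.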

increase ulimit for # of file descriptors

我的梦境 submitted on 2019-12-31 00:59:10
Question: As normaluser:

    $ ulimit -n 4096
    -bash: ulimit: open files: cannot modify limit: Operation not permitted

As root it works as desired, but then it won't affect normaluser. How do I get out of this catch-22? I need this to be persistent.

Answer 1: You may want to look at /etc/security/limits.conf.

Answer 2: To change the file descriptor limit as a normal user before running any application, I use this small utility, fdlimit, which increases the file descriptor limit with the "setrlimit()" system call before …
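A short sketch of the limits.conf route from Answer 1, assuming the user name normaluser from the question and values picked only for illustration; pam_limits applies the new limits at the next login.

```sh
# Append per-user soft/hard limits for open files (format: domain type item value).
cat <<'EOF' | sudo tee -a /etc/security/limits.conf
normaluser soft nofile 4096
normaluser hard nofile 8192
EOF

# After logging in again as normaluser, the soft limit can be raised
# up to the hard limit without root:
ulimit -Sn       # current soft limit
ulimit -Hn       # current hard limit
ulimit -n 8192
```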

Nodejs connect EMFILE - How to reuse connections?

橙三吉。 submitted on 2019-12-25 07:04:20
Question: I am implementing Node.js scripts that communicate with Couchbase and another service. It is a long-running script, and after a while I get "connect EMFILE" for the service. A sample of my code is given below:

    function createContainer(chunkName, recordingID, chunkData) {
      var swiftHTTPPath = 'http://' + swiftIPAddr + '/swift/v1/' + recordingID;
      var path = '/swift/v1/' + recordingID;
      var swiftOptions = {
        hostname: swiftIPAddr,
        port: swiftPort,
        path: path,
        method: 'PUT',
      };
      http.get(swiftHTTPPath, …
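A hedged shell-side way to confirm the diagnosis before changing the code (the process selection and script name below are assumptions): EMFILE means the process ran out of file descriptors, usually because each request opens a fresh socket. Raising the limit is only a stopgap; the durable fix is reusing connections (for example, a keep-alive HTTP agent).

```sh
# Find the node process and count its open descriptors; if this grows without
# bound, connections are being opened faster than they are closed or reused.
NODE_PID=$(pgrep -n node)          # assumes one node process of interest
ls /proc/"$NODE_PID"/fd | wc -l

# Stopgap: raise the per-process limit for this shell (a non-root user can only
# go up to the hard limit), then start the script from the same shell.
ulimit -n 8192
node my-script.js                  # my-script.js is a hypothetical name
```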

Process crashes even though it allocates less memory than the ulimit allows

谁说胖子不能爱 submitted on 2019-12-25 04:39:21
Question: I have set the stack size to 2000 KB with ulimit -s 2000 and ulimit -Ss 2000 for the hard limit. In the program below I allocate approximately 2,040,000 (510000 × 4) bytes, which is less than the limit, i.e. 2,048,000 (2000 × 1024) bytes, yet my program crashes! Can anybody suggest why this happens?

    #include <stdio.h>
    #include <malloc.h>

    int main() {
        int a[510000] = {0};
        a[510000] = 1;
        printf("%d", a[510000]);
        fflush(stdout);
        sleep(70);
    }

EDIT 1: The crash is not because the array index is out of bounds, as I …
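A hedged way to reproduce the setup from the shell, assuming the program above was compiled to ./a.out: ulimit -s counts KiB, so 2000 means 2,048,000 bytes, and the 2,040,000-byte array leaves only a few kilobytes for main()'s frame, the C runtime's startup frames, and the argument/environment area, which is why the overflow is still expected.

```sh
ulimit -Ss 2000      # soft stack limit in KiB for this shell and its children
ulimit -s            # confirm the limit
./a.out              # assumption: the question's program compiled as a.out
echo $?              # a stack overflow usually surfaces as SIGSEGV (exit status 139)
```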

Failed to update the file descriptor limit

这一生的挚爱 submitted on 2019-12-25 02:53:47
Question: I have a server running Debian wheezy x64 and a problem with the Asterisk server: "Try increasing max file descriptors with ulimit -n". I tried to change the file descriptor limit as follows:

    # su - asterisk
    $ ulimit -Hn
    4096
    $ ulimit -Sn
    1024
    $ exit
    # vi /etc/security/limits.conf

I added at the end of the file:

    ....
    asterisk soft nofile 65535
    asterisk hard nofile 65535
    # End of file

And when I test again:

    # su - asterisk
    $ ulimit -Hn
    4096
    $ ulimit -Sn
    1024
    $

Am I missing something? (I rebooted the …
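One hedged explanation that fits the symptoms: /etc/security/limits.conf is applied by pam_limits, and on Debian the su PAM stack frequently has that module commented out, so su - asterisk never picks up the new values (and daemons started from init scripts bypass PAM entirely). A sketch for checking this, assuming the stock Debian wheezy PAM layout:

```sh
# See whether pam_limits is enabled for su sessions.
grep pam_limits /etc/pam.d/su

# If the "session required pam_limits.so" line is commented out, enable it
# (or simply edit the file by hand):
sudo sed -i 's/^#\s*\(session\s\+required\s\+pam_limits\.so\)/\1/' /etc/pam.d/su

# Open a fresh session and re-check the limits.
su - asterisk -c 'ulimit -Hn; ulimit -Sn'
```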

Opening millions of numpy.memmaps in python

白昼怎懂夜的黑 submitted on 2019-12-24 07:34:16
Question: I have a database composed of millions of training examples, each saved as its own numpy.memmap. (Yes, yes, I know, but they're of irregular sizes. I will probably modify my design to put like-sized examples together in one memmap and hide that fact from the user.) Trying to open this database runs me into the system NOFILE limits, but I've solved that part. Now I'm running into OSError: [Errno 12] Cannot allocate memory after about 64865 memmaps are created, and executing most …
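A hedged interpretation of the roughly 65k ceiling: each memmap adds at least one virtual memory mapping, and Linux caps the number of mappings per process at vm.max_map_count (65530 by default), so mmap starts failing with ENOMEM long before physical memory is exhausted. A quick way to check and raise the cap:

```sh
# Current kernel limit on mappings per process (default 65530 on most systems).
sysctl vm.max_map_count

# Number of mappings the python process currently holds.
wc -l < /proc/"$(pgrep -n python)"/maps

# Raise the limit; the value below is only an example, not a recommendation.
sudo sysctl -w vm.max_map_count=262144
```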

Limit CPU time of process group

柔情痞子 submitted on 2019-12-23 01:54:17
Question: Is there a way to limit the absolute CPU time (in CPU seconds) spent by a process group? ulimit -t 10; ./my-process looks like a good option, but if my-process forks, each process in the group gets its own limit, so the whole process group can use an arbitrary amount of time by forking every 9 seconds. The accepted answer on a similar question is to use cgroups, but it doesn't explain how. However, there are other answers (Limit total CPU usage with cgroups) saying that this is not …
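A hedged sketch of one cgroup-based approach, assuming bash and a cgroup v1 hierarchy mounted at /sys/fs/cgroup (the group name and ./my-process are placeholders): since RLIMIT_CPU is per process, the whole tree is placed in one cpuacct group and killed once its cumulative CPU time exceeds a budget.

```sh
CG=/sys/fs/cgroup/cpuacct/cpubudget    # group name chosen for illustration
LIMIT_NS=$((10 * 1000000000))          # budget: 10 CPU-seconds, in nanoseconds
sudo mkdir -p "$CG"

# Start the workload inside the group; processes it forks stay in the group.
( echo $BASHPID | sudo tee "$CG/cgroup.procs" > /dev/null
  exec ./my-process ) &

# Poll the group's accumulated CPU time and kill everything once the budget is
# spent (the 1-second poll means it can overshoot slightly).
while sleep 1; do
    [ -s "$CG/cgroup.procs" ] || break           # group is empty: workload finished
    if [ "$(cat "$CG/cpuacct.usage")" -gt "$LIMIT_NS" ]; then
        xargs -r kill -9 < "$CG/cgroup.procs"
        break
    fi
done
```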

docker run --ulimit cpu=10 does not kill java process after timeout

我怕爱的太早我们不能终老 submitted on 2019-12-22 19:00:11
Question: I want to make sure the process gets killed after 10 seconds of CPU time. The docker run command accepts the flag --ulimit cpu=10, which is supposed to do exactly that. However, when I run a java command with it, the ulimit setting appears to be ignored: the Java process with an infinite loop keeps running well past 10 s (actually for minutes, until I kill it). Here is the command I used to test:

    docker run --rm -i -v /usr/local/src:/classes --ulimit cpu=10 java:8 \
      java -cp /classes/ InfiniteLoop

Instead of invoking java …
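Before blaming the JVM, a hedged pair of checks helps confirm the flag actually reaches the container and behaves as expected for a simpler workload (commands assume the same java:8 image as the question):

```sh
# 1. Does --ulimit cpu=10 show up as the CPU-time rlimit inside the container?
docker run --rm --ulimit cpu=10 java:8 sh -c 'ulimit -t'

# 2. Control case: a pure shell busy-loop under the same flag should be killed
#    after roughly 10 CPU-seconds, which isolates the JVM as the variable.
docker run --rm --ulimit cpu=10 java:8 sh -c 'while :; do :; done'
```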

Node.js fs.open() hangs after trying to open more than 4 named pipes (FIFOs)

依然范特西╮ submitted on 2019-12-22 18:50:36
Question: I have a Node.js process that needs to read from multiple named pipes (FIFOs), fed by different other processes, as an IPC mechanism. I noticed that after opening and creating read streams from more than four FIFOs, fs seems no longer able to open FIFOs and just hangs. This number seems rather low, considering that it is possible to open thousands of files concurrently without trouble (for instance by replacing mkfifo with touch in the following script). I tested with node.js v10.1.0 …
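A hedged explanation that matches the number four: fs.open() on a FIFO with no writer blocks until the other end connects, and those blocking opens occupy libuv's thread pool, which defaults to four threads. Raising the pool size is a quick way to confirm that is the ceiling (the script name below is a stand-in for the question's test script):

```sh
# Create a handful of FIFOs and run the test with a larger libuv thread pool.
mkfifo /tmp/fifo{1..8}
UV_THREADPOOL_SIZE=16 node open-fifos.js   # open-fifos.js is hypothetical
```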

Supervisord and ulimit for a Java app

Deadly submitted on 2019-12-22 11:17:23
Question: I am using supervisord to start my Java app. The application works, but my nofile ulimit is not being applied. I got it working on one machine running Debian, but on a second machine the same configuration does not work. Basically, I start my app with a script:

    #!/bin/sh
    iscsiJar="/mnt/cache/jscsi/udrive.jar"
    ulimit -SHn 32768
    # function to start the application
    java -XX:MaxHeapFreeRatio=70 -Xmx2048M -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dump.hprof -jar $iscsiJar
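A hedged alternative worth trying when the in-script ulimit call does not stick: let supervisord raise the limit itself. When started as root it tries to raise its own descriptor limit to at least the minfds value in its [supervisord] section, and child programs inherit it; a non-root child, by contrast, cannot raise its hard limit above whatever it inherited, which could explain the difference between the two machines.

```sh
# In supervisord.conf (path varies by distro, e.g. /etc/supervisor/supervisord.conf):
#   [supervisord]
#   minfds = 32768
#
# After restarting supervisord, check what the running java process actually got:
grep 'open files' /proc/"$(pgrep -f udrive.jar)"/limits
```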