fork

Perl forked socket server, stops accepting connections when a client disconnects

守給你的承諾、 submitted 2019-12-06 11:56:20
When using the following, but also when using similar code with IO::Socket::INET, I have problems accepting new connections once a client has disconnected. It seems the parent stops forking new children until all previous children have ended/disconnected. The connection is accepted though. Does anyone have an idea what I'm doing wrong? #!/usr/bin/perl -w use Socket; use POSIX qw(:sys_wait_h); sub REAPER { 1 until (-1 == waitpid(-1, WNOHANG)); $SIG{CHLD} = \&REAPER; } $SIG{CHLD} = \&REAPER; $server_port=1977; socket(SERVER, PF_INET, SOCK_STREAM, getprotobyname('tcp')); setsockopt(SERVER,
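A likely culprit is the reaper itself: with WNOHANG, waitpid(-1, ...) returns 0 (not -1) while children are still running, so `1 until (-1 == waitpid(-1, WNOHANG))` busy-spins inside the signal handler until every child has exited, which matches "stops forking until all previous children have ended". The reaper must also stop on 0. A minimal sketch of a correct non-blocking reaper in Python (not the poster's Perl, but the same pattern):

```python
import os
import signal
import time

def reaper(signum, frame):
    """Reap every exited child; return as soon as none are ready."""
    while True:
        try:
            pid, _status = os.waitpid(-1, os.WNOHANG)
        except ChildProcessError:
            return              # no children at all
        if pid == 0:
            return              # children exist, but none have exited yet

signal.signal(signal.SIGCHLD, reaper)

# Demo: fork a short-lived child; the handler collects it asynchronously,
# so the parent never blocks and never accumulates zombies.
if os.fork() == 0:
    os._exit(0)                 # child exits immediately
time.sleep(0.2)                 # give SIGCHLD time to be delivered

try:
    os.waitpid(-1, os.WNOHANG)
    already_reaped = False
except ChildProcessError:
    already_reaped = True       # the handler already collected the child
```

The key difference from the Perl loop is the early return on a zero pid: the handler only collects children that are already dead, instead of waiting for the living ones.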

Reading a file N lines at a time in ruby

*爱你&永不变心* submitted 2019-12-06 11:15:54
Question: I have a large file (hundreds of megs) that consists of filenames, one per line. I need to loop through the list of filenames, and fork off a process for each filename. I want a maximum of 8 forked processes at a time and I don't want to read the whole filename list into RAM at once. I'm not even sure where to begin, can anyone help me out? Answer 1: File.foreach("large_file").each_slice(8) do |eight_lines| # eight_lines is an array containing 8 lines. # at this point you can iterate over these
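The two requirements (stream the file line by line, cap concurrent forks at 8) can be sketched like this in Python (the question is about Ruby; `handle()` is a hypothetical stand-in for the per-file work):

```python
import os
import tempfile

MAX_CHILDREN = 8                # cap on concurrent forked workers

def handle(filename):
    """Hypothetical per-file work done inside each child."""
    pass

def process_file_list(path):
    active = 0
    with open(path) as f:       # streams line by line; never loads the file
        for line in f:
            if active >= MAX_CHILDREN:
                os.wait()       # block until one worker finishes
                active -= 1
            if os.fork() == 0:
                handle(line.rstrip("\n"))
                os._exit(0)
            active += 1
    for _ in range(active):     # collect the stragglers
        os.wait()

# Demo on a small generated list of 20 names.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("\n".join(f"file_{i}" for i in range(20)))
    listing = tmp.name
process_file_list(listing)
os.unlink(listing)
try:                            # verify no children were left behind
    os.waitpid(-1, os.WNOHANG)
    children_left = True
except ChildProcessError:
    children_left = False
```

Unlike the `each_slice(8)` answer, which forks in batches of 8 and waits for the whole batch, this keeps exactly up to 8 workers busy at all times.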

Pipe for multiple processes

强颜欢笑 submitted 2019-12-06 11:14:28
Currently working on some homework and having a hard time. The goal is to generate 100,000 numbers and add them all together by dividing the work into 10 processes (10,000 numbers each) I think I've figured out how to fork processes (hopefully), but using Pipe() to relay the subtotals from each child process is not working... the program below returns 44901 for each child process and 449010 for the running total. I'm struggling hard but I feel like this is something simple I should be able to understand. main() { int i; pid_t pid; int status = 0; int fd[2]; int runningTotal = 0; pipe(fd); int
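The excerpt cuts off before the bug, but the idea itself works when two details are right: the parent closes its copy of the pipe's write end before reading (otherwise it never sees EOF), and each subtotal is written as one fixed-size record so reads can't mix records. A Python sketch of the same 10-process fan-out (the homework is C, but the pipe mechanics are identical):

```python
import os
import struct

NUM_CHILDREN = 10
PER_CHILD = 10_000

r, w = os.pipe()
for child in range(NUM_CHILDREN):
    if os.fork() == 0:
        start = child * PER_CHILD + 1
        subtotal = sum(range(start, start + PER_CHILD))   # this child's slice
        os.write(w, struct.pack("q", subtotal))  # 8-byte record, atomic (< PIPE_BUF)
        os._exit(0)

os.close(w)                     # parent MUST close its write end, or the
running_total = 0               # read loop below never sees EOF
while True:
    chunk = os.read(r, 8)
    if not chunk:               # EOF: every child has written and exited
        break
    running_total += struct.unpack("q", chunk)[0]
os.close(r)
for _ in range(NUM_CHILDREN):
    os.wait()
# running_total is now 1 + 2 + ... + 100000 = 5000050000
```

Getting the same number from every child, as in the question, usually means every child computed the same slice instead of its own offset.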

GitLab: how to remove "Forked from"

橙三吉。 submitted 2019-12-06 10:08:18
GitLab shows a "Forked from" label on forked projects. How do you remove it? In Settings choose General, then expand the Advanced section and click the option that removes the fork relationship; the relationship is then deleted. Note that once the fork relationship has been removed, you can no longer merge your changes back into the project you originally forked from. Confirm that you really want to delete the relationship, then visit the project's front page to verify that it is gone. Source: https://www.cnblogs.com/huyuchengus/p/11976149.html

Why is output of parent process blocked by child process?

有些话、适合烂在心里 submitted 2019-12-06 10:04:11
In my code below, I forked my process into a parent and child process. In the child process, I sent the C string argv[1] to the parent process to be printed. Then I made the child process sleep for 4 seconds before printing "This is the child process. Closing\n". In the parent process, I want the string from the child process to be printed to stdout as soon as I receive it from the child process. The problem arises here. Instead of immediately printing argv[1] in the parent process before the string "This is the child process. Closing\n" is printed 4 seconds later, what happens is this: $ g++
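The excerpt stops before the observed output, but a common cause of this ordering is the parent blocking in a read that only returns at EOF, i.e. when the child exits and the last write end of the pipe closes, or stdio output not being flushed. A minimal Python sketch (not the poster's C code) in which the child closes its write end right after writing, so the parent's read returns immediately instead of after the child's nap (0.2 s here stands in for the 4 seconds):

```python
import os
import time

r, w = os.pipe()
pid = os.fork()
if pid == 0:                    # --- child ---
    os.close(r)
    os.write(w, b"hello from parent's argv[1]\n")
    os.close(w)                 # close now, so the parent is not kept waiting
    time.sleep(0.2)             # stands in for the 4-second nap
    os._exit(0)

# --- parent ---
os.close(w)                     # parent closes its unused write end
t0 = time.monotonic()
msg = os.read(r, 1024)          # returns as soon as data arrives, not at EOF
elapsed = time.monotonic() - t0
os.close(r)
os.waitpid(pid, 0)
```

If the parent instead looped reading until EOF before printing anything, it would sit through the child's entire sleep first, which would produce exactly the inverted ordering described.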

Implementing a daemon in PHP

試著忘記壹切 submitted 2019-12-06 09:55:24
A daemon is a special kind of process that runs in the background: it is detached from the controlling terminal and either performs some task periodically or waits for certain events to handle. Daemons are very useful, and PHP can implement them too. I recently needed to develop an agent; since it was built alongside the webserver, the agent was also written in PHP. Requirements: (1) listen on a port and receive task messages from the controller; (2) start and run tasks according to those messages and monitor their state; (3) report task results back to the controller. I started out with the workerman framework, but found that however good a framework is, it also brings many restrictions. 1. Basic concepts. Process: every process has a parent; when a child exits, the parent can obtain the child's exit status. Process group: every process belongs to a process group, and every process group has a process group ID equal to the PID of the group leader. 2. Key points of daemon programming. First, run in the background: to avoid hanging the controlling terminal, put the daemon in the background by calling fork, terminating the parent, and letting the daemon continue in the child: if($pid=pcntl_fork()) exit(0); // in the parent: exit; the child carries on. Second, detach from the controlling terminal, the login session, and the process group. It helps to first explain how processes, the controlling terminal, login sessions, and process groups relate in Linux: a process belongs to a process group whose ID (PGID) is the PID of the group leader; a login session can contain several process groups, and these process groups share one controlling terminal
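The fork-exit-setsid sequence described above is the classic double fork. A Python sketch (the article uses PHP's pcntl, but the system calls are the same); a real daemon would also redirect stdin/stdout/stderr, which is omitted here. The demo runs the daemonization inside a forked child and reports the daemon's session id back through a pipe, to show it ends up in a different session than the caller:

```python
import os

def daemonize():
    """Classic double fork; a real daemon would also redirect stdio."""
    if os.fork() > 0:
        os._exit(0)             # first parent exits; child re-parented to init
    os.setsid()                 # new session: drops the controlling terminal
    if os.fork() > 0:
        os._exit(0)             # session leader exits, so the survivor can
    os.chdir("/")               # never re-acquire a controlling terminal
    os.umask(0)

# Demo: daemonize inside a forked child and report the daemon's session id.
r, w = os.pipe()
pid = os.fork()
if pid == 0:
    os.close(r)
    daemonize()                 # only the daemonized descendant returns here
    os.write(w, str(os.getsid(0)).encode())
    os._exit(0)

os.close(w)
daemon_sid = int(os.read(r, 64))
os.close(r)
os.waitpid(pid, 0)              # reap the first, short-lived child
parent_sid = os.getsid(0)       # the daemon sits in a different session
```

The second fork is the step the article is building toward: after setsid the process is a session leader and could still acquire a controlling terminal; forking once more and exiting the leader makes that impossible.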

C program to perform a pipe on three commands

…衆ロ難τιáo~ submitted 2019-12-06 09:47:44
I have to write a program that will perform the same operation that du | sort | head in the command line would do, but I'm stuck, and my program is not working. The output right now is 112 . and the program doesn't terminate. Please help, I don't know what to do! int main(void) { int fd[2]; int fd1[2]; int pid; if (pipe(fd) == -1) { perror("Pipe"); exit(1); } switch (fork()) { case -1: perror("Fork"); exit(2); case 0: dup2(fd[1], STDOUT_FILENO); close(fd[0]); close(fd[1]); execl("/usr/bin/du", "du", (char *) 0); exit(3); } if (pipe(fd1) == -1) { perror("Pipe"); exit(1); } switch (fork()) {
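The classic bug in a hand-rolled du | sort | head is leaving pipe ends open: if any process still holds a write end of a pipe, the reader never sees EOF, so sort never finishes and the program hangs, which is consistent with "the program doesn't terminate". A sketch of the same three-stage pipeline using Python's subprocess module, where closing the parent's copies of the intermediate pipe ends plays the role of the close(fd[...]) calls in the C version:

```python
import subprocess

# du -> sort -> head, with the parent's copies of the intermediate
# pipe ends closed so each downstream stage can see EOF.
du = subprocess.Popen(["du"], stdout=subprocess.PIPE)
sort = subprocess.Popen(["sort"], stdin=du.stdout, stdout=subprocess.PIPE)
du.stdout.close()               # plays the role of close(fd[1]) in the C code
head = subprocess.Popen(["head"], stdin=sort.stdout, stdout=subprocess.PIPE)
sort.stdout.close()
output = head.communicate()[0]  # at most head's default of 10 lines
du.wait()
sort.wait()
```

In the C version, the rule is: after each fork, every process must close every pipe fd it does not use, in both the parent and the children, before calling execl.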

Multiprocessing: why is a numpy array shared with the child processes, while a list is copied?

柔情痞子 submitted 2019-12-06 09:35:42
Question: I used this script (see code at the end) to assess whether a global object is shared or copied when the parent process is forked. Briefly, the script creates a global data object, and the child processes iterate over data. The script also monitors the memory usage to assess whether the object was copied in the child processes. Here are the results: data = np.ones((N,N)). Operation in the child process: data.sum(). Result: data is shared (no copy). data = list(range(pow(10, 8))). Operation
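The usual explanation: after fork, the child shares the parent's pages copy-on-write. Summing a NumPy array touches only the array object's header, so the large data buffer stays shared; iterating a Python list increments the reference count of every element, and those refcount writes dirty (and therefore copy) the pages holding the objects. NumPy isn't needed to see the underlying copy-on-write mechanics; a minimal sketch with a plain list, showing that the child reads the parent's data but its writes never propagate back:

```python
import os
import struct

data = list(range(1000))        # stands in for the big global object

r, w = os.pipe()
pid = os.fork()
if pid == 0:                    # --- child ---
    os.close(r)
    total = sum(data)           # iterating bumps every element's refcount:
                                # those writes already copy pages in the child
    data[0] = 999_999           # an explicit write, visible to the child only
    os.write(w, struct.pack("q", total))
    os._exit(0)

# --- parent ---
os.close(w)
child_total = struct.unpack("q", os.read(r, 8))[0]
os.close(r)
os.waitpid(pid, 0)
# data[0] is still 0 here: the child's write never reached the parent
```

So "shared vs copied" in the question is really "pages left clean vs pages dirtied": the array's buffer contains raw doubles with no per-element refcounts, the list's pages are full of object headers that get written on every access.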

C synchronize processes using signal

冷暖自知 submitted 2019-12-06 09:16:47
Okay so I am trying to teach myself how to do signalling, and I came across a hiccup I can't figure out. What is going on right now: it executes the parent, then goes to the child, and then back to the parent. It's not doing what I want, which is: run the parent for a user-defined amount of time, kill it, then switch to the child and run it for the same amount of time. #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <signal.h> #include <sys/types.h> // for wait #include <sys/wait.h> // for wait void action(int); void
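One race-free way to get "parent runs first, then the child runs" is to block the signal before forking and have the child wait for it with sigwait: blocked signals stay pending, so even if the parent signals before the child reaches sigwait, nothing is lost. A Python sketch (the question is C, but the sigprocmask/sigwait pattern is the same; 0.2 s stands in for the user-defined duration):

```python
import os
import signal
import struct
import time

RUN_SECONDS = 0.2               # stands in for the user-defined duration

# Block SIGUSR1 *before* forking so the child can never miss it.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})

r, w = os.pipe()
pid = os.fork()
if pid == 0:                    # --- child ---
    os.close(r)
    signal.sigwait({signal.SIGUSR1})      # sleep until the parent says "go"
    child_start = time.monotonic()        # the child's turn begins only now
    os.write(w, struct.pack("d", child_start))
    os._exit(0)

# --- parent ---
os.close(w)
time.sleep(RUN_SECONDS)         # the parent "runs" for its time slice
parent_done = time.monotonic()
os.kill(pid, signal.SIGUSR1)    # hand control to the child
child_start = struct.unpack("d", os.read(r, 8))[0]
os.close(r)
os.waitpid(pid, 0)
```

The interleaved parent/child output in the question is what you get without such a handshake: after fork, both processes are runnable and the scheduler alternates between them freely.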

Notes on Redis persistence

扶醉桌前 submitted 2019-12-06 08:25:18
Redis has two persistence strategies. RDB (Redis Database): at configured intervals, write an in-memory snapshot to disk; on recovery, the snapshot is read straight back into memory. How it works: Redis forks off a separate child process to do the persisting, so it does not disturb the main process's high-speed operation. If you may need to restore large amounts of data and are not very sensitive to data loss, this method is suitable, but you can only restore the most recent backup; anything written after that backup is lost. fork: creates a near-identical copy of the process, with the same memory image, which can slow the main process down. dump.rdb snapshot triggers: 1. at least 1 key changed within 900 s; 2. at least 10 keys changed within 300 s; 3. at least 10000 keys changed within 60 s. A dump is also triggered on shutdown: by default Redis dumps the current keys when it shuts down. To disable backups, put save "" in the config. To generate dump.rdb immediately, set a key and run the SAVE command. Some related settings: stop-writes-on-bgsave-error: if yes, Redis refuses writes when the background save fails; if no, it keeps accepting them. rdbcompression: yes (Redis compresses the whole dump file, at some CPU cost). rdbchecksum: verifies the integrity of the data
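The save points and options described above correspond to the following redis.conf directives (option names as in the stock config: rdbcompression, rdbchecksum); a minimal fragment:

```conf
# Snapshot triggers: "save <seconds> <changes>"
save 900 1
save 300 10
save 60 10000
# To disable RDB snapshots instead, use a single: save ""
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
```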