fork

How to use named semaphore from child

Submitted by 此生再无相见时 on 2019-12-10 23:37:58
Question: Basically, I want to suspend the child process briefly after its creation, so that the parent can prepare some data for it in shared memory. I'm trying to use a semaphore, as suggested here: How to share semaphores between processes using shared memory. Problem 1: the child can't open the semaphore. Problem 2: strerror returns an int, but man strerror clearly says it returns a char *. To avoid "what have you tried": sem = sem_open("/semaphore", O_CREAT, 0644, 0); for (i = 0; i < num; …

Why can I not read from stdin in this forked process?

Submitted by 对着背影说爱祢 on 2019-12-10 23:36:40
Question: The following code prints nothing, but it should print "a" repeatedly. The forked process blocks on os.read(0, 1). The parent process is indeed writing to stdin_master, but stdin_slave receives nothing. Any ideas? import os import pty import resource import select import signal import time stdin_master, stdin_slave = pty.openpty() stdout_master, stdout_slave = pty.openpty() stderr_master, stderr_slave = pty.openpty() pid = os.fork() # child process if pid == 0: os.setsid() os …
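The code is truncated, so this is a guess at the fix rather than a diagnosis, but one common cause of this symptom is that the child reads fd 0 without ever making the pty slave its stdin, so it is reading the parent's original terminal, not the pty. A minimal working sketch (the parent writes a full line, because a fresh pty starts in canonical mode and a lone "a" without a newline would never be delivered):

```python
# Sketch: the child must dup2() the pty SLAVE onto fd 0 before reading;
# the parent writes to the MASTER end.
import os
import pty

stdin_master, stdin_slave = pty.openpty()
pid = os.fork()
if pid == 0:                      # child
    os.setsid()
    os.dup2(stdin_slave, 0)       # make the slave end our stdin
    data = os.read(0, 1)          # now this sees the parent's writes
    os._exit(0 if data == b"a" else 1)
else:                             # parent
    os.write(stdin_master, b"a\n")   # newline: canonical mode delivers lines
    _, status = os.waitpid(pid, 0)
    assert os.WEXITSTATUS(status) == 0
```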

How to read and write from subprocesses asynchronously?

Submitted by 社会主义新天地 on 2019-12-10 22:34:11
Question: I would like to open several subprocesses and read/write from their stdin/stdout when data is available. First try: import subprocess, select, fcntl, os p1 = subprocess.Popen("some command", stdout=subprocess.PIPE) p2 = subprocess.Popen("another command", stdout=subprocess.PIPE) def make_nonblocking(fd): flags = fcntl.fcntl(fd, fcntl.F_GETFL) fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK) make_nonblocking(p1.stdout) make_nonblocking(p2.stdout) size = 10000 while True: inputready, …
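A minimal POSIX sketch of the select()-based loop (toy echo commands substituted for the asker's): once select() reports a descriptor ready, os.read() on it will not block, and an empty read marks EOF, so O_NONBLOCK is not strictly needed:

```python
# Sketch: select() on raw file descriptors, os.read() each ready one,
# and drop a descriptor from the set when it reaches EOF.
import os
import select
import subprocess

p1 = subprocess.Popen(["echo", "one"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["echo", "two"], stdout=subprocess.PIPE)

out = {p1.stdout.fileno(): b"", p2.stdout.fileno(): b""}
open_fds = set(out)
while open_fds:
    ready, _, _ = select.select(list(open_fds), [], [])
    for fd in ready:
        chunk = os.read(fd, 4096)   # will not block on a ready fd
        if chunk:
            out[fd] += chunk
        else:                       # b"": the child closed its end
            open_fds.discard(fd)

p1.wait()
p2.wait()
# out now holds each child's complete output
```

Note that on Windows select() only works on sockets, which is why this approach is POSIX-only.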

Is it safe to share a file handle between two processes in PHP?

Submitted by 偶尔善良 on 2019-12-10 20:44:43
Question: I've found similar questions here on Stack Overflow, but I'm not sure whether they apply to PHP. I'd like to create child processes with pcntl_fork(). I want to write messages to a log file from both the parent and the child processes. If I open a file handle in the parent, is it safe to write to the same handle from the children? Note that I will only be appending to the file. I'm afraid of the race conditions that could occur, in particular if the two processes run on different cores: what …
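The answers to questions like this usually rest on a kernel-level guarantee that is easy to demonstrate: a descriptor opened with O_APPEND moves its offset to end-of-file atomically on each write(), so two processes each appending a whole message per write() call do not clobber each other. A sketch in Python rather than PHP (whether PHP's stream buffering splits one fwrite() into several underlying write() calls is a separate concern worth checking):

```python
# Sketch: open ONE O_APPEND descriptor in the parent, fork, and have
# both processes append whole lines; no line is lost or overwritten.
import os
import tempfile

tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)
fd = os.open(path, os.O_WRONLY | os.O_APPEND)

pid = os.fork()
tag = b"child\n" if pid == 0 else b"parent\n"
for _ in range(100):
    os.write(fd, tag)             # O_APPEND: offset moves to EOF atomically
if pid == 0:
    os._exit(0)

os.waitpid(pid, 0)
os.close(fd)
with open(path, "rb") as f:
    lines = f.read().splitlines()
os.unlink(path)
assert len(lines) == 200          # nothing lost, nothing overwritten
```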

How come forked processes do not affect each other when there is a global pointer?

Submitted by 和自甴很熟 on 2019-12-10 20:18:05
Question: I know the fork() function creates a process that is identical to its parent, differing only in its PID. They initially have the same variables, and changes made to these variables do not affect each other. But what happens when a global pointer variable is shared? I have written some code and printed out the results. It appears that the parent and the child process have the pointer pointing to the same memory location; however, changes made to those memory locations, i.e. *p = 1 in …

perl - child process signaling parent

Submitted by 亡梦爱人 on 2019-12-10 18:54:59
Question: I have written the following piece of code to test signaling between child and parent. Ideally, when the child sends a SIGINT to the parent, the parent should come back in the next iteration and wait for user input. I have observed this in Perl 5.8, but in Perl 5.6.1 (which I am required to use) the parent is actually "killed"; there is no next iteration. my $parent_pid = $$; $pid = fork(); if($pid == 0) { print "child started\n"; kill 2, $parent_pid; } else { while(1) { eval { $SIG{INT} = sub{die …

Redirecting stdout to file after a fork()

Submitted by 余生颓废 on 2019-12-10 18:49:05
Question: I'm working on a simple shell, but right now I am just trying to understand redirection. I'm hard-coding an ls command and trying to write its output to a file. Currently, ls runs and the output file is created, but the output still goes to stdout and the file is left blank, and I'm confused as to why. Thanks in advance. Here is my code: int main() { int ls_pid; /* The new process id for ls */ char *const ls_params[] = {"/bin/ls", NULL}; /* for ls */ int file; /* file for writing */ /* Open …

Updating multiple branches of a forked repository on Github

Submitted by 老子叫甜甜 on 2019-12-10 18:43:45
Question: I have a forked GitHub repository (call it repo-O, and call my fork repo-F) which contains about 8 branches. Several hundred commits have been made to repo-O by other contributors, on multiple branches. I would now like to pull these changes into my fork (repo-F). I can't use the fork queue, as there are about 3000 commits to pick through, and I would much rather do this from the command line. So… I have cloned my repo, added repo-O as a remote (upstream) and fetched the …
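A sketch of the usual command-line workflow from this point (remote names assumed: origin is the fork repo-F, upstream is repo-O; adjust to whatever names are actually configured):

```shell
# Fetch everything from upstream once, then fast-forward each local
# branch to its upstream counterpart and push the result to the fork.
git fetch upstream
for branch in $(git for-each-ref --format='%(refname:short)' refs/heads); do
    git checkout "$branch"
    git merge --ff-only "upstream/$branch"   # refuses if branches diverged
done
git push origin --all
```

The --ff-only flag makes the merge fail loudly instead of silently creating merge commits, so any branch that has diverged from upstream can be handled by hand.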

Pipes as stdin/stdout in process communication.

Submitted by 别说谁变了你拦得住时间么 on 2019-12-10 18:35:40
Question: I'm learning pipes and I have run into a problem. I want my program to work as: grep [word to find] [file to search] | grep -i [without word] | wc -l It compiles and runs with no errors, but it gives no output (at least not on stdout, as I want it to). What is strange: when I try to printf something in the last fork, it is printed on stdin. I'm not changing stdout in that fork or in the parent process, so it seems weird to me. I'm trying to close unused pipes and flush stdout (is it still doing something here …

Is it possible to get the R survey package's `svyby` function multicore= parameter working on Windows?

Submitted by 痞子三分冷 on 2019-12-10 18:06:36
Question: Being able to multithread on Windows would be awesome, but perhaps this problem is harder than I had thought.. :( Inside survey:::svyby.default there is a block that is either lapply or mclapply, depending on multicore=TRUE and your operating system. Windows users get forced into the lapply loop no matter what, and I was wondering if there is any way to go down the mclapply path instead, speeding up the computation. I don't know too much about the innards of parallel processing, but I did …