fork

Is the fork on Mac (OS X 10.9.2) with the default compiler (gcc 4.2) any different from normal fork?

最后都变了 - Submitted on 2019-12-11 18:57:35
Question: I'm executing this program: #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <unistd.h> int main() { pid_t pid; pid = getpid(); printf("my pid is %d", pid); fork(); pid = getpid(); if(pid < 0) { printf("error creating child process"); exit(1); } if(pid > 0) { printf("\n my child's pid is %d \n", pid); exit(0); } printf("hello from child process, im still running"); return 0; } I expect the result to be: my pid is 5830 my child's pid is 5831 hello from child process, i'm
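
On OS X 10.9 with gcc 4.2, fork() itself behaves the same as on other POSIX systems; the surprises in this kind of program usually come from testing getpid() instead of fork()'s return value, and from stdio buffering duplicating unflushed output in the child. A minimal sketch of the usual pattern (not the poster's exact code, and the PIDs shown will of course differ):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    printf("my pid is %d\n", getpid());
    fflush(stdout);              /* avoid the buffered line being duplicated in the child */

    pid_t pid = fork();          /* fork()'s return value, not getpid(), tells parent from child */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid > 0) {
        printf("my child's pid is %d\n", pid);   /* parent: pid holds the child's PID */
        exit(0);
    }
    printf("hello from child process, I'm still running\n");  /* child: fork() returned 0 */
    return 0;
}
```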

execute multiple processes from a master process

六月ゝ 毕业季﹏ - Submitted on 2019-12-11 18:33:13
Question: I want to create multiple processes from one master process. I know I want to use a function from the exec family, but it does not seem to be performing the way I intended. It seems that exec() is a blocking call, or maybe I am just using it wrong. Anyway, on to the code: const char* ROUTERLOCATION = "../../router"; int main(int argc, char** argv) { manager manager; vector<string> instructions = manager.readFile(argv[1]); ... //file gives me the number of processes I want to spawn and
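
On success the exec family never returns: it replaces the calling process image, which is why calling it directly in the master looks like a blocking call. The usual pattern is to fork() first and exec in the child. A rough C sketch of that pattern (the count of three children and the argument-free execl call are assumptions, not the poster's setup):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *router = "../../router";    /* path taken from the question */
    int n = 3;                              /* however many processes the input file asks for */

    for (int i = 0; i < n; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                     /* child: replace itself with the router binary */
            execl(router, "router", (char *)NULL);
            perror("execl");                /* only reached if exec failed */
            _exit(127);
        }
        /* parent: falls through immediately and spawns the next child */
    }
    while (wait(NULL) > 0)                  /* reap all children before exiting */
        ;
    return 0;
}
```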

File pointers after returning from a forked child process

眉间皱痕 - Submitted on 2019-12-11 18:26:06
Question: Is it normal, for a file descriptor shared between a forked parent and child process, that the file position in the parent process remains the same after the child reads from that descriptor? This is happening for me. Here's the setup: I am writing a C++ CGI program, so it reads HTTP requests from stdin. When processing a multipart_form, I process stdin through an intermediary object (Multipart_Pull) that has a getc() method which detects the boundary strings and returns EOF
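
A descriptor that existed before fork() is shared through a single open file description, so a read() in the child normally does advance the offset the parent sees; what each process keeps private is buffered stdio state. A small C sketch that observes the shared offset (the file name is just an assumption, any readable file works):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hosts", O_RDONLY);   /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                          /* child reads 16 bytes and exits */
        char buf[16];
        read(fd, buf, sizeof buf);
        _exit(0);
    }
    waitpid(pid, NULL, 0);                   /* make sure the child has finished reading */
    /* the offset is shared through the common open file description */
    printf("parent's offset after the child's read: %lld\n",
           (long long)lseek(fd, 0, SEEK_CUR));
    return 0;
}
```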

Perl run the same script for different directories at the same time

南笙酒味 - Submitted on 2019-12-11 17:22:27
Question: I have a directory that contains other directories (the number of directories is arbitrary), like this: Main_directory_samples/ subdirectory_sample_1/ subdirectory_sample_2/ subdirectory_sample_3/ subdirectory_sample_4/ I have a script that receives one directory as input each time, and it takes 1 h to run per directory. To run the script I have the following code: opendir DIR, $maindirectory or die "Can't open directory!!"; while(my $dir = readdir DIR){ if($dir ne '.' && $dir ne '..'){
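
The usual shape of the fix is one forked worker per subdirectory, with the parent waiting for all of them before it exits; the Perl version uses fork and waitpid in the same way. Below is a C sketch of that pattern only; the worker script name process_dir.sh is hypothetical:

```c
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *root = "Main_directory_samples";      /* from the question */
    DIR *d = opendir(root);
    if (!d) { perror("opendir"); exit(1); }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
            continue;
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                               /* one child per subdirectory */
            char path[4096];
            snprintf(path, sizeof path, "%s/%s", root, e->d_name);
            execl("./process_dir.sh", "process_dir.sh", path, (char *)NULL);  /* hypothetical worker */
            _exit(127);
        }
    }
    closedir(d);
    while (wait(NULL) > 0)                            /* block until every directory's job is done */
        ;
    return 0;
}
```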

C fork, exec, getpid problem

旧时模样 - Submitted on 2019-12-11 16:53:56
Question: I'm new to the C language and Linux. I have a problem related to the fork(), getpid() and exec() functions. I wrote a C program using the fork() call; the code of my program is the following: #include <stdio.h> #include <sys/types.h> #include <unistd.h> #include <stdlib.h> void fun() { printf("\n this is trial for child process"); } int main (int argc, char const *argv[]) { int i,status,pid,t; if(pid=fork()<0) { printf("\nfailed to create the process\n"); } if(pid=fork()==0) { printf("\n the child process
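
Two things in the posted snippet typically cause the confusion: pid = fork() < 0 binds as pid = (fork() < 0) because of operator precedence, so pid ends up holding 0 or 1 rather than a PID, and the second if(pid=fork()==0) calls fork() again, creating yet another process. A minimal corrected sketch along those lines (not the full exercise):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid;

    /* note the extra parentheses: without them, pid = fork() < 0
       stores the result of the comparison, not the PID */
    if ((pid = fork()) < 0) {
        printf("failed to create the process\n");
        exit(1);
    }
    if (pid == 0) {                          /* one fork(), tested once: this is the child */
        printf("this is the child process, pid %d\n", (int)getpid());
        _exit(0);
    }
    printf("this is the parent process, child pid %d\n", (int)pid);
    return 0;
}
```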

How to make sure child process finishes copying data into shared memory before join() is called?

↘锁芯ラ - Submitted on 2019-12-11 16:11:48
Question: I am using multiprocessing.Process to load some images and store them in shared memory, as explained here. The problem is that my code sometimes crashes due to a huge memory spike at completely random times. I just had an idea of what might be causing this: the process may not have had enough time to copy the contents of the image into the shared memory in RAM by the time join() is called. To test my hypothesis I added time.sleep(0.015) after calling join() on each of my processes and this has already

Correct usage of fork, wait, exit, etc

不羁的心 - Submitted on 2019-12-11 15:50:17
Question: I have a problem to solve and no idea how to do it, because there are only a few system calls we are allowed to use and I don't see how they help in this situation. The exercise: I have a matrix of size [10][1000000] holding integers, and for each row I create a new process with fork(). Each process is supposed to go through all the numbers of its row, find a specific number, and print a message about it. This was the first step of the problem and it's done. The
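
A generic sketch of that first step, fork one child per row and reap them with wait(), is shown below for reference; the dimensions are scaled down, the matrix contents and the searched value are placeholders, not the exercise's data:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define ROWS 10
#define COLS 1000                /* scaled down from 1000000 just for the sketch */

int matrix[ROWS][COLS];          /* assumed to be filled in before forking */

int main(void) {
    int target = 42;             /* placeholder value to search for */

    for (int r = 0; r < ROWS; r++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                          /* child scans its own row */
            int found = 0;
            for (int c = 0; c < COLS; c++)
                if (matrix[r][c] == target) { found = 1; break; }
            printf("row %d: %s\n", r, found ? "found it" : "not found");
            _exit(found);                        /* report the result through the exit status */
        }
    }
    for (int r = 0; r < ROWS; r++) {             /* parent reaps every child */
        int status;
        pid_t done = wait(&status);
        if (done > 0 && WIFEXITED(status))
            printf("child %d exited with status %d\n", (int)done, WEXITSTATUS(status));
    }
    return 0;
}
```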

Timing out a forked process

给你一囗甜甜゛ - Submitted on 2019-12-11 15:16:12
Question: I am running a Monte Carlo simulation on multiple processors, but it hangs up a lot. So I put together this Perl code to kill the iteration that hangs the Monte Carlo and move on to the next iteration. But I get some errors I have not figured out yet. I think it sleeps too long and deletes the out.mt0 file before it looks for it. This is the code: my $pid = fork(); die "Could not fork\n" if not defined $pid; if ($pid == 0) { print "In child\n"; system("hspice -i mont_read.sp -o out -mt 4");
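
One common way to time out a forked child is for the parent to poll waitpid() with WNOHANG and kill the child once a deadline passes, instead of sleeping for a fixed period and hoping the run has finished. Shown below as a C sketch of that general idea; the 60-second limit and the sleep command standing in for the hspice run are assumptions:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int timeout = 60;                              /* placeholder: seconds before we give up */
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {
        execlp("sleep", "sleep", "1000", (char *)NULL);   /* stands in for the simulation run */
        _exit(127);
    }

    int waited = 0, status;
    while (waitpid(pid, &status, WNOHANG) == 0) {  /* 0 means the child is still running */
        if (waited++ >= timeout) {
            kill(pid, SIGKILL);                    /* give up on this iteration */
            waitpid(pid, &status, 0);
            fprintf(stderr, "iteration timed out, moving on\n");
            break;
        }
        sleep(1);                                  /* poll once per second */
    }
    return 0;
}
```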

How can I share a database connection across a forked process in Perl?

被刻印的时光 ゝ - Submitted on 2019-12-11 15:05:32
Question: I wrote the following program in Perl: my $db = DBconnection with DB2 if ($pid = fork()) { #parent } else { #child $db->execute("SELECT ****"); exit; } wait(); $db->execute("SELECT ****"); I expected the parent to wait for the child process to finish and then run its own query against the database, but the query in the parent fails and the error says the database is not connected. What's wrong? Answer 1: There is a lot of stuff you must do to allow a child process to use its parent's DBI handle

How to redirect signal to child process from parent process?

℡╲_俬逩灬. - Submitted on 2019-12-11 14:54:35
Question: I am trying to understand processes in C. I currently want to create a shell-like program which, after a shortcut like Ctrl+C or Ctrl+Z is pressed, will kill all its subprocesses but stay alive itself. My code looks like this: #include <ctype.h> #include <errno.h> #include <stdbool.h> #include <stdio.h> #include <readline/readline.h> #include <readline/history.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <signal.h> #include <sys/wait.h> #include <termios.h>
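
A minimal C sketch of one way to do the forwarding: the parent catches SIGINT/SIGTSTP and re-sends them to the child's process group, while the child is placed in its own group so the terminal shortcut reaches the parent only. Real shells additionally hand the terminal to the child with tcsetpgrp(); the sleep command below is just a placeholder for whatever the shell launches:

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t child = -1;

/* parent's handler: forward the terminal signal to the child's process group */
static void forward(int sig) {
    if (child > 0)
        kill(-child, sig);                 /* negative PID: signal the whole group */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = forward;
    sa.sa_flags = SA_RESTART;              /* let waitpid() keep waiting after the handler runs */
    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGTSTP, &sa, NULL);

    child = fork();
    if (child < 0) { perror("fork"); exit(1); }
    if (child == 0) {
        setpgid(0, 0);                     /* own process group, so Ctrl+C hits only the parent */
        execlp("sleep", "sleep", "100", (char *)NULL);   /* placeholder for the launched command */
        _exit(127);
    }
    setpgid(child, child);                 /* set it from the parent too, to avoid a race */
    waitpid(child, NULL, WUNTRACED);       /* returns when the child exits or stops */
    printf("child is gone or stopped; the shell itself is still running\n");
    return 0;
}
```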