parallel-processing

Is it possible to have any dataflow block type send multiple intermediate results as a result of a single input?

青春壹個敷衍的年華 submitted on 2021-02-05 08:23:06
Question: Is it possible to get TransformManyBlocks to send intermediate results to the next step as they are created, instead of waiting for the entire IEnumerable<T> to be filled? All the testing I've done shows that TransformManyBlock only sends a result to the next block when it is finished; the next block then reads those items one at a time. It seems like basic functionality, but I can't find any examples of this anywhere. The use case is processing chunks of a file as they are read. In my case there

How to find ideal number of parallel processes to run with python multiprocessing?

╄→гoц情女王★ submitted on 2021-02-04 21:36:50
Question: I'm trying to find the correct number of parallel processes to run with Python multiprocessing. The scripts below were run on an 8-core, 32 GB (Ubuntu 18.04) machine, with only system processes and basic user processes running during the tests. I tested multiprocessing.Pool and apply_async with the following: from multiprocessing import current_process, Pool, cpu_count from datetime import datetime import time num_processes = 1 # vary this print(f"Starting at {datetime.now()}")
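The question's benchmark is in Python; to keep a single language across the sketches on this page, the same empirical approach (time a fixed CPU-bound workload while varying the worker count) is sketched below with C++ threads instead of multiprocessing processes. The task count, loop length, and worker counts are illustrative, not taken from the question.

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Illustrative CPU-bound task standing in for the question's workload.
static std::atomic<double> sink{0.0};   // written so the loop is not optimized away
void burn() {
    double x = 0.0;
    for (long i = 0; i < 50'000'000; i++) x += i * 0.5;
    sink.store(x, std::memory_order_relaxed);
}

int main() {
    const int tasks = 16;  // total amount of work, kept fixed across runs
    int cores = static_cast<int>(std::thread::hardware_concurrency());
    if (cores == 0) cores = 8;  // fallback if the core count is unknown

    for (int workers = 1; workers <= 2 * cores; workers *= 2) {
        auto t0 = std::chrono::steady_clock::now();
        int done = 0;
        while (done < tasks) {
            // Run at most 'workers' tasks at a time.
            int batch = std::min(workers, tasks - done);
            std::vector<std::thread> pool;
            for (int i = 0; i < batch; i++) pool.emplace_back(burn);
            for (auto& t : pool) t.join();
            done += batch;
        }
        double secs = std::chrono::duration<double>(
                          std::chrono::steady_clock::now() - t0).count();
        std::printf("%2d workers: %.2f s\n", workers, secs);
    }
}
```

On a CPU-bound workload like this, the timings typically stop improving once the worker count reaches the number of physical cores; oversubscribing beyond that mostly adds scheduling overhead.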

How to execute 4 shell scripts in parallel when I can't use GNU parallel?

风流意气都作罢 submitted on 2021-02-04 21:35:42
Question: I have 4 shell scripts: dog.sh, bird.sh, cow.sh and fox.sh. Each of these files executes 4 wgets in parallel, using xargs to fork a separate process. Now I want these scripts themselves to be executed in parallel. For some portability reason unknown to me, I can't use GNU parallel. Is there a way I can do this with xargs or with any other tool? And can I also ask what the portability reason could be? I'm a total newbie to shell scripting. Sorry if my question seems cryptic. Thanks in advance

How does OpenMP use the atomic instruction inside the reduction clause?

筅森魡賤 submitted on 2021-02-04 21:06:51
Question: How does OpenMP use atomic instructions inside the reduction construct? Or does it not rely on atomic instructions at all? For instance, is the variable sum in the code below accumulated with an atomic '+' operator? #include <omp.h> #include <vector> using namespace std; int main() { int m = 1000000; vector<int> v(m); for (int i = 0; i < m; i++) v[i] = i; int sum = 0; #pragma omp parallel for reduction(+:sum) for (int i = 0; i < m; i++) sum += v[i]; } Answer 1: How does OpenMP use atomic instruction
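Typical OpenMP implementations do not issue an atomic add on every iteration of a reduction loop; each thread accumulates into a private copy, and the partial results are combined once per thread at the end. The exact mechanism is implementation-dependent; here is a minimal sketch of the manual equivalent of the reduction above, assuming that strategy:

```cpp
#include <omp.h>
#include <vector>
using namespace std;

int main() {
    int m = 1000000;
    vector<int> v(m);
    for (int i = 0; i < m; i++) v[i] = i;

    int sum = 0;
    #pragma omp parallel
    {
        int local_sum = 0;          // each thread's private partial sum
        #pragma omp for nowait
        for (int i = 0; i < m; i++)
            local_sum += v[i];      // no synchronization in the hot loop
        #pragma omp atomic          // one synchronized combine per thread
        sum += local_sum;
    }
}
```

The combine step could equally be done under #pragma omp critical or with pairwise merges; the point is that atomics, if used at all, appear once per thread rather than once per iteration.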

I want to print the Fibonacci series using two threads, such that the 1st number is printed by the 1st thread, the 2nd number by the 2nd thread, and so on

情到浓时终转凉″ submitted on 2021-02-04 19:41:51
Question: I want the Fibonacci series to be printed by two threads: the 1st number of the series should be printed by the 1st thread, the 2nd number by the 2nd thread, the 3rd by the 1st thread, the 4th by the 2nd, and so on. I tried this code, which uses an array (printing the array elements from a thread), but I am not able to switch between the threads. class Fibonacci{ void printFibonacci() { int fibArray[] = new int[10]; int a = 0; int b = 1; fibArray[0] = a; fibArray[1] = b; int c; for(int i=2;i<10;i++) { c = a+b; fibArray
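The question's code is Java; to keep one language across the examples on this page, the usual alternation pattern is sketched below in C++: a shared turn flag guarded by a mutex and a condition variable, so each thread prints only when it is its turn. The series length and variable names are illustrative.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    const int n = 10;           // how many Fibonacci numbers to print (illustrative)
    std::mutex m;
    std::condition_variable cv;
    int turn = 0;               // 0 -> thread 1's turn, 1 -> thread 2's turn
    long a = 0, b = 1;          // current pair of Fibonacci numbers
    int printed = 0;

    auto worker = [&](int id) {
        while (true) {
            std::unique_lock<std::mutex> lock(m);
            // Sleep until it is this thread's turn or the series is finished.
            cv.wait(lock, [&] { return turn == id || printed >= n; });
            if (printed >= n) { cv.notify_all(); return; }
            std::cout << "thread " << id + 1 << ": " << a << '\n';
            long next = a + b;  // advance the series
            a = b;
            b = next;
            ++printed;
            turn = 1 - id;      // hand the turn to the other thread
            cv.notify_all();
        }
    };

    std::thread t1(worker, 0), t2(worker, 1);
    t1.join();
    t2.join();
}
```

The same structure carries over to Java with a shared lock object, a turn flag, and wait()/notifyAll() in place of the condition variable.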

Why should I use a reduction rather than an atomic variable?

拥有回忆 submitted on 2021-02-04 07:28:51
Question: Assume we want to count something in an OpenMP loop. Compare the reduction int counter = 0; #pragma omp for reduction( + : counter ) for (...) { ... counter++; } with the atomic increment int counter = 0; #pragma omp for for (...) { ... #pragma omp atomic counter++; } The atomic access provides the result immediately, while a reduction only attains its correct value at the end of the loop. For instance, reductions do not allow this: int t = counter; if (t % 1000 == 0) { printf ("%dk iterations
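For reference, a minimal compilable sketch of the two variants being compared (the loop length and the counted condition are illustrative; this is a sketch of the trade-off, not a benchmark):

```cpp
#include <cstdio>
#include <omp.h>

int main() {
    const long n = 10'000'000;

    // Variant 1: reduction -- each thread keeps a private copy of the counter
    // and the copies are combined once at the end of the loop.  The shared
    // variable only holds its correct value after the loop finishes.
    long counter_red = 0;
    #pragma omp parallel for reduction(+ : counter_red)
    for (long i = 0; i < n; i++) {
        counter_red++;
    }

    // Variant 2: atomic -- every increment synchronizes on the shared
    // variable, so its value is always current, but the updates serialize.
    long counter_atomic = 0;
    #pragma omp parallel for
    for (long i = 0; i < n; i++) {
        #pragma omp atomic
        counter_atomic++;
    }

    std::printf("reduction: %ld, atomic: %ld\n", counter_red, counter_atomic);
}
```

Only the reduction variant scales with the thread count; the atomic variant serializes every increment but, as the question notes, lets a thread read a meaningful intermediate value during the loop.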

Negative speedup in Amdahl's law?

删除回忆录丶 submitted on 2021-02-02 09:07:51
Question: Amdahl's law states that the speedup of the entire system is an_old_time / a_new_time, where a_new_time can be represented as ( 1 - f ) + f / s', with f the fraction of the system that is enhanced by some modification and s' the amount by which that fraction of the system is enhanced. However, after solving this equation for s', it seems like there are many cases in which s' is negative, which makes no physical sense. Taking the case that s = 2 (a 100% increase in the speed for
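Solving the question's expression for s' shows where the negative values come from (a worked rearrangement in the question's notation, writing S for the overall speedup an_old_time / a_new_time):

```latex
S = \frac{1}{(1-f) + \frac{f}{s'}}
\quad\Longrightarrow\quad
\frac{f}{s'} = \frac{1}{S} - (1-f)
\quad\Longrightarrow\quad
s' = \frac{f\,S}{1 - S\,(1-f)}.
```

The denominator is positive only while S < 1/(1-f), which is exactly the maximum speedup attainable when only the fraction f is enhanced; demanding a larger overall speedup S forces s' negative, meaning no physical enhancement factor can deliver it.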

Thread - contention vs race

你离开我真会死。 submitted on 2021-01-29 21:21:20
Question: I have seen the terms contention and race used interchangeably when it comes to a thread's state (at a critical section). Are they the same? Answer 1: "Contention" usually refers to the situation where two or more threads need to lock the same lock. We say that the lock is "contended," or perhaps "heavily contended," if there is a significant probability that a thread will be forced to wait when it tries to acquire the lock. "Race," "race condition," and "data race" are phrases whose meanings have changed
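A small illustrative C++ sketch of the distinction (not taken from the answer): both threads contend for the same mutex when updating safe_count, which costs waiting time but keeps the count correct, while the unsynchronized updates to racy_count are a data race, so the final value (and, formally, the whole program's behavior) is undefined.

```cpp
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    long safe_count = 0;   // protected: threads contend for the lock
    long racy_count = 0;   // unprotected: concurrent writes are a data race
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 1'000'000; i++) {
            {
                std::lock_guard<std::mutex> lock(m);  // contention happens here
                ++safe_count;
            }
            ++racy_count;                             // undefined behavior: data race
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << "safe_count = " << safe_count      // always 2000000
              << ", racy_count = " << racy_count    // often less; not guaranteed
              << '\n';
}
```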