parallel-processing

Implement sleep() in OpenCL C [duplicate]

点点圈 submitted on 2020-01-08 04:18:16

Question: This question already has an answer here: Calculate run time of kernel code in OpenCL C (1 answer). Closed 4 years ago.

I want to measure the performance of different devices, viz. CPU and GPUs. This is my kernel code:

```c
__kernel void dataParallel(__global int* A)
{
    sleep(10);
    A[0] = 2;
    A[1] = 3;
    A[2] = 5;
    int pnp;    // pnp = probable next prime
    int pprime; // previous prime
    int i, j;
    for (i = 3; i < 10; i++) {
        j = 0;
        pprime = A[i - 1];
        pnp = pprime + 2;
        while ((j < i) && A[j] <= sqrt((float)pnp)) {
            if (pnp % A[j] == 0) {
                pnp += 2;
                j = 0;
            }
```
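There is no sleep() in OpenCL C, and stalling the kernel is not how device performance is normally measured anyway: the host enables profiling on the command queue and reads start/end timestamps from the kernel's event. A minimal host-side sketch using pyopencl (the pyopencl setup and the abbreviated kernel body are assumptions; the same timestamps are available from the C API via clGetEventProfilingInfo):

```python
# Sketch: time a kernel with OpenCL profiling events via pyopencl.
import numpy as np
import pyopencl as cl

KERNEL_SRC = """
__kernel void dataParallel(__global int* A)
{
    A[0] = 2; A[1] = 3; A[2] = 5;
    // ... prime-finding loop as in the question, minus sleep() ...
}
"""

ctx = cl.create_some_context()
# PROFILING_ENABLE lets us read start/end timestamps from events.
queue = cl.CommandQueue(
    ctx, properties=cl.command_queue_properties.PROFILING_ENABLE)

prg = cl.Program(ctx, KERNEL_SRC).build()
a = np.zeros(10, dtype=np.int32)
a_buf = cl.Buffer(ctx, cl.mem_flags.WRITE_ONLY, a.nbytes)

evt = prg.dataParallel(queue, (1,), None, a_buf)  # enqueue; returns an event
evt.wait()
# Event timestamps are in nanoseconds.
print("kernel time: %.3f ms" % ((evt.profile.end - evt.profile.start) * 1e-6))
```

Running the same host code with the context created on a CPU device and then on a GPU device gives comparable per-device kernel timings.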

Equal loading for parallel task distribution

守給你的承諾、 submitted on 2020-01-07 08:19:08

Question: I have a large number of independent tasks I would like to run, and I would like to distribute them on a parallel system such that each processor does the same amount of work and my efficiency is maximized. I would like to know if there is a general approach to finding a solution to this problem, or possibly just a good solution to my exact problem. I have T = 150 tasks I would like to run, and the time task t will take is t. That is, task 1 takes one unit of time, task 2 takes 2 units of …
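For independent tasks with known costs this is the classic makespan-minimization problem, and a standard approach is the greedy Longest Processing Time (LPT) rule: sort tasks by decreasing cost and always give the next task to the currently least-loaded processor; LPT is guaranteed to come within a factor of 4/3 of the optimal makespan. A sketch, with the processor count of 4 as an assumed example:

```python
# Sketch: greedy LPT scheduling; task t costs t units, as in the question.
import heapq

def lpt_schedule(costs, num_procs):
    """Assign each task (longest first) to the least-loaded processor."""
    heap = [(0, p) for p in range(num_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(num_procs)}
    for task, cost in sorted(enumerate(costs, start=1), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(heap)           # least-loaded processor
        assignment[p].append(task)
        heapq.heappush(heap, (load + cost, p))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# T = 150 tasks on an assumed 4 processors.
assignment, makespan = lpt_schedule(list(range(1, 151)), 4)
print(makespan)  # total work is 150*151/2 = 11325, so the per-processor ideal is 2831.25
```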

Queue using several processes to launch bash jobs

微笑、不失礼 submitted on 2020-01-07 07:59:07

Question: I need to run many (hundreds of) commands in a shell, but I only want to have a maximum of 4 processes running (from the queue) at once. Each process will last several hours. When a process finishes I want the next command to be "popped" from the queue and executed. I also want to be able to add more processes after the beginning, and it would be great if I could remove some jobs from the queue, or at least empty the queue. I have seen solutions using a makefile, but this only works if I have all my …
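On the shell side a fixed-width job queue is exactly what `xargs -P 4` or GNU parallel's `-j 4` provide; when more control is needed (appending jobs while the queue is running, draining it), a small driver does it. A minimal sketch in Python, with the command strings as placeholders:

```python
# Sketch: run shell commands at most 4 at a time; more commands can be
# enqueued at any point while the workers are running.
import queue
import subprocess
import threading

MAX_WORKERS = 4          # the limit from the question
jobs = queue.Queue()

def worker():
    while True:
        cmd = jobs.get()
        if cmd is None:              # sentinel: shut this worker down
            jobs.task_done()
            return
        subprocess.run(cmd, shell=True)
        jobs.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(MAX_WORKERS)]
for t in threads:
    t.start()

for cmd in ["./job1.sh", "./job2.sh", "./job3.sh"]:  # hypothetical commands
    jobs.put(cmd)
# ... jobs.put(...) can be called again later to append more work ...

jobs.join()                          # wait for the queue to drain
for _ in threads:
    jobs.put(None)
```

Emptying the queue amounts to draining `jobs` with `get_nowait()`; already-running processes are unaffected.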

Looping files in bash

纵饮孤独 submitted on 2020-01-07 07:46:11

Question: I want to loop over these kinds of files, where the files with the same Sample_ID have to be used together:

Sample_51770BL1_R1.fastq.gz
Sample_51770BL1_R2.fastq.gz
Sample_52412_R1.fastq.gz
Sample_52412_R2.fastq.gz

e.g. Sample_51770BL1_R1.fastq.gz and Sample_51770BL1_R2.fastq.gz are used together in one command to create an output. Similarly, Sample_52412_R1.fastq.gz and Sample_52412_R2.fastq.gz are used together to create output. I want to write a for loop in bash to iterate over and create …
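In bash the usual trick is to loop over the R1 files only and derive each partner's name with parameter substitution, e.g. `for f in Sample_*_R1.fastq.gz; do some_command "$f" "${f/_R1/_R2}"; done`. The same pairing logic as a Python sketch, where `my_tool` and the output naming are placeholders for the real command:

```python
# Sketch: pair *_R1/*_R2 fastq files and run one command per pair.
import glob
import subprocess

for r1 in sorted(glob.glob("Sample_*_R1.fastq.gz")):
    r2 = r1.replace("_R1.", "_R2.")                   # partner file, same Sample_ID
    out = r1.replace("_R1.fastq.gz", "_output.txt")   # hypothetical output name
    subprocess.run(["my_tool", r1, r2, "-o", out], check=True)
```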

Unable to implement MPI_Intercomm_create

ⅰ亾dé卋堺 submitted on 2020-01-07 07:44:06

Question: I am trying to implement an MPI inter-communicator in Fortran between two intra-communicators, one holding the first 2 processes and the other holding the rest. I need to perform send and recv operations between the newly created communicators. The code:

```fortran
program hello
    include 'mpif.h'
    integer tag, ierr, rank, numtasks, color, new_comm, inter1, inter2
    tag = 22
    call MPI_Init(ierr)
    call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
    call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
    if (rank < 2) then
        color = 0
    else
        color = 1
    end
```
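The usual recipe is: split MPI_COMM_WORLD into two intra-communicators by color, then call MPI_Intercomm_create with each group's local leader, the peer communicator (MPI_COMM_WORLD), and the remote leader's rank in that peer communicator. A sketch of the same steps in Python with mpi4py (the choice of world ranks 0 and 2 as the two leaders is an assumption matching the split above):

```python
# Sketch: build an inter-communicator between two groups split by color.
# Run with e.g.: mpiexec -n 4 python intercomm.py
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()
tag = 22

color = 0 if rank < 2 else 1
local = world.Split(color, rank)    # intra-communicator for my group

# Local leader is rank 0 of each group; the remote leader is the other
# group's leader expressed as a rank in the peer communicator (world).
remote_leader = 2 if color == 0 else 0
inter = local.Create_intercomm(0, world, remote_leader, tag)

# Example exchange between the two group leaders; in an inter-communicator,
# dest/source ranks refer to the remote group.
if rank == 0:
    inter.send("hello from group 0", dest=0, tag=tag)
elif rank == 2:
    print(inter.recv(source=0, tag=tag))
```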

How to parallelize this piece of code?

帅比萌擦擦* submitted on 2020-01-07 05:58:09

Question: I've been browsing for some time but couldn't find any constructive answer that I could comprehend. How should I parallelize the following code:

```python
import random
import math
import numpy as np
import sys
import multiprocessing

boot = 20  # number of iterations to be performed

def myscript(iteration_number):
    pass  # stuff that the code actually does

def main(unused_command_line_args):
    for i in xrange(boot):
        myscript(i)
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv))
```

or where can I read about …
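Since the iterations are independent, a process pool can map myscript over the iteration numbers. A minimal sketch (Python 3 shown, so range replaces xrange; the body of myscript remains a placeholder):

```python
import multiprocessing
import sys

boot = 20  # number of iterations to be performed

def myscript(iteration_number):
    # Placeholder for the real work. It must stay a top-level function
    # so it can be pickled and shipped to the worker processes.
    return iteration_number * iteration_number

def main(unused_command_line_args):
    with multiprocessing.Pool() as pool:   # defaults to os.cpu_count() workers
        results = pool.map(myscript, range(boot))
    print(results)
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv))
```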

Controlling node mapping of MPI_COMM_SPAWN

穿精又带淫゛_ submitted on 2020-01-07 05:39:10

Question: The context: this whole issue can be summarized as me trying to replicate the behaviour of a call to system (or fork), but in an MPI environment. (It turns out that you can't call system in parallel.) Meaning I have a program running on many nodes, one process on each node, and then I want each process to call an external program (so for n nodes I'd have n copies of the external program running), wait for all those copies to finish, then keep running the original program. To achieve this in a …
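MPI_Comm_spawn accepts an MPI_Info object, and common MPIs (Open MPI, MPICH) honor a "host" key there to control placement, though placement keys are implementation-defined. A sketch of the idea in mpi4py, where the external program ./worker and the one-process-per-node layout are assumptions, and ./worker is assumed to be an MPI program that calls barrier/disconnect on its parent communicator:

```python
# Sketch: each rank spawns one copy of an external program on its own node,
# then everyone waits for the children before continuing.
import socket
from mpi4py import MPI

world = MPI.COMM_WORLD

# Ask the spawn to land on this rank's node via the (implementation-defined,
# but widely supported) "host" info key.
info = MPI.Info.Create()
info.Set("host", socket.gethostname())

child = MPI.COMM_SELF.Spawn("./worker", args=[], maxprocs=1, info=info)

# Wait for the child to finish: the child must call the matching Barrier and
# Disconnect on the communicator returned by MPI_Comm_get_parent.
child.Barrier()
child.Disconnect()

world.Barrier()   # all ranks resume the original program together
```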

Download many files in parallel? (Linux/Python?)

喜欢而已 submitted on 2020-01-07 05:26:06

Question: I have a big list of remote file locations and the local paths where I would like them to end up. Each file is small, but there are very many of them. I am generating this list within Python. I would like to download all of these files as quickly as possible (in parallel) prior to unpacking and processing them. What is the best library or Linux command-line utility for me to use? I attempted to implement this using multiprocessing.pool, but that did not work with the FTP library. I looked into …
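For many small files the bottleneck is network latency rather than CPU, so a thread pool works well even in Python and avoids the pickling issues that multiprocessing has with FTP connection objects. A minimal standard-library sketch (the URL/path pairs are placeholders; urlretrieve handles both http:// and ftp:// URLs):

```python
# Sketch: download (url, local_path) pairs with a bounded thread pool.
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlretrieve

downloads = [  # placeholders for the generated list
    ("ftp://example.com/data/a.dat", "/tmp/a.dat"),
    ("ftp://example.com/data/b.dat", "/tmp/b.dat"),
]

def fetch(url, path):
    urlretrieve(url, path)
    return path

with ThreadPoolExecutor(max_workers=16) as pool:
    futures = [pool.submit(fetch, url, path) for url, path in downloads]
    for fut in as_completed(futures):
        print("done:", fut.result())
```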

Multithreading calculations with state and mutable objects in a class

牧云@^-^@ submitted on 2020-01-07 04:24:17

Question: I'm learning about multithreading, but I can't figure out how to achieve thread safety in my study scenario. This is pseudo-code: I have a class that does some calculations and shares some public data (the Formulas list), and it has a public property Id to be read later:

```csharp
class Problem {
    public int Id { get; set; }
    public IList<Calc> Formulas { get; private set; }

    public Problem(int id) {
        Id = id;
        Formulas = new List<Calc>();
    }

    public void SolveProblem() {
        Calc newCalc = DoSomeCalculations()
```
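The core pattern is the same in any language: do the calculation with thread-local data, then serialize only the mutation of the shared list behind one lock (in .NET, a `lock` statement around `Formulas.Add`, or a thread-safe collection such as `ConcurrentBag<T>`). A sketch of the pattern in Python:

```python
# Sketch: a class whose shared mutable list is guarded by a per-instance lock.
import threading

class Problem:
    def __init__(self, problem_id):
        self.id = problem_id
        self.formulas = []               # shared mutable state
        self._lock = threading.Lock()    # guards self.formulas

    def solve(self):
        new_calc = self._do_some_calculations()  # thread-local work, no lock needed
        with self._lock:                 # only the mutation is serialized
            self.formulas.append(new_calc)

    def _do_some_calculations(self):
        return ("calc", self.id)         # placeholder for the real work

p = Problem(1)
threads = [threading.Thread(target=p.solve) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(p.formulas))  # always 8: no appends are lost
```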

2 Async tasks in parallel and waiting for results - .Net

穿精又带淫゛_ submitted on 2020-01-07 03:09:11

Question: I intend to run two tasks in parallel and wait for both of them to finish. Here are my two tasks:

```vbnet
Private Async Function tempWorker1() As Task
    For i = 1 To 5000
        If i Mod 100 = 0 Then
            Console.WriteLine("From 1:{0}", i)
        End If
    Next
End Function

Private Async Function tempWorker2() As Task
    For i = 1 To 5000
        If i Mod 100 = 0 Then
            Console.WriteLine("From 2:{0}", i)
        End If
    Next
End Function
```

I run them as:

```vbnet
Dim task1 As Task = tempWorker1()
Dim task2 As Task = tempWorker2()
Await Task.WhenAll(task1,
```
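An Async method whose body never awaits anything runs synchronously to completion, so these two loops execute one after the other before Task.WhenAll is even reached; the usual fix for CPU-bound work like this is to wrap each loop in Task.Run and await Task.WhenAll on the resulting tasks. The equivalent shape as a Python sketch, offloading each loop to a thread and awaiting both (asyncio.to_thread requires Python 3.9+):

```python
# Sketch: run two CPU-style loops off the event loop and await both,
# mirroring Task.Run(...) + Await Task.WhenAll(...).
import asyncio

def temp_worker(label):
    for i in range(1, 5001):
        if i % 100 == 0:
            print(f"From {label}:{i}")

async def main():
    await asyncio.gather(
        asyncio.to_thread(temp_worker, 1),
        asyncio.to_thread(temp_worker, 2),
    )

asyncio.run(main())
```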