parallel-processing

OpenMP parallel thread

Posted by 一个人想着一个人 on 2020-01-11 07:17:13
Question: I need to parallelize this loop. I thought that using OpenMP was a good idea, but I have never studied it before.

    #pragma omp parallel for
    for (std::set<size_t>::const_iterator it = mesh->NEList[vid].begin();
         it != mesh->NEList[vid].end(); ++it) {
        worst_q = std::min(worst_q, mesh->element_quality(*it));
    }

In this case the loop is not parallelized, because it uses iterators and the compiler cannot figure out how to split it. Can you help me?

Answer 1: OpenMP requires that the controlling predicate in a parallel for loop use one of the relational operators <, <=, > or >=, which only random-access iterators (or plain integer indices) provide; a std::set iterator is bidirectional, so this loop cannot be split directly.

Bad version or endian-key in MATLAB parfor?

Posted by a 夏天 on 2020-01-11 04:43:06
Question: I am doing parallel computations with MATLAB's parfor. The code structure looks pretty much like this:

    %%% assess fitness %%%
    % save communication overheads
    bitmaps = pop(1, new_indi_idices);
    porosities = pop(2, new_indi_idices);
    mid_fitnesses = zeros(1, numel(new_indi_idices));
    right_fitnesses = zeros(1, numel(new_indi_idices));
    % parallelization starts
    parfor idx = 1:numel(new_indi_idices)
        % only assess the necessary
        bitmap = bitmaps{idx};
        if porosities{idx} > POROSITY_MIN && porosities{idx}

Apache + PHP multiple scripts at the same time

Posted by 半城伤御伤魂 on 2020-01-11 03:53:08
Question: Good day. First, sorry for my bad English =) So, I created this script:

    <? sleep(10); ?>

My Apache uses an MPM module, and I obviously didn't use sessions in this script, just... just sleep(10). When I open 2 tabs in my browser simultaneously, the first tab loads in 10 seconds and the second tab in 20 seconds. But when I open this script in 2 different browsers at the same time, it loads in each one after 10 seconds. So I started thinking that my problem is "Connection: Keep-Alive". I changed my script: <?

How to parallelize a sum calculation in python numpy?

Posted by 寵の児 on 2020-01-10 20:05:41
Question: I have a sum that I'm trying to compute, and I'm having difficulty parallelizing the code. The calculation I'm trying to parallelize is kind of complex (it uses both numpy arrays and scipy sparse matrices). It spits out a numpy array, and I want to sum the output arrays from about 1000 calculations. Ideally, I would keep a running sum over all the iterations. However, I haven't been able to figure out how to do this. So far, I've tried using joblib's Parallel function and the pool.map
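
One possible approach, sketched below under the assumption that the per-iteration work can be wrapped in a plain top-level function (compute_chunk here is a hypothetical stand-in for the real calculation): multiprocessing.Pool.imap_unordered hands back each result array as soon as a worker finishes it, so the parent process can keep a running sum instead of holding all 1000 outputs in memory at once.

    import numpy as np
    from multiprocessing import Pool

    def compute_chunk(i):
        # stand-in for the real calculation; the only requirement is that
        # it returns a numpy array of a fixed shape
        return np.full(5, float(i))

    if __name__ == "__main__":
        total = np.zeros(5)
        with Pool() as pool:
            # results arrive in completion order, which is fine for a sum
            for arr in pool.imap_unordered(compute_chunk, range(1000)):
                total += arr
        print(total)

If setting up the sparse matrices is expensive, the initializer argument of Pool can build them once per worker process instead of once per call.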

How to use python to query database in parallel

Posted by 你说的曾经没有我的故事 on 2020-01-10 19:59:07
Question: I have two functions which I use to query a database. Assuming two separate queries, how can I run them in parallel against the same database, and also wait for both results to return before continuing with the rest of the code?

    def query1(param1, param2):
        result = None
        logging.info("Connecting to database...")
        try:
            conn = connect(host=host, port=port, database=db)
            curs = conn.cursor()
            curs.execute(query)
            result = curs
            curs.close()
            conn.close()
        except Exception as e:
            logging.error(
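
A minimal sketch of one way to do this, assuming query1 and a second function query2 (hypothetical here, since only query1 is shown) are ordinary blocking functions: submit both to a thread pool and block on their futures, so the code after run_both only executes once both queries have returned.

    from concurrent.futures import ThreadPoolExecutor

    def run_both(param1, param2):
        with ThreadPoolExecutor(max_workers=2) as executor:
            # both queries start immediately, each on its own thread
            f1 = executor.submit(query1, param1, param2)
            f2 = executor.submit(query2, param1, param2)
            # result() blocks until the corresponding query has finished
            return f1.result(), f2.result()

Threads are usually enough here, because most database drivers release the GIL while waiting on the server; if the post-processing is CPU-heavy, ProcessPoolExecutor is a near drop-in replacement.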

Broken pipe error with multiprocessing.Queue

Posted by 本秂侑毒 on 2020-01-10 14:17:08
Question: In Python 2.7, multiprocessing.Queue throws a broken pipe error when initialized from inside a function. I am providing a minimal example that reproduces the problem.

    #!/usr/bin/python
    # -*- coding: utf-8 -*-
    import multiprocessing

    def main():
        q = multiprocessing.Queue()
        for i in range(10):
            q.put(i)

    if __name__ == "__main__":
        main()

This throws the broken pipe error below:

    Traceback (most recent call last):
      File "/usr/lib64/python2.7/multiprocessing/queues.py", line 268, in _feed
        send(obj)
    IOError:
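
One likely explanation, with a hedged sketch of a workaround: multiprocessing.Queue pushes items into the underlying pipe from a background feeder thread, and if main() returns and the queue is garbage-collected before that thread has flushed everything, the feeder ends up writing into a closed pipe. Closing the queue and joining its feeder thread before returning (or simply consuming the items) avoids the race.

    import multiprocessing

    def main():
        q = multiprocessing.Queue()
        for i in range(10):
            q.put(i)
        # let the feeder thread flush all buffered items into the pipe
        # before q goes out of scope and its pipe is torn down
        q.close()
        q.join_thread()

    if __name__ == "__main__":
        main()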

Corresponding Receive Routine of MPI_Bcast

Posted by 烂漫一生 on 2020-01-10 13:32:29
Question: What is the corresponding MPI receive routine for the broadcast routine MPI_Bcast? Namely, one processor broadcasts a message to a group, let's say the whole world; how can I receive the message in those processes? Thank you. Regards, SRec

Answer 1: MPI_Bcast is both the sender and the receiver call. Consider its prototype:

    int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
                  int root, MPI_Comm comm)

All machines except the machine with id = root are receivers. The machine whose rank equals root is the sender.
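
The same single-call semantics can be sketched in Python with mpi4py (used here purely as an illustration, since the question is about the C API): every rank makes the identical bcast call, which acts as a send on the root and as a receive on every other rank.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # only the root supplies real data; the other ranks pass a placeholder
    data = {"msg": "hello"} if rank == 0 else None

    # same call on every rank: send on rank 0, receive everywhere else
    data = comm.bcast(data, root=0)
    print(rank, data)

Launched with something like mpiexec -n 4 python bcast_demo.py, every rank prints the dictionary that rank 0 provided.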

Matlab parallel computing toolbox, dynamic allocation of work in parfor loops

Posted by 别来无恙 on 2020-01-10 05:05:31
Question: I'm working with a long-running parfor loop in MATLAB.

    parfor iter = 1:1000
        chunk_of_work(iter);
    end

There are generally about 2-3 timing outliers per run. That is to say, for every 1000 chunks of work performed there are 2-3 that take about 100 times longer than the rest. As the loop nears completion, the workers that evaluated the outliers continue to run while the rest of the workers have no computational load. This is consistent with the parfor loop distributing work statically. This is in
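
Not a MATLAB answer, but the scheduling idea the question is describing (handing out one iteration at a time so a few slow outliers do not strand idle workers) can be sketched in Python, where multiprocessing.Pool exposes the chunk size explicitly; the timings below are made up to mimic the 100x outliers.

    import random
    import time
    from multiprocessing import Pool

    def chunk_of_work(i):
        # a handful of iterations are ~100x slower, like the outliers described above
        time.sleep(1.0 if random.random() < 0.003 else 0.01)
        return i

    if __name__ == "__main__":
        with Pool() as pool:
            # chunksize=1 is dynamic scheduling: a worker fetches the next iteration
            # only when it is free, so a slow outlier never blocks a batch of fast ones
            results = list(pool.imap_unordered(chunk_of_work, range(1000), chunksize=1))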

Parallel event handling in C#

Posted by 纵然是瞬间 on 2020-01-10 04:29:30
Question: I'm developing a module that has to handle many events coming from an external system. I have to use a third-party class providing an event (OnNewMessage) that passes some parameters as input and two as output, and each event requires a significant amount of time to process. I'd like to serve these events in parallel, in order to avoid blocking the caller and to process multiple requests at once. Here is an example of my code:

    void Init()
    {
        provider.OnNewMessage += new OnMessageEventHandler