multiprocess

Fill up a dictionary in parallel with multiprocessing

妖精的绣舞 submitted on 2019-12-01 17:28:54
Yesterday I asked a question: Reading data in parallel with multiprocess. I got very good answers, and I implemented the solution mentioned in the answer I marked as correct:

    def read_energies(motif):
        os.chdir("blabla/working_directory")
        complx_ener = pd.DataFrame()
        # complex function to fill that dataframe
        lig_ener = pd.DataFrame()
        # complex function to fill that dataframe
        return motif, complx_ener, lig_ener

    COMPLEX_ENERGIS = {}
    LIGAND_ENERGIES = {}
    p = multiprocessing.Pool(processes=CPU)
    for x in p.imap_unordered(read_energies, peptide_kd.keys()):
        COMPLEX_ENERGIS[x[0]] = x[1]
        LIGAND_ENERGIES
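For reference, here is a self-contained sketch of the same imap_unordered pattern. The motif list and the DataFrame-building steps are hypothetical stand-ins, since the original read_energies() body and peptide_kd are not shown in full:

    import multiprocessing

    import pandas as pd

    def read_energies(motif):
        # Stand-ins for the expensive DataFrame-building steps in the question
        complx_ener = pd.DataFrame({"energy": [1.0, 2.0]})
        lig_ener = pd.DataFrame({"energy": [0.5, 0.7]})
        return motif, complx_ener, lig_ener

    if __name__ == "__main__":
        motifs = ["AAA", "ACD", "WYV"]  # hypothetical stand-in for peptide_kd.keys()
        complex_energies = {}
        ligand_energies = {}
        with multiprocessing.Pool(processes=4) as pool:
            for motif, complx, lig in pool.imap_unordered(read_energies, motifs):
                complex_energies[motif] = complx
                ligand_energies[motif] = lig

Because imap_unordered yields results as workers finish, both dictionaries are filled in the parent process in whatever order the motifs complete.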

fgets() call with redirection get abnormal data stream

最后都变了- submitted on 2019-12-01 05:21:24
I was about to write a shell in C. Here is the source code:

    #include <unistd.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <stdlib.h>

    int getcmd(char *buf, int nbuf)
    {
        memset(buf, 0, nbuf);
        fgets(buf, nbuf, stdin);
        printf("pid: %d, ppid: %d\n", getpid(), getppid());
        printf("buf: %s", buf);
        if (buf[0] == 0) { // EOF
            printf("end of getcmd\n");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        static char buf[100];
        int fd, r, ret;
        // Read and run input commands.
        while ((ret = getcmd(buf, sizeof(buf))) >= 0) {
            if (fork() == 0)
                exit(0);
            wait(&r);
        }
        exit(0);
    }

When I

Parallelize these nested for loops in python

▼魔方 西西 submitted on 2019-12-01 04:46:52
I have a multidimensional array (result) that should be filled by some nested loops. Function fun() is a complex and time-consuming function. I want to fill my array elements in parallel, so I can use all of my system's processing power. Here's the code:

    import numpy as np

    def fun(x, y, z):
        # time-consuming computation...
        # ...
        return output

    dim1 = 10
    dim2 = 20
    dim3 = 30
    result = np.zeros([dim1, dim2, dim3])

    for i in xrange(dim1):
        for j in xrange(dim2):
            for k in xrange(dim3):
                result[i, j, k] = fun(i, j, k)

My question is: can I parallelize this code or not? If yes, how? I'm using
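One possible approach (a sketch, not taken from the question's answers) is to flatten the (i, j, k) index space and map it over a process pool; this assumes fun() depends only on its arguments and is picklable. The fun() body below is a placeholder:

    import itertools
    import multiprocessing

    import numpy as np

    dim1, dim2, dim3 = 10, 20, 30

    def fun(x, y, z):
        # placeholder for the expensive computation
        return x + y * z

    def evaluate(idx):
        # Unpack one (i, j, k) tuple and return it together with the result,
        # so the parent process knows where to store the value.
        i, j, k = idx
        return idx, fun(i, j, k)

    if __name__ == "__main__":
        result = np.zeros([dim1, dim2, dim3])
        indices = itertools.product(range(dim1), range(dim2), range(dim3))
        with multiprocessing.Pool() as pool:
            for (i, j, k), value in pool.imap_unordered(evaluate, indices):
                result[i, j, k] = value

Each worker computes one cell; only the parent writes into result, since child processes cannot modify the parent's array directly.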

Is it possible to know which SciPy / NumPy functions run on multiple cores?

ε祈祈猫儿з submitted on 2019-12-01 04:27:51
Question: I am trying to figure out explicitly which of the functions in SciPy/NumPy run on multiple processors. I can, e.g., read in the SciPy reference manual that SciPy uses this, but I am more interested in exactly which functions do run parallel computations, because not all of them do. The dream scenario would of course be if it is included when you type help(SciPy.foo), but this does not seem to be the case. Any help will be much appreciated. Best, Matias

Answer 1: I think the question is better
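As a practical (if indirect) check, not drawn from the truncated answer above, you can inspect which BLAS/LAPACK backend NumPy was built against, since that backend is what typically provides the multi-core kernels behind functions such as numpy.dot:

    import numpy as np

    # Prints the build configuration, including the BLAS/LAPACK backend
    # (e.g. OpenBLAS or MKL, which usually run on multiple cores).
    np.show_config()

Comparing timings with the backend's thread count pinned to one (for example by setting the OMP_NUM_THREADS environment variable before starting Python) is another rough way to see whether a given function actually uses multiple cores.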

Performance difference for multi-thread and multi-process

只谈情不闲聊 submitted on 2019-11-30 18:55:11
A few years ago, in the Windows environment, I did some testing by running multiple instances of an application that is CPU-computation intensive, memory-access intensive, and I/O-access intensive. I developed 2 versions: one running under multi-processing, the other running under multi-threading. I found that the performance is much better for multi-processing. I read somewhere else (but I can't remember the site) that the reason is that under multi-threading the threads are "fighting" for a single memory pipeline and I/O pipeline, which makes the performance worse compared to multi

How to use printf() in multiple threads

旧巷老猫 submitted on 2019-11-30 11:32:11
Question: I am implementing a multithreaded program that uses different cores, and many threads are executed simultaneously. Each thread makes a printf() call, and the result is not readable. How can I make printf() atomic, so that a printf() call in one thread doesn't conflict with a printf() call in another?

Answer 1: POSIX Specifications. The POSIX specification includes these functions: getc_unlocked(), getchar_unlocked(), putc_unlocked(), putchar_unlocked(). Versions of the functions getc(), getchar(), putc(), and putchar() respectively named getc_unlocked(), getchar_unlocked(), putc_unlocked(), and putchar

Python: concurrent.futures How to make it cancelable?

别来无恙 submitted on 2019-11-30 08:19:58
Question: Python's concurrent.futures and ProcessPoolExecutor provide a neat interface to schedule and monitor tasks. Futures even provide a .cancel() method:

    cancel(): Attempt to cancel the call. If the call is currently being executed and cannot be cancelled then the method will return False, otherwise the call will be cancelled and the method will return True.

Unfortunately, in a similar question (concerning asyncio) the answer claims running tasks are uncancelable using this snippet of the
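A small sketch illustrating the documented behaviour of Future.cancel() with a ProcessPoolExecutor (the task and timings here are hypothetical; the exact results can vary slightly by Python version, because the executor pre-queues a couple of work items ahead of the workers):

    import time
    from concurrent.futures import ProcessPoolExecutor

    def slow_task(n):
        time.sleep(2)
        return n * n

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=1) as executor:
            futures = [executor.submit(slow_task, n) for n in range(4)]
            time.sleep(0.5)  # let the single worker start the first task
            print([f.cancel() for f in futures])
            # Typically the first future (and any already handed to the worker
            # machinery) reports False, while futures still pending report True.

Tasks whose function is already executing in a worker process keep running to completion; cancel() only prevents work that has not started yet.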

Python, using multiprocess is slower than not using it

不打扰是莪最后的温柔 submitted on 2019-11-30 05:12:45
After spending a lot of time trying to wrap my head around multiprocessing, I came up with this code, which is a benchmark test:

Example 1:

    from multiprocessing import Process

    class Alter(Process):
        def __init__(self, word):
            Process.__init__(self)
            self.word = word
            self.word2 = ''

        def run(self):
            # Alter string + test processing speed
            for i in range(80000):
                self.word2 = self.word2 + self.word

    if __name__ == '__main__':
        # Send a string to be altered
        thread1 = Alter('foo')
        thread2 = Alter('bar')
        thread1.start()
        thread2.start()
        # wait for both to finish
        thread1.join()
        thread2.join()
        print(thread1.word2)
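For comparison, a minimal single-process baseline doing the same string-building work (a sketch, not necessarily the author's second example, which is cut off above):

    import time

    def alter(word):
        # Same work as Alter.run(), done sequentially in the current process
        word2 = ''
        for _ in range(80000):
            word2 = word2 + word
        return word2

    if __name__ == '__main__':
        start = time.time()
        alter('foo')
        alter('bar')
        print('sequential time:', time.time() - start)

Note also that because run() executes in a child process, the attribute thread1.word2 printed in the parent will still be the empty string; the child's modifications are not copied back.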
