python-multiprocessing

Can I pass a queue object to multiprocessing Pool's starmap method? [duplicate]

Question: This question already has answers here: Sharing a result queue among several processes (2 answers). Closed 6 months ago.

There are many questions on SO about passing multiple arguments to the starmap method of Python's multiprocessing Pool. What I want to ask is whether I can pass a queue object to that method so it can be shared between different processes. I am able to do this using threading and the multiprocessing Process class, but not using Pool's starmap method.

    from multiprocessing import
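
A minimal sketch of the usual workaround, not drawn from the question's truncated code: a raw multiprocessing.Queue cannot be passed through starmap (it refuses to be pickled as an argument), but a Manager().Queue() proxy can. The worker function and payload below are illustrative.

    from multiprocessing import Pool, Manager

    def worker(item, q):
        # Each pool worker pushes its result onto the shared proxy queue.
        q.put(item * item)

    if __name__ == '__main__':
        with Manager() as manager:
            q = manager.Queue()  # proxy object, safe to pickle into workers
            with Pool(4) as pool:
                pool.starmap(worker, [(i, q) for i in range(10)])
            while not q.empty():
                print(q.get())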

How to retrieve values from a function run in parallel processes?

Question: The multiprocessing module is quite confusing for Python beginners, especially those who have just migrated from MATLAB and been made lazy by its parallel computing toolbox. I have the following code, which takes ~80 seconds to run, and I want to shorten that time using Python's multiprocessing module.

    from time import time

    xmax = 100000000

    start = time()
    for x in range(xmax):
        y = ((x+5)**2+x-40)
        if y <= 0xf+1:
            print('Condition met at: ', y, x)
    end = time()
    tt = end-start  # total time
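
A minimal sketch of one common way to get values back from parallel workers, assuming the goal is to collect the hits from the loop above: split the range into chunks, let Pool.map return each worker's partial list, and flatten the lists in the parent. The chunking scheme is an illustrative choice, not from the question.

    from multiprocessing import Pool

    def scan(bounds):
        # Scan one sub-range and return the matches instead of printing them.
        lo, hi = bounds
        return [(x, (x + 5) ** 2 + x - 40) for x in range(lo, hi)
                if (x + 5) ** 2 + x - 40 <= 0xf + 1]

    if __name__ == '__main__':
        xmax, n = 100000000, 4
        step = xmax // n
        chunks = [(i * step, min((i + 1) * step, xmax)) for i in range(n)]
        with Pool(n) as pool:
            for part in pool.map(scan, chunks):
                for x, y in part:
                    print('Condition met at: ', y, x)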

Can I use map / imap / imap_unordered with functions with no arguments?

Question: Sometimes I need to use multiprocessing with functions that take no arguments. I wish I could do something like:

    from multiprocessing import Pool

    def f():  # no argument
        return 1

    # TypeError: f() takes no arguments (1 given)
    print Pool(2).map(f, range(10))

I could do Process(target=f, args=()), but I prefer the syntax of map / imap / imap_unordered. Is there a way to do that?

Answer 1: map's first argument should be a function, and it should accept one argument. This is mandatory because the
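
A minimal sketch of the standard workaround, given that Pool.map always supplies exactly one argument: wrap the zero-argument function in a module-level helper that ignores its input (a lambda would fail to pickle under the spawn start method). Written in Python 3 syntax, although the question uses Python 2.

    from multiprocessing import Pool

    def f():
        return 1

    def call_f(_):
        # Discard the dummy argument that map supplies.
        return f()

    if __name__ == '__main__':
        with Pool(2) as pool:
            print(pool.map(call_f, range(10)))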

Manage Python Multiprocessing with MongoDB

Question: I'm trying to run my code with a multiprocessing function, but Mongo keeps returning "MongoClient opened before fork. Create MongoClient with connect=False, or create client after forking." I really don't understand how I can adapt my code to this. Basically the structure is:

    db = MongoClient().database
    db.authenticate('user', 'password', mechanism='SCRAM-SHA-1')
    collectionW = db['words']
    collectionT = db['sinMemo']
    collectionL = db['sinLogic']

    def findW(word):
        rows = collectionw.find({"word"
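
A minimal sketch of the second option the warning suggests, creating the client after forking: give each pool worker its own MongoClient via an initializer. The connection string and database/collection names are assumptions mirroring the question; its authentication call is omitted for brevity.

    from multiprocessing import Pool
    from pymongo import MongoClient

    client = None

    def init_worker():
        # Runs once inside each child process, after the fork.
        global client
        client = MongoClient('mongodb://localhost')

    def findW(word):
        collectionW = client['database']['words']
        return collectionW.find_one({'word': word})

    if __name__ == '__main__':
        with Pool(4, initializer=init_worker) as pool:
            print(pool.map(findW, ['alpha', 'beta']))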

Multiprocessing code works upon import, breaks upon being called

Question: In a file called test.py I have

    print 'i am cow'
    import multi4
    print 'i am cowboy'

and in multi4.py I have

    import multiprocessing as mp

    manager = mp.Manager()
    print manager

I am confused by the way this code operates. At the command line, if I type python and then, in the Python environment, type import test, I get the expected behavior:

    Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information
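
A minimal sketch of the usual fix for this symptom on Windows, assuming it is the standard spawn re-import problem: the spawn start method re-imports the main module in every child, so anything that launches processes (mp.Manager() included) must run only under a __main__ guard. Written in Python 3 syntax; the question itself is Python 2.

    import multiprocessing as mp

    def main():
        # Safe: the Manager starts only when the file runs as a script,
        # not when a child process re-imports the module.
        manager = mp.Manager()
        print(manager)

    if __name__ == '__main__':
        main()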

Parallelize a nested for loop in Python to find the max value

Question: I've been struggling for some time to improve the execution time of this piece of code. Since the calculations are really time-consuming, I think the best solution would be to parallelize the code. The output could also be stored in memory and written to a file afterwards. I am new to both Python and parallelism, so I find it difficult to apply the concepts explained here and here. I also found this question, but I couldn't manage to figure out how to implement the same for my situation. I am
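
Since the question's own loop is not shown, this is a minimal sketch under an assumed shape: a nested loop whose result is a single maximum. Parallelize the outer loop with Pool.map, return each row's local best, and reduce in the parent; score() is a hypothetical stand-in for the real computation.

    from multiprocessing import Pool

    def score(i, j):
        # Placeholder objective; replace with the real calculation.
        return -(i - 3) ** 2 - (j - 7) ** 2

    def best_for_row(i):
        # Local reduction: the best (value, i, j) over the inner loop.
        return max((score(i, j), i, j) for j in range(100))

    if __name__ == '__main__':
        with Pool() as pool:
            value, i, j = max(pool.map(best_for_row, range(100)))
        print('max value %s at i=%s, j=%s' % (value, i, j))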

TypeError: 'tuple' object is not callable when calling multiprocessing in Python

Question: I am trying to execute the following script using multiprocessing and queues:

    from googlefinance import getQuotes
    from yahoo_finance import Share
    import multiprocessing

    class Stock:
        def __init__(self, symbol, q):
            self.symbol = symbol
            self.q = q

        def current_value(self):
            current_price = self.q.put(float(getQuotes(self.symbol)[0]['LastTradeWithCurrency']))
            return current_price

        def fundems(self):
            marketcap = self.q.put(Share(self.symbol).get_market_cap())
            bookvalue = self.q.put(Share(self.symbol)
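
The question cuts off before the Process calls, so the following is only a guess at the usual cause of this TypeError, with a sketch of the fix: the error typically appears when target is given a call result, target=obj.method(), or a (function, args) tuple instead of the callable itself. Pass the bound method unchanged and let Process call it. The quote value is a placeholder for the question's googlefinance lookup.

    import multiprocessing

    class Stock:
        def __init__(self, symbol, q):
            self.symbol = symbol
            self.q = q

        def current_value(self):
            self.q.put((self.symbol, 42.0))  # placeholder quote

    if __name__ == '__main__':
        q = multiprocessing.Queue()
        stocks = [Stock(s, q) for s in ('AAPL', 'MSFT')]
        # target takes the bound method itself: no parentheses, no tuple.
        procs = [multiprocessing.Process(target=s.current_value) for s in stocks]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        while not q.empty():
            print(q.get())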

Why is scikit-learn neighbors slower with n_jobs > 1 and forkserver?

Question: I'm using scikit-learn for Metaheuristics exercises and I have a doubt. I need to use knn, so I have a KNearestNeighbors object with n_jobs=-1. As the docs said, I have to set the multiprocessing start method to forkserver. But the knn is much slower with n_jobs=-1 than with n_jobs=1. This is a piece of the code:

    ### Some initialization here ###

    skf = StratifiedKFold(target, n_folds=2, shuffle=True)
    for train_index, test_index in skf:
        data_train, data_test = data[train_index], data[test_index]
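
A minimal sketch to measure the comparison directly, using synthetic data as an assumption (the question's dataset is not shown): time the same classifier with n_jobs=1 and n_jobs=-1. On small folds, the overhead of spinning up workers and shipping data to them can easily outweigh the parallel speedup, which is one common explanation for this slowdown.

    from time import time
    from sklearn.datasets import make_classification
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    for n_jobs in (1, -1):
        knn = KNeighborsClassifier(n_neighbors=5, n_jobs=n_jobs)
        start = time()
        knn.fit(X, y)
        knn.predict(X)
        print('n_jobs=%2d: %.3fs' % (n_jobs, time() - start))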
