python-multithreading

ThreadPoolExecutor: how to limit the queue maxsize?

Submitted by 柔情痞子 on 2019-12-05 09:45:55
I am using the ThreadPoolExecutor class from the concurrent.futures package:

    from concurrent.futures import ThreadPoolExecutor

    def some_func(arg):
        # does some heavy lifting
        # outputs some results
        pass

    with ThreadPoolExecutor(max_workers=1) as executor:
        for arg in range(10000000):
            future = executor.submit(some_func, arg)

I need to limit the queue size somehow, as I don't want millions of futures to be created at once. Is there a simple way to do it, or should I stick to queue.Queue and the threading package to accomplish this?

Python's ThreadPoolExecutor doesn't have the feature you're looking for, but …
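
As an aside not in the original post: a common workaround is to gate executor.submit() with a semaphore that a done-callback releases, so at most a fixed number of futures are pending at any time. This is a minimal sketch; the bound of 100, the pool size, and the dummy workload are my own assumptions.

    import threading
    from concurrent.futures import ThreadPoolExecutor

    MAX_PENDING = 100  # assumed bound on outstanding futures

    def some_func(arg):
        return arg * 2  # placeholder for the real heavy lifting

    pending = threading.BoundedSemaphore(MAX_PENDING)

    def submit_bounded(executor, fn, *args):
        pending.acquire()                       # blocks once MAX_PENDING futures are queued
        future = executor.submit(fn, *args)
        future.add_done_callback(lambda _: pending.release())
        return future

    with ThreadPoolExecutor(max_workers=4) as executor:
        for arg in range(1000):
            submit_bounded(executor, some_func, arg)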

Chunksize irrelevant for multiprocessing / pool.map in Python?

Submitted by 梦想与她 on 2019-12-05 07:48:51
Question: I am trying to use the pool functionality of Python's multiprocessing module. Regardless of how I set the chunk size (under Windows 7 and Ubuntu - the latter, with 4 cores, shown below), the number of parallel threads seems to stay the same.

    from multiprocessing import Pool
    from multiprocessing import cpu_count
    import multiprocessing
    import time

    def f(x):
        print("ready to sleep", x, multiprocessing.current_process())
        time.sleep(20)
        print("slept with:", x, multiprocessing.current_process())

    if __name__ == '_…
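
For context (my sketch, not from the question): chunksize only controls how the input iterable is split into batches that are sent to the workers; the degree of parallelism is fixed by the processes argument when the Pool is created. The worker count of 4 and chunksize of 50 below are illustrative.

    import multiprocessing as mp

    def square(x):
        return x * x

    if __name__ == "__main__":
        # 4 worker processes, no matter what chunksize is passed to map()
        with mp.Pool(processes=4) as pool:
            # each worker pulls batches of 50 items; parallelism is still capped at 4
            results = pool.map(square, range(1000), chunksize=50)
        print(results[:5])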

What are the benefits of multi-processing when we already have multi-threading?

Submitted by 孤街醉人 on 2019-12-05 07:12:41
Question: I'm confused about whether using multiple processes for a web application will improve performance. Apache's mod_wsgi provides an option to set the number of processes to be started for the daemon process group. I used FastCGI with lighttpd before, and it also had an option to configure the maximum number of processes for each FastCGI application. While I don't know how multi-processing is better, I do know some drawbacks of it compared to a single-process, multi-threaded model. For example, logging …
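
The usual argument for multiple processes in CPython (my illustration, not from the post) is the global interpreter lock: threads in one process cannot execute Python bytecode simultaneously, so CPU-bound request handling only scales across cores when it is spread over processes. A rough timing sketch, with the worker counts and loop size chosen arbitrarily:

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def cpu_bound(n):
        # pure-Python loop, so the GIL is the bottleneck for threads
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls, label):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as ex:
            list(ex.map(cpu_bound, [2_000_000] * 4))
        print(label, round(time.perf_counter() - start, 2), "seconds")

    if __name__ == "__main__":
        timed(ThreadPoolExecutor, "4 threads:")      # roughly serialized by the GIL
        timed(ProcessPoolExecutor, "4 processes:")   # can use 4 cores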

threading.Event().wait() is not notified after calling event.set()

Submitted by 蹲街弑〆低调 on 2019-12-05 06:29:09
Question: I came across a very strange issue when using threading.Event() and can't understand what is going on. I must have missed something; can you please point it out? I have a Listener class which shares the same event object with a signal handler. Here is my simplified code:

    import threading, time

    class Listener(object):
        def __init__(self, event):
            super(Listener, self).__init__()
            self.event = event

        def start(self):
            while not self.event.is_set():
                print("Listener started, waiting for messages ..."…
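
Since the excerpt is cut off, here is a generic sketch (mine, not the poster's code) of the pattern in question: an Event shared between a worker loop and a SIGINT handler. Joining the worker with a timeout keeps the main thread returning to Python bytecode, so the signal handler actually gets a chance to run and call set().

    import signal
    import threading

    stop_event = threading.Event()

    def handle_sigint(signum, frame):
        stop_event.set()               # just flip the flag; keep the handler tiny

    def listener():
        while not stop_event.is_set():
            # wait with a timeout so the loop re-checks the flag regularly
            stop_event.wait(timeout=1.0)
        print("listener exiting")

    if __name__ == "__main__":
        signal.signal(signal.SIGINT, handle_sigint)
        worker = threading.Thread(target=listener)
        worker.start()
        while worker.is_alive():
            worker.join(timeout=0.5)   # a bounded join lets Ctrl+C be handled promptly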

Key echo in Python in a separate thread doesn't display first keystroke

Submitted by ﹥>﹥吖頭↗ on 2019-12-05 06:18:51
I would try to post a minimal working example, but unfortunately this problem just requires a lot of pieces, so I have stripped it down as best I can. First of all, I'm using a simple script that simulates pressing keys through a function call. This is tweaked from here.

    import ctypes

    SendInput = ctypes.windll.user32.SendInput
    PUL = ctypes.POINTER(ctypes.c_ulong)

    class KeyBdInput(ctypes.Structure):
        _fields_ = [("wVk", ctypes.c_ushort),
                    ("wScan", ctypes.c_ushort),
                    ("dwFlags", ctypes.c_ulong),
                    ("time", ctypes.c_ulong),
                    ("dwExtraInfo", PUL)]

    class HardwareInput(ctypes.Structure):
        _fields_ = [("uMsg"…
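
As a side note not from the question: if the full SendInput structures are more machinery than you need, Windows also exposes the older user32 keybd_event call, which takes a virtual-key code directly. A minimal Windows-only sketch; the key code, delay, and function name are my own choices.

    import ctypes
    import time

    KEYEVENTF_KEYUP = 0x0002   # standard Win32 flag for a key-up event
    VK_A = 0x41                # virtual-key code for the 'A' key

    def press_key(vk_code):
        user32 = ctypes.windll.user32
        user32.keybd_event(vk_code, 0, 0, 0)                # key down
        time.sleep(0.05)
        user32.keybd_event(vk_code, 0, KEYEVENTF_KEYUP, 0)  # key up

    if __name__ == "__main__":
        time.sleep(2)   # time to focus a text field before the key fires
        press_key(VK_A)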

To execute a function every x minutes: sched or threading.Timer?

Submitted by 北城余情 on 2019-12-05 04:10:27
I need to schedule the execution of a given method every x minutes. I found two ways to do it: the first uses the sched module, and the second uses threading.Timer.

First method:

    import sched, time

    s = sched.scheduler(time.time, time.sleep)

    def do_something(sc):
        print "Doing stuff..."
        # do your stuff
        sc.enter(60, 1, do_something, (sc,))

    s.enter(60, 1, do_something, (s,))
    s.run()

The second:

    import threading

    def do_something(sc):
        print "Doing stuff..."
        # do your stuff
        t = threading.Timer(0.5, do_something).start()

    do_something(sc)

What's the difference, and is one of them better than the other? …
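
For reference, here is a Python 3 sketch (mine, not from the question) of the Timer approach with the obvious fixes applied: the callback takes no stray argument and reschedules itself each time it runs. The 60-second interval is just an example. It also highlights the design difference between the two options: every Timer fires in a fresh thread, whereas sched.run() blocks and executes each event in the calling thread.

    import threading

    INTERVAL_SECONDS = 60  # assumed interval

    def do_something():
        print("Doing stuff...")
        # schedule the next run before returning so the cycle keeps going
        threading.Timer(INTERVAL_SECONDS, do_something).start()

    do_something()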

Limiting number of HTTP requests per second on Python

Submitted by 南笙酒味 on 2019-12-05 03:08:37
Question: I've written a script that fetches URLs from a file and sends HTTP requests to all the URLs concurrently. I now want to limit the number of HTTP requests per second and the bandwidth per interface ( eth0 , eth1 , etc.) in a session. Is there any way to achieve this in Python?

Answer 1: You could use a Semaphore object, which is part of the standard Python library: python doc. Or, if you want to work with threads directly, you could use wait([timeout]). There is no library bundled with Python which can work …
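
To make the semaphore suggestion concrete, here is a rough sketch of my own (not from the answer): a background thread releases one "token" per interval, and each request must acquire a token first. The rate of 5 requests per second and the example URL are assumptions; this simple version allows bursts after idle periods and does nothing about per-interface bandwidth.

    import threading
    import time
    import urllib.request

    MAX_PER_SECOND = 5
    tokens = threading.Semaphore(MAX_PER_SECOND)

    def refill():
        while True:
            time.sleep(1.0 / MAX_PER_SECOND)
            tokens.release()                 # hand out one new token per interval

    def fetch(url):
        tokens.acquire()                     # blocks until a token is available
        with urllib.request.urlopen(url) as resp:
            print(url, resp.status)

    if __name__ == "__main__":
        threading.Thread(target=refill, daemon=True).start()
        urls = ["https://example.com"] * 10
        threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
        for t in threads:
            t.start()
        for t in threads:
            t.join()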

Where is the memory leak? How to timeout threads during multiprocessing in python?

Submitted by 隐身守侯 on 2019-12-05 02:24:01
It is unclear how to properly time out workers of joblib's Parallel in Python. Others have had similar questions here, here, here, and here. In my example I am using a pool of 50 joblib workers with the threading backend.

Parallel call (threading):

    output = Parallel(n_jobs=50, backend='threading')(delayed(get_output)(INPUT) for INPUT in list)

Here, Parallel hangs without errors as soon as len(list) <= n_jobs, but only when n_jobs => -1. To circumvent this issue, people give instructions on how to create a timeout decorator for the function passed to Parallel ( get_output(INPUT) ) in the …
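
One commonly suggested alternative (my sketch, not the poster's code, and not joblib's own API) is to drop down to multiprocessing.Pool and enforce a per-task timeout via AsyncResult.get(timeout=...); a task that runs too long raises multiprocessing.TimeoutError instead of hanging the whole call. The 5-second limit and dummy workload are assumptions.

    import multiprocessing
    import time

    def get_output(x):
        time.sleep(x)          # stand-in for the real work
        return x

    if __name__ == "__main__":
        inputs = [1, 2, 10]
        with multiprocessing.Pool(processes=3) as pool:
            async_results = [pool.apply_async(get_output, (i,)) for i in inputs]
            for i, res in zip(inputs, async_results):
                try:
                    print(i, "->", res.get(timeout=5))
                except multiprocessing.TimeoutError:
                    print(i, "-> timed out")
            # leaving the with-block terminates the pool, including any stuck worker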

Python: tkinter to display video from webcam and do a QR scan

Submitted by 这一生的挚爱 on 2019-12-05 02:04:47
I have been trying to create a tkinter toplevel window that streams video from the webcam and does a QR scan. I got the QR-scan code from SO, and other code that just updates images from the webcam on a tkinter label instead of streaming video, and I tried to combine the two: a toplevel window with a label updating images from the webcam and a close button that closes the window. While it streams the images, it should scan for a QR code, and if a scan is successful, the webcam and the toplevel window get closed. Here is what I tried:

    import cv2
    import cv2.cv as cv
    import numpy
    import …
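
Since the posted code is cut off, here is a bare-bones sketch of my own covering just the streaming half of the task: webcam frames pushed into a tkinter Label with root.after(). It assumes opencv-python and Pillow are installed and leaves the QR-decoding step out.

    import cv2
    import tkinter as tk
    from PIL import Image, ImageTk

    root = tk.Tk()
    label = tk.Label(root)
    label.pack()
    cap = cv2.VideoCapture(0)

    def update_frame():
        ok, frame = cap.read()
        if ok:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV delivers BGR
            photo = ImageTk.PhotoImage(Image.fromarray(rgb))
            label.config(image=photo)
            label.image = photo        # keep a reference so the image isn't garbage-collected
        root.after(30, update_frame)   # poll the camera roughly every 30 ms

    tk.Button(root, text="Close", command=root.destroy).pack()
    update_frame()
    root.mainloop()
    cap.release()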

Does using the subprocess module release the python GIL?

Submitted by 末鹿安然 on 2019-12-05 01:19:15
When calling a Linux binary which takes a relatively long time through Python's subprocess module, does this release the GIL? I want to parallelise some code which calls a binary program from the command line. Is it better to use threads (through threading and a multiprocessing.pool.ThreadPool) or multiprocessing? My assumption is that if subprocess releases the GIL, then the threading option is the better choice.

Answer: "When calling a Linux binary which takes a relatively long time through Python's subprocess module, does this release the GIL?" Yes, it releases the Global Interpreter Lock (GIL) in the …
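
Because the blocking wait for the child process releases the GIL, a plain thread pool is usually enough to run several external binaries in parallel. A small sketch of my own; the sleep 2 command stands in for the real binary, so this only runs where a sleep executable exists.

    import subprocess
    import time
    from multiprocessing.pool import ThreadPool

    def run_binary(seconds):
        # the thread blocks in the OS while the child runs; the GIL is released meanwhile
        return subprocess.run(["sleep", str(seconds)], check=True).returncode

    if __name__ == "__main__":
        start = time.perf_counter()
        with ThreadPool(4) as pool:
            codes = pool.map(run_binary, [2, 2, 2, 2])
        elapsed = round(time.perf_counter() - start, 1)
        print(codes, "elapsed:", elapsed, "seconds")   # about 2 s total, not 8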