python-multithreading

Semaphores in Python

帅比萌擦擦* submitted on 2019-11-29 11:59:51
Question: I started programming in Python a few weeks ago and was trying to use semaphores to synchronize two simple threads, for learning purposes. Here is what I've got:

import threading

sem = threading.Semaphore()

def fun1():
    while True:
        sem.acquire()
        print(1)
        sem.release()

def fun2():
    while True:
        sem.acquire()
        print(2)
        sem.release()

t = threading.Thread(target=fun1)
t.start()
t2 = threading.Thread(target=fun2)
t2.start()

But it keeps printing just 1's. How can I interleave the prints?

Answer 1: It
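The answer above is truncated, but the usual fix is to make each thread hand control to the other. A minimal sketch (my own illustration, not the original answer) using two semaphores so the threads strictly alternate:

import threading
import time

sem1 = threading.Semaphore(1)  # fun1 may print first
sem2 = threading.Semaphore(0)  # fun2 must wait for fun1's signal

def fun1():
    while True:
        sem1.acquire()
        print(1)
        sem2.release()  # let fun2 take a turn

def fun2():
    while True:
        sem2.acquire()
        print(2)
        sem1.release()  # hand control back to fun1

threading.Thread(target=fun1, daemon=True).start()
threading.Thread(target=fun2, daemon=True).start()
time.sleep(1)  # let the threads alternate for a moment

With a single semaphore, the thread that releases it can immediately reacquire it before the other thread wakes up, which is why the original code prints only 1's.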

How to terminate a Python thread without continuously checking a flag

▼魔方 西西 submitted on 2019-11-29 11:06:22
import threading
import subprocess
import time

class My_Thread(threading.Thread):

    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        print "Starting " + self.name
        cmd = ["bash", "process.sh"]
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for line in iter(p.stdout.readline, b''):
            print("-- " + line.rstrip())
        print "Exiting " + self.name

    def stop(self):
        print "Trying to stop thread"
        self.run = False

thr = My_Thread()
thr.start()
time.sleep(30)
thr.stop()
thr.join()

So I have a thread like the one shown above. It actually runs on Windows; process.sh is a bash script which runs in Cygwin and takes around 5
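Since readline() blocks inside a C call, a flag set by stop() never gets checked (and self.run = False merely shadows the run method). A minimal sketch of one common alternative (my own suggestion, not the thread's accepted answer): keep a handle to the child process and terminate it from stop(), which ends the pipe and unblocks the read loop:

import subprocess
import threading

class StoppableThread(threading.Thread):  # hypothetical name

    def run(self):
        self.p = subprocess.Popen(["bash", "process.sh"],
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.STDOUT)
        for line in iter(self.p.stdout.readline, b''):
            print("-- " + line.rstrip().decode())

    def stop(self):
        # Killing the child ends its stdout stream, so readline()
        # returns b'' and run() falls out of the loop on its own.
        self.p.terminate()

A robust version would also guard against stop() being called before self.p exists.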

Are there any built-in functions which block on I/O that don't allow other threads to run?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-29 09:30:04
Question: I came across this interesting statement in the "Caveats" section of the documentation for the thread module today:

    Not all built-in functions that may block waiting for I/O allow other threads to run. (The most popular ones (time.sleep(), file.read(), select.select()) work as expected.)

Pretty much everywhere else I've ever seen Python threads discussed, there has always been an assumption that all built-in functions which do I/O will release the GIL, meaning other threads can run while the
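For the functions the docs list as working as expected, the effect is easy to observe: if time.sleep() releases the GIL, N threads each sleeping one second finish in about one second total, not N seconds. A small experiment (my own illustration, not from the question):

import threading
import time

def sleeper():
    time.sleep(1)  # releases the GIL while blocked

start = time.time()
threads = [threading.Thread(target=sleeper) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("elapsed: %.2f s" % (time.time() - start))  # ~1.0, not ~4.0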

How to return data from a CherryPy BackgroundTask running as fast as possible

杀马特。学长 韩版系。学妹 submitted on 2019-11-29 08:43:11
I'm building a web service for iterative batch processing of data using CherryPy. The ideal workflow is as follows:

1. Users POST data to the service for processing.
2. When the processing job is free, it collects the queued data and starts another iteration.
3. While the job is processing, users POST more data to the queue for the next iteration.
4. Once the current iteration is finished, the results are passed back so that users can GET them using the same API, and the job starts again with the next batch of queued data.

The key consideration here is that the processing should run as fast as possible
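CherryPy ships a helper for exactly this loop shape: cherrypy.process.plugins.BackgroundTask runs a function repeatedly on its own thread. A minimal sketch (the queue/result layout and do_work are my own placeholders, not from the question):

import queue

import cherrypy
from cherrypy.process.plugins import BackgroundTask

pending = queue.Queue()  # data POSTed by users
results = []             # finished iterations, served to GET requests

def do_work(batch):
    # placeholder for the real processing step
    return sum(batch)

def run_iteration():
    batch = []
    while not pending.empty():  # drain everything queued so far
        batch.append(pending.get())
    if batch:
        results.append(do_work(batch))

# A short interval approximates "start the next iteration as soon as possible"
task = BackgroundTask(interval=0.1, function=run_iteration, bus=cherrypy.engine)
task.start()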

How to thread multiple subprocess instances in Python 2.7?

爱⌒轻易说出口 submitted on 2019-11-29 08:13:30
I have three commands that would otherwise be easily chained together on the command line like so:

$ echo foo | firstCommand - | secondCommand - | thirdCommand - > finalOutput

In other words, firstCommand processes foo from standard input and pipes the result to secondCommand, which in turn processes that input and pipes its output to thirdCommand, which does its processing and redirects its output to the file finalOutput. I have been trying to recapitulate this in a Python script, using threading. I'd like to use Python in order to manipulate the output from firstCommand before passing it
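One way to recapitulate the pipeline (a sketch using the question's placeholder command names; the transform argument is where custom Python manipulation of firstCommand's output would go):

import subprocess
import threading

p1 = subprocess.Popen(["firstCommand", "-"], stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE)
p2 = subprocess.Popen(["secondCommand", "-"], stdin=subprocess.PIPE,
                      stdout=subprocess.PIPE)
p3 = subprocess.Popen(["thirdCommand", "-"], stdin=subprocess.PIPE,
                      stdout=open("finalOutput", "wb"))

def pump(src, dst, transform=lambda line: line):
    # Copy lines from one process to the next, optionally editing them
    for line in iter(src.readline, b""):
        dst.write(transform(line))
    dst.close()

t1 = threading.Thread(target=pump, args=(p1.stdout, p2.stdin))
t2 = threading.Thread(target=pump, args=(p2.stdout, p3.stdin))
t1.start()
t2.start()

p1.stdin.write(b"foo\n")
p1.stdin.close()
t1.join()
t2.join()

Running the copies on threads avoids the deadlock that naive sequential reads and writes can hit once a pipe buffer fills.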

Process vs. Thread with regard to using Queue()/deque() and a class variable for communication and "poison pill"

跟風遠走 submitted on 2019-11-29 08:13:16
I would like to create either a Thread or a Process which runs forever in a while True loop. I need to send data to and receive data from the worker in the form of queues, either a multiprocessing.Queue() or a collections.deque(). I prefer to use collections.deque() as it is significantly faster. I also need to be able to kill the worker eventually (since it runs in a while True loop). Here is some test code I've put together to try to understand the differences between Threads, Processes, Queues, and deques:

import time
from multiprocessing import Process, Queue
from threading import Thread
from
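The standard shutdown idiom for a while True worker is a "poison pill": a sentinel value pushed onto the input queue that tells the worker to exit. A thread-based sketch (my own illustration; a process version would need multiprocessing.Queue and a picklable sentinel such as a string, since an object() identity check does not survive crossing process boundaries):

import queue
import threading

POISON = object()  # sentinel; never a legitimate work item

def worker(inbox, outbox):
    while True:
        item = inbox.get()
        if item is POISON:    # swallow the pill and exit the loop
            break
        outbox.put(item * 2)  # placeholder processing

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for i in range(3):
    inbox.put(i)
inbox.put(POISON)  # ask the worker to stop
t.join()
print([outbox.get() for _ in range(3)])  # [0, 2, 4]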

Sharing a :memory: database between different threads in Python using the sqlite3 package

試著忘記壹切 submitted on 2019-11-29 02:56:25
Question: I would like to create a :memory: database in Python and access it from different threads. Essentially something like:

import sqlite3
import threading

class T(threading.Thread):
    def run(self):
        self.conn = sqlite3.connect(':memory:')
        # do stuff with the database

for i in xrange(N):
    T().start()

and have all the connections refer to the same database. I am aware of passing check_same_thread=False to the connect function and sharing the connection between threads, but would like to avoid doing that if possible. Thanks for any
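Each :memory: connection normally gets its own private database, which is why the code above creates N separate databases. SQLite's shared-cache mode lets every connection in the process open the same named in-memory database via a URI, with each thread holding its own connection so check_same_thread never comes up. A sketch (assuming Python 3.4+ for the uri=True parameter; the database name is arbitrary):

import sqlite3
import threading

URI = "file:shared_db?mode=memory&cache=shared"

# Keep one connection open for the whole run: the in-memory database
# is destroyed when its last connection closes.
anchor = sqlite3.connect(URI, uri=True)
anchor.execute("CREATE TABLE t (x)")

write_lock = threading.Lock()  # shared-cache writers can hit table locks

def worker(n):
    conn = sqlite3.connect(URI, uri=True)
    with write_lock:
        conn.execute("INSERT INTO t VALUES (?)", (n,))
        conn.commit()
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(anchor.execute("SELECT COUNT(*) FROM t").fetchone())  # (4,)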

Pass keyword arguments to target function in Python threading.Thread

纵然是瞬间 submitted on 2019-11-29 02:51:32
I want to pass named arguments to the target function while creating a Thread object. Following is the code that I have written:

import threading

def f(x=None, y=None):
    print x, y

t = threading.Thread(target=f, args=(x=1, y=2,))
t.start()

I get a syntax error for "x=1" on line 6. I want to know how I can pass keyword arguments to the target function.

Answer 1:

t = threading.Thread(target=f, kwargs={'x': 1, 'y': 2})

This passes a dictionary with the keyword arguments' names as keys and the argument values as values. The other answer above won't work, because the "x" and "y" are undefined
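Putting the answer's fix into a complete, runnable form (Python 3 print syntax; both spellings shown for comparison):

import threading

def f(x=None, y=None):
    print(x, y)

# Keyword arguments travel in the kwargs dict...
t = threading.Thread(target=f, kwargs={'x': 1, 'y': 2})
t.start()
t.join()

# ...while plain positional arguments travel in the args tuple.
t2 = threading.Thread(target=f, args=(1, 2))
t2.start()
t2.join()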

numpy and Global Interpreter Lock

☆樱花仙子☆ submitted on 2019-11-29 01:13:48
I am about to write some computationally intensive Python code that'll almost certainly spend most of its time inside numpy's linear algebra functions. The problem at hand is embarrassingly parallel. Long story short, the easiest way for me to take advantage of that would be by using multiple threads. The main barrier is almost certainly going to be the Global Interpreter Lock (GIL). To help design this, it would be useful to have a mental model for which numpy operations can be expected to release the GIL for their duration. To this end, I'd appreciate any rules of thumb, dos and don'ts,
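The usual rule of thumb (worth verifying for a specific build) is that numpy releases the GIL while it is inside external BLAS/LAPACK routines, so large matrix products on threads genuinely run in parallel. A quick way to test that on a given machine:

import numpy as np
from concurrent.futures import ThreadPoolExecutor

mats = [np.random.rand(1000, 1000) for _ in range(8)]

def square(m):
    return m.dot(m)  # spends its time in BLAS, which drops the GIL

# If the GIL were held throughout, four workers would be no faster
# than one; time this against max_workers=1 to check.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, mats))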

Can't create new threads in Python

那年仲夏 submitted on 2019-11-29 00:17:11
import threading

threads = []
for n in range(0, 60000):
    t = threading.Thread(target=function, args=(x, n))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

This works well for ranges up to about 800 on my laptop, but if I increase the range beyond 800 I get the error "can't create new thread". How can I control the number of threads that get created, or make this work some other way, such as with a timeout? I tried using the threading.BoundedSemaphore class, but that doesn't seem to work properly.

Answer 1: The problem is that no major platform (as of mid-2013) will let you create anywhere near this number of threads.
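The standard way to cap concurrency is to reuse a fixed pool of threads instead of creating one per task. A sketch (function and x are the question's placeholders, stubbed here so the example runs):

from concurrent.futures import ThreadPoolExecutor

def function(x, n):  # stand-in for the question's worker
    return x + n

x = 0

# 60,000 tasks, but never more than 100 threads alive at once
with ThreadPoolExecutor(max_workers=100) as pool:
    futures = [pool.submit(function, x, n) for n in range(60000)]
    results = [f.result() for f in futures]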