python-multithreading

How to pass a sqlite Connection Object through multiprocessing

久未见 submitted on 2020-01-03 13:05:32
Question: I'm testing out how multiprocessing works and would like an explanation of why I'm getting this exception, and whether it is even possible to pass the sqlite3 Connection object this way:

    import sqlite3
    from multiprocessing import Queue, Process

    def sql_query_worker(conn, query_queue):
        # Creating the Connection Object here works...
        #conn = sqlite3.connect('test.db')
        while True:
            query = query_queue.get()
            if query == 'DO_WORK_QUIT':
                break
            c = conn.cursor()
            print('executing query: ', query)
            c.execute…
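A minimal sketch of the usual workaround (my own example, not the asker's code): sqlite3 Connection objects cannot be pickled, so pass the database path to the child process and open the connection inside the worker instead.

    import sqlite3
    from multiprocessing import Process, Queue

    def sql_query_worker(db_path, query_queue):
        # Open the connection inside the child process; sqlite3
        # Connection objects cannot be pickled across process boundaries.
        conn = sqlite3.connect(db_path)
        while True:
            query = query_queue.get()
            if query == 'DO_WORK_QUIT':
                break
            c = conn.cursor()
            print('executing query: ', query)
            c.execute(query)
            conn.commit()
        conn.close()

    if __name__ == '__main__':
        q = Queue()
        p = Process(target=sql_query_worker, args=('test.db', q))
        p.start()
        q.put('CREATE TABLE IF NOT EXISTS t (x INTEGER)')
        q.put('DO_WORK_QUIT')
        p.join()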

Why does threading increase processing time?

一个人想着一个人 submitted on 2020-01-02 07:41:09
Question: I was working on multitasking a basic 2-D DLA simulation. Diffusion-Limited Aggregation (DLA) is when you have particles performing a random walk that aggregate when they touch the current aggregate. In the simulation, I have 10,000 particles walking in a random direction at each step. I use a pool of workers and a queue to feed them. I feed them with a list of particles and each worker performs the method .updatePositionAndggregate() on each particle. If I have one worker, I feed it with a list…
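The usual answer, sketched with my own illustrative code (not the asker's): a particle update is CPU-bound, and under CPython the GIL lets only one thread execute bytecode at a time, so threads add locking overhead without parallelism; a process pool actually runs the updates in parallel.

    import random
    from multiprocessing import Pool

    def update_position(p):
        # CPU-bound step: move the particle one unit in a random direction.
        x, y = p
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        return (x + dx, y + dy)

    if __name__ == '__main__':
        particles = [(0, 0)] * 10000
        with Pool() as pool:  # separate processes sidestep the GIL
            particles = pool.map(update_position, particles)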

Does using the subprocess module release the python GIL?

…衆ロ難τιáo~ submitted on 2020-01-02 01:09:30
Question: When calling a Linux binary which takes a relatively long time through Python's subprocess module, does this release the GIL? I want to parallelise some code which calls a binary program from the command line. Is it better to use threads (through threading and a multiprocessing.pool.ThreadPool) or multiprocessing? My assumption is that if subprocess releases the GIL, then the threading option is better. Answer 1: When calling a Linux binary which takes a relatively long time through…
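A sketch of the threading option (my own example, assuming the answer's point that the interpreter releases the GIL while blocked waiting for the child process):

    import subprocess
    from multiprocessing.pool import ThreadPool

    def run_binary(arg):
        # subprocess.run blocks in a system call waiting for the child;
        # the GIL is released during the wait, so threads overlap fine.
        return subprocess.run(['sleep', arg], check=True).returncode

    if __name__ == '__main__':
        with ThreadPool(4) as pool:
            results = pool.map(run_binary, ['1', '1', '1', '1'])
        print(results)  # [0, 0, 0, 0]; four 1-second sleeps take ~1s total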

AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for'

我的梦境 submitted on 2020-01-01 10:52:08
Question: I am trying to read meta info from a celery task in case of a timeout (i.e. if the task is not finished in the given time). I have 3 celery workers. When I execute tasks on the 3 workers serially, my timeout logic (getting meta info from the redis backend) works fine. But when I execute tasks in parallel using threads, I get the error 'AttributeError: 'DisabledBackend' object has no attribute '_get_task_meta_for''. Main script:

    from threading import Thread
    from util.tasks import app
    from celery.exceptions import…
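This error usually means the result object is bound to a Celery app with no result backend configured (DisabledBackend is the placeholder backend). A hedged sketch of the usual fix (the module name util.tasks is from the question; the Redis URL is an assumption):

    # util/tasks.py
    from celery import Celery

    # Without backend=..., result lookups hit DisabledBackend.
    app = Celery('tasks',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/0')

    @app.task
    def work(n):
        return n * n

When polling from threads, fetch results through the same configured app (e.g. app.AsyncResult(task_id)) so the backend setting is picked up.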

Is there any pool for ThreadingMixIn and ForkingMixIn for SocketServer?

心不动则不痛 submitted on 2020-01-01 09:42:21
Question: I was trying to make an HTTP proxy using BaseHTTPServer, which is based on SocketServer, which has two asynchronous mixins (ThreadingMixIn and ForkingMixIn). The problem with those two is that they work per request (they allocate a new thread or fork a new subprocess for each request). Is there a mixin that utilizes a pool of, let's say, 4 subprocesses with 40 threads each, so that requests get handled by those already-created threads? This would be a big performance gain, and I guess it would save…
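The standard library ships no such pooled mixin; a common workaround is a small custom mixin that hands each request to a pre-created thread pool. A sketch under that assumption (Python 3 socketserver; the class names are mine):

    import socketserver
    from concurrent.futures import ThreadPoolExecutor

    class PooledThreadingMixIn:
        # Reuse a fixed pool of 40 threads instead of one thread per request.
        executor = ThreadPoolExecutor(max_workers=40)

        def process_request(self, request, client_address):
            self.executor.submit(self.process_request_thread,
                                 request, client_address)

        def process_request_thread(self, request, client_address):
            try:
                self.finish_request(request, client_address)
            finally:
                self.shutdown_request(request)

    class PooledTCPServer(PooledThreadingMixIn, socketserver.TCPServer):
        pass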

How to find running time of a thread in Python

一世执手 submitted on 2020-01-01 05:18:17
Question: I have a multi-threaded SMTP server. Each thread takes care of one client. I need to set a timeout value of 10 seconds on each server thread to terminate dormant or misbehaving clients. I have used time.time() to find the start time, and the difference from my checkpoint time gives the running time. But I believe that gives wall-clock time, not the time this thread was actually running. Is there a thread-local timer API in Python?

    import threading

    stop = 0

    def hello():
        stop = 1

    t = threading.Timer…
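For per-thread CPU time rather than wall-clock time, Python 3.7+ provides time.thread_time(), which counts only the CPU time of the calling thread. A minimal sketch (my own example):

    import threading
    import time

    def worker():
        start = time.thread_time()  # CPU time consumed by *this* thread
        sum(i * i for i in range(1000000))  # some busy work
        print('thread CPU time: %.3fs' % (time.thread_time() - start))

    t = threading.Thread(target=worker)
    t.start()
    t.join()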

Keras “pickle_safe”: What does it mean to be “pickle safe”, or alternatively, “non picklable” in Python?

旧巷老猫 submitted on 2020-01-01 05:02:29
Question: Keras fit_generator() has a parameter pickle_safe which defaults to False. Can training run faster if it is pickle safe, so that I should set the flag to True? According to Keras's docs: pickle_safe: If True, use process-based threading. Note that because this implementation relies on multiprocessing, you should not pass non-picklable arguments to the generator, as they can't be passed easily to children processes. I don't understand exactly what this is saying. How can I determine if my…
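A quick way to check whether a given argument is "pickle safe" (my own snippet, not from the Keras docs) is to round-trip it through pickle:

    import pickle

    def is_picklable(obj):
        # An argument can only cross a process boundary if
        # pickle.dumps() succeeds on it.
        try:
            pickle.dumps(obj)
            return True
        except (pickle.PicklingError, TypeError, AttributeError):
            return False

    print(is_picklable([1, 2, 3]))      # True
    print(is_picklable(lambda x: x))    # False: lambdas can't be pickled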

Combining Python watchdog with multiprocessing or threading

六月ゝ 毕业季﹏ submitted on 2019-12-31 23:25:35
Question: I'm using Python's Watchdog to monitor a given directory for new files being created. When a file is created, some code runs that spawns a subprocess shell command to run different code to process this file. This should run for every new file that is created. I've tested this out when one file is created and things work great, but I am having trouble getting it working when multiple files are created, either at the same time or one after another. My current problem is this... the processing…
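A common pattern (my own sketch; the handler class and process_file function are assumptions, not the asker's code) is to have the watchdog callback only submit work to a pool, so the observer thread never blocks and multiple files get processed concurrently:

    import time
    from concurrent.futures import ProcessPoolExecutor
    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    def process_file(path):
        # Placeholder for the real shell-command / processing logic.
        print('processing', path)

    class NewFileHandler(FileSystemEventHandler):
        def __init__(self, pool):
            self.pool = pool

        def on_created(self, event):
            if not event.is_directory:
                # Hand off immediately; the pool runs process_file
                # in a separate worker process.
                self.pool.submit(process_file, event.src_path)

    if __name__ == '__main__':
        with ProcessPoolExecutor() as pool:
            observer = Observer()
            observer.schedule(NewFileHandler(pool), '/tmp/watched')
            observer.start()
            try:
                while True:
                    time.sleep(1)
            except KeyboardInterrupt:
                observer.stop()
            observer.join()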