python-multithreading

Python multiprocessing: dealing with 2000 processes

无人久伴 submitted on 2019-12-13 03:46:03
Question: Following is my multiprocessing code. regressTuple has around 2000 items, so the code below creates around 2000 parallel processes. My Dell XPS 15 laptop crashes when this is run. Can't the Python multiprocessing library handle the queue according to hardware availability and run the program without crashing in minimal time? Am I not doing this correctly? Is there an API call in Python to get the available hardware process count? How can I refactor the code to use an input variable to get…
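A minimal sketch of the usual fix, reusing the name regressTuple from the post (the worker function runRegression and the data here are placeholders): a Pool sized from os.cpu_count() keeps only as many worker processes alive as the hardware exposes and feeds the 2000 items to them as slots free up.

import multiprocessing
import os

def runRegression(item):
    # placeholder for the real regression work done on one item
    return item

if __name__ == "__main__":
    regressTuple = range(2000)  # stand-in for the ~2000 items in the post
    # os.cpu_count() reports the number of logical cores the machine has
    with multiprocessing.Pool(processes=os.cpu_count()) as pool:
        # the Pool keeps only cpu_count() processes alive and feeds the
        # 2000 items to them as workers become free
        results = pool.map(runRegression, regressTuple)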

Is there a way to run cpython on a different thread without risking a crash?

风格不统一 submitted on 2019-12-13 03:37:55
Question: I have a program that runs lots of urllib requests IN AN INFINITE LOOP, which makes my program really slow, so I tried running them as threads. urllib uses CPython deep down in the socket module, so the threads that are being created just add up and do nothing, because Python's GIL prevents two CPython commands from being executed in different threads at the same time. I am running Windows XP with Python 2.5, so I can't use the multiprocessing module. I tried looking at the subprocess module…
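Worth noting before reaching for processes: CPython releases the GIL during blocking socket I/O, so threaded urllib calls can in fact overlap their network waits. A minimal Python 2.5-compatible sketch of a small, fixed pool of worker threads fed from a Queue (the URLs and pool size are hypothetical):

import threading
import urllib2
from Queue import Queue

url_queue = Queue()
for url in ["http://example.com/a", "http://example.com/b"]:  # hypothetical URLs
    url_queue.put(url)

def worker():
    while True:
        url = url_queue.get()
        try:
            body = urllib2.urlopen(url).read()  # GIL is released while blocked here
            # ... process body ...
        finally:
            url_queue.task_done()

for _ in range(4):  # a fixed, small pool instead of one thread per request
    t = threading.Thread(target=worker)
    t.setDaemon(True)
    t.start()

url_queue.join()  # wait until every queued URL has been handled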

Stop all threads using threading.Event()

為{幸葍}努か submitted on 2019-12-13 03:36:12
Question: I am creating multiple threads to execute a function that generates PDFs. This process takes a lot of time, so the user has the option to cancel the execution. To stop a thread, I know that I can use threading.Event() and check whether it is set. However, the function I am executing in my event loop is straightforward/linear (there is no loop in which to regularly check whether the Event is set).

--threading class--

def execute_function(self, function_to_execute, total_executions, execution…
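The common workaround for a linear function is to check the Event at each natural checkpoint between its sequential steps and return early. A minimal sketch, with hypothetical step functions standing in for the PDF work:

import threading
import time

def render_pages():        # hypothetical step 1 of the linear PDF work
    time.sleep(1)

def merge_attachments():   # hypothetical step 2
    time.sleep(1)

def write_output():        # hypothetical final step
    return "report.pdf"

def generate_pdf(cancel_event):
    render_pages()
    if cancel_event.is_set():   # checkpoint between the linear steps
        return None             # user cancelled; abandon the rest
    merge_attachments()
    if cancel_event.is_set():
        return None
    return write_output()

cancel = threading.Event()
worker = threading.Thread(target=generate_pdf, args=(cancel,))
worker.start()
# the GUI's cancel button would simply call: cancel.set()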

Nonblocking queue of threads

送分小仙女□ submitted on 2019-12-13 03:29:46
Question: I want to create a simple queue of threads. THREADS WILL START WITH A POST REQUEST. I created a simple example without the requests. I tried to join the threads, but it doesn't work the way I want.

def a():
    print('start a')
    sleep(5)
    print('end a')

def b():
    print('start b')
    sleep(5)
    print('end b')

t = Thread(target=a)
t.start()
t.join()
print('test1')
t = Thread(target=b)
t.start()
t.join()
print('test2')

Result of the code: start a, end a, test1, start b, end b, test2
Expectation: start a, test1, end a, start b, test2, end b
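The expected output implies the main thread should keep going while the jobs still run one at a time, which calling join() in the main thread cannot give. A sketch matching that intent, assuming a single worker thread draining a Queue of callables:

import threading
from queue import Queue
from time import sleep

def a():
    print('start a')
    sleep(5)
    print('end a')

def b():
    print('start b')
    sleep(5)
    print('end b')

jobs = Queue()

def worker():
    while True:
        func = jobs.get()
        func()           # jobs run one at a time, in queue order
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

jobs.put(a)
print('test1')  # printed immediately; a() is still running
jobs.put(b)
print('test2')
jobs.join()     # block only here, once everything must have finished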

How to use pandas DataFrame in shared memory during multiprocessing?

给你一囗甜甜゛ submitted on 2019-12-13 03:27:00
Question: In one answer to Is shared readonly data copied to different processes for multiprocessing? a working shared-memory solution for a numpy array is given. What would the equivalent look like if a pandas DataFrame were used? Background: I would like to be able to write to the DataFrame during multiprocessing and to process it further after the multiprocessing has finished.

Source: https://stackoverflow.com/questions/53320422/how-to-use-pandas-dataframe-in-shared-memory
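A hedged sketch of the numpy-backed idea, assuming Python 3.8+'s multiprocessing.shared_memory: the numeric data lives in a shared block that child processes write into, and a DataFrame is built over it afterwards. Note that pandas does not guarantee the DataFrame stays a zero-copy view of the buffer even with copy=False, so treat this as an approximation rather than true in-place DataFrame sharing.

import numpy as np
import pandas as pd
from multiprocessing import Process, shared_memory

def worker(shm_name, shape, dtype):
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr[0, 0] = 42.0   # this write lands in the shared block
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=4 * 3 * 8)  # 4x3 float64
    arr = np.ndarray((4, 3), dtype=np.float64, buffer=shm.buf)
    arr[:] = 0.0

    p = Process(target=worker, args=(shm.name, arr.shape, arr.dtype))
    p.start()
    p.join()

    # build the DataFrame over the shared array after the workers finish;
    # copy=False asks pandas not to copy, but a view is not guaranteed
    df = pd.DataFrame(arr, columns=["a", "b", "c"], copy=False)
    print(df.iloc[0, 0])  # 42.0

    shm.close()
    shm.unlink()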

Why does the Python multiprocessing manager produce threading locks?

痞子三分冷 submitted on 2019-12-13 02:33:19
Question:

>>> import multiprocessing
>>> print multiprocessing.Manager().Lock()
<thread.lock object at 0x7f64f7736290>
>>> type(multiprocessing.Lock())
<class 'multiprocessing.synchronize.Lock'>

Why is the produced object a thread.lock and not a multiprocessing.synchronize.Lock, as would be expected from a multiprocessing object?

Answer 1: Managed objects are always proxies; the goal of the manager is to make non-multiprocessing-aware objects multiprocessing-aware. There is no point in doing this for…
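A small sketch illustrating the answer's point: the object you hold is a proxy, and every acquire/release is forwarded to an ordinary threading lock living in the manager's server process, which is what makes it safe to share across processes.

from multiprocessing import Manager, Process

def critical(lock, label):
    # the proxy marshals acquire()/release() to the real lock
    # inside the manager's server process
    with lock:
        print(label, "has the lock")

if __name__ == "__main__":
    manager = Manager()
    lock = manager.Lock()  # a proxy object, not the lock itself
    procs = [Process(target=critical, args=(lock, i)) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()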

Create, manage and kill background tasks in flask app

这一生的挚爱 submitted on 2019-12-13 02:12:46
Question: I'm building a Flask web app where users can start and manage processes. These processes do some heavy computation (which could even take days). While a process is running, it saves partial results into a file to work with. So when a user starts a new process, I spawn a new thread and save the thread handle into the flask.g global variable.

def add_thread(thread_handle):
    ctx = app.app_context()
    threads = flask.g.get("threads", [])
    threads.append(thread_handle)
    g.threads = threads
    ctx.push()

Later, when needed,…
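One caveat with the posted approach: flask.g lives only as long as a single application context, so handles stored there will not survive to later requests. A minimal sketch of the more common pattern, a module-level registry plus a cooperative stop Event (the long_computation worker and the routes are hypothetical):

import threading
import uuid
from flask import Flask, jsonify

app = Flask(__name__)
tasks = {}  # task id -> (thread, stop_event); module-level, outlives requests

def long_computation(stop_event):
    # hypothetical worker: do one chunk of work, then check for cancellation
    while not stop_event.is_set():
        stop_event.wait(1.0)  # stand-in for a chunk of heavy work

@app.route("/tasks", methods=["POST"])
def start_task():
    stop = threading.Event()
    thread = threading.Thread(target=long_computation, args=(stop,), daemon=True)
    task_id = uuid.uuid4().hex
    tasks[task_id] = (thread, stop)
    thread.start()
    return jsonify(id=task_id)

@app.route("/tasks/<task_id>", methods=["DELETE"])
def kill_task(task_id):
    thread, stop = tasks.pop(task_id)
    stop.set()  # cooperative "kill": the worker checks the event and exits
    return jsonify(stopped=True)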

How do I tell my main GUI to wait on a worker thread?

。_饼干妹妹 submitted on 2019-12-13 00:49:21
Question: I have successfully moved an expensive routine in my PyQt4 GUI out to a worker QThread to prevent the GUI from going unresponsive. However, I would like the GUI to wait until the worker thread is finished processing before continuing to execute its own code. The solution that immediately comes to mind is to have the thread emit a signal when complete (as I understand it, QThreads already do this), and then look for this signal in the main window before the rest of the code is executed. Is this…
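A minimal PyQt4-style sketch of the signal approach (class and slot names are hypothetical): rather than blocking on worker.wait(), which would freeze the event loop, the follow-up code moves into a slot connected to the thread's built-in finished signal.

from PyQt4 import QtCore, QtGui

class Worker(QtCore.QThread):
    def run(self):
        # the expensive routine runs here, off the GUI thread;
        # QThread emits finished() automatically when run() returns
        pass

class MainWindow(QtGui.QMainWindow):
    def start_work(self):
        self.worker = Worker()
        # instead of blocking, continue in on_finished when the signal fires
        self.worker.finished.connect(self.on_finished)
        self.worker.start()

    def on_finished(self):
        pass  # the code that had to wait for the worker goes here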

Equivalent of thread.interrupt_main() in Python 3

前提是你 submitted on 2019-12-12 21:34:28
Question: In Python 2 there is a function thread.interrupt_main(), which raises a KeyboardInterrupt exception in the main thread when called from a subthread. This is also available as _thread.interrupt_main() in Python 3, but it's a low-level "support module", mostly for use within other standard modules. What is the modern way of doing this in Python 3, presumably through the threading module, if there is one?

Answer 1: Well, raising an exception manually is kind of low-level, so if you think you have…
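For reference, _thread.interrupt_main() does still work when called from a subthread in Python 3; a minimal sketch with a hypothetical deadline thread:

import _thread
import threading
import time

def deadline(seconds):
    time.sleep(seconds)
    _thread.interrupt_main()  # raises KeyboardInterrupt in the main thread

threading.Thread(target=deadline, args=(2,), daemon=True).start()

try:
    while True:
        time.sleep(0.1)  # stand-in for the main thread's real work
except KeyboardInterrupt:
    print("interrupted from the subthread")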

limit number of threads working in parallel

落花浮王杯 submitted on 2019-12-12 16:14:58
Question: I am making a function to copy files from the local machine to a remote one, creating threads to do the SFTP in parallel.

def copyToServer(hostname, username, password, destPath, localPath):
    # copies a file given a host name and credentials
    ...

for i in hostsList:
    hostname = i
    username = defaultLogin
    password = defaultPassword
    thread = threading.Thread(target=copyToServer, args=(hostname, username, password, destPath, localPath))
    threadsArray.append(thread)
    thread.start()

This creates the threads and starts copying in parallel, but I want to limit it to process like…
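One way to cap the number of concurrent copies is a fixed-size pool; a minimal sketch using concurrent.futures, reusing the names from the post (the host, credential, and path values are placeholders):

from concurrent.futures import ThreadPoolExecutor

def copyToServer(hostname, username, password, destPath, localPath):
    pass  # the real SFTP copy from the question goes here

hostsList = ["host1", "host2", "host3"]              # hypothetical hosts
defaultLogin, defaultPassword = "user", "secret"     # placeholder credentials
destPath, localPath = "/remote/path", "/local/file"  # placeholder paths

# at most 5 copies run at once; the rest wait for a free worker slot
with ThreadPoolExecutor(max_workers=5) as pool:
    for hostname in hostsList:
        pool.submit(copyToServer, hostname, defaultLogin,
                    defaultPassword, destPath, localPath)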