python-multithreading

Multithreaded file read python

穿精又带淫゛_ submitted on 2019-12-10 12:27:55
Question:

```python
import threading

def read_file():
    f = open('text.txt')
    for line in f:
        print line.strip(), ' : ', threading.current_thread().getName()

if __name__ == '__main__':
    threads = []
    for i in range(15):
        t = threading.Thread(target=read_file)
        threads.append(t)
        t.start()
```

Will each thread read each line from the file above only once, or is there a chance that a given thread can end up reading a line twice? My understanding was that a thread started later will overwrite the file handle for the…
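The question hinges on whether the threads share one file handle. A minimal Python 3 sketch (file contents and thread count are illustrative, not from the question) shows that each call to open() returns an independent file object with its own offset, so every thread reads every line exactly once:

```python
import threading
import tempfile
import os

# Create a throwaway file with three known lines.
path = os.path.join(tempfile.mkdtemp(), "text.txt")
with open(path, "w") as f:
    f.write("a\nb\nc\n")

counts = {}
lock = threading.Lock()

def read_file():
    # Each thread opens its OWN file object, so read offsets are not shared.
    with open(path) as f:
        for line in f:
            key = line.strip()
            with lock:
                counts[key] = counts.get(key, 0) + 1

threads = [threading.Thread(target=read_file) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# 5 threads x 3 lines: every line was seen once per thread.
print(counts)  # {'a': 5, 'b': 5, 'c': 5}
```

Sharing would only become an issue if a single file object were opened once and passed to all threads, since they would then advance the same offset.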

Using Pycuda Multiple Threads

家住魔仙堡 submitted on 2019-12-10 12:16:07
Question: I'm trying to run multiple threads on GPUs using the PyCUDA example MultipleThreads. When I run my Python file, I get the following error message:

```
(/root/anaconda3/) root@109c7b117fd7:~/pycuda# python multiplethreads.py
Exception in thread Thread-5:
Traceback (most recent call last):
  File "/root/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "multiplethreads.py", line 22, in run
    test_kernel(self.array_gpu)
  File "multiplethreads.py", line 36, in test…
```

Threading in Python takes longer time instead of making it faster?

青春壹個敷衍的年華 submitted on 2019-12-10 11:34:19
Question: I wrote 3 different pieces of code to compare having threads vs. not having threads, basically measuring how much time I save by using threading, and the results didn't make any sense. Here is my code:

```python
import time

def Function():
    global x
    x = 0
    while x < 300000000:
        x += 1
    print x

e1 = time.clock()
E1 = time.time()
Function()
e2 = time.clock()
E2 = time.time()
print e2 - e1
print E2 - E1
```

When I ran this, I got this as output:

```
26.6358742929
26.6440000534
```

Then I wrote another function as shown below and…

How to wait for an input without blocking timer in python?

£可爱£侵袭症+ submitted on 2019-12-10 10:58:49
Question: I need to print a message every x seconds and, at the same time, listen to the user's input. If 'q' is pressed, it should kill the program. For example:

```
some message
.
.    # after specified interval
.
some message
q    # program should end
```

The current problem I face is that raw_input is blocking, which stops my function from repeating the message. How do I get input reading and my function to run in parallel?

EDIT: It turns out that raw_input was not blocking. I misunderstood how…
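One common pattern for this is a background printer thread driven by a threading.Event, with the main thread free to block on input. A Python 3 sketch (the interval and message are illustrative, and the blocking input() call is simulated with a sleep so the example runs unattended):

```python
import threading
import time

stop = threading.Event()

def printer(interval, message):
    # Event.wait(interval) sleeps up to `interval` seconds but wakes
    # immediately when stop.set() is called, so shutdown is prompt.
    while not stop.wait(interval):
        print(message)

t = threading.Thread(target=printer, args=(0.1, "some message"), daemon=True)
t.start()

# In the real program the main thread would do:
#     if input() == 'q':
#         stop.set()
# Here we simulate the user pressing 'q' after a short delay.
time.sleep(0.35)
stop.set()
t.join()
print("program ended")
```

Because the printer runs in its own thread, the main thread can sit in a blocking input() call without stopping the periodic message.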

Threading and information passing — how to

柔情痞子 submitted on 2019-12-09 18:27:50
Question: To clear up the confusion, I have edited the question.

one.py

```python
import threading

count = 5
dev = threading.Thread(name='dev', target=dev, args=(workQueue, count,))
dev.setDaemon(True)
dev.start()

workQueue = Queue.Queue(10)
queueLock.acquire()
workQueue.put(word)
queueLock.release()

count = 3
time.sleep(2)
count = 5
```

My confusion here is that I am able to put and get values from the queue between threads, but in the case of count the change does not reflect. Why is that? What point am I actually missing here? class…
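The key difference is that `count` is an integer passed by value into args: the thread receives the value 5 at creation time, and rebinding the name `count` in the main thread later never touches the thread's local copy. The Queue, by contrast, is a single shared object that both threads mutate. A runnable Python 3 sketch of the distinction (names and timings are illustrative):

```python
import queue
import threading
import time

work_queue = queue.Queue(10)
count = 5
results = []

def worker(q, c):
    time.sleep(0.2)                  # let the main thread rebind `count` first
    results.append(("arg c", c))               # still 5: the int was copied in
    results.append(("queue item", q.get()))    # queue contents ARE shared

t = threading.Thread(name="dev", target=worker, args=(work_queue, count))
t.daemon = True
t.start()

work_queue.put("word")
count = 3  # rebinds the global name; the thread's `c` is unaffected
t.join()
print(results)
```

To share a changing scalar between threads, pass a mutable container (a one-element list, a small object, or a Queue) instead of the bare int, or have the worker read a module-level global directly.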

asyncio: Wait for event from other thread

坚强是说给别人听的谎言 submitted on 2019-12-09 15:59:02
Question: I'm designing an application in Python which should access a machine to perform some (lengthy) tasks. The asyncio module seems to be a good choice for everything that is network-related, but now I need to access the serial port for one specific component. I've implemented a kind of abstraction layer for the actual serial port stuff, but can't figure out how to sensibly integrate this with asyncio. The setup is the following: I have a thread running a loop, which regularly talks to the machine and…
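The standard bridge for this situation is loop.call_soon_threadsafe: the serial-port thread never touches asyncio objects directly, but schedules the event-setting call onto the loop's own thread. A minimal sketch (the worker only simulates the serial thread with a sleep; the names are illustrative):

```python
import asyncio
import threading
import time

async def main():
    loop = asyncio.get_running_loop()
    event = asyncio.Event()

    def serial_worker():
        # Runs in a plain thread, standing in for the serial-port loop.
        # asyncio objects are not thread-safe, so instead of calling
        # event.set() directly we hand it to the loop thread-safely.
        time.sleep(0.1)
        loop.call_soon_threadsafe(event.set)

    threading.Thread(target=serial_worker, daemon=True).start()
    await event.wait()   # coroutine suspends until the thread signals
    return "machine ready"

result = asyncio.run(main())
print(result)
```

For passing results (not just a signal) from a thread into a coroutine, loop.run_in_executor or asyncio.run_coroutine_threadsafe are the matching tools.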

Can't read/write to files using multithreading in python

元气小坏坏 submitted on 2019-12-08 22:32:13
Question: I have an input file which contains a long list of URLs. Let's assume this is mylines.txt:

```
https://yahoo.com
https://google.com
https://facebook.com
https://twitter.com
```

What I need to do is:

1) Read a line from the input file mylines.txt
2) Execute the myFun function, which performs some tasks and produces an output consisting of one line. (It is more complex in my real code, but something like this in concept.)
3) Write the output to the results.txt file

Since I have a large input, I need to…
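The three steps above map naturally onto a thread pool: worker threads run the per-URL function, while only the main thread writes to results.txt, so no file locking is needed. A sketch under those assumptions (my_fun here is a trivial placeholder for the question's myFun, and temporary files stand in for mylines.txt/results.txt):

```python
import concurrent.futures
import os
import tempfile

tmp = tempfile.mkdtemp()
in_path = os.path.join(tmp, "mylines.txt")
out_path = os.path.join(tmp, "results.txt")

with open(in_path, "w") as f:
    f.write("https://yahoo.com\nhttps://google.com\n")

def my_fun(url):
    # Placeholder for the real per-URL work (fetching, parsing, ...).
    return f"processed {url}"

with open(in_path) as f:
    urls = [line.strip() for line in f if line.strip()]

# executor.map yields results in input order, and only this (main)
# thread touches the output file, so writes cannot interleave.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    with open(out_path, "w") as out:
        for result in pool.map(my_fun, urls):
            out.write(result + "\n")

with open(out_path) as out:
    print(out.read())
```

For inputs too large to hold in memory, pool.map can consume the file object lazily instead of a pre-built list.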

Detect failed tasks in concurrent.futures

与世无争的帅哥 submitted on 2019-12-08 22:02:43
Question: I've been using concurrent.futures as it has a simple interface and lets the user easily control the max number of threads/processes. However, it seems that concurrent.futures hides failed tasks and continues the main thread after all tasks have finished/failed.

```python
import concurrent.futures

def f(i):
    return (i + 's')

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    fs = [executor.submit(f, i) for i in range(10)]
    concurrent.futures.wait(fs)
```

Calling f on any integer leads to a TypeError…
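The exceptions are not lost: each Future captures the exception raised by its task, and it surfaces when you call future.result() (which re-raises) or future.exception() (which returns it). Applied to the question's own example:

```python
import concurrent.futures

def f(i):
    return i + 's'   # TypeError: int + str, for every integer input

with concurrent.futures.ThreadPoolExecutor(max_workers=10) as executor:
    fs = [executor.submit(f, i) for i in range(10)]
# Leaving the `with` block waits for all futures, like wait(fs).

failed = []
for future in fs:
    exc = future.exception()   # returns the stored exception, or None
    if exc is not None:
        failed.append(type(exc).__name__)

print(f"{len(failed)} tasks failed")  # 10 tasks failed
```

To fail fast instead of collecting afterwards, iterate concurrent.futures.as_completed(fs) and call result() inside a try/except; the first result() call on a failed future re-raises its exception in the main thread.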

python threading blocks

廉价感情. submitted on 2019-12-08 15:11:38
Question: I am trying to write a program which creates new threads in a loop and doesn't wait for them to finish. As I understand it, if I use .start() on the thread, my main loop should just continue, and the other thread will go off and do its work at the same time. However, once my new thread starts, the loop blocks until the thread completes. Have I misunderstood how threading works in Python, or is there something stupid I'm doing? Here is my code for creating new threads:

```python
def MainLoop():
    print…
```
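The poster's understanding is correct: Thread.start() returns immediately and does not wait for the target to finish. A small Python 3 sketch demonstrates the ordering (a common cause of the observed blocking is accidentally calling run() instead of start(), or calling the target directly, e.g. Thread(target=worker()) with parentheses):

```python
import threading
import time

order = []

def worker():
    time.sleep(0.2)
    order.append("worker finished")

t = threading.Thread(target=worker)
t.start()                       # returns immediately; does NOT block
order.append("main continued")  # runs while worker is still sleeping
t.join()                        # only join() actually waits
print(order)  # ['main continued', 'worker finished']
```

If the main loop appears to block, check that the target is passed uncalled (target=worker, not target=worker()) and that start(), not run(), is used; run() executes the target synchronously in the calling thread.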

The join() function in threading

旧巷老猫 submitted on 2019-12-08 13:50:28
Question: So I recently tried to understand the join() function, but however many tutorials and docs I read, I just don't seem to understand it. Is there maybe someone here who is capable of explaining it to me?

Answer 1: Regarding the use of join for threads and multithreading, you would use t.join(). Very simply, all it does is request to join with a thread, which waits for that thread to terminate its run. Your interpreter will stay running until all the…
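In one sentence: t.join() makes the calling thread pause until thread t has finished; it does not start, stop, or speed up t. A minimal sketch (timings illustrative):

```python
import threading
import time

events = []

def task():
    time.sleep(0.1)
    events.append("task done")

t = threading.Thread(target=task)
t.start()
t.join()                     # block here until `task` returns
events.append("after join")  # guaranteed to run after the thread finished
print(events)  # ['task done', 'after join']
```

Without the join() call, "after join" could be appended before the thread finishes; join() is the synchronization point that guarantees the ordering.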