queue

ArrayDeque and LinkedBlockingDeque

江枫思渺然 submitted on 2019-12-04 14:38:46

Just wondering why they made a LinkedBlockingDeque while its non-concurrent counterpart, ArrayDeque, is backed by a resizable array. LinkedBlockingQueue uses a chain of nodes, like a LinkedList (even though it does not implement List). I am aware of the possibility of using an ArrayBlockingQueue, but what if one wanted to use an ArrayBlockingDeque? Why is there no such option? Thanks in advance. This may not be a proper question w.r.t. Stack Overflow, but I would like to say something about these implementations. First, we need to answer why we give different implementations for

Best way to update Activity from a Queue

独自空忆成欢 submitted on 2019-12-04 14:16:29

I have a LinkedBlockingQueue in a Mediator in my "Producer-Mediator-Consumer" model. The Producer first updates the Mediator, adding to the activityQueue. Then the Consumer/Activity waits/listens on the queue and grabs the next item. I want an Activity to see that the queue size has changed and grab the next item. The Mediator has no visibility into the Activity; only the Activity can see the Mediator. So how do I create the listener mechanism I want? Here is my mediator class that holds the Queue; the Activity will somehow look at the queue and get informed when it needs to update. Data coming into
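The question is about Android/Java, but the pattern is language-independent: rather than watching the queue size, let the consumer block on the queue, so it wakes up exactly when an item arrives. A minimal sketch of that idea in Python; the class and method names here are illustrative, not taken from the question's code:

    import queue
    import threading

    class Mediator:
        def __init__(self):
            self.activity_queue = queue.Queue()

        def publish(self, item):
            # Producer side: just put; any consumer blocked in get() wakes up.
            self.activity_queue.put(item)

    def activity_loop(mediator):
        # Consumer side: get() blocks until an item is available,
        # so no size-polling or listener bookkeeping is needed.
        while True:
            item = mediator.activity_queue.get()
            print("activity handles", item)
            mediator.activity_queue.task_done()

    m = Mediator()
    threading.Thread(target=activity_loop, args=(m,), daemon=True).start()
    m.publish("first item")
    m.activity_queue.join()  # wait until the consumer has processed it

In Java the equivalent would be a consumer thread blocking in LinkedBlockingQueue.take(); the Mediator never needs to know about the Activity.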

Synchronization across multiple processes in Python

坚强是说给别人听的谎言 submitted on 2019-12-04 14:14:39

I have a Python application that spawns a separate process to do some work (I ran into performance issues using threads due to the GIL (global interpreter lock)). Now, what methods do I have in Python to synchronize shared resources across processes? I move data into a queue, and a spawned process does the work as it receives data from that queue. But I need to guarantee that the data comes out in an orderly fashion, in the same order it was copied in, so I need to guarantee that only one process at any time can read/write from/to the queue. How do I do that best? Thanks, Ron I think you
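A minimal sketch of that arrangement, assuming a single producer and a single consumer process: multiprocessing.Queue is already process-safe, and with one reader the items come out in the order they were put in, so no extra lock is needed for this shape of problem.

    import multiprocessing as mp

    def worker(q):
        # Blocks until an item arrives; items arrive in insertion order
        # as long as this is the only consumer.
        while True:
            item = q.get()
            if item is None:  # sentinel: producer is done
                break
            print("processing", item)

    if __name__ == "__main__":
        q = mp.Queue()
        p = mp.Process(target=worker, args=(q,))
        p.start()
        for i in range(5):
            q.put(i)      # FIFO order is preserved for a single reader
        q.put(None)       # tell the worker to stop
        p.join()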

Python Queue memory leak when used inside a thread

做~自己de王妃 submitted on 2019-12-04 14:01:52

I have a Python TCP client and need to send a media (.mpg) file in a loop to a C TCP server. In the following code, a separate thread reads 10K blocks of the file, sends them, and does it all over again in a loop. I am using Queues to print the logs on my GUI (Tkinter), but after some time it runs out of memory; I think it is because of my implementation of the thread module, or the TCP send. UPDATE 1 - Added more code as requested. Thread class "Sendmpgthread" used to create the thread that sends data:

    class Sendmpgthread(threading.Thread):
        ...
        def __init__(self, otherparams, MainGUI):
            ...
            self.MainGUI = MainGUI
            self.lock =

Need a queue that can support multiple readers

∥☆過路亽.° submitted on 2019-12-04 13:47:53

Question: I need a queue that can be processed by multiple readers. The readers will dequeue an element and send it to a REST service. What's important to note: each reader should be dequeueing different elements. If the queue has elements A, B & C, Thread 1 should dequeue A and Thread 2 should dequeue B in concurrent fashion, and so forth until there's nothing in the queue. I understand that it is CPU-intensive to always run in a busy loop, peeking into the queue for items. So I am not sure if a
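A minimal sketch of one way to get that behaviour (the question may concern another language; the blocking-get pattern is the same everywhere): each worker blocks in get(), so every element is delivered to exactly one reader and no busy loop or peeking is needed.

    import queue
    import threading

    q = queue.Queue()

    def reader(name):
        while True:
            item = q.get()        # blocks; no CPU-burning peek loop
            if item is None:      # sentinel: time to shut down
                q.task_done()
                break
            print(name, "sends", item, "to the REST service")
            q.task_done()

    threads = [threading.Thread(target=reader, args=(f"T{i}",)) for i in range(2)]
    for t in threads:
        t.start()
    for item in ["A", "B", "C"]:
        q.put(item)               # each item goes to exactly one reader
    for _ in threads:
        q.put(None)
    for t in threads:
        t.join()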

How to make the client download a very large file that is generated on the fly

无人久伴 submitted on 2019-12-04 13:43:56

Question: I have an export function that reads the entire database and creates an .xls file with all the records. Then the file is sent to the client. Of course, exporting the full database takes a long time, and the request will soon end in a timeout error. What is the best solution for handling this case? I heard something about making a queue with Redis, for example, but this will require two requests: one to start the job that will generate the file, and a second to download the generated

What simple mechanism for synchronous Unix pooled processes?

半城伤御伤魂 submitted on 2019-12-04 13:36:41

Question: I need to limit the number of processes being executed in parallel. For instance, I'd like to execute this pseudo command line:

    export POOL_PARALLELISM=4
    for i in `seq 100` ; do
        pool foo -bar &
    done
    pool foo -bar  # would not complete until the first 100 finished

Therefore, despite 101 foos being queued up to run, only 4 would be running at any given time. pool would fork()/exit() and queue the remaining processes until complete. Is there a simple mechanism to do this with Unix tools? at and

Need some assistance with Python threading/queue

半城伤御伤魂 submitted on 2019-12-04 12:06:32

Question:

    import threading
    import Queue
    import urllib2
    import time

    class ThreadURL(threading.Thread):
        def __init__(self, queue):
            threading.Thread.__init__(self)
            self.queue = queue

        def run(self):
            while True:
                host = self.queue.get()
                sock = urllib2.urlopen(host)
                data = sock.read()
                self.queue.task_done()

    hosts = ['http://www.google.com', 'http://www.yahoo.com',
             'http://www.facebook.com', 'http://stackoverflow.com']
    start = time.time()

    def main():
        queue = Queue.Queue()
        for i in range(len(hosts)):
            t =

How to find the processor queue length in Linux

て烟熏妆下的殇ゞ submitted on 2019-12-04 11:52:54

Question: I am trying to determine the processor queue length (the number of processes that are ready to run but currently aren't) on a Linux machine. There is a WMI call in Windows for this metric, but not knowing much about Linux, I'm trying to mine /proc and 'top' for the information. Is there a way to determine the queue length for the CPU? Edit to add: Microsoft's words concerning their metric: "The collection of one or more threads that is ready but not able to run on the processor due to another active

Can a TensorFlow queue be reopened after it is closed?

↘锁芯ラ submitted on 2019-12-04 11:35:35

Question: I would like to enqueue items, close the queue to ensure that other sessions will dequeue all remaining items, and then reopen it later for the next epoch. Is this possible?

    q = tf.FIFOQueue(...)
    close_q = q.close()
    reopen_q = # ???

    with tf.Session([...]) as sess:
        [...]
        sess.run(close_q)
        [...]
        sess.run(reopen_q)

Answer 1: There's no way to re-open a closed queue, but (only if you are using multiple sessions) there is a workaround: create your queue in a with tf.container(name): block that wraps only