producer-consumer

Efficient consumer thread with multiple producers

*爱你&永不变心* submitted on 2019-12-05 04:08:46
I am trying to make a producer/consumer thread situation more efficient by skipping expensive event operations when possible, with something like:

```
// cas(variable, compare, set) is an atomic compare-and-swap
// queue is already lock-free
running = false

// Add item to queue – producer thread(s)
if (cas(running, false, true)) {
    // We effectively obtained a lock on signalling the event
    add_to_queue()
    signal_event()
} else {
    // Most of the time, if things are busy, we should not be signalling the event
    add_to_queue()
    if (cas(running, false, true))
        signal_event()
}

...
// Process queue, single consumer thread
```
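A minimal runnable sketch of this gate in Java terms, using `AtomicBoolean.compareAndSet` as the CAS; the names `SignalGate`, `produce`, and `drain` are illustrative, and a counter stands in for the expensive `signal_event()`:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class SignalGate {
    static final AtomicBoolean running = new AtomicBoolean(false);
    static final ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
    static int signals = 0; // counts how often the event was actually signalled

    // Producer path: enqueue, and signal only if we won the CAS race.
    static void produce(int item) {
        queue.add(item);
        if (running.compareAndSet(false, true)) {
            signals++; // stand-in for signal_event()
        }
    }

    // Consumer path: drain the queue, then release the gate so the
    // next producer can signal again.
    static void drain() {
        while (queue.poll() != null) { /* process item */ }
        running.set(false);
    }

    public static void main(String[] args) {
        produce(1);
        produce(2); // gate already held: no second signal
        drain();
        produce(3); // gate was released: signals again
        System.out.println(signals);
    }
}
```

Note the window this opens: if the consumer drains and clears `running` just after a producer's failed CAS, a queued item can sit unsignalled, which is why the original re-checks the CAS on the else branch.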

C++11 non-blocking producer/consumer

一个人想着一个人 submitted on 2019-12-05 03:57:40
I have a C++11 application with a high-priority thread that's producing data, and a low-priority thread that's consuming it (in my case, writing it to disk). I'd like to make sure the high-priority producer thread is never blocked, i.e. it uses only lock-free algorithms. With a lock-free queue, I can push data to the queue from the producer thread, and poll it from the consumer thread, thus meeting my goals above. I'd like to modify my program so that the consumer thread blocks when inactive instead of polling. It seems like the C++11 condition variable might be useful to block the consumer
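For comparison, the same "consumer blocks instead of polling" shape expressed with Java's condition variables (`ReentrantLock`/`Condition` mirror C++11's `std::mutex`/`std::condition_variable`); the `Buffer` class is illustrative, not from the question:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Buffer {
    private final Deque<int[]> items = new ArrayDeque<>();
    private final Lock lock = new ReentrantLock();
    private final Condition nonEmpty = lock.newCondition();

    // Producer side: hand off data and wake the consumer.
    public void push(int[] data) {
        lock.lock();
        try {
            items.addLast(data);
            nonEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    // Consumer side: block (no CPU spin) until data arrives.
    public int[] pop() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                nonEmpty.await(); // releases the lock while waiting
            }
            return items.removeFirst();
        } finally {
            lock.unlock();
        }
    }
}
```

Note that this sketch makes the producer take a (briefly held) lock, which is exactly the trade-off the question wants to avoid; keeping the lock-free queue for data and using the condition variable (or a semaphore) only for the wakeup is the usual compromise.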

Using a named mutex to lock a file

心不动则不痛 submitted on 2019-12-05 03:28:01
I'm using a named mutex to lock access to a file (with path `strFilePath`) in a construction like this:

```csharp
private void DoSomethingsWithAFile(string strFilePath)
{
    Mutex mutex = new Mutex(false, strFilePath.Replace("\\", ""));
    try
    {
        mutex.WaitOne();
        // do something with the file....
    }
    catch (Exception ex)
    {
        // handle exception
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
```

This way, the code will only block the thread when the same file is already being processed. I tested this and it seemed to work okay, but I would really like to know your thoughts about it. Since you are talking about a producer
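As a side-by-side sketch in Java (a hypothetical analog, not the .NET answer): the in-process equivalent of a per-file "named mutex" is a lock object keyed by the normalized path. Keep in mind a .NET named `Mutex` is OS-wide, while a map like this only serializes threads within one process:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FileLocks {
    // One lock object per normalized file path, created on first use.
    private static final Map<String, Object> LOCKS = new ConcurrentHashMap<>();

    public static Object lockFor(String path) {
        // Same normalization as the C# snippet: strip backslashes.
        return LOCKS.computeIfAbsent(path.replace("\\", ""), k -> new Object());
    }

    public static void doSomethingWithAFile(String path, Runnable work) {
        synchronized (lockFor(path)) { // blocks only if the same file is in use
            work.run();
        }
    }
}
```

One caveat that applies to the original too: normalizing by stripping separators can make distinct paths collide on the same lock name.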

Why does the GC not collect unused objects?

夙愿已清 submitted on 2019-12-04 22:00:01
I implemented the Producer/Consumer pattern with BlockingCollection for my experiment.

```csharp
PerformanceCounter c = null;

void Main()
{
    var p = System.Diagnostics.Process.GetCurrentProcess();
    c = new PerformanceCounter("Process", "Working Set - Private", p.ProcessName);
    (c.RawValue / 1024).Dump("start");

    var blocking = new BlockingCollection<Hede>();
    var t = Task.Factory.StartNew(() =>
    {
        for (int i = 0; i < 10000; i++)
        {
            blocking.Add(new Hede
            {
                Field = string.Join("", Enumerable.Range(0, 100).Select(e => Path.GetRandomFileName()))
            });
        }
        blocking.CompleteAdding();
    });
    var t2 = Task.Factory.StartNew(() =>
    {
        int x
```

Java example of using ExecutorService and PipedReader/PipedWriter (or PipedInputStream/PipedOutputStream) for consumer-producer

落爺英雄遲暮 submitted on 2019-12-04 21:46:51
I'm looking for a simple producer/consumer implementation in Java and don't want to reinvent the wheel. I couldn't find an example that uses both the new concurrency package and either of the Piped classes. Is there an example of using both PipedInputStream and the new Java concurrency package for this? Is there a better way to do such a task without the Piped classes? For your task it might be sufficient to just use a single thread and write to the file using a BufferedOutputStream as you are reading from the database. If you want more control over the buffer size and the size of chunks
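A minimal runnable sketch of the Piped approach being asked about: the producer writes to a `PipedOutputStream` on one pool thread, and the consumer reads the connected `PipedInputStream` on another. The payload string and pool size here are arbitrary:

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PipeDemo {
    public static String run() throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the two ends
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Producer task: write and close, so the reader sees end-of-stream.
        pool.submit(() -> {
            try (out) {
                out.write("hello".getBytes(StandardCharsets.UTF_8));
            }
            return null;
        });

        // Consumer task: read everything until EOF.
        Future<String> result = pool.submit(() -> {
            try (in) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        });

        String s = result.get();
        pool.shutdown();
        return s;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints hello
    }
}
```

The pipe gives you a fixed-size internal buffer and built-in blocking on both ends, which is why the two tasks must run on different threads.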

Can producer find the additions and removals of brokers in Kafka 0.8?

怎甘沉沦 submitted on 2019-12-04 21:34:38
We know that in Kafka 0.7 we can specify zk.connect for the producer, so the producer can discover the additions and removals of brokers. But in Kafka 0.8, we can't specify zk.connect for the producer. Can a producer in Kafka 0.8 discover that? If not, isn't the scalability of the system worse than in the 0.7 version? You can still use a ZooKeeper client to retrieve the broker list:

```java
ZkClient zkClient = new ZkClient("localhost:2108", 4000, 6000, new BytesPushThroughSerializer());
List<String> brokerList = zkClient.getChildren("/brokers/ips");
```

According to that, you do not have to "hardcode" the broker list on the client

Java: Producer/Consumer using BlockingQueue: having the consumer thread wait() until another object is queued

不羁的心 submitted on 2019-12-04 20:46:52
I've been having some thread-related problems recently with a consumer that takes points. Here is the original, which works fine except for taking up a lot of CPU by constantly checking the queue. The idea is that cuePoint can be called casually and the main thread keeps going.

```java
import java.util.List;
import java.util.ArrayList;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PointConsumer implements Runnable {
    public static final int MAX_QUEUE_SIZE = 500;
    BlockingQueue<Point> queue;

    public PointConsumer() {
        this.queue = new ArrayBlockingQueue
```
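The busy-checking can be removed by having `run()` call `BlockingQueue.take()`, which parks the thread until an element is queued, so no CPU is spent while idle. A trimmed, self-contained sketch (the real `Point` class is reduced to a placeholder, and the latch is only there to make the demo deterministic):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

public class PointConsumer implements Runnable {
    public static final int MAX_QUEUE_SIZE = 500;

    static final class Point { // placeholder for the real Point
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    final BlockingQueue<Point> queue = new ArrayBlockingQueue<>(MAX_QUEUE_SIZE);
    final CountDownLatch consumed = new CountDownLatch(1);
    volatile Point last;

    // Caller returns immediately unless the queue is full.
    public void cuePoint(Point p) throws InterruptedException {
        queue.put(p);
    }

    @Override public void run() {
        try {
            while (true) {
                Point p = queue.take(); // blocks; no polling loop
                last = p;
                consumed.countDown();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit cleanly on shutdown
        }
    }
}
```

`take()` replaces whatever `poll()`-and-spin loop was burning CPU; interrupting the thread is the standard way to stop it.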

Rx how to create a sequence from a pub/sub pattern

倾然丶 夕夏残阳落幕 submitted on 2019-12-04 20:35:23
I'm trying to evaluate using Rx to create a sequence from a pub/sub pattern (i.e. the classic observer pattern, where the next element is published by the producer(s)). This is basically the same as .NET events, except we need to generalize it such that having an event is not a requirement, so I'm not able to take advantage of Observable.FromEvent. I've played around with Observable.Create and Observable.Generate, and I find that I end up having to write code to take care of the pub/sub myself (i.e. I have to write producer/consumer code to stash the published item, then consume it by calling IObserver.OnNext()
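The question is about Rx.NET, but as a cross-language sanity check the same pub/sub-to-sequence shape exists in the JDK's Flow API (Java 9+): a `SubmissionPublisher` is the producer side, and `consume()` attaches the observer, handling the buffering and handoff that the question is otherwise writing by hand. This is only an analog, not the Rx answer:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.SubmissionPublisher;

public class PubSubSequence {
    public static List<Integer> collect(int n) {
        List<Integer> seen = Collections.synchronizedList(new ArrayList<>());
        try (SubmissionPublisher<Integer> pub = new SubmissionPublisher<>()) {
            // consume() subscribes and returns a future that completes when
            // the publisher closes, so no hand-written item-stashing buffer.
            CompletableFuture<Void> done = pub.consume(seen::add);
            for (int i = 1; i <= n; i++) {
                pub.submit(i); // the producer's "OnNext"
            }
            pub.close();
            done.join();
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(collect(3)); // [1, 2, 3]
    }
}
```

The library-managed buffer between `submit()` and the subscriber is exactly the producer/consumer plumbing the question is trying not to write manually.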

Producer/Consumer - producer adds data to collection without blocking, consumer consumes data from collection in batch

限于喜欢 submitted on 2019-12-04 19:35:41
I have a Producer/Consumer use case which is a bit unusual. I have a real-world use case with some producers that should be able to add objects to a collection without blocking. The consumer (just one) should block until a certain number of objects is available in the collection (e.g. 500) and then consume them in bulk. While there are fewer than 500 it should block and wait for the collection to fill. I don't mind if the queue exceeds this value (700, 1000, etc.) for short amounts of time. I currently can't seem to find a solution for this exact problem. I was thinking about using
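One way to get "block until at least N are available, then take the whole batch" with standard classes (a sketch; `BATCH` of 3 stands in for the question's 500): the consumer blocks on `take()` for each missing element and uses `drainTo` to sweep up whatever else is already queued. Producers use `put()` on an unbounded `LinkedBlockingQueue`, so they never block:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BatchConsumer {
    static final int BATCH = 3; // the question's 500, shrunk for the sketch
    final BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // unbounded: producers never block

    // Block until at least BATCH elements have arrived, then return them all.
    public List<String> takeBatch() throws InterruptedException {
        List<String> batch = new ArrayList<>(BATCH);
        while (batch.size() < BATCH) {
            batch.add(queue.take()); // park until something arrives
            queue.drainTo(batch);    // then grab whatever else is ready
        }
        return batch; // may exceed BATCH briefly, which the question allows
    }
}
```

Because `drainTo` takes everything currently queued, the returned batch can overshoot the threshold, matching the "700, 1000 for short amounts of time" tolerance.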

How to limit BlockingCollection size but keep adding new items (.NET limited-size FIFO)?

喜夏-厌秋 submitted on 2019-12-04 16:48:43
I want to limit the size of the BlockingCollection. If I want to add another item and the collection is full, the oldest must be removed. Is there some class specific to this task, or is my solution OK?

```csharp
BlockingCollection<string> collection = new BlockingCollection<string>(10);
string newString = "";

// Not an elegant solution?
if (collection.Count == collection.BoundedCapacity)
{
    string dummy;
    collection.TryTake(out dummy);
}
collection.Add(newString);
```

EDIT1: Similar question here: ThreadSafe FIFO List with Automatic Size Limit Management

What you are describing is an LRU cache. There is no
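The question is .NET, but the drop-oldest policy itself is easy to state; here is a Java analog (a hypothetical helper, with the same race caveat as the C# snippet: between the eviction and the add, another thread may interleave, so under contention the eviction may need to repeat):

```java
import java.util.ArrayList;
import java.util.concurrent.ArrayBlockingQueue;

public class EvictingQueue {
    // Add item; while the queue is full, discard the oldest until it fits.
    static <T> void addEvicting(ArrayBlockingQueue<T> q, T item) {
        while (!q.offer(item)) {
            q.poll(); // drop the oldest element
        }
    }

    public static void main(String[] args) {
        ArrayBlockingQueue<Integer> q = new ArrayBlockingQueue<>(3);
        for (int i = 1; i <= 5; i++) addEvicting(q, i);
        System.out.println(new ArrayList<>(q)); // [3, 4, 5]
    }
}
```

The `while` loop (instead of the single `if` in the C# version) retries when another producer refills the freed slot first, making the helper safe under concurrent adds.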