Two waiting threads (producer/consumer) with a shared buffer


Question


I am trying to have a bunch of producer threads that wait until the buffer has room for an item, then put items into the buffer while they can, going back to sleep when there is no more room.

At the same time there should be a bunch of consumer threads that wait until there is something in the buffer, then take things from the buffer while they can, going back to sleep when it is empty.

In pseudocode, here is what I am doing, but all I am getting is deadlocks.

condition_variable cvAdd;
condition_variable cvTake;
mutex smtx;

ProducerThread(){
    while(has something to produce){

         unique_lock<mutex> lock(smtx);
         while(buffer is full){
            cvAdd.wait(lock);
         }
         AddStuffToBuffer();
         cvTake.notify_one();
    }
}

ConsumerThread(){

     while(should be taking data){

        unique_lock<mutex> lock(smtx);
        while( buffer is empty ){
            cvTake.wait(lock);
        }   
        TakeStuffFromBuffer();
        if(BufferIsEmpty)
            cvAdd.notify_one();
     }

}

Answer 1:


One other error worth mentioning is that your consumers notify the waiting producers only when the buffer becomes empty, rather than as soon as space becomes available.

The optimal strategy is to notify the producers only when the queue was full just before a pop, because that is the only time a producer can be blocked.

E.g.:

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <utility>

template<class T, std::size_t MaxQueueSize>
class Queue
{
    std::condition_variable consumer_, producer_;
    std::mutex mutex_;
    using unique_lock = std::unique_lock<std::mutex>;

    std::queue<T> queue_;

public:
    template<class U>
    void push_back(U&& item) {
        unique_lock lock(mutex_);
        while(MaxQueueSize == queue_.size())
            producer_.wait(lock);                  // block while the queue is full
        queue_.push(std::forward<U>(item));
        consumer_.notify_one();                    // a consumer may be waiting for data
    }

    T pop_front() {
        unique_lock lock(mutex_);
        while(queue_.empty())
            consumer_.wait(lock);                  // block while the queue is empty
        auto full = MaxQueueSize == queue_.size(); // was the queue full before this pop?
        auto item = std::move(queue_.front());
        queue_.pop();
        if(full)
            producer_.notify_all();                // only then can producers be blocked
        return item;
    }
};
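
For illustration only, here is a rough sketch of how this Queue might be driven from a few threads; the element type, capacity, and thread counts are arbitrary assumptions and not part of the original answer.

#include <iostream>
#include <thread>
#include <vector>

int main() {
    Queue<int, 4> q;                            // small capacity so producers occasionally block

    std::vector<std::thread> workers;
    for (int p = 0; p < 2; ++p)                 // two producers push 10 items each
        workers.emplace_back([&q, p] {
            for (int i = 0; i < 10; ++i)
                q.push_back(p * 100 + i);
        });
    for (int c = 0; c < 2; ++c)                 // two consumers pop 10 items each
        workers.emplace_back([&q] {
            for (int i = 0; i < 10; ++i)
                std::cout << q.pop_front() << '\n';
        });

    for (auto& t : workers)
        t.join();                               // 20 pushes match 20 pops, so every thread finishes
}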



Answer 2:


Your producer and consumer both attempt to lock the mutex, but neither thread unlocks the mutex. This means that the first thread to acquire the lock holds it and the other thread never runs.

Consider moving your mutex lock calls to just before each thread performs its action, then unlocking right after the action completes (AddStuffToBuffer() or TakeStuffFromBuffer()).
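
One way to read this suggestion, as a minimal sketch using the question's names plus an assumed std::queue<int> buffer with a capacity of 10 (neither of which is in the original): keep the lock only around the buffer operation, and release it before notifying. Note that the wait itself must still be performed while holding the lock.

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

std::mutex smtx;
std::condition_variable cvAdd, cvTake;
std::queue<int> buffer;
const std::size_t maxSize = 10;   // assumed capacity, not from the question

void ProducerThread() {
    for (int item = 0; item < 100; ++item) {
        std::unique_lock<std::mutex> lock(smtx);
        cvAdd.wait(lock, [] { return buffer.size() < maxSize; }); // wait() must own the lock
        buffer.push(item);        // the only work done under the lock
        lock.unlock();            // release before notifying so the woken consumer
        cvTake.notify_one();      // does not immediately block on the mutex
    }
}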




Answer 3:


See this example based on your query. A single condition_variable should suffice in this case.

#include "conio.h"
#include <thread>
#include <mutex>
#include <queue>
#include <chrono>
#include <iostream>
#include <condition_variable>

using namespace std;

mutex smtx;
condition_variable cvAdd;
bool running ;
queue<int> buffer;

void ProducerThread(){
    static int data = 0;
    while(running){
        unique_lock<mutex> lock(smtx);
        if( !running) return;
        buffer.push(data++);
        lock.unlock();
        cvAdd.notify_one();
        this_thread::sleep_for(chrono::milliseconds(300));
    }
}

void ConsumerThread(){

     while(running){

        unique_lock<mutex> lock(smtx);
        cvAdd.wait(lock,[](){ return !running || !buffer.empty(); }); // sleep until stopped or data arrives
        if( !running) return;
        while( !buffer.empty() )   // drain everything available (the lock stays held)
        {
            auto data = buffer.front();
            buffer.pop();
            cout << data <<" \n";

            this_thread::sleep_for(chrono::milliseconds(300)); 
        }                

     }

}

int main()
{
    running = true;
    thread producer = thread([](){ ProducerThread(); }); 
    thread consumer = thread([](){ ConsumerThread(); });

    while(!getch())
    { }    

    running = false;
    cvAdd.notify_all();   // wake the consumer so it can observe running == false and exit
    producer.join();
    consumer.join();
}



Answer 4:


I had previously answered this question, but I was a bit off topic, as I am still getting a grasp of the underlying mechanics and behavior of mutex, lock_guard, etc. I have been watching a few videos on the subject, and one of them was actually about the opposite of locking: it showed how to implement a LockFreeQueue using a circular (ring) buffer, two pointers, and atomic instead of mutex. For your current situation, atomic and a LockFreeQueue will not answer your question, but what I gained from that video was the idea of a circular buffer.

Both of your producer and consumer threads will be sharing the same memory pool. If you have a 1-to-1 ratio of producer to consumer threads, it is quite easy to keep track of each index into the array, or of each pointer. However, when you have many to many, things do tend to get a bit complicated.

One thing you can do: if you limit your buffer to N objects, actually create it with N+1 slots. That one extra, always-empty slot helps alleviate some of the complexity in the structure of a ring buffer that is shared among multiple producers and consumers; a minimal sketch of the resulting empty/full test is shown just below.
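
As a concrete aside (not from the original answer), here is a minimal single-threaded sketch of the N+1 trick, using plain integer indices p and c as in the illustrations that follow; all names are illustrative.

#include <cstddef>

constexpr std::size_t N = 5;     // usable capacity, matching the illustrations
int slots[N + 1];                // one extra slot that is never filled
std::size_t p = 0, c = 0;        // producer and consumer indices

bool empty() { return p == c; }                  // indices meet: nothing to read
bool full()  { return (p + 1) % (N + 1) == c; }  // next write would catch up with the reader

void put(int v) { slots[p] = v; p = (p + 1) % (N + 1); }                // caller checks !full() first
int  get()      { int v = slots[c]; c = (c + 1) % (N + 1); return v; }  // caller checks !empty() first

With multiple producers or consumers, p and c themselves become shared state, which is exactly what the scenarios below walk through.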


Take the illustrations below:

p = index of the producer, c = index of the consumer, and N represents the number of [ ] index spaces; here N = 5.

One to One

 p                N = 5
[ ][ ][ ][ ][ ]
 c

Here both p and c == 0, which represents an empty buffer. Let's say the producer fills the buffer before c receives anything:

             p    N = 5
[x][x][x][x][x]
 c

In this situation the buffer is full and p has to wait for an empty space; c is now able to acquire an item.

             p     N = 5
[ ][x][x][x][x]
    c         

Here c acquired the object at [0] and advanced its index to 1. p is now able to wrap around the ring buffer.

This is easy to keep track of with a single p and c. Now let's explore multiple consumers with a single producer.

One to Many

 p                 N = 5
[ ][ ][ ][ ][ ]
c1
c2

Here p index = 0, c1 & c2 index = 0, ring buffer is empty

             p     N = 5
[x][x][x][x][x]
c1
c2

Now p has to wait for either c1 or c2 to acquire the item at [0] before it can write.

             p     N = 5
[ ][ ][x][x][x]
    c1 c2

Here it's not obvious whether c1 or c2 acquired [0] or [1], but both have successfully acquired an item, and both have incremented the index counter. The diagram above suggests that c1 incremented the index from 0 to 1; then c2, which was also at [0], had to increment the counter as well, but it had already changed from 0 to 1, so c2 incremented it to 2.

There is a deadlock situation here if we assume that the buffer is empty whenever p == 0 and either c1 or c2 == 0. Look at this situation:

 p               N = 5  // P hasn't written yet but has advanced 
[ ][ ][ ][ ][x]  // 1 Item is left
           c1  // Both c1 & c2 have index to same item.
           c2  // c1 acquires it and so does c2 but one of them finishes first and then increments the counter. Now the buffer is empty and looks like this:

 p                N = 5
[ ][ ][ ][ ][ ]
c1          c2    // p index = 0 and c1 = 0 represents empty buffer.
                  // c2 is trying to read [4]

This can lead to a deadlock.

Many to One

 p1
 p2                N = 5
[ ][ ][ ][ ][ ]
 c1

Here you have multiple producers that can write to the buffer for a single consumer. If they interleave:

p1 writes to [0] increments counter
p2 writes to [0] increments counter

   p1 p2
[x][ ][ ][ ][ ]
c1

This will cause an empty space in the buffer: the producers interfere with each other, so you need mutual exclusion here.

With many to many, you need to take into consideration and combine both of the situations above, one to many and many to one. You would need a mutex for your consumers and a mutex for your producers; trying to use the same mutex for both will give you problems that can cause unforeseen deadlocks. You have to make sure that all cases are checked for and that you are locking at the appropriate times and places. Maybe these few videos will help you understand a bit more:

  • Processes, Synchronization & Deadlock
  • LockFreeQueue

Pseudocode: it might look like this:

condition_variable cvAdd;
condition_variable cvTake;
mutex consumerMutex;
mutex producerMutex;

ProducerThread(){
    while( has something to produce ) {    
         unique_lock<mutex> lock(producerMutex);
         while(buffer is full){
            cvAdd.wait(lock);
         }
         AddStuffToBuffer();
         cvTake.notify_one();
    }
}

ConsumerThread() {    
     while( should be taking data ) {    
        unique_lock<mutex> lock(consumerMutex);
        while( buffer is empty ){
            cvTake.wait(lock);
        }   
        TakeStuffFromBuffer();
        if(BufferIsEmpty)
            cvAdd.notify_one();
     }    
}

The only difference here is that two separate mutexes are used instead of both producer and consumer trying to use the same one. It is the memory that is being shared, but you don't want to share the counters or pointers into the memory pool between the two. It is okay for multiple producers to use the same mutex, and it is okay for multiple consumers to use the same mutex, but having both consumers and producers use the same mutex might be your underlying issue.



Source: https://stackoverflow.com/questions/49640527/two-waiting-threads-producer-consumer-with-a-shared-buffer
