Why is std::mutex taking a long, highly irregular amount of time to be shared?

Submitted by 被刻印的时光 ゝ on 2019-12-30 11:03:50

Question


This code demonstrates that the mutex is being shared between two threads, but one thread has it nearly all of the time.

#include <thread>
#include <mutex>
#include <iostream>

#include <unistd.h>

int main ()
{
    std::mutex m;

    std::thread t ([&] ()
    {
        while (true)
        {
            {
                std::lock_guard <std::mutex> thread_lock (m);

                sleep (1); // or whatever
            }
            std::cerr << "#";
            std::cerr.flush ();
        }
    });

    while (true)
    {
        std::lock_guard <std::mutex> main_lock (m);
        std::cerr << ".";
        std::cerr.flush ();
    }
}

Compiled with g++ 7.3.0 on Ubuntu 18.04 (kernel 4.15.0-23-generic).

The output is a mix of both # and . characters, showing that the mutex is being shared, but the pattern is surprising. Typically something like this:

.......#####..........................##################......................##

That is, thread_lock holds the mutex for a very long time. After several seconds, or even tens of seconds, main_lock briefly gets control, then thread_lock takes it back and keeps it for ages. Calling std::this_thread::yield() makes no difference.

Why are the two threads not equally likely to acquire the lock, and how can I make the mutex be shared in a more balanced fashion?


Answer 1:


std::mutex isn't designed to be fair. It makes no guarantee about the order in which waiting threads acquire the lock; a given thread is simply either lucky enough to get it or not.

If you want more fairness, consider using a std::condition_variable, like so:

#include <thread>
#include <mutex>
#include <iostream>
#include <condition_variable>

#include <unistd.h>

int main ()
{
    std::mutex m;
    std::condition_variable cv;
    std::thread t ([&] ()
    {
        while (true)
        {
            std::unique_lock<std::mutex> lk(m);
            std::cerr << "#";
            std::cerr.flush ();
            cv.notify_one();
            cv.wait(lk);
        }
    });

    while (true)
    {
        std::unique_lock<std::mutex> lk(m);
        std::cerr << ".";
        std::cerr.flush ();
        cv.notify_one();
        cv.wait(lk);
    }
}
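
With this change the two threads take turns: each one prints its character, wakes the other with notify_one, and then blocks in cv.wait until it is woken in turn, so the output alternates roughly like .#.#.#. Since the waits take no predicate, a spurious wakeup can occasionally produce a doubled character, but the long runs from the original program disappear.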



Answer 2:


Making std::mutex fair would have a cost. And in C++ you don't pay for what you don't ask for.

You could write a locking object in which the thread releasing the lock cannot be the next one to acquire it. More advanced still, you could make that restriction apply only when someone else is actually waiting.

Here is a quick, untested stab at a fair mutex:

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>

struct fair_mutex {
  void lock() {
    auto l = internal_lock();
    lock(l);
  }
  void unlock() {
    auto l = internal_lock();
    in_use = false;
    if (waiting != 0) {
      // someone else is waiting, so this thread may not be the next to get the lock
      loser = std::this_thread::get_id();
    } else {
      loser = {};
    }
    cv.notify_one();
  }
  bool try_lock() {
    auto l = internal_lock();
    // never block: fail if the lock is held, or if this thread just released it
    // while someone else was waiting (i.e. it is the current "loser")
    if (in_use || std::this_thread::get_id() == loser) return false;
    in_use = true;
    return true;
  }
private:
  void lock(std::unique_lock<std::mutex>&l) {
    ++waiting;
    cv.wait( l, [&]{ return !in_use && std::this_thread::get_id() != loser; } );
    in_use = true;
    --waiting;
  }
  std::unique_lock<std::mutex> internal_lock() const {
    return std::unique_lock<std::mutex>(m);
  }
  mutable std::mutex m;            // protects the state below
  std::condition_variable cv;      // signalled on every unlock
  std::thread::id loser;           // thread that may not take the lock next
  bool in_use = false;             // whether the fair_mutex is currently held
  std::size_t waiting = 0;         // number of threads blocked in lock()
};

it is "fair" in that if you have two threads contending over a resource, they will take turns. If someone is waiting on a lock, anyone giving up the lock won't grab it again.

This is, however, threading code: I can read it over, but I wouldn't trust a first attempt at writing something like this without testing it.

You could extend this (at increasing cost) to be n-way fair (or even omega-fair), where if up to N threads are waiting, they all get their turn before the releasing thread gets another chance.
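
To connect it back to the question, here is a minimal, untested sketch (same caveats as the fair_mutex itself) of how the original program might use it in place of std::mutex. std::lock_guard works with it because fair_mutex provides lock() and unlock():

#include <thread>
#include <iostream>

#include <unistd.h>

// paste the fair_mutex definition (with its includes) from above here

int main ()
{
    fair_mutex m;

    std::thread t ([&] ()
    {
        while (true)
        {
            {
                std::lock_guard <fair_mutex> thread_lock (m);

                sleep (1); // still holds the lock for a second at a time
            }
            std::cerr << "#";
            std::cerr.flush ();
        }
    });

    while (true)
    {
        std::lock_guard <fair_mutex> main_lock (m);
        std::cerr << ".";
        std::cerr.flush ();
    }
}

Because the releasing thread is recorded as the "loser" whenever someone else is waiting, the main loop can no longer reacquire the lock immediately after dropping it, so the output should alternate between . and # instead of coming in long runs.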



Source: https://stackoverflow.com/questions/53068526/why-is-stdmutex-taking-a-long-highly-irregular-amount-of-time-to-be-shared
