Is there a way to ensure that blocked threads get woken up in the same order as they got blocked? I read somewhere that this would be called a "strong lock" but I found no resources on that.
On Mac OS X one can design a FIFO queue that stores all the thread ids of the blocked threads and then use the nifty function pthread_cond_signal_thread_np() to wake up one specific thread, which is obviously non-standard and non-portable.
One way I can think of is to use a similar queue and, at the unlock() point, send a broadcast() to all threads and have each one check whether it is the next in line. But this would induce a lot of overhead.
A way around the problem would be to issue packaged_tasks to a queue and have it process them in order. But that seems more like a workaround to me than a solution.
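To sketch what I mean (the serial_executor name and submit() interface are made up for illustration, not from any library): a single worker thread drains the task queue, so tasks run strictly in the order they were submitted.

#include <condition_variable>
#include <deque>
#include <functional>
#include <future>
#include <mutex>
#include <thread>

// Illustrative only: one worker thread drains a FIFO queue of
// packaged_tasks, so work executes strictly in submission order.
class serial_executor {
    std::mutex m;
    std::condition_variable cv;
    std::deque<std::packaged_task<void()>> tasks;
    bool done = false;
    std::thread worker; // declared last so the members above are ready first
public:
    serial_executor() : worker([this] {
        for (;;) {
            std::packaged_task<void()> task;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return done || !tasks.empty(); });
                if (tasks.empty())
                    return; // done and fully drained
                task = std::move(tasks.front());
                tasks.pop_front();
            }
            task(); // run outside the lock, in FIFO order
        }
    }) {}

    std::future<void> submit(std::function<void()> f) {
        std::packaged_task<void()> task(std::move(f));
        auto fut = task.get_future();
        {
            std::lock_guard<std::mutex> lock(m);
            tasks.push_back(std::move(task));
        }
        cv.notify_one();
        return fut;
    }

    ~serial_executor() {
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
        worker.join(); // drains remaining tasks, then exits
    }
};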
Edit:
As pointed out in the comments, this question may sound moot, since there is in principle no guaranteed ordering of locking attempts.
As a clarification:
I have something I call a ConditionLockQueue which is very similar to the NSConditionLock class in the Cocoa library, but it maintains a FIFO queue of blocked threads instead of a more-or-less random pool.
Essentially any thread can "line up" (with or without the requirement of a specific 'condition' - a simple integer value - to be met). The thread is then placed on the queue and blocks until it is the frontmost element in the queue whose condition is met.
This provides a very flexible way of synchronization and I have found it very helpful in my program.
Now what I really need is a way to wake up a specific thread with a specific id. But the two problems are nearly the same.
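In sketch form, the idea looks roughly like this (a simplified interpretation, not my actual implementation; I'm reading "frontmost element whose condition is met" as: the first thread in line whose condition is satisfied gets the lock once it is free):

#include <algorithm>
#include <condition_variable>
#include <deque>
#include <mutex>

class ConditionLockQueue {
    struct waiter { bool any; int cond; }; // any == true means "no condition required"
    std::mutex m;
    std::condition_variable cv; // shared; per-waiter predicates do the ordering
    std::deque<waiter*> q;      // FIFO of waiting threads
    bool locked = false;
    int value = 0;              // the current 'condition' value

    bool eligible(const waiter *w) const {
        return w->any || w->cond == value;
    }
    void acquire(waiter &w) {
        std::unique_lock<std::mutex> lock(m);
        q.push_back(&w);
        cv.wait(lock, [&] {
            if (locked || !eligible(&w)) return false;
            for (waiter *x : q) {          // is anyone eligible ahead of us?
                if (x == &w) return true;
                if (eligible(x)) return false;
            }
            return false;
        });
        q.erase(std::find(q.begin(), q.end(), &w));
        locked = true;
    }
public:
    void lock()           { waiter w{true, 0};  acquire(w); } // line up unconditionally
    void lock_when(int c) { waiter w{false, c}; acquire(w); } // line up until 'c' holds
    void unlock() {
        { std::lock_guard<std::mutex> g(m); locked = false; }
        cv.notify_all(); // everyone re-checks; the predicate enforces FIFO
    }
    void unlock_with(int c) { // release and publish a new condition value
        { std::lock_guard<std::mutex> g(m); locked = false; value = c; }
        cv.notify_all();
    }
};

The shared condition variable keeps the sketch short at the cost of waking every waiter on each unlock; per-thread condition variables would avoid that thundering herd.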
Building on @Chris Dodd's answer, here is a version using a queue of condition variables instead of tickets:
#include <cstddef>
#include <deque>
#include <mutex>
#include <condition_variable>

class ordered_lock {
    std::mutex q_lock;
    std::deque<std::condition_variable*> q; // FIFO of waiting threads
    std::condition_variable q_empty;        // signalled when a waiter arrives
public:
    void lock() {
        std::unique_lock<std::mutex> acquire(q_lock);
        std::condition_variable cv;         // each thread waits on its own cv
        q.push_back(&cv);
        q_empty.notify_one();
        cv.wait(acquire);                   // note: no predicate, so spurious wakeups are not handled
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(q_lock);
        while (q.size() == 0)               // wait until at least one thread is queued
            q_empty.wait(acquire);
        std::condition_variable *cv = q.front();
        q.pop_front();
        cv->notify_one();                   // wake exactly the front thread
    }
    std::size_t size() {
        std::unique_lock<std::mutex> acquire(q_lock);
        return q.size();
    }
};
This lets each thread wait on its own condition variable. Since only one thread ever waits on any given condition variable, it is easy to pick a specific thread to wake up. I don't know whether this scales, though, or whether there is a limit on the maximum number of condition variables in a single program.
To address the issues raised, I wrote a small test program:
#include <thread>

ordered_lock g;

void t()
{
    g.lock();
}

int main(int argc, char **argv)
{
    std::thread t1(t);
    std::thread t2(t);
    std::thread t3(t);
    while (g.size() < 3) {
        // busy waiting for the threads to lock in any order
    }
    while (g.size() > 0) {
        // unlock threads in lock order
        g.unlock();
    }
    t1.join();
    t2.join();
    t3.join();
    return 0;
}
With a few debug print statements in lock(), everybody can see that it does not work, although not for the reasons stated above.
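For reference, the instrumentation was along these lines (reconstructed; the exact prints are my guess, with the wakeup print equivalent to one at the end of lock()):

#include <iostream>
#include <thread>

void t(const char *name)
{
    g.lock();
    std::cout << name << ": wakeup\n"; // printed right after lock() returns
}

int main()
{
    std::thread t1(t, "t1"), t2(t, "t2"), t3(t, "t3");
    while (g.size() < 3) { }           // busy-wait until all three threads are queued
    while (g.size() > 0) {
        std::cout << "main: g.unlock()\n";
        g.unlock();
    }
    t1.join(); t2.join(); t3.join();
}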
It does not work because of OS scheduling policies. Assuming lock order t1, t2, t3, it might (and does) happen that the main thread unlocks t1 and t2, but then t2 is rescheduled first, and so on:
main: g.unlock() // t1
main: g.unlock() // t2
t2: wakeup
main: g.unlock() // t3
t1: wakeup
t3: wakeup
or in any other order, including the desired one:
main: g.unlock() // t1
t1: wakeup
main: g.unlock() // t2
t2: wakeup
main: g.unlock() // t3
t3: wakeup
Fortunately, we can fix ordered_lock and force the desired order by moving q.pop_front() into the lock() method:
void lock() {
    std::unique_lock<std::mutex> acquire(q_lock);
    std::condition_variable cv;
    q.push_back(&cv);
    q_empty.notify_one();
    cv.wait(acquire);
    q.pop_front(); // the woken thread removes itself from the front
}
void unlock() {
    std::unique_lock<std::mutex> acquire(q_lock);
    while (q.size() == 0)
        q_empty.wait(acquire);
    std::condition_variable *cv = q.front();
    cv->notify_one(); // the front entry stays queued until the woken thread pops it
}
Removing the condition variable from the queue inside lock() ensures that no other thread can be woken before the front thread is actually running. But this has other consequences, of course:
- Depending on the program, there might be many more unlocks than previous locks.
- Unless the work is done inside lock() (or before q.pop_front()), there might be a reschedule to another thread, which makes the "fix" moot.
You can take care of these issues too, but for a "simple" answer this is already too long.
It's pretty easy to build a lock object that uses numbered tickets to ensure that it's completely fair (the lock is granted in the order in which threads first tried to acquire it):
#include <mutex>
#include <condition_variable>

class ordered_lock {
    std::condition_variable cvar;
    std::mutex cvar_lock;
    unsigned int next_ticket, counter;
public:
    ordered_lock() : next_ticket(0), counter(0) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        unsigned int ticket = next_ticket++; // take the next ticket
        while (ticket != counter)            // wait until our number comes up
            cvar.wait(acquire);
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        counter++;         // admit the next ticket holder
        cvar.notify_all(); // everyone re-checks; only the matching ticket proceeds
    }
};
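Since this class provides lock() and unlock(), it satisfies the BasicLockable requirements, so it drops straight into std::lock_guard. A quick usage sketch (my example, not part of the original answer):

#include <iostream>
#include <thread>
#include <vector>

ordered_lock fair_lock; // the ticket lock from above
long counter = 0;       // protected by fair_lock

void worker() {
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<ordered_lock> guard(fair_lock); // BasicLockable is enough
        ++counter;
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(worker);
    for (auto &th : threads)
        th.join();
    std::cout << counter << '\n'; // always 4000
}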
Edit: to incorporate Olaf's suggestion:
#include <mutex>
#include <condition_variable>
#include <queue>

class ordered_lock {
    std::queue<std::condition_variable> cvar; // one condition variable per waiter
    std::mutex cvar_lock;
    bool locked;
public:
    ordered_lock() : locked(false) {}
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (locked) {
            cvar.emplace();            // join the back of the queue
            cvar.back().wait(acquire);
        } else {
            locked = true;             // lock was free, take it immediately
        }
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (cvar.empty()) {
            locked = false;
        } else {
            cvar.front().notify_one(); // hand the lock to the oldest waiter
            cvar.pop();
        }
    }
};
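One caveat with the queue version: wait() runs without a predicate, so a spurious wakeup could release a thread that is still in the middle of the queue. A guarded sketch (the waiter struct and ready flag are my additions, and it borrows the pop-inside-lock() idea from the other answer so the predicate never reads a destroyed element):

#include <mutex>
#include <condition_variable>
#include <queue>

class ordered_lock_guarded {
    struct waiter {
        std::condition_variable cv;
        bool ready = false;
    };
    std::queue<waiter> waiters;
    std::mutex cvar_lock;
    bool locked = false;
public:
    void lock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (!locked) {
            locked = true; // lock was free, take it immediately
            return;
        }
        waiters.emplace();
        waiter &w = waiters.back();
        w.cv.wait(acquire, [&] { return w.ready; }); // predicate defeats spurious wakeups
        waiters.pop(); // we are the front waiter; remove ourselves, lock stays held
    }
    void unlock() {
        std::unique_lock<std::mutex> acquire(cvar_lock);
        if (waiters.empty()) {
            locked = false;
        } else {
            waiters.front().ready = true;    // hand over without popping;
            waiters.front().cv.notify_one(); // the woken thread pops itself in lock()
        }
    }
};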
Are we asking the right questions in this thread? And if so, are they being answered correctly?
Or, put another way: have I completely misunderstood things here?
Edit: It seems StatementOnOrder (see below) is false. See link1 (C++ threads etc. under Linux are often based on pthreads) and link2 (mentions the current scheduling policy as the determining factor). Thanks to Cubbi from cppreference (ref). See also link, link, link, link. If the statement is false, then the method of pulling an atomic (!) ticket, as shown in the code below, is probably to be preferred.
Here goes...
StatementOnOrder: "Multiple threads that run into a locked mutex, and thus "go to sleep" in a particular order, will afterwards acquire ownership of the mutex and continue on in the same order."
Question: Is StatementOnOrder true or false?
#include <mutex>

std::mutex mut;

void myfunction() {
    std::lock_guard<std::mutex> lock(mut);
    // do something
    // ...
    // mutex automatically unlocked when leaving the function
}
I'm asking this because all the code examples on this page to date seem to be either:
a) a waste (if StatementOnOrder is true), or
b) seriously wrong (if StatementOnOrder is false).
So why do I say that they might be "seriously wrong" if StatementOnOrder is false? Because all of the code examples think they're being super-smart by utilizing std::condition_variable, but they actually take locks before that, which will (if StatementOnOrder is false) mess up the order. Just search this page for std::unique_lock<std::mutex> to see the irony.
So if StatementOnOrder really is false, you cannot run into a lock and then handle tickets and condition variables after that. Instead, you have to do something like this: pull an atomic ticket before running into any lock. Why pull a ticket before running into a lock? Because here we're assuming StatementOnOrder to be false, so any ordering has to be established before the "evil" lock.
#include <mutex>
#include <thread>
#include <limits>
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <map>

std::mutex mut;
std::atomic<unsigned> num_atomic{std::numeric_limits<decltype(num_atomic.load())>::max()};
unsigned num_next{0};
std::map<unsigned, std::condition_variable> mapp;

void function() {
    unsigned next = ++num_atomic; // pull an atomic ticket; wraps to 0 on the first call
    std::unique_lock<std::mutex> lock(mut);
    if (next != num_next) {
        // not our turn yet: park on a condition variable keyed by our ticket
        auto it = mapp.emplace(std::piecewise_construct,
                               std::forward_as_tuple(next),
                               std::forward_as_tuple()).first;
        it->second.wait(lock, [&] { return next == num_next; }); // predicate guards against spurious wakeups
        mapp.erase(it);
    }

    // THE FUNCTION'S INTENDED WORK IS NOW DONE
    // ...
    // ...
    // THE FUNCTION'S INTENDED WORK IS NOW FINISHED

    ++num_next;
    auto it = mapp.find(num_next); // this is not necessarily mapp.begin(), since wrap-around occurs on the unsigned
    if (it != mapp.end())
        it->second.notify_one();   // notify while still holding the lock, so the woken
                                   // thread cannot erase its map entry before we use it
}
The above function guarantees that execution proceeds in the order of the atomic tickets that are pulled. (Edit: using boost's intrusive map, and keeping the condition_variable on the stack as a local variable, would be a nice optimization here to reduce free-store usage; a sketch follows below.)
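Without pulling in boost, the same free-store saving can be sketched with a map of pointers to stack-resident condition variables (my variation on the code above, not a tested drop-in):

#include <atomic>
#include <condition_variable>
#include <limits>
#include <map>
#include <mutex>

std::mutex mut;
std::atomic<unsigned> num_atomic{std::numeric_limits<unsigned>::max()};
unsigned num_next{0};
std::map<unsigned, std::condition_variable*> mapp; // pointers to stack-resident cvs

void function() {
    unsigned next = ++num_atomic;  // pull an atomic ticket (wraps to 0 on first call)
    std::condition_variable cv;    // lives on this thread's stack, no free store
    std::unique_lock<std::mutex> lock(mut);
    if (next != num_next) {
        auto it = mapp.emplace(next, &cv).first;
        cv.wait(lock, [&] { return next == num_next; }); // guards spurious wakeups
        mapp.erase(it);
    }
    // ... the function's intended work ...
    ++num_next;
    auto it = mapp.find(num_next); // not necessarily begin(), because of wrap-around
    if (it != mapp.end())
        it->second->notify_one();  // notify while holding the lock, so the woken
                                   // thread cannot destroy its cv under us
}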
But the main question is:
Is StatementOnOrder true or false?
(If it is true, then my code example above is also a waste, and we can just use a mutex and be done with it.)
I wish somebody like Anthony Williams would check out this page... ;)
Source: https://stackoverflow.com/questions/14792016/creating-a-lock-that-preserves-the-order-of-locking-attempts-in-c11