mutex

Mutex locks - where the sets could have been built by merging

…衆ロ難τιáo~ Submitted on 2019-12-11 09:30:11
Question: From here: https://stackoverflow.com/a/5524120/462608 If you want to lock several mutex-protected objects from a set of such objects, where the sets could have been built by merging, you can choose to use exactly one mutex per object, allowing more threads to work in parallel; or to use per object one reference to a possibly shared recursive mutex, to lower the probability of failing to lock all mutexes together; or to use per object one comparable reference to a possibly shared non…

Mutual exclusion thread locking, with dropping of queued functions upon mutex/lock release, in Python?

佐手、 Submitted on 2019-12-11 09:07:47
Question: This is the problem I have: I'm using Python 2.7, and I have code which runs in a thread, with a critical region that only one thread should execute at a time. That code currently has no mutex mechanism, so I wanted to ask what I could use for my specific use case, which involves "dropping" of "queued" functions. I've tried to simulate that behavior with the following minimal working example: useThreading = False # True if useThreading: from threading import Thread, Lock else: …

Avoid taking a long time to finish the 'too much milk' scenario

三世轮回 Submitted on 2019-12-11 08:52:52
Question: The following is a simple solution to the 'too much milk' problem: lock mutex; while (1) { lock_acquire(mutex); if (no milk) go and buy milk; // action-1 lock_release(mutex); } The problem is that action-1 can take a long time to complete, forcing any process waiting to acquire the mutex to wait a long time too. One way to avoid this is to have a timer, so that the process buying milk returns, with or without milk, once the timer goes off. As you can see, there are problems with…

Are “benaphores” worth implementing on modern OS's?

。_饼干妹妹 Submitted on 2019-12-11 08:49:46
Question: Back in my days as a BeOS programmer, I read this article by Benoit Schillings describing how to create a "benaphore": a method of using an atomic variable to enforce a critical section that avoids the need to acquire/release a mutex in the common (no-contention) case. I thought that was rather clever, and it seems like you could do the same trick on any platform that supports atomic increment/decrement. On the other hand, this looks like something that could just as easily be included in the…

mutable boost::mutex: is it possible to separate lock and wait functions?

…衆ロ難τιáo~ Submitted on 2019-12-11 08:37:24
Question: So I have functions like read that can be called at the same time from multiple threads, but I also have a write function that needs to lock out all those read functions. Where can I find an example of creating such an architecture? I get that we can have: mutable boost::mutex the_read_mutex; mutable boost::mutex the_write_mutex; and: void write() { // make all new readers wait and wait for all other currently running read threads(); } void read() { // do not make all new readers wait, and wait for all…

Pthread mutex per thread group

我们两清 Submitted on 2019-12-11 07:55:52
Question: I am looking for the right way to protect a thread group as I normally would a single thread, that is: threads 1 and 2, either or both, can lock mutex M at the same time, and neither 1 nor 2 is put to sleep. Mutex M stands against thread 3. Thus, if thread 3 locks the mutex while it's held by thread 1 or 2 or both, then thread 3 IS put to sleep. If thread 1 or 2 locks the mutex while it's held by thread 3, then 1 or 2 (whichever is locking it) is also put to sleep until 3 releases…

Why does Monitor.Pulse need a locked mutex? (.NET)

妖精的绣舞 Submitted on 2019-12-11 07:27:56
Question: Monitor.Pulse and PulseAll require that the lock they operate on is held at the time of the call. This requirement seems unnecessary and detrimental to performance. My first idea was that this results in 2 wasted context switches, but this was corrected by nobugz below (thanks). I am still unsure whether it involves a potential for wasted context switches, as the other thread(s) that were waiting on the monitor are already available to the scheduler, but if they are scheduled, they will only…

Buffer in Rust with Mutex and Condvar

怎甘沉沦 Submitted on 2019-12-11 06:35:08
Question: I'm trying to implement a buffer with a single consumer and a single producer. I have only used POSIX semaphores; however, they're not available in Rust, and I'm trying to implement a trivial semaphore problem with Rust sync primitives ( Mutex , Condvar , Barrier , ...) but I don't want to use channels. My code behaves irregularly: some runs go well, other times it just stops at some number, and in other cases it never starts counting. Things appear to work better if I…

boost::details::pool::pthread_mutex and boost::details::pool::null_mutex

笑着哭i Submitted on 2019-12-11 06:19:38
Question: What is the difference between boost::details::pool::pthread_mutex and boost::details::pool::null_mutex ? I see that in the latest Boost version (1.42) the class boost::details::pool::pthread_mutex was deleted. What should I use instead? Answer 1: boost::details::pool::null_mutex is a mutex that does nothing (a lock always succeeds immediately). It's appropriate when you're not using threads. The Boost pool library selects what kind of mutex it will use to synchronize access to critical sections with…

Access std::deque from 3 threads

核能气质少年 Submitted on 2019-12-11 06:16:44
Question: I have a buffer of type std::deque. There is one thread to write into it, another to read from it, and a last one to handle some conditions for deciding which item in the buffer to forward. I just want to access this buffer safely from the 3 threads. Yup, I'm a complete beginner :-) I created a mutex, and every time I access the buffer I wrap the access with myMutex.lock(); // access here myMutex.unlock(); Also, I used std::thread myThread(this, &fn) to create the threads. And I call this_thread:…