spinlock

Why does Linux disable kernel preemption while kernel code holds a spinlock?

时间秒杀一切 posted on 2019-12-05 00:52:59
Question: I am new to Linux and am reading the Linux Device Drivers book by Rubini & Corbet. I am confused by one statement related to spinlocks; the book states: "If a nonpreemptive uniprocessor system ever went into a spin on a lock, it would spin forever; no other thread would ever be able to obtain the CPU to release the lock. For this reason, spinlock operations on uniprocessor systems without preemption enabled are optimized to do nothing, with the exception of the ones that change the IRQ masking status." Further, the book states: "The kernel preemption case is handled by the spinlock code itself." …

Spinlock vs Busy wait

一曲冷凌霜 posted on 2019-12-04 17:53:29
Question: Please explain why busy waiting is generally frowned upon, whereas spinning is often seen as okay. As far as I can tell, they both loop infinitely until some condition is met. Answer 1: A spinlock is usually used when there is low contention for the resource, so the CPU will only make a few iterations before it can move on to do productive work. However, library implementations of locking functionality often use a spinlock followed by a regular lock. The regular lock is used if the …

Is a memory barrier or atomic operation required in a busy-wait loop?

▼魔方 西西 posted on 2019-12-04 16:37:21
Question: Consider the following spin_lock() implementation, originally from this answer:

    void spin_lock(volatile bool* lock) {
        for (;;) {
            // inserts an acquire memory barrier and a compiler barrier
            if (!__atomic_test_and_set(lock, __ATOMIC_ACQUIRE))
                return;
            while (*lock)  // no barriers; is it OK?
                cpu_relax();
        }
    }

What I already know: volatile prevents the compiler from optimizing out the *lock re-read on each iteration of the while loop; volatile inserts neither memory nor compiler barriers; such an implementation actually works in GCC for x86 (e.g. in the Linux kernel) and some other architectures; at least one …

Alternative to spinlock

拜拜、爱过 posted on 2019-12-04 16:01:32
Question: I am using the following spinlock approach:

    while (!hasPerformedAction()) {
        // wait for the user to perform the action
        // can add a timer here too
    }
    setHasPerformedAction(false);
    return getActionPerformed();

This basically waits for a user to perform an action and then returns it. Currently something requests an answer from the user before continuing, which is why I wait until input is received. However, I was wondering whether this is inefficient, and if we are waiting for a while (i.e. <= 30 secs), will it …

Why does this code deadlock?

烈酒焚心 posted on 2019-12-04 14:38:59
I created 2 Linux kernel threads in my loadable module and bound them to separate CPU cores on a dual-core Android device. After running this a few times, I noticed that the device reboots with a hardware watchdog timer reset. I hit the issue consistently. What could be causing the deadlock? Basically, what I need to do is make sure both threads run do_something() at the same time on different cores, without anything stealing CPU cycles (i.e. interrupts are disabled). I am using a spinlock and a volatile variable for this. I also have a semaphore for the parent thread to wait on the child thread.

Cross-platform and cross-process atomic int writes on file

删除回忆录丶 posted on 2019-12-03 22:22:59
Question: I'm writing an application that will have to handle many concurrent accesses, by threads as well as by processes, so no mutexes or locks should be applied to this. To keep the use of locks to a minimum, I'm designing the file to be "append-only": all data is first appended to disk, and then the address pointing to the info it has updated is changed to refer to the new one. So I will need to implement a small lock system only to change this one int so it refers …

onSpinWait() method of the Thread class

南笙酒味 posted on 2019-12-03 22:08:56
While learning Java 9 I came across a new method of the Thread class, called onSpinWait. According to the javadocs, this method "indicates that the caller is momentarily unable to progress, until the occurrence of one or more actions on the part of other activities." Can someone help me understand this method with a real-life example? Answer 1: It is the same as (and probably compiles to) the x86 PAUSE instruction, and is equivalent to the Win32 macro YieldProcessor, the _mm_pause() intrinsic, and the C# method Thread.SpinWait. It is a very weakened form of yielding: it tells your CPU that you are in a loop that …

Is there any simple way to improve performance of this spinlock function?

久未见 posted on 2019-12-03 21:02:06
I'm trying to implement a spinlock, but the version I implemented based on Wikipedia results in extremely slow performance:

    int lockValue = 0;

    void lock() {
        __asm__("loop:                 \n\t"
                "movl  $1, %eax        \n\t"
                "xchg  %eax, lockValue \n\t"
                "test  %eax, %eax      \n\t"
                "jnz   loop");
    }

Is there any way of improving this to make it faster? Thanks. Answer 1: How about something like this (I understand this is the KeAcquireSpinLock implementation); my AT&T assembly is weak, unfortunately:

    spin_lock:
        rep; nop             ; i.e. pause
        test  lockValue, 1
        jnz   spin_lock
        lock  bts lockValue
        jc    spin_lock

…
