spinlock

Does acquiring a spinlock require compare-and-swap or is swap enough?

Submitted by 亡梦爱人 on 2019-12-24 07:37:08
Question: Suppose we have a spinlock implementation:

    struct Lock { locked : Atomic(bool), }

Then an unlock function could be:

    fun unlock(lock : &Lock) {
        atomic_store(&lock.locked, false, release);
    }

But what about lock? Commonly, it uses a compare-and-swap like this:

    fun lock(lock : &Lock) {
        while atomic_compare_and_swap(&lock.locked, false, true, acquire) {}
    }

But wouldn't a swap be enough? Something like this:

    fun lock(lock : &Lock) {
        while atomic_swap(&lock.locked, true, acquire) {}
    }

Is …
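
A minimal C11 sketch of the swap-based variant the question proposes (the names Lock/lock/unlock are just illustrative): an atomic exchange is enough for mutual exclusion here, because the loop only exits when the old value read back is false, i.e. when this thread is the one that flipped the flag.

    #include <stdatomic.h>
    #include <stdbool.h>

    struct Lock { atomic_bool locked; };

    void lock(struct Lock *l) {
        /* Swap in `true`; if the old value was already true, someone
         * else holds the lock, so keep spinning. */
        while (atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
            ;
    }

    void unlock(struct Lock *l) {
        atomic_store_explicit(&l->locked, false, memory_order_release);
    }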

What is the minimum x86 assembly needed for a spinlock?

Submitted by 可紊 on 2019-12-22 18:39:08
Question: To implement a spinlock in assembly. Here I post a solution I came up with. Is it correct? Do you know a shorter one?

    lock:
        mov ecx, 0
    .loop:
        xchg [eax], ecx
        cmp ecx, 0
        je .loop

    release:
        lock dec dword [eax]

eax is initialized to -1 (which means the lock is free). This should work for many threads (not necessarily 2).

Answer 1: Shortest would probably be:

    acquire:
        lock bts [eax], 0
        jc acquire

    release:
        mov [eax], 0

For performance, it's best to use a "test, test and set" approach, and use pause, like …
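
A hedged C sketch (my own, not from the answer) of the "test, test and set" idea it mentions, using GCC builtins and the x86 pause hint: only attempt the atomic exchange once a plain read has seen the lock free, and pause inside the read-only loop to reduce bus traffic and be friendly to a sibling hyper-thread.

    #include <immintrin.h>   /* _mm_pause() */

    static volatile int lock_word;   /* 0 = free, 1 = held (illustrative) */

    static void ttas_lock(void)
    {
        for (;;) {
            while (lock_word)        /* read-only spin until it looks free */
                _mm_pause();         /* cheap spin-wait hint */
            if (!__sync_lock_test_and_set(&lock_word, 1))
                return;              /* exchange succeeded: old value was 0 */
        }
    }

    static void ttas_unlock(void)
    {
        __sync_lock_release(&lock_word);   /* stores 0 with release semantics */
    }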

How to implement a spinlock to avoid blocking

Submitted by 我与影子孤独终老i on 2019-12-20 04:36:58
Question: Consider the following code:

    // Below block executed by thread t1
    synchronized(obj) { obj.wait(0); }

    // This block executed by thread t2
    synchronized(obj) { obj.notify(); }

I understand that in the above code, if t1 has taken ownership of the synchronized block and thread t2 tries to enter it at the same time, then t2 goes into a kernel wait. I want to avoid this situation and have t2 spin before the block until t1 calls wait and gives up ownership. Is that possible?

Answer 1: The …
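
The question is about Java monitors, but since the rest of this page is C, here is a hedged C analogue (a sketch, not from the answer) of the "spin a little before blocking" idea: bounded spinning with pthread_mutex_trylock before falling back to the blocking acquire. In Java itself a similar effect is usually obtained with ReentrantLock.tryLock() in a loop.

    #include <pthread.h>

    /* Arbitrary tuning knob: how many non-blocking attempts before we
     * give up and take the possibly kernel-blocking slow path. */
    #define SPIN_LIMIT 1000

    static void spin_then_lock(pthread_mutex_t *m)
    {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (pthread_mutex_trylock(m) == 0)
                return;              /* acquired without ever blocking */
            /* in practice a cpu pause/relax hint would go here */
        }
        pthread_mutex_lock(m);       /* spinning failed; block normally */
    }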

Is my spin lock implementation correct and optimal?

Submitted by ぃ、小莉子 on 2019-12-17 23:21:00
Question: I'm using a spin lock to protect a very small critical section. Contention happens very rarely, so a spin lock is more appropriate than a regular mutex. My current code is as follows, and assumes x86 and GCC:

    volatile int exclusion = 0;

    void lock() {
        while (__sync_lock_test_and_set(&exclusion, 1)) {
            // Do nothing. This GCC builtin instruction
            // ensures memory barrier.
        }
    }

    void unlock() {
        __sync_synchronize(); // Memory barrier.
        exclusion = 0;
    }

So I'm wondering: Is this code correct? Does it …
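
For comparison (not taken from the question or its answers), the GCC documentation pairs __sync_lock_test_and_set with __sync_lock_release, which writes 0 with release semantics and avoids hand-rolling the barrier in unlock. A sketch of that variant:

    volatile int exclusion = 0;

    void lock(void) {
        /* Acquire barrier: later loads/stores cannot move above this. */
        while (__sync_lock_test_and_set(&exclusion, 1))
            ;   /* spin */
    }

    void unlock(void) {
        /* Release barrier plus store of 0, replacing the explicit
         * __sync_synchronize(); exclusion = 0; pair above. */
        __sync_lock_release(&exclusion);
    }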

Linux Kernel: Spinlock SMP: Why is there a preempt_disable() in the SMP version of spin_lock_irq?

Submitted by ぃ、小莉子 on 2019-12-14 00:30:38
Question: The original code in the Linux kernel is:

    static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
    {
        local_irq_disable();
        preempt_disable();
        spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
        LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
    }

I think no execution path can preempt the current path after local IRQs are disabled. Because all common hard IRQs are disabled, no softirq should occur, and there is no tick to drive the scheduler. So I think the current path is safe. So why …
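
One way to see why the call is still needed (my own illustration, a toy model rather than kernel code): preempt_disable()/preempt_enable() maintain a nesting counter, and the unlock path decrements it unconditionally, so the lock path must increment it even when IRQs are already off, or the count would go out of balance across a lock/unlock pair.

    /* Toy model of the preempt count (the real one is per-CPU and kept
     * in thread/per-CPU data).  It only illustrates the balancing issue. */
    static int toy_preempt_count;

    static void toy_preempt_disable(void) { toy_preempt_count++; }

    static void toy_preempt_enable(void)
    {
        if (--toy_preempt_count == 0) {
            /* reschedule point: a pending preemption may happen here */
        }
    }

    /* If the lock path skipped toy_preempt_disable() but the unlock path
     * still called toy_preempt_enable(), the count would go negative
     * after a single lock/unlock pair. */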

Is returning while holding a spinlock automatically unsafe?

Submitted by 我的未来我决定 on 2019-12-13 14:59:27
Question: The venerated book Linux Driver Development says: "The flags argument passed to spin_unlock_irqrestore must be the same variable passed to spin_lock_irqsave. You must also call spin_lock_irqsave and spin_unlock_irqrestore in the same function; otherwise your code may break on some architectures." Yet I can't find any such restriction in the official documentation bundled with the kernel source itself, and I find driver code that violates this guidance. Obviously it isn't a good idea …
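
For reference, a minimal sketch of the pattern the book is recommending (the device structure and function names are hypothetical): flags is a local unsigned long, and both the save and the restore happen in the same function.

    #include <linux/spinlock.h>

    struct my_dev {                 /* hypothetical driver state */
        spinlock_t lock;
        int counter;
    };

    static void my_dev_poke(struct my_dev *dev)
    {
        unsigned long flags;

        spin_lock_irqsave(&dev->lock, flags);      /* saves local IRQ state */
        dev->counter++;                             /* critical section */
        spin_unlock_irqrestore(&dev->lock, flags);  /* restores that state */
    }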

Linux Kernel - Can I lock and unlock Spinlock in different functions?

Submitted by 梦想与她 on 2019-12-12 14:25:01
Question: I'm new to kernel programming and programming with locks. Is it safe to lock and unlock a spinlock in different functions? I am doing this to synchronize the code flow. Also, is it safe to use a spinlock (lock & unlock) in __schedule()? Is it safe to keep the scheduler waiting to acquire a lock? Thanks in advance.

Answer 1: Instead of a spinlock, you can use a semaphore or a mutex. You should lock and unlock a spinlock in the same function, around the smallest possible set of operations.

Answer 2: A good reason for NOT using …
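
A hedged sketch (not from the answers) of what the split looks like if you do it anyway: the kernel tolerates lock and unlock in different functions as long as nothing in between sleeps and every path reaches the unlock, and sparse's __acquires/__releases annotations can document the contract.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(state_lock);

    static void begin_update(void)
        __acquires(&state_lock)
    {
        spin_lock(&state_lock);      /* caller now owns the lock */
    }

    static void end_update(void)
        __releases(&state_lock)
    {
        spin_unlock(&state_lock);    /* must be reached on every path */
    }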

How does a spinlock prevent the process from being interrupted?

Submitted by 柔情痞子 on 2019-12-12 01:21:57
Question: I read an answer on this site saying that a spin-lock reduces the overhead of context switches, and after that I read a related textbook statement: "A spin-lock makes a busy-waiting program not be interrupted." My question is in the title. Since the book uses a while-loop to illustrate the spinning part of a spin-lock, the following is my reasoning as I try to explain this to myself. It sounds as if, when a program has a busy-waiting while loop, then all …
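
For concreteness (my own sketch, not from the question), this is all that the "spin" part amounts to; nothing in the loop itself disables interrupts, so whether the spinning code can be interrupted depends on what else the lock implementation does around it (the kernel's _irq lock variants disable local interrupts, a plain busy-wait loop does not).

    /* Plain busy-wait acquire: the waiting CPU keeps executing this loop
     * instead of being descheduled.  The loop alone does not prevent
     * interrupts from occurring on that CPU. */
    static volatile int lock_word;

    static void busy_wait_lock(void)
    {
        while (__sync_lock_test_and_set(&lock_word, 1))
            ;   /* spin until the holder stores 0 */
    }

    static void busy_wait_unlock(void)
    {
        __sync_lock_release(&lock_word);
    }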

Slow communication using shared memory between user mode and kernel

Submitted by 不打扰是莪最后的温柔 on 2019-12-11 19:35:37
Question: I am running a thread in the Windows kernel communicating with an application over shared memory. Everything is working fine except that the communication is slow due to a Sleep loop. I have been investigating spin locks, mutexes and interlocked operations but can't really figure this one out. I have also considered Windows events but don't know about their performance. Please advise on a faster solution that keeps the communication over shared memory, possibly using Windows events.
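
A hedged sketch (not from the question; the IOCTL code and names are hypothetical) of the event-based alternative to the Sleep() polling loop, shown from the user-mode side: the application waits on an event instead of polling, and the kernel thread would signal it (typically by referencing the handle passed down via an IOCTL with ObReferenceObjectByHandle and calling KeSetEvent) whenever new data lands in the shared section.

    #include <windows.h>

    /* Hypothetical IOCTL used to hand the event handle to the driver. */
    #define IOCTL_REGISTER_EVENT \
        CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_BUFFERED, FILE_ANY_ACCESS)

    void consume_loop(HANDLE hDevice, void *shared_mem)
    {
        /* Auto-reset event: one signal wakes one waiter. */
        HANDLE hDataReady = CreateEventW(NULL, FALSE, FALSE, NULL);
        DWORD bytes;

        /* Pass the event handle to the driver. */
        DeviceIoControl(hDevice, IOCTL_REGISTER_EVENT,
                        &hDataReady, sizeof(hDataReady),
                        NULL, 0, &bytes, NULL);

        for (;;) {
            WaitForSingleObject(hDataReady, INFINITE);  /* no busy polling */
            /* read the freshly written data from shared_mem here */
        }
    }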

Synchronisation between process context and timer function

Submitted by 安稳与你 on 2019-12-11 04:42:07
Question: I want to update a data structure atomically in both process context (in the queuecommand function, to be specific) and in a timer function. In process context, should I use spin_lock_bh, spin_lock_irq, or just spin_lock? As per my understanding, we should use spin_lock_bh in queuecommand (process context) and just spin_lock in the timer function. Am I correct?

Answer 1: If I understand correctly, it is about timer_list (bottom-half context). Then your assumption is correct: yes, it would be sufficient to use …
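
A hedged sketch of the pattern the question and answer describe (structure and function names are made up): process context takes the _bh variant so the timer softirq cannot run on the same CPU while the lock is held, and the timer callback itself can use plain spin_lock because softirqs do not nest on one CPU.

    #include <linux/spinlock.h>
    #include <linux/timer.h>

    static DEFINE_SPINLOCK(data_lock);
    static int shared_counter;               /* the shared data structure */

    /* Process context, e.g. called from queuecommand. */
    static void update_from_process_context(void)
    {
        spin_lock_bh(&data_lock);            /* also blocks local softirqs */
        shared_counter++;
        spin_unlock_bh(&data_lock);
    }

    /* Timer callback: runs in softirq (bottom-half) context. */
    static void my_timer_fn(struct timer_list *t)
    {
        spin_lock(&data_lock);               /* plain lock is enough here */
        shared_counter--;
        spin_unlock(&data_lock);
    }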