atomic

Ring buffer with atomic indexes

半世苍凉 submitted on 2021-02-19 08:45:38
Question: I have struggled with what must be a fundamental misunderstanding of how atomics work in C++. I have written the code below to implement a fast ring buffer, using atomic variables for the indexes so multiple threads can write to and read from the buffer. I've whittled the code down to this simple case (which I realize is still a little long, sorry). If I run this on either Linux or Mac OS X, it works some of the time, but it also throws exceptions at least 10% of the time. It also seems …

Java Concurrent Programming Interview Questions (2020 Edition)

蹲街弑〆低调 submitted on 2021-02-19 03:59:37
Fundamentals. Pros and cons of concurrent programming. Why use concurrency (the advantages): it makes full use of multi-core CPU computing power, pushing performance to its limit; and it makes it convenient to decompose business logic, raising a system's concurrency and throughput. Some business scenarios are naturally suited to concurrency: modern systems routinely demand millions or even tens of millions of concurrent requests, and multithreaded programming is the foundation for building such high-concurrency systems; used well, threads can greatly improve a system's overall concurrency and performance. For complex business models, a parallel program also matches the requirements better than a serial one.

The disadvantages: the whole point of concurrency is to improve execution efficiency and speed, but it does not always make a program faster, and it can introduce many problems of its own, such as memory leaks, context-switching overhead, thread-safety bugs, and deadlock.

What are the three elements of concurrent programming, and how do you keep multithreaded Java code safe? The three elements (the dimensions in which thread-safety problems appear) are: atomicity (an atom is an indivisible unit; atomicity means one or more operations either all succeed or all fail), visibility (a modification one thread makes to a shared variable is immediately seen by other threads; synchronized, volatile), and ordering (the program executes in the order the code is written; the processor may reorder instructions). The causes of thread-safety problems: atomicity problems introduced by thread switching, and visibility problems introduced by CPU caches.

Why does GCC use mov/mfence instead of xchg to implement C11's atomic_store?

那年仲夏 submitted on 2021-02-18 20:56:35
Question: In C++ and Beyond 2012: Herb Sutter - atomic<> Weapons, 2 of 2, Herb Sutter argues (around 0:38:20) that one should use xchg, not mov / mfence, to implement atomic_store on x86. He also seems to suggest that this particular instruction sequence is what everyone agreed on. However, GCC uses the latter. Why does GCC use this particular implementation? Answer 1: Quite simply, the mov / mfence method is faster, as it avoids the redundant memory read that xchg performs, which takes time. The x86 …

Race condition occurring during use of Parallel Streams and Atomic Variables

不羁岁月 submitted on 2021-02-18 18:01:14
Question: When the following piece of code is executed, I get exceptions in a seemingly random manner. byte[][] loremIpsumContentArray = new byte[64][]; for (int i = 0; i < loremIpsumContentArray.length; i++) { random.nextBytes(loremIpsumContentArray[i] = new byte[CONTENT_SIZE]); } AtomicBoolean aBoolean = new AtomicBoolean(true); List<Long> resultList = IntStream.range(0, 64 * 2) .parallel() .mapToObj(i -> getResult(i, aBoolean, repositoryPath, loremIpsumContentArray)) .collect(Collectors.toList() …

confused about atomic class: memory_order_relaxed

这一生的挚爱 submitted on 2021-02-18 11:41:14
Question: I am studying this page: https://gcc.gnu.org/wiki/Atomic/GCCMM/AtomicSync, which is very helpful for understanding atomics. But this example of relaxed mode is hard to understand: /* Thread 1: */ y.store(20, memory_order_relaxed); x.store(10, memory_order_relaxed); /* Thread 2: */ if (x.load(memory_order_relaxed) == 10) { assert(y.load(memory_order_relaxed) == 20); /* assert A */ y.store(10, memory_order_relaxed); } /* Thread 3: */ if (y.load(memory_order_relaxed) == 10) assert …

Why is compare-and-swap (CAS) algorithm a good choice for lock-free synchronization?

筅森魡賤 submitted on 2021-02-18 11:22:21
Question: CAS belongs to the read-modify-write (RMW) family, a set of operations that let you perform complex transactions atomically. Specifically, Wikipedia says that CAS is used to implement synchronization primitives like semaphores and mutexes, as well as more sophisticated lock-free and wait-free algorithms. [...] CAS can implement more of these algorithms than atomic read, write, or fetch-and-add, and, assuming a fairly large amount of memory, [...] it can implement all of them. https://en …

Using an atomic read-modify-write operation in a release sequence

☆樱花仙子☆ submitted on 2021-02-17 21:49:06
Question: Say I create an object of type Foo in thread #1 and want to be able to access it in thread #3. I can try something like: std::atomic<int> sync{10}; Foo *fp; // thread 1: modifies sync: 10 -> 11 fp = new Foo; sync.store(11, std::memory_order_release); // thread 2a: modifies sync: 11 -> 12 while (sync.load(std::memory_order_relaxed) != 11); sync.store(12, std::memory_order_relaxed); // thread 3 while (sync.load(std::memory_order_acquire) != 12); fp->do_something(); The store/release in thread …

Why is reference assignment atomic in Java?

拟墨画扇 submitted on 2021-02-17 19:13:38
Question: As far as I know, reference assignment is atomic in a 64-bit JVM. Now, I assume the JVM doesn't use atomic pointers internally to model this, since otherwise there would be no need for AtomicReference. So my questions are: Is atomic reference assignment in the "specs" of Java/Scala and guaranteed to happen, or is it just a happy coincidence that it works that way most of the time? Is atomic reference assignment implied for any language that compiles to the JVM's bytecode (e.g. Clojure, Groovy, JRuby, …
