concurrency

Single transaction across multiple threads solution

时光总嘲笑我的痴心妄想 submitted on 2020-01-12 07:12:25
Question: As I understand it, all transactions are thread-bound (i.e. with the context stored in a ThreadLocal). For example, if I start a transaction in a transactional parent method, make database insert #1 in an asynchronous call, and make database insert #2 in another asynchronous call, then that will yield two different transactions (one for each insert) even though they share the same "transactional" parent. For example, let's say I perform two inserts (and using a very simple sample, i.e. not using an
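A minimal Java sketch of the thread-binding the question describes: a value stored in a ThreadLocal on the calling thread is simply not visible on the thread that runs the asynchronous work, which is why each async insert ends up with its own transaction. TX_CONTEXT here is a stand-in for a framework's transaction context, not any real Spring API.

    import java.util.concurrent.CompletableFuture;

    public class ThreadLocalDemo {
        // Stand-in for a framework's thread-bound transaction context (hypothetical).
        private static final ThreadLocal<String> TX_CONTEXT = new ThreadLocal<>();

        public static void main(String[] args) {
            TX_CONTEXT.set("parent-transaction");
            System.out.println("parent thread sees: " + TX_CONTEXT.get());   // parent-transaction

            // Work handed to another thread does not inherit the parent's ThreadLocal value.
            CompletableFuture.runAsync(() ->
                System.out.println("async thread sees: " + TX_CONTEXT.get()) // null
            ).join();
        }
    }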

What is the Scala equivalent of Clojure's Atom?

↘锁芯ラ submitted on 2020-01-12 07:01:28
Question: Clojure has an Atom for changing state between threads in a synchronous and independent manner that is not part of the STM. You use it like this: user=> (def my-atom (atom 0)) #'user/my-atom user=> @my-atom 0 user=> (swap! my-atom inc) 1 user=> @my-atom 1 user=> (swap! my-atom (fn [n] (* (+ n n) 2))) 4 My question is: what is the Scala equivalent of Clojure's Atom? Answer 1: As @Shepmaster and @om-nom-nom said, it's a wrapper around java.util.concurrent.atomic.Atomic... . An equivalent wrapper
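Answer 1 points at java.util.concurrent.atomic; since the sketches in this digest are in Java, here is that underlying primitive mirroring the Clojure session above (this is the class being wrapped, not the Scala wrapper the answer goes on to describe):

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomDemo {
        public static void main(String[] args) {
            AtomicInteger myAtom = new AtomicInteger(0);                // (def my-atom (atom 0))
            System.out.println(myAtom.get());                          // @my-atom            -> 0
            System.out.println(myAtom.incrementAndGet());               // (swap! my-atom inc) -> 1
            System.out.println(myAtom.updateAndGet(n -> (n + n) * 2));  // (swap! my-atom (fn [n] (* (+ n n) 2))) -> 4
        }
    }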

Can Haskell's Control.Concurrent.Async.mapConcurrently have a limit?

廉价感情. submitted on 2020-01-12 03:23:10
Question: I'm attempting to run multiple downloads in parallel in Haskell, which I would normally just use the Control.Concurrent.Async.mapConcurrently function for. However, doing so opens ~3000 connections, which causes the web server to reject them all. Is it possible to accomplish the same task as mapConcurrently but only have a limited number of connections open at a time (i.e. only 2 or 4 at a time)? Answer 1: A quick solution would be to use a semaphore to restrict the number of concurrent actions.
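Answer 1's suggestion, a semaphore that caps the number of concurrent actions, is language-neutral; here is a sketch of the idea in Java rather than Haskell, with the downloads simulated by a sleep and every name purely illustrative:

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class LimitedConcurrency {
        public static void main(String[] args) {
            Semaphore permits = new Semaphore(4);            // at most 4 "downloads" run at once
            ExecutorService pool = Executors.newCachedThreadPool();

            for (String url : List.of("u1", "u2", "u3", "u4", "u5", "u6")) {
                pool.submit(() -> {
                    try {
                        permits.acquire();                   // blocks while 4 tasks hold permits
                        System.out.println("downloading " + url);
                        Thread.sleep(200);                   // stand-in for the real download
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        permits.release();
                    }
                });
            }
            pool.shutdown();                                 // finish queued tasks, then exit
        }
    }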

Is the volatile keyword required for fields accessed via a ReentrantLock?

为君一笑 submitted on 2020-01-12 02:29:29
Question: My question refers to whether or not the use of a ReentrantLock guarantees visibility of a field in the same way that the synchronized keyword does. For example, in the following class A, the field sharedData does not need to be declared volatile because the synchronized keyword is used. class A { private double sharedData; public synchronized void method() { double temp = sharedData; temp *= 2.5; sharedData = temp + 1; } } For the next example using a ReentrantLock, however, is the volatile
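For reference, a ReentrantLock version of the same class might look like the sketch below. lock() and unlock() give the same happens-before visibility guarantee as entering and leaving a synchronized block, so a field accessed only while the lock is held does not additionally need to be volatile. The class name B and the try/finally layout are just illustrative.

    import java.util.concurrent.locks.ReentrantLock;

    class B {
        private final ReentrantLock lock = new ReentrantLock();
        private double sharedData;        // not volatile: the lock supplies the visibility guarantee

        public void method() {
            lock.lock();
            try {
                double temp = sharedData;
                temp *= 2.5;
                sharedData = temp + 1;
            } finally {
                lock.unlock();            // an unlock happens-before the next successful lock()
            }
        }
    }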

java method synchronization and read/write mutual exclusion

女生的网名这么多〃 submitted on 2020-01-11 17:39:10
Question: I have two methods, read() and write(), as below in a class. class Store { public void write() { // write to store } public String read() { // read from store } } 1) The Store object is a singleton. 2) I have a Writer class which will write to the store and several Reader classes which will read from the store at the same time. My requirement is that when the writer is writing to the store, all the readers should wait; i.e., when control is in write(), all the calls to read() should be
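One standard way to get "many readers, one exclusive writer" in Java is java.util.concurrent.locks.ReentrantReadWriteLock; a minimal sketch of Store using it is below (the stored data and the write(String) signature are placeholders, not part of the original question):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class Store {
        private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
        private String data = "";                  // placeholder for the real store

        public void write(String value) {
            rwLock.writeLock().lock();             // exclusive: blocks all readers and writers
            try {
                data = value;
            } finally {
                rwLock.writeLock().unlock();
            }
        }

        public String read() {
            rwLock.readLock().lock();              // shared: any number of readers at once
            try {
                return data;
            } finally {
                rwLock.readLock().unlock();
            }
        }
    }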

Multithreaded execution where order of finished Work Items is preserved

試著忘記壹切 submitted on 2020-01-11 16:45:09
Question: I have a flow of units of work, let's call them "Work Items", that are processed sequentially (for now). I'd like to speed up processing by doing the work multithreaded. Constraint: the work items come in a specific order; during processing the order is not relevant, but once processing is finished the order must be restored. Something like this: |.| |.| |4| |3| |2| <- incoming queue |1| / | \ 2 1 3 <- worker threads \ | / |3| |2| <- outgoing queue |1| I would like to solve this problem in
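One common Java approach to this shape of problem: submit the work items to a thread pool in arrival order, keep the returned Futures in that same order, and consume them in order; the workers finish in arbitrary order, but the output order matches the input order. This is only an illustrative sketch; the work item type and the random sleep are made up.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class OrderedPipeline {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            ExecutorService workers = Executors.newFixedThreadPool(3);
            List<Future<String>> results = new ArrayList<>();

            // Submit in arrival order and remember the Futures in that same order.
            for (int item = 1; item <= 4; item++) {
                final int id = item;
                results.add(workers.submit(() -> {
                    Thread.sleep((long) (Math.random() * 100));   // work finishes in arbitrary order
                    return "processed work item " + id;
                }));
            }

            // Consume in submission order: the original ordering is restored here.
            for (Future<String> f : results) {
                System.out.println(f.get());       // waits until that particular item is done
            }
            workers.shutdown();
        }
    }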

Writing to SQLite dataset when other processes are reading from it

落花浮王杯 submitted on 2020-01-11 14:31:10
Question: Reading the SQLite documentation here: when a process wants to write to a SQLite database, it obtains a reserved lock. Then, once the process is ready to write to disk, it obtains a pending lock, during which no new processes can obtain a shared lock, but existing shared locks are allowed to finish their business. Once the remaining shared locks clear, the process can write. However... when I try to write to a database while other processes are reading from that database, I just get an immediate
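The excerpt is cut off before the error, but one detail that often explains an "immediate" failure in this situation: by default SQLite does not wait for the shared locks to clear, it returns SQLITE_BUSY right away unless a busy timeout (or busy handler) is set. A hedged Java/JDBC sketch, assuming the xerial sqlite-jdbc driver on the classpath and a local sample.db file:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SqliteBusyTimeout {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db");
                 Statement stmt = conn.createStatement()) {
                // Ask SQLite to retry for up to 5 seconds instead of failing
                // immediately with SQLITE_BUSY when another connection holds a lock.
                stmt.execute("PRAGMA busy_timeout = 5000");
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS t (x INTEGER)");
                stmt.executeUpdate("INSERT INTO t (x) VALUES (42)");
            }
        }
    }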

Can I `__restrict__ this` somehow?

三世轮回 submitted on 2020-01-11 11:34:11
Question: I've been watching Mike Acton's talk on data-oriented design in C++ at CppCon 2014, and he gives this example: int Foo::Bar(int count) { int value = 0; for (int i = 0; i < count; i++) { if (m_someDataMemberOfFoo) value++; } return value; } He explains how some compilers keep re-reading m_someDataMemberOfFoo in every iteration, perhaps because its value might change due to concurrent access. Regardless of whether it's appropriate for the compiler to do so - can one tell the compiler to

How does Rust handle killing threads?

痞子三分冷 submitted on 2020-01-11 10:41:47
Question: Is there a parent-child connection between threads that are spawned? If I kill the thread from which I spawned other threads, are those going to get killed too? Is this OS-specific? Answer 1: How does Rust handle killing threads? It doesn't; there is no way to kill a thread. See also: How to terminate or suspend a Rust thread from another thread? How to check if a thread has finished in Rust? Is there a parent-child connection between threads that are spawned? When you spawn a thread, you get a

Why does this code cause data race?

左心房为你撑大大i submitted on 2020-01-11 10:20:11
Question:

    package main

    import "time"

    func main() {
        m1 := make(map[string]int)
        m1["hello"] = 1
        m1["world"] = 2
        go func() {
            for i := 0; i < 100000000; i++ {
                _ = m1["hello"]
            }
        }()
        time.Sleep(100 * time.Millisecond)
        m2 := make(map[string]int)
        m2["hello"] = 3
        m1 = m2
    }

I run the command go run --race with this code and get: ================== WARNING: DATA RACE Read at 0x00c420080000 by goroutine 5: runtime.mapaccess1_faststr() /usr/local/go/src/runtime/hashmap_fast