concurrency

Do you need to synchronize reading from a HashMap?

一曲冷凌霜 submitted on 2020-08-10 08:44:11
Question: I have a java.util.HashMap object. I guarantee that writing to the HashMap is done by a single dedicated thread; however, reading from the same HashMap object can be done from more than one thread at a time. Can I run into any trouble with such an implementation? Answer 1: Yes, you can run into big trouble with such an implementation! Adding a value to the HashMap is not an atomic operation, so if you read the map from another thread you might see an inconsistent state while another thread is adding a …
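The answer above warns against a plain HashMap for this pattern; the usual fix is to swap in ConcurrentHashMap, which permits concurrent reads alongside a writer without external locking. A minimal sketch (class and key names are illustrative, not from the original post):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentReadDemo {
    // ConcurrentHashMap gives thread-safe reads while one thread writes
    private static final Map<String, Integer> map = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1000; i++) map.put("key" + i, i);
        });
        Thread reader = new Thread(() -> {
            // Readers never see a torn or partially-added entry
            for (int i = 0; i < 1000; i++) map.getOrDefault("key" + i, -1);
        });
        writer.start(); reader.start();
        writer.join(); reader.join();
        System.out.println(map.size()); // 1000 once the writer finishes
    }
}
```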

How to design an execution engine for a sequence of tasks

点点圈 submitted on 2020-07-31 07:28:50
Question: I am trying to code a problem in Java where I have to execute a bunch of tasks. Problem: execute a job that consists of multiple tasks with dependencies among them. A job has a list of tasks, and each such task has a list of successor tasks (each successor task has its own successor tasks; you can see the recursive nature here). Each successor task can start its execution if it is configured to be executed on partial execution of its predecessor …
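The recursive task/successor structure described above can be sketched as a depth-first walk that runs each task exactly once. This toy version deliberately ignores the partial-completion trigger and threading; the Task and JobRunner names are made up for illustration, not taken from the original post:

```java
import java.util.*;

// Hypothetical task with a successor list, mirroring the question's description
class Task {
    final String name;
    final List<Task> successors = new ArrayList<>();
    Task(String name) { this.name = name; }
}

public class JobRunner {
    // Depth-first walk over successors; a "done" set prevents re-execution
    // of tasks reachable through multiple predecessors.
    static void execute(Task task, Set<Task> done, List<String> log) {
        if (!done.add(task)) return;   // already executed, skip
        log.add(task.name);            // "execute" the task
        for (Task s : task.successors) execute(s, done, log);
    }

    public static void main(String[] args) {
        Task a = new Task("A"), b = new Task("B"), c = new Task("C");
        a.successors.add(b);
        a.successors.add(c);
        b.successors.add(c);           // C is reachable twice but runs once
        List<String> log = new ArrayList<>();
        execute(a, new HashSet<>(), log);
        System.out.println(log);       // [A, B, C]
    }
}
```

A production engine would track predecessor counts and only release a task when its start condition (full or partial predecessor completion) is met, typically via an ExecutorService.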

Are x86 atomic RMW instructions wait-free?

社会主义新天地 submitted on 2020-07-21 03:42:32
Question: On x86, atomic RMW instructions like lock add dword [rdi], 1 are implemented using cache locking on modern CPUs, so a cache line is locked for the duration of the instruction. This is done by getting the line into the EXCLUSIVE/MODIFIED state when the value is read, and the CPU will not respond to MESI requests from other CPUs until the instruction is finished. There are two flavors of concurrent progress conditions, blocking and non-blocking. Atomic RMW instructions are non-blocking: the CPU hardware will never …
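From a programmer's perspective, these lock-prefixed RMW instructions are what methods like AtomicInteger.getAndIncrement typically compile down to on x86 (HotSpot emits lock xadd). A small demo, assuming nothing beyond the standard library, showing the atomicity in practice, with no increments lost under contention:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicRmwDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        // Each thread performs 100,000 atomic read-modify-write increments
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter.getAndIncrement();
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Atomicity guarantees the full count; a plain int++ would lose updates
        System.out.println(counter.get()); // 200000
    }
}
```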

Does every thread have its own copy of the data?

我只是一个虾纸丫 submitted on 2020-07-20 03:41:08
Question: I have read somewhere that every thread has its own copy of shared state. Even if I'm using synchronized or locks when modifying variables, what guarantees that the changed state will be flushed to main memory rather than remaining in the thread's own cache? I know that volatile guarantees this, and I know synchronized does too. How does synchronized guarantee that the change happens in main memory rather than in the thread's cache? Example: Thread …
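The guarantee comes from the Java Memory Model's happens-before rule: an unlock of a monitor happens-before every subsequent lock of that same monitor, so writes made inside one synchronized block are visible to any thread that later synchronizes on the same lock. A minimal sketch (class and method names are illustrative):

```java
public class VisibilityDemo {
    private int value;         // shared state, guarded by the instance monitor
    private boolean ready;

    // Monitor exit at the end of this method publishes both writes
    synchronized void publish(int v) { value = v; ready = true; }

    // Monitor entry here guarantees the reader sees the published writes
    synchronized int poll() { return ready ? value : -1; }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        new Thread(() -> d.publish(42)).start();
        int v;
        // Spin until the writer's update becomes visible through the lock
        while ((v = d.poll()) == -1) Thread.onSpinWait();
        System.out.println(v); // 42
    }
}
```

Without the synchronized keyword on poll, this loop would be a data race and could legally spin forever.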

CompletionStage chaining when 2nd stage is not interested in the 1st stage result value

杀马特。学长 韩版系。学妹 submitted on 2020-07-18 08:22:44
Question: Scenario: there are two stages; the 2nd stage is to be executed only after the 1st one completes; and the 2nd stage is not interested in the 1st stage's result but merely in the fact that the first stage is complete. Consider the existing method: public <U> CompletionStage<U> thenApply(Function<? super T,? extends U> fn); It doesn't quite satisfy my needs because the function is aware of the 1st stage's result value (? super T). What I would rather like to have is something like: public <U> CompletionStage<U> …
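Two standard ways to express "run after completion, ignore the result": thenRun when the 2nd stage produces no value, and thenApply with the argument deliberately ignored when it must produce one. Both variants, demonstrated on an already-completed first stage:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class StageChaining {
    public static void main(String[] args) {
        CompletionStage<String> first = CompletableFuture.completedFuture("unused");

        // thenRun: triggered by completion, receives no result, returns CompletionStage<Void>
        first.thenRun(() -> System.out.println("first stage done"));

        // thenApply with the argument ignored, for when the 2nd stage must yield a value
        CompletionStage<Integer> second = first.thenApply(ignored -> 7);
        System.out.println(second.toCompletableFuture().join()); // 7
    }
}
```

thenCompose with an ignored argument works the same way when the 2nd stage is itself asynchronous.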

Why does a sync block of code always run on the main thread?

人走茶凉 submitted on 2020-07-08 12:43:22
Question: I did a simple test with DispatchQueue:

DispatchQueue.global(qos: .background).sync {
    if Thread.isMainThread {
        print("Main thread")
    }
}

It printed: Main thread. Why does this code execute on the main thread? It should be performed on a background thread (it was added to a background queue), right? Answer 1: Because it doesn't actually have to. You're blocking the main thread by using sync, so iOS chooses to just execute the block on the main thread instead of bothering to switch to a background …
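The optimization the answer describes is not Swift-specific: a synchronous call, by definition, may run on whatever thread invokes it, since the caller is blocked either way. A Java analogue of the same idea (names are illustrative; this is not GCD, just the underlying principle):

```java
public class SyncOnCaller {
    public static void main(String[] args) {
        Thread caller = Thread.currentThread();
        // A "synchronous submit" degenerates into running the block directly,
        // so it executes on the calling thread, like DispatchQueue.sync can
        Runnable block = () -> System.out.println(
            Thread.currentThread() == caller ? "caller thread" : "other thread");
        block.run();
    }
}
```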

SQLite WAL mode in Python: concurrency with one writer and one reader

[亡魂溺海] submitted on 2020-07-08 00:39:12
Question: I'm trying to share a sqlite3 database between one writer process and one reader process. However, it does not work, and it seems to me that nothing is being written to example.db.

reader.py:

import sqlite3
from time import sleep

conn = sqlite3.connect('example.db', isolation_level=None)
c = conn.cursor()
while True:
    c.execute("SELECT * FROM statistics")
    try:
        print '**'
        print c.fetchone()
    except:
        pass
    sleep(3)

writer.py:

import sqlite3
from time import sleep
import os

if os.path.exists( …

InvocationTargetException when binding to custom class run by JavaFX Concurrent Task

一世执手 submitted on 2020-06-29 04:05:22
Question: I'm getting InvocationTargetException and NullPointerException when attempting to bind to a custom class run by Task. I have working examples of binding to the library classes ObservableList, Long, Integer, etc., but now need to bind to values of a custom class. I created a TaskOutput class that includes a StringProperty for binding purposes, as follows:

public class TaskOutput {
    private final StringProperty textValue = new SimpleStringProperty();

    public TaskOutput(String textValue) {
        this.textValue.set …