concurrency

Reproducing Unexpected Behavior with Cross-Modifying Code on x86-64 CPUs

╄→尐↘猪︶ㄣ submitted on 2020-01-22 13:43:26
Question: What are some ideas for cross-modifying code that could trigger unexpected behavior on x86 or x86-64 systems, where everything in the cross-modifying code is done correctly, with the exception of executing a serializing instruction on the executing processor prior to executing the modified code? As noted below, I have a Core 2 Duo E6600 processor to test on, which is explicitly mentioned as a processor prone to issues in this area. I will test any ideas shared with me on

iOS 5.1: synchronising tasks (wait for a completion)

半世苍凉 submitted on 2020-01-22 12:37:06
Question: I have a basic problem synchronizing openWithCompletionHandler: (UIManagedDocument) with the main activities. Situation: I have a singleton class managing a shared UIManagedDocument. This class provides one method which should deliver the document in a normal state (i.e. creates or opens it, whatever is necessary). But because openWithCompletionHandler: does its main job asynchronously in the background, my program should wait to set up the fetchedResultsController until the document is

What's the point of cache coherency?

梦想的初衷 submitted on 2020-01-22 10:59:31
Question: On CPUs like x86, which provide cache coherency, how is this useful from a practical perspective? I understand that the idea is to make memory updates done on one core immediately visible on all other cores. This is a useful property. However, one can't rely too heavily on it when not writing in assembly language, because the compiler can keep variable assignments in registers and never write them to memory. This means that one must still take explicit steps to make sure that stuff done in
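The "explicit steps" the excerpt alludes to are, in Java terms, constructs such as volatile or synchronized, which prevent the JIT from caching a value in a register and establish a happens-before edge between cores. A minimal sketch (class and method names here are illustrative, not from the original question):

```java
public class VisibilityDemo {
    static int data = 0;
    static volatile boolean ready = false;

    // Writer publishes `data` via a volatile flag; reader spins on the flag.
    // Without `volatile`, the reader's loop could be hoisted and spin forever.
    static int publishAndRead() throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;     // plain write...
            ready = true;  // ...published by the volatile write (happens-before)
        });
        writer.start();
        while (!ready) { Thread.onSpinWait(); } // volatile read every iteration
        writer.join();
        return data;       // guaranteed to be 42 once `ready` was seen true
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndRead()); // prints 42
    }
}
```

Hardware cache coherency alone does not give this guarantee at the source level; the volatile qualifier is what forces the compiler to emit the actual loads and stores in order.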

Is using std::async many times for small tasks performance friendly?

前提是你 submitted on 2020-01-22 08:43:25
Question: To give some background information: I am processing a saved file, and after using a regular expression to split the file into its component objects, I then need to process each object's data based on which type of object it is. My current thought is to use parallelism to get a bit of a performance gain, since loading each object is independent of the others. So I was going to define a LoadObject function accepting a std::string for each type of object I'm going to be handling, and then

ExecutorService.submit(Task) vs CompletableFuture.supplyAsync(Task, Executor)

依然范特西╮ submitted on 2020-01-22 05:36:29
Question: To run some stuff in parallel or asynchronously I can use either an ExecutorService: <T> Future<T> submit(Runnable task, T result); or the CompletableFuture API: static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier, Executor executor); (Let's assume I use the same Executor in both cases.) Besides the return type (Future vs. CompletableFuture), are there any notable differences? Or when to use which? And what are the differences if I use the CompletableFuture API with the default Executor

is invokeAll() a blocking call in java 7

六月ゝ 毕业季﹏ submitted on 2020-01-21 11:32:47
Question:

ExecutorService executorService = Executors.newSingleThreadExecutor();
Set<Callable<String>> callables = new HashSet<Callable<String>>();
callables.add(new Callable<String>() {
    public String call() throws Exception { return "Task 1"; }
});
callables.add(new Callable<String>() {
    public String call() throws Exception { return "Task 2"; }
});
callables.add(new Callable<String>() {
    public String call() throws Exception { return "Task 3"; }
});
List<Future<String>> futures = executorService
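For what it's worth, invokeAll() is documented to block until every submitted task has completed (or the optional timeout expires), so each returned Future is already done when the call returns. A minimal sketch of that behavior, using lambdas in place of the anonymous classes above:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Callable<String>> tasks = Arrays.asList(
                () -> "Task 1", () -> "Task 2", () -> "Task 3");

        // invokeAll does not return until every task has finished,
        // so each Future below is already completed here.
        List<Future<String>> futures = pool.invokeAll(tasks);
        for (Future<String> fu : futures) {
            System.out.println(fu.isDone() + " " + fu.get()); // isDone() is always true
        }
        pool.shutdown();
    }
}
```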

What are atomic operations for newbies?

那年仲夏 submitted on 2020-01-21 05:11:04
Question: I am a newbie to operating systems, and every answer I've found on Stack Overflow is so complicated that I am unable to understand it. Can someone explain what an atomic operation is, for a newbie? My understanding: an atomic operation executes fully, with no interruption? I.e., it is a blocking operation with no scope for interruption? Answer 1: Pretty much, yes. "Atom" comes from the Greek "atomos" = "uncuttable", and has been used in the sense "indivisible
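A concrete way to see "indivisible" in Java (the class and method names here are illustrative): AtomicInteger.incrementAndGet() performs the load, add, and store as one indivisible step, so concurrent increments are never lost, whereas a plain int++ is three separate steps that can interleave between threads.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    // Increment a shared counter from several threads using an atomic
    // read-modify-write; no update can be lost, unlike plain int++.
    static int countTo(int nThreads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet(); // load + add + store as one step
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countTo(4, 10_000)); // always 40000
    }
}
```

With a non-atomic int the same run would usually print less than 40000, because two threads can read the same old value before either writes back.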

Difference between Semaphore and Condition (ReentrantLock)

瘦欲@ submitted on 2020-01-21 03:52:44
Question: Does anyone know the differences between the methods acquire() and release() (java.util.concurrent.Semaphore) and await() and signal() (new ReentrantLock().newCondition())? Can you show pseudo code for each of these methods? Answer 1: Superficially the behavior of these methods might look similar: acquire()/await() can make threads block in some circumstances, and release()/signal() can unblock threads in some circumstances. However, Semaphore and Condition serve different purposes: java
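One difference that a short sketch can make concrete: a release() with no thread waiting is remembered as a permit, while a signal() with no thread waiting is simply lost. This is why a Condition is always paired with a predicate re-checked in a loop under its ReentrantLock. A minimal illustration of the Semaphore side (class name is mine, not from the question):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreMemoryDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore sem = new Semaphore(0);
        sem.release();   // no one is waiting: the permit is stored
        sem.acquire();   // succeeds immediately by consuming that permit
        System.out.println("acquired");
        // By contrast, Condition.signal() called before any await() has no
        // effect, so a later await() would block until the next signal().
    }
}
```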

Multiple fork() Concurrency

萝らか妹 submitted on 2020-01-20 19:42:06
Question: How do you use the fork() call in such a way that you can spawn 10 processes and have them each do a small task concurrently? Concurrent is the operative word; many places that show how to use fork only make one call to it in their demos. I thought you would use some kind of for loop, but I tried that, and in my tests it seems the fork() calls are spawning a new process, doing work, then spawning a new process. So they appear to be running sequentially, but how can I fork concurrently and have 10