grand-central-dispatch

calling asynchronous method inside for-loop [duplicate]

ε祈祈猫儿з submitted on 2019-12-06 16:29:27
This question already has answers here: What's the best way to iterate over results from an APi, and know when it's finished? (1 answer) · Swift Closures in for loop (1 answer) · How do you run code after an async call finishes and the code is outside of the asyn function in swift3 (1 answer) · Closed last year.

Trying to make my for loop behave synchronously when I am making an asynchronous call in each iteration of the loop. I have a feeling I will need to use Grand Central Dispatch in some way but not sure.

    func test(strings: [String], completion: @escaping ((_ value: [String]) -> Void)) {
        var
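One common way to make a loop of asynchronous calls report a single completion is a DispatchGroup: enter the group before each call, leave it in each completion handler, and let notify fire once everything has finished. A minimal sketch, where asyncTransform(_:completion:) is a hypothetical stand-in for the real asynchronous method:

    import Foundation

    // Hypothetical async helper standing in for the real asynchronous call.
    func asyncTransform(_ string: String, completion: @escaping (String) -> Void) {
        DispatchQueue.global().async { completion(string.uppercased()) }
    }

    func test(strings: [String], completion: @escaping (_ value: [String]) -> Void) {
        let group = DispatchGroup()
        var results = [String]()
        let lock = NSLock()           // protects `results` across concurrent completions

        for string in strings {
            group.enter()             // balance every enter() with exactly one leave()
            asyncTransform(string) { transformed in
                lock.lock()
                results.append(transformed)   // note: completion order is not guaranteed
                lock.unlock()
                group.leave()
            }
        }

        // Runs once, after every leave(), without blocking the caller.
        group.notify(queue: .main) {
            completion(results)
        }
    }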

When creating thread safe reads in Swift, why is a variable created outside the concurrent queue?

半城伤御伤魂 submitted on 2019-12-06 15:44:26
    public class Account {

        // MARK: Initializer
        // Custom initializer

        // MARK: Stored Properties
        let concurrentQueue: DispatchQueue = DispatchQueue(
            label: "concurrentQueue",
            qos: DispatchQoS.userInitiated,
            attributes: [DispatchQueue.Attributes.concurrent]
        )

        private var _name: String

        public var name: String {
            get {
                return self.concurrentQueue.sync { return self._name }
            }
            set {
                self.concurrentQueue.async(flags: .barrier) { self._name = newValue }
            }
        }
    }

Let's say you have a class like the one above, where you want thread safety. What is the difference between the getter in the Account class and defining the
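For reference, a self-contained version of the same reader/writer pattern is sketched below, with a hypothetical initializer added so it compiles on its own. The backing storage _name is an ordinary stored property of the class; the queue only serializes access to it, which is why it is declared outside the queue's closures.

    import Foundation

    public class Account {
        private let concurrentQueue = DispatchQueue(
            label: "concurrentQueue",
            qos: .userInitiated,
            attributes: .concurrent
        )
        private var _name: String

        // Hypothetical initializer, not shown in the question.
        public init(name: String) {
            self._name = name
        }

        public var name: String {
            get {
                // Synchronous read: many readers may run at once on the concurrent queue.
                return concurrentQueue.sync { self._name }
            }
            set {
                // Barrier write: runs alone, after any in-flight reads have finished.
                concurrentQueue.async(flags: .barrier) { self._name = newValue }
            }
        }
    }

    let account = Account(name: "Checking")
    account.name = "Savings"      // enqueues a barrier write
    print(account.name)           // sync read waits for the barrier, prints "Savings"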

C++11 app that uses dispatch_apply not working under Mac OS Sierra

泄露秘密 submitted on 2019-12-06 14:38:54
I had a completely functioning codebase written in C++11 that used Grand Central Dispatch parallel processing, specifically dispatch_apply, to do a basic parallel for loop for some trivial game calculations. Since upgrading to Sierra, this code still runs, but each block is run serially: the cout statement shows that they are being executed in serial order, and the CPU usage graph shows no parallel work going on. The queue is defined as:

    workQueue = dispatch_queue_create("workQueue", DISPATCH_QUEUE_CONCURRENT);

And the relevant program code is:

    case Concurrency::Parallel: {
        dispatch_apply(stateMap
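One workaround that is often suggested for this symptom is to hand the parallel loop to the global concurrent pool rather than a privately created queue. I have not verified this against the Sierra behaviour described above, so treat it as an assumption; the sketch below shows the Swift-level equivalent, DispatchQueue.concurrentPerform, with a dummy calculation standing in for the game work:

    import Foundation

    // Swift sketch of the same parallel-for pattern. concurrentPerform runs the
    // block on the global concurrent pool; the reduce is a stand-in workload.
    let iterations = 16
    DispatchQueue.concurrentPerform(iterations: iterations) { index in
        let value = (0..<100_000).reduce(0.0) { $0 + sin(Double($1 + index)) }
        print("iteration \(index) on \(Thread.current): \(value)")
    }
    print("all iterations finished")   // concurrentPerform returns only when all blocks are done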

IOS thread pool

孤者浪人 submitted on 2019-12-06 14:16:09
I've got this method:

    - (void)addObjectToProcess:(NSObject *)object;

and I want this method to add the object to a processing queue which can process up to 4 objects in parallel. I've created my own dispatch queue and semaphore:

    _concurrentQueue = dispatch_queue_create([queue_id UTF8String], DISPATCH_QUEUE_CONCURRENT);
    _processSema = dispatch_semaphore_create(4);

and the implementation of the method is:

    - (void)addObjectToProcess:(NSObject *)object {
        dispatch_semaphore_wait(self.processSema, DISPATCH_TIME_FOREVER);
        __weak MyViewController *weakSelf = self;
        dispatch_async(self.concurrentQueue, ^{
            // PROCESS.
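One way to cap the work at four objects in flight is to wait on the semaphore inside the dispatched block rather than before dispatching, so the caller (often the main thread) never blocks when more than four items are queued. A minimal Swift sketch under that assumption, with processObject(_:) as a hypothetical stand-in for the real work:

    import Foundation

    final class ObjectProcessor {
        private let concurrentQueue = DispatchQueue(label: "processor", attributes: .concurrent)
        private let slots = DispatchSemaphore(value: 4)   // at most 4 objects processed at once

        func addObjectToProcess(_ object: NSObject) {
            concurrentQueue.async { [weak self] in
                guard let self = self else { return }
                self.slots.wait()                 // take a slot (blocks a pool thread, not the caller)
                defer { self.slots.signal() }     // always hand the slot back
                self.processObject(object)
            }
        }

        private func processObject(_ object: NSObject) {
            // Placeholder for the real processing work.
            Thread.sleep(forTimeInterval: 0.1)
        }
    }

An OperationQueue with maxConcurrentOperationCount = 4 gives the same limit with less manual bookkeeping, if operations fit the design.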

Difference between DispatchQueue types in swift

好久不见. submitted on 2019-12-06 13:35:36
As I understand it, there are three types of DispatchQueue in Swift: Main (serial, the main thread), Global (concurrent, background threads working in parallel), and Custom (concurrent or serial). Each one can run work either async or sync. First question: does the main queue work on the UI thread only and never on another thread? If the answer is yes, how does DispatchQueue.main.async not block the UI thread? If the answer is no, what is the benefit of using DispatchQueue.global as long as DispatchQueue.main.async
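The short version is that main.async only enqueues a closure; the main run loop executes it later, between other UI work, so the call itself returns immediately. A sketch of the usual division of labour, with loadData as a hypothetical example function:

    import Foundation

    // Heavy work goes to a global (concurrent) queue; the hop back to
    // DispatchQueue.main only enqueues the closure, so nothing blocks the UI.
    func loadData(completion: @escaping (String) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let sum = (1...1_000_000).reduce(0, +)        // expensive work off the main thread
            DispatchQueue.main.async {
                completion("sum = \(sum)")                // UI updates belong on the main queue
            }
        }
    }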

Geocoding Multiple Locations - Knowing When “All” Completion Blocks Have Been Called

梦想的初衷 submitted on 2019-12-06 11:29:39
I am using Core Location's geocoder to get the CLLocation coordinates for multiple map items. The geocoder calls a completion block for each item when it finishes. How do I create similar block functionality that is called when all of these asynchronous geocoder calls have been completed? (I could use a manual counter, but there must be a more elegant solution.) Here's my geocoding function so far. It loops through an array of location items and starts a new geocoding process for each:

    - (void)geoCodeAllItems {
        for (EventItem *thisEvent in [[EventItemStore sharedStore] allItems]) {
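A DispatchGroup fits this shape well: enter() before each geocoding request, leave() in its completion handler, and one notify block that fires after every handler has run. A Swift sketch of the mechanics, where EventItem is a hypothetical stand-in for the question's model object (real code should also respect Apple's geocoding rate limits):

    import CoreLocation

    struct EventItem {
        let address: String
        var coordinate: CLLocationCoordinate2D?
    }

    func geocodeAllItems(_ items: [EventItem], completion: @escaping ([EventItem]) -> Void) {
        let group = DispatchGroup()
        var geocoders = [CLGeocoder]()        // keep strong references while requests are in flight
        var geocoded = items

        for index in items.indices {
            group.enter()
            let geocoder = CLGeocoder()       // one instance per request; a CLGeocoder handles one at a time
            geocoders.append(geocoder)
            geocoder.geocodeAddressString(items[index].address) { placemarks, _ in
                // Completion handlers arrive on the main thread, so this mutation is serialized.
                geocoded[index].coordinate = placemarks?.first?.location?.coordinate
                group.leave()
            }
        }

        group.notify(queue: .main) {
            completion(geocoded)              // all completion blocks have been called by now
        }
    }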

correct way to wait for dispatch_semaphore in order to wait for many async tasks to complete

我只是一个虾纸丫 submitted on 2019-12-06 11:29:32
I have an asynchronous method, longRunningMethodOnObject:completion:. This method receives an object of type 'Object', does work with its data, and then calls the completion handler. I need to call many different "longRunningMethods" and wait for all of them to complete. I would like all of the longRunningMethodOnObject calls to run asynchronously (in parallel) with each other in the for loop. (I am not certain whether the longRunningMethodOnObject calls run serially with respect to each other, but this is more of a general
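For the semaphore approach the usual shape is: start every call, have each completion signal() once, and then wait() once per task, doing the waiting on a background queue so the caller's thread is not blocked. A Swift sketch, with longRunningMethod(on:completion:) as a hypothetical stand-in:

    import Foundation

    // Hypothetical long-running async method used only for illustration.
    func longRunningMethod(on value: Int, completion: @escaping () -> Void) {
        DispatchQueue.global().async {
            Thread.sleep(forTimeInterval: 0.1)   // placeholder work
            completion()
        }
    }

    func runAll(_ values: [Int], whenDone: @escaping () -> Void) {
        let semaphore = DispatchSemaphore(value: 0)

        for value in values {
            longRunningMethod(on: value) {
                semaphore.signal()               // one signal per finished task
            }
        }

        DispatchQueue.global().async {
            for _ in values { semaphore.wait() } // one wait per task, off the caller's thread
            DispatchQueue.main.async { whenDone() }
        }
    }

A DispatchGroup expresses the same "wait for all" idea with less bookkeeping, but the semaphore version mirrors the question's setup.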

Clarifications on dispatch_queue, reentrancy and deadlocks

无人久伴 submitted on 2019-12-06 11:12:47
I need some clarification on how dispatch queues relate to reentrancy and deadlocks. Reading the blog post Thread Safety Basics on iOS/OS X, I encountered this sentence: "All dispatch queues are non-reentrant, meaning you will deadlock if you attempt to dispatch_sync on the current queue." So, what is the relationship between reentrancy and deadlock? Why, if a dispatch queue is non-reentrant, does a deadlock arise when you use a dispatch_sync call? In my understanding, you can have a
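The deadlock case is easy to reproduce: sync blocks the caller until the submitted block finishes, but a serial queue cannot start that block until the block currently running on it (the caller) returns, so each side waits on the other forever. A small Swift sketch with the fatal line left commented out:

    import Foundation

    let serialQueue = DispatchQueue(label: "serial")

    serialQueue.sync {
        print("outer block running")
        // serialQueue.sync {             // uncommenting this deadlocks: the inner block
        //     print("inner block")       // cannot start until the outer block returns,
        // }                              // and the outer block cannot return until the
        print("outer block done")         // inner block has run
    }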

Swift 3 GCD lock variable and block_and_release error

一个人想着一个人 submitted on 2019-12-06 10:34:38
I am using Swift 3 GCD in order to perform some operations in my code, but I'm getting a _dispatch_call_block_and_release error often. I suppose the reason behind this error is that different threads modify the same variable, but I'm not sure how to fix the problem. Here is my code and explanations. I have one variable which is accessed and modified on different threads:

    var queueMsgSent: Dictionary<Date, BTCommand>? = nil

    func lock(obj: AnyObject, blk: () -> ()) {
        objc_sync_enter(obj)
        blk()
        objc_sync
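A common alternative to objc_sync_enter/objc_sync_exit for this situation is to funnel every read and write of the shared dictionary through one private serial queue, so no two threads ever touch it at the same time. A sketch under that assumption, with BTCommand replaced by a placeholder type:

    import Foundation

    struct BTCommand { let payload: String }     // placeholder for the question's type

    final class MessageStore {
        private var queueMsgSent: [Date: BTCommand] = [:]
        private let accessQueue = DispatchQueue(label: "queueMsgSent.access")   // serial

        func add(_ command: BTCommand, at date: Date) {
            accessQueue.async { self.queueMsgSent[date] = command }    // serialized write
        }

        func command(at date: Date) -> BTCommand? {
            return accessQueue.sync { self.queueMsgSent[date] }        // serialized read
        }
    }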

How does a serial dispatch queue guarantee resource protection?

心不动则不痛 submitted on 2019-12-06 09:46:09
    // my_serial_queue is a serial dispatch_queue
    dispatch_async(my_serial_queue, ^{
        // access a shared resource such as a bank account balance
        [self changeBankAccountBalance];
    });

If I submit 100 tasks that each access and mutate a bank account balance, I understand that a serial queue will execute each task sequentially, but do these tasks finish sequentially as well when using dispatch_async? What if task #23 that I submit asynchronously to the serial queue takes a really long time to finish? Would task #24 start only when task #23 is done, or would task #24 begin before task #23 is done? If so,
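Blocks submitted to a serial queue run strictly one at a time, in FIFO order, regardless of whether they were submitted with async or sync, so each task also finishes before the next one starts. A quick Swift sketch of that guarantee:

    import Foundation

    let mySerialQueue = DispatchQueue(label: "my_serial_queue")

    for task in 1...5 {
        mySerialQueue.async {
            if task == 3 { Thread.sleep(forTimeInterval: 0.5) }   // a deliberately slow task
            print("task \(task) finished")                        // always prints 1, 2, 3, 4, 5 in order
        }
    }

    mySerialQueue.sync { print("queue drained") }   // runs only after all five blocks have finished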