Question
Context: I have an iOS game application which uses GCD. For the application, I have three queues: Main Queue, Game Logic Queue (custom serial), Physics Queue (custom serial). The Physics Queue is used to do the physics simulation and the Game Logic Queue is used to do the game logic. So for every update (every 1/60 seconds), each queue does its respective work and then shares it with the other queues by scheduling blocks on them.
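Roughly, the setup looks like this (the queue labels and the work inside each block are just placeholders, not the actual game code):

// Two custom serial queues, created once at startup (labels are placeholders)
dispatch_queue_t gameLogicQueue = dispatch_queue_create("com.example.gamelogic", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t physicsQueue = dispatch_queue_create("com.example.physics", DISPATCH_QUEUE_SERIAL);

// Every tick (1/60 s), each queue does its own work and then hands the
// results to the other queues by dispatching blocks onto them.
dispatch_async(physicsQueue, ^{
    // ... run one step of the physics simulation ...
    dispatch_async(gameLogicQueue, ^{
        // ... feed the physics results into the game logic ...
        dispatch_async(dispatch_get_main_queue(), ^{
            // ... push the final state to the UI / renderer ...
        });
    });
});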
Problem:
With GCD: When I play a game level, i.e. the queues are doing work, I see a very rapid growth in my heap/allocations which leads to the app crashing due to memory problems. If I quit the level and return to a non-game view, i.e. the queues are NOT doing any work, the memory slowly drops (it takes about 2 minutes) and becomes stable. In the attached Instruments image, the peak is just before I quit the game level; after that there is a steady drop in memory as the objects get deallocated.
Without GCD: If I disable the other two queues and run everything on the main queue i.e. I eliminate all concurrency from my code, I do NOT see any significant heap growth and the game works just fine.
Already studied/tried/researched on the internet: I have a basic understanding of how a block captures objects and gets copied to the heap, but I'm not certain about the details. As far as I can tell, nothing of mine is being retained unexpectedly: when I quit the game level and return to the non-game view, all the objects that are expected to be deallocated are deallocated.
Questions:
- The app, with GCD, creates a lot of blocks. Is it good practice to create so many blocks?
- Running Instruments, I find that the objects that are rapidly allocated but never released fall under the category Malloc 48. The responsible library for these objects is libsystem_blocks.dylib and the responsible caller is _Block_copy_internal. These objects slowly get deallocated once I come out of my game level, i.e. when the queues stop performing any work. However, the deallocation is very slow and takes about 2 minutes to fully clean up. Is there any way this cleanup can be accelerated? My suspicion is that these objects keep piling up and then cause the memory crash.
Any ideas as to what may be going on?
Thanks in advance.

Based on the suggestions in the comments below, I wrote the following test code. I basically scheduled a callback from a CADisplayLink, and in that callback I scheduled 5000 blocks onto a custom queue.
// In a simple bare-bones view controller template I wrote the following code
- (void)viewDidLoad
{
    [super viewDidLoad];

    // Create the custom serial queue the blocks are dispatched onto
    // (the label is a placeholder).
    self.testQueueTwo = dispatch_queue_create("com.test.queuetwo", DISPATCH_QUEUE_SERIAL);

    self.objDisplayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(loop:)];
    // frameInterval is an integer number of frames; 1 means "fire every frame"
    // (1/60 is integer division and evaluates to 0).
    [self.objDisplayLink setFrameInterval:1];
    [self.objDisplayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)loop:(CADisplayLink*)lobjDisplayLink
{
    // Note: a static variable is not captured by value; each block reads
    // its current value at the time the block actually runs.
    static int lintNumBlocks = 0;
    if (lintNumBlocks < 5000)
    {
        dispatch_async(self.testQueueTwo, ^{
            @autoreleasepool
            {
                NSLog(@"Block Number : %d", lintNumBlocks);
                int outerIndex = 1000;
                while (outerIndex--)
                {
                    NSLog(@"Printing (%d, %d)", outerIndex, lintNumBlocks);
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    @autoreleasepool
                    {
                        NSString* lstrString = [NSString stringWithFormat:@"Finished Block %d", lintNumBlocks];
                        self.objDisplayLabel.text = lstrString;
                    }
                });
            }
        });
        lintNumBlocks++;
    }
    else
    {
        self.objDisplayLabel.text = @"Finished Running all blocks";
        [self.objDisplayLink invalidate];
    }
}
This code shows the same heap growth with GCD as described above. But surprisingly, with this code the memory never drops back down to its initial level. The Instruments output is as follows:

What is wrong with this code? Any ideas would help.
Answer 1:
I've seen this same sort of memory buildup around a GCD queue when I had a CADisplayLink firing every 60th of a second, but a frame rendering block that took longer than that to complete. Blocks will pile up in the queue, and as you see they have some overhead associated with them.
Mike Ash has a great writeup about this, where he demonstrates the consequences of building up processing blocks, along with ways of alleviating some of this pressure. Additionally, this was covered in a recent WWDC session on GCD, along with how to diagnose this in Instruments, but I can't find the specific session right now.
In my case, I ended up using something similar to what Mike arrived at: I use a dispatch semaphore to prevent accumulation of blocks in memory. I describe this approach, along with code, in this answer. What I do is use a semaphore with a maximum count of 1 and check it before dispatching a new block. If another block of that type is already sitting on the serial queue, I bail and don't throw another on the pile. Once a block has finished executing, I signal the semaphore (incrementing its count) so another block can be added.
It sounds like you need something like this to manage the addition of new blocks to your queues, because you'll want to be able to drop frames as load increases in your game.
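A minimal sketch of that pattern, along the lines of the linked answer (the ivar names and queue label are placeholders, not your actual code):

// Created once, e.g. in -viewDidLoad:
//   _frameSemaphore = dispatch_semaphore_create(1);
//   _renderQueue = dispatch_queue_create("com.example.render", DISPATCH_QUEUE_SERIAL);

- (void)displayLinkFired:(CADisplayLink *)link
{
    // If the previous frame's block hasn't finished yet, drop this frame
    // rather than piling another block onto the queue.
    if (dispatch_semaphore_wait(_frameSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return;
    }
    dispatch_async(_renderQueue, ^{
        @autoreleasepool
        {
            // ... one frame's worth of simulation / rendering work ...
        }
        // Re-open the gate so the next display link tick can enqueue a block.
        dispatch_semaphore_signal(_frameSemaphore);
    });
}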
Answer 2:
The main queue has an autorelease pool that is drained on every iteration of the run loop.
The custom queues don't, I'd think; at least NSThreads by default don't.
Wrap the code you dispatch onto those queues in @autoreleasepool.
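For example, something along these lines (the queue name is a placeholder):

dispatch_async(myGameQueue, ^{
    @autoreleasepool
    {
        // Autoreleased temporaries created in this block are released when
        // the pool drains at the end of the block, instead of at some
        // later, unspecified point chosen by the queue.
        NSString *status = [NSString stringWithFormat:@"tick %d", 42];
        NSLog(@"%@", status);
    }
});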
Answer 3:
Try using dispatch_async_f instead of dispatch_async. It avoids the block copy.
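A minimal sketch of what that looks like (the queue, the context pointer, and the function name are placeholders):

// A plain C function used as the work item; it receives the context
// pointer that was passed to dispatch_async_f.
static void physicsStep(void *context)
{
    int *stepNumber = (int *)context;
    NSLog(@"Running physics step %d", *stepNumber);
    // No block capture means no automatic memory management of the
    // context; free it (or hand it off) yourself.
    free(stepNumber);
}

// Enqueue the function without creating or copying a block.
int *stepNumber = malloc(sizeof(int));
*stepNumber = 42;
dispatch_async_f(physicsQueue, stepNumber, physicsStep);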
Source: https://stackoverflow.com/questions/16229759/rapid-heap-growth-with-grand-central-dispatch