I'm using glReadPixels to read data into a CVPixelBufferRef. I use the CVPixelBufferRef as the input into an AVAssetWriter
Better than swapping the components on the CPU would be to write a simple fragment shader that does it efficiently on the GPU as you render the image.
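The whole shader amounts to a one-line swizzle. A minimal sketch (OpenGL ES 2.0; the `inputTexture` and `textureCoordinate` names are just illustrative, not from any particular sample):

```glsl
// Fragment shader: sample the RGBA texture and emit it in BGRA order.
varying highp vec2 textureCoordinate;
uniform sampler2D inputTexture;

void main() {
    // .bgra swizzles the channels on the GPU -- no CPU loop needed.
    gl_FragColor = texture2D(inputTexture, textureCoordinate).bgra;
}
```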
And the best way of all is to remove the copying stage entirely by using the iOS 5 CoreVideo CVOpenGLESTextureCache, which lets you render straight into the CVPixelBufferRef and eliminates the call to glReadPixels altogether.
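The setup looks roughly like this. This is a sketch only, assuming an iOS 5 SDK and an existing `eaglContext`, `width`, and `height` in your renderer; error checking is omitted:

```c
#include <CoreVideo/CoreVideo.h>
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Create the texture cache tied to your GL context.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext,
        NULL, &textureCache);

// Create a pixel buffer backed by an IOSurface so the GPU can write into it.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
        kCVPixelFormatType_32BGRA, attrs, &pixelBuffer);

// Wrap the pixel buffer in a GL texture...
CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
        pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA, width, height,
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// ...and attach it to your framebuffer: everything you render now lands
// directly in pixelBuffer, ready to hand to AVAssetWriter.
glBindTexture(CVOpenGLESTextureGetTarget(texture),
        CVOpenGLESTextureGetName(texture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
        GL_TEXTURE_2D, CVOpenGLESTextureGetName(texture), 0);
```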
P.S. I'm pretty sure AVAssetWriter wants data in BGRA format (actually it probably wants YUV, but that's another story).
UPDATE: as for links, the documentation still seems to be under NDA, but there are two pieces of freely downloadable example code available:
GLCameraRipple and RosyWriter
The header files themselves contain good documentation, and the Mac equivalent (CVOpenGLTextureCache) is very similar, so you should have plenty to get you started.