In my iPad app, I am rendering to an offscreen bitmap, and then drawing the bitmap to the screen. (This is because I want to re-use existing bitmap rendering code.)

It looks like CoreGraphics is internally doubling the pixels, and then sending that to the GPU.
Pretty much. More accurately (in spirit at least):
- it creates a CGBitmapContext the size of your view's bounds, in device pixels
- it draws your CGImage into that context
- it makes a new CGImage from the bitmap context
- it hands that new image to the GPU

> the CGImage that I'm making should be fine for passing to the GPU directly.
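Roughly, if you wrote those intermediate steps yourself they would look something like this sketch (`redrawnImage` is a hypothetical helper, and the exact pixel format the system uses may differ):

```swift
import UIKit

// Sketch of the system's intermediate work: drawing an existing CGImage
// into a device-pixel-sized bitmap context and extracting a new CGImage.
// `sourceImage` and `view` are assumed to exist in your own code.
func redrawnImage(from sourceImage: CGImage, for view: UIView) -> CGImage? {
    let scale = view.window?.screen.scale ?? UIScreen.main.scale
    let width = Int(view.bounds.width * scale)   // device pixels, not points
    let height = Int(view.bounds.height * scale)
    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0,                          // let CG pick the row stride
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
    ) else { return nil }
    // This extra draw is the copy (and pixel doubling) you are paying for.
    context.draw(sourceImage,
                 in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}
```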
If you want that to happen, you need to tell the system to do that, by cutting out some of the steps above.
(There is no link between UIKit, CoreAnimation, and CoreGraphics that provides a "fast path" like you are expecting.)
The easiest way would be to make a UIImageView, and set its image to a UIImage wrapping your CGImageRef.
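A minimal sketch of that approach (`cgImage` and `containerView` are assumed to come from your existing code):

```swift
import UIKit

// Wrap the CGImage in a UIImage and hand it to a UIImageView.
// Passing the screen scale tells UIKit the bitmap is in device pixels,
// so it is not scaled up again.
let uiImage = UIImage(cgImage: cgImage,
                      scale: UIScreen.main.scale,
                      orientation: .up)
let imageView = UIImageView(image: uiImage)
imageView.frame = containerView.bounds   // `containerView` is your host view
containerView.addSubview(imageView)
```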
Or, set your view.layer.contents to your CGImageRef. (Make sure not to override -drawRect:, not to call -setNeedsDisplay, and that contentMode is not UIViewContentModeRedraw. It's easier to just use UIImageView.)
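The layer route looks roughly like this (a sketch; `cgImage` is assumed to be the image produced by your offscreen rendering):

```swift
import UIKit

// Hand the CGImage straight to the backing CALayer.
view.layer.contents = cgImage                    // CALayer accepts a CGImage here
view.layer.contentsScale = UIScreen.main.scale   // mark the bitmap as device-pixel sized
view.layer.contentsGravity = .resize
// Do not override draw(_:)/-drawRect: and do not call setNeedsDisplay(),
// or Core Animation will throw these contents away and re-render the layer.
```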