Question
How would I take the live frames from the iPhone camera, convert them to grayscale, and then display them on the screen in my application?
Answer 1:
To expand upon what Tommy said, you'll want to use AVFoundation in iOS 4.0 to capture the live camera frames. However, I'd recommend using OpenGL directly to do the image processing because you won't be able to achieve realtime results on current hardware otherwise.
For OpenGL ES 1.1 devices, I'd look at using Apple's GLImageProcessing sample application as a base (it has an OpenGL greyscale filter within it) and running your live video frames through that.
For OpenGL ES 2.0, you might want to use a programmable shader to achieve this effect. I show how to process live iPhone camera data through various filters in this sample application using shaders, with a writeup on how that works here.
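For reference, a grayscale fragment shader of this kind is only a few lines of GLSL ES; here is a sketch (the uniform and varying names are illustrative, not taken from that sample application), with the shader source held in a Swift string and each pixel's RGB dotted with Rec. 601 luma weights:

```swift
// Illustrative GLSL ES 2.0 fragment shader source for a luminance (grayscale) filter.
// "textureCoordinate" and "inputImageTexture" are placeholder names for this sketch.
let grayscaleFragmentShader = """
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

// Rec. 601 luma weights (full-range)
const highp vec3 lumaWeights = vec3(0.299, 0.587, 0.114);

void main()
{
    highp vec4 color = texture2D(inputImageTexture, textureCoordinate);
    highp float luminance = dot(color.rgb, lumaWeights);
    gl_FragColor = vec4(vec3(luminance), color.a);
}
"""
```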
In my benchmarks, the iPhone 4 can do this processing at 60 FPS with programmable shaders, but you only get about 4 FPS if you rely on CPU-bound code to do this.
Since I wrote the above, I've now created an open source framework that encapsulates this OpenGL ES 2.0 video processing, and it has a built-in grayscale filter that you can use for this. You can use a GPUImageGrayscaleFilter applied to a video source to do a fast conversion to black and white, or a GPUImageSaturationFilter to selectively desaturate this video by a controlled amount. Look at the SimpleVideoFilter example to see how this can be applied to a live video feed, then recorded to disk.
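As a rough sketch of how those pieces wire together (not lifted from the SimpleVideoFilter example itself, and the exact Swift spelling of the initializers may differ between GPUImage versions), the camera is chained into the filter and the filter into an on-screen view:

```swift
import UIKit
import AVFoundation
import GPUImage   // Brad Larson's GPUImage framework (Objective-C API, called from Swift here)

final class GrayscaleCameraViewController: UIViewController {
    // Keep strong references so the capture pipeline isn't deallocated.
    private var videoCamera: GPUImageVideoCamera?
    private let grayscaleFilter = GPUImageGrayscaleFilter()

    override func viewDidLoad() {
        super.viewDidLoad()

        let filteredView = GPUImageView(frame: view.bounds)
        view.addSubview(filteredView)

        // Follows the Objective-C signature initWithSessionPreset:cameraPosition:.
        videoCamera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                          cameraPosition: .back)
        videoCamera?.outputImageOrientation = .portrait

        // Camera -> grayscale filter -> on-screen view.
        videoCamera?.addTarget(grayscaleFilter)
        grayscaleFilter.addTarget(filteredView)
        videoCamera?.startCameraCapture()
    }
}
```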
Answer 2:
You need to use iOS 4.0, which allows you — at last — just to start the camera and receive frames as raw data as and when they're ready. You can then process the frames however you want and put them on screen as you prefer.
Best thing to do is to grab WWDC session 409 ("Using the Camera with AV Foundation") after logging in here, which should get you as far as being able to produce your own variant on the UIImagePickerController.
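As a minimal sketch of that capture pipeline, written with current Swift/AVFoundation names rather than the iOS 4.0-era Objective-C ones (the class name and queue label are made up for the example, and permission handling is omitted), each frame arrives as a raw pixel buffer you can then process and draw:

```swift
import AVFoundation

final class CameraFrameGrabber: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let outputQueue = DispatchQueue(label: "camera.frames")

    func start() throws {
        guard let camera = AVCaptureDevice.default(for: .video) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        // Ask for BGRA frames so each pixel arrives as 4 bytes of raw data.
        output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: outputQueue)
        session.addOutput(output)

        session.startRunning()
    }

    // Called once per captured frame with the raw pixel data.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Convert pixelBuffer to grayscale here and hand it to your renderer.
        _ = pixelBuffer
    }
}
```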
To convert from RGB to a brightness value, you probably want the quick formula:
brightness = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16;
That comes from the standard RGB to YUV conversion formulas, such as those described here. Depending on how you're getting your image to screen, you might then be able to store those values directly (for example, if you're going to OpenGL, just upload them as a luminance texture) or store R, G and B as:
1.164(brightness - 16)
(from the same source)
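If you're doing this conversion on the CPU, a direct per-pixel transcription of those two formulas might look like the following sketch (the function names are just for illustration):

```swift
// brightness = 0.257*R + 0.504*G + 0.098*B + 16 gives a video-range Y' (roughly 16...235).
func videoRangeLuma(r: UInt8, g: UInt8, b: UInt8) -> Double {
    return 0.257 * Double(r) + 0.504 * Double(g) + 0.098 * Double(b) + 16.0
}

// 1.164 * (brightness - 16) expands the video-range luma back to a full 0...255 gray value,
// which you would then write into R, G and B.
func grayValue(fromLuma y: Double) -> UInt8 {
    let expanded = 1.164 * (y - 16.0)
    return UInt8(min(max(expanded.rounded(), 0), 255))
}
```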
Answer 3:
Instead of doing any kind of conversion, use kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ('420v') and grab the Y-plane (luma) data, which is only 8 bits per pixel: 25% of the data you'd be uploading to a texture in OpenGL ES if you used BGRA. There's no need to do any RGB-to-YUV conversion, and it works in both OpenGL ES 1.1 and 2.0 without any blending or shader effects.
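As a sketch of what reading that plane looks like in Swift (the helper name is made up, and it assumes the capture output was configured with kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange in its videoSettings), plane 0 of the bi-planar buffer is the 8-bit luma plane you'd upload as a texture:

```swift
import CoreVideo

// Reads the 8-bit luma (Y) plane of a '420v' CVPixelBuffer delivered by AVCaptureVideoDataOutput.
// The closure receives (base pointer, width, height, bytesPerRow) for plane 0.
func withLumaPlane(of pixelBuffer: CVPixelBuffer,
                   _ body: (UnsafePointer<UInt8>, Int, Int, Int) -> Void) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    // Plane 0 of the bi-planar 4:2:0 format is the luma plane; plane 1 holds interleaved CbCr.
    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return }
    body(UnsafePointer(base.assumingMemoryBound(to: UInt8.self)),
         CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
         CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
         CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))
}
```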
Source: https://stackoverflow.com/questions/4381513/how-do-i-convert-the-live-video-feed-from-the-iphone-camera-to-grayscale