core-video

Fast Screen Recording of iOS App

Submitted by 坚强是说给别人听的谎言 on 2020-01-26 01:18:26
Question: I'm new to OpenGL ES. I'm trying to write code for screen recording of iOS apps, especially games. I'm using the 'render to texture' method described with code in this answer to capture the screen and write the video for a cocos2d game. One modification I made: when I call CVOpenGLESTextureCacheCreate, I use [EAGLContext currentContext] instead of [[GPUImageOpenGLESContext sharedImageProcessingOpenGLESContext] context]. It does record the video, but there are two issues: (1) When …
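For orientation, a minimal Swift sketch of the substitution described above: creating the texture cache against the current EAGLContext. This is my illustration, not the asker's code, and it assumes an OpenGL ES context is already current on the calling thread.

```swift
import CoreVideo
import OpenGLES

// Sketch: build a CVOpenGLESTextureCache from whatever EAGLContext is
// current, mirroring the substitution the question describes. Returns
// nil when no context is current or cache creation fails.
func makeTextureCache() -> CVOpenGLESTextureCache? {
    guard let context = EAGLContext.current() else { return nil }
    var cache: CVOpenGLESTextureCache?
    let status = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault,
                                              nil,      // cache attributes
                                              context,
                                              nil,      // texture attributes
                                              &cache)
    return status == kCVReturnSuccess ? cache : nil
}
```

The cache created this way is tied to that specific context's share group, which is exactly why swapping in a different context than the one GPUImage manages can change behavior.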

Creating copy of CMSampleBuffer in Swift returns OSStatus -12743 (Invalid Media Format)

Submitted by ╄→гoц情女王★ on 2020-01-24 05:46:05
Question: I am attempting to perform a deep clone of a CMSampleBuffer to store the output of an AVCaptureSession. I receive the error kCMSampleBufferError_InvalidMediaFormat (OSStatus -12743) when I run the function CMSampleBufferCreateForImageBuffer. I don't see how I've mismatched the CVImageBuffer and the CMSampleBuffer format description. Does anyone know where I've gone wrong? Here is my test code: func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer…
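For comparison, a sketch of a deep copy that derives a fresh format description from the copied pixel buffer, so the description and the buffer it describes cannot disagree (a mismatch there is one plausible source of -12743). This is my illustration of one approach, not a confirmed fix; the single memcpy assumes a non-planar buffer whose row layout matches the source, and real code should copy plane by plane.

```swift
import CoreMedia
import CoreVideo
import Foundation

// Sketch: deep-copy a video CMSampleBuffer by copying its pixel data into
// a new CVPixelBuffer and rebuilding the format description from the copy.
func deepCopy(_ sampleBuffer: CMSampleBuffer) -> CMSampleBuffer? {
    guard let src = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(src),
                        CVPixelBufferGetHeight(src),
                        CVPixelBufferGetPixelFormatType(src),
                        nil,
                        &copyOut)
    guard let dst = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    // Assumption: chunky (non-planar) buffer, identical bytes-per-row.
    memcpy(CVPixelBufferGetBaseAddress(dst),
           CVPixelBufferGetBaseAddress(src),
           CVPixelBufferGetDataSize(src))
    CVPixelBufferUnlockBaseAddress(dst, [])
    CVPixelBufferUnlockBaseAddress(src, .readOnly)

    // Fresh format description derived from the copy itself.
    var formatOut: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: dst,
                                                 formatDescriptionOut: &formatOut)
    guard let format = formatOut else { return nil }

    var timing = CMSampleTimingInfo()
    CMSampleBufferGetSampleTimingInfo(sampleBuffer, at: 0, timingInfoOut: &timing)

    var result: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                       imageBuffer: dst,
                                       dataReady: true,
                                       makeDataReadyCallback: nil,
                                       refcon: nil,
                                       formatDescription: format,
                                       sampleTiming: &timing,
                                       sampleBufferOut: &result)
    return result
}
```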

Are CVOpenGL[ES]TextureCaches incompatible with floating point formats?

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-01-05 13:50:53
Question: On OS X 10.4/iOS 5 and onwards, you can optimize your texture uploads and downloads using CVOpenGL[ES]TextureCaches. Instead of uploading textures with glTexImage2D and reading from the framebuffer with glReadPixels, you use a CVOpenGL[ES]TextureCache to translate your texture/FBO operations into the language of Core Video CVPixelBuffers. This works perfectly well with byte-sized (and probably short-sized) integer formats, but, apart from a fancy YUV pixel format, floats are decidedly under…
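As background on what a float-format path would have to marshal: GL ES half-float textures, and Core Video's half formats such as kCVPixelFormatType_64RGBAHalf, store 16-bit IEEE 754 values. A pure-Swift sketch of packing a Float into those bits (my own simplified version: truncating rounding, subnormals flushed to zero, NaNs collapsed to infinity):

```swift
// Pack a 32-bit Float into IEEE 754 half-float (binary16) bits.
func floatToHalfBits(_ value: Float) -> UInt16 {
    let bits = value.bitPattern
    let sign = UInt16((bits >> 16) & 0x8000)
    let exponent = Int((bits >> 23) & 0xFF) - 127 + 15   // rebias 8-bit exp to 5-bit
    let mantissa = bits & 0x7F_FFFF

    if exponent >= 0x1F { return sign | 0x7C00 }   // overflow: signed infinity
    if exponent <= 0    { return sign }            // underflow: signed zero
    return sign | UInt16(exponent << 10) | UInt16(mantissa >> 13)
}
```

For example, `floatToHalfBits(1.0)` yields `0x3C00`, the canonical half-float one.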

How to get bytes from a CMSampleBufferRef to send over the network

Submitted by 那年仲夏 on 2019-12-31 07:56:52
Question: I am capturing video using the AVFoundation framework, with the help of the Apple documentation http://developer.apple.com/library/ios/#documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/03_MediaCapture.html%23//apple_ref/doc/uid/TP40010188-CH5-SW2 . Now I did the following things: 1. Created a videoCaptureDevice. 2. Created an AVCaptureDeviceInput and set videoCaptureDevice. 3. Created an AVCaptureVideoDataOutput and implemented its delegate. 4. Created an AVCaptureSession, set the input to the AVCaptureDeviceInput, and set…
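The four numbered steps read, in Swift, roughly as the sketch below; the class name, queue label, and error handling are placeholders of mine, and the delegate body is where the question's goal (extracting bytes to send) would go.

```swift
import AVFoundation

// Sketch of the capture pipeline the steps above describe.
final class CaptureSetup: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()

    func configure() throws {
        // 1. Video capture device
        guard let device = AVCaptureDevice.default(for: .video) else { return }
        // 2. Device input wrapping the device
        let input = try AVCaptureDeviceInput(device: device)
        // 3. Data output with a delegate receiving per-frame sample buffers
        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
        // 4. Wire input and output into the session, then start it
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Extract the frame's bytes here before sending them over the network.
    }
}
```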

How to directly update pixels - with CGImage and direct CGDataProvider

Submitted by 笑着哭i on 2019-12-29 15:00:23
Question: Actual question: any of several answers will solve my problem: (1) Can I force a CGImage to reload its data from a direct data provider (created with CGDataProviderCreateDirect) the way CGContextDrawImage does? Or is there some other way I can get setting self.layer.contents to do it? (2) Is there a CGContext configuration, or a trick I can use, to render 1024x768 images at least 30 fps consistently with CGContextDrawImage? (3) Has anyone been able to successfully use CVOpenGLESTextureCacheCreateTextureFromImage…
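On the first point: since a CGImage is immutable and may cache the bytes it reads from its provider, one workaround is to rebuild the CGImage from the same mutable backing buffer each frame. A hedged Swift sketch of that rebuild (assumes a 32-bit RGBA buffer; all names are mine, and this does not answer whether the provider itself can be forced to re-read):

```swift
import CoreGraphics

// Sketch: wrap a live pixel buffer in a fresh CGImage per frame.
func makeImage(from buffer: UnsafeMutableRawPointer,
               width: Int, height: Int) -> CGImage? {
    let bytesPerRow = width * 4
    guard let provider = CGDataProvider(dataInfo: nil,
                                        data: buffer,
                                        size: bytesPerRow * height,
                                        releaseData: { _, _, _ in }) else { return nil }
    return CGImage(width: width, height: height,
                   bitsPerComponent: 8, bitsPerPixel: 32,
                   bytesPerRow: bytesPerRow,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
                   provider: provider,
                   decode: nil, shouldInterpolate: false,
                   intent: .defaultIntent)
}
```

Rebuilding the image is cheap relative to the pixel copy, but whether the overall path hits 30 fps at 1024x768 is exactly what the question is probing.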

Drawing CGImageRef in YUV

Submitted by 偶尔善良 on 2019-12-23 03:33:08
Question: I am using the code here to convert a CGImageRef into a CVPixelBufferRef on OS X: Convert UIImage to CVImageBufferRef. However, I need the image to be drawn in YUV (kCVPixelFormatType_420YpCbCr8Planar) instead of RGB as it is now. Is there any way to directly draw a CGImage in a YUV colorspace? And if not, does anyone have an example of the best way to convert a CVPixelBufferRef from RGB to YUV? I understand the formulas for the conversion, but doing it on the CPU is painfully…
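For reference, the per-pixel arithmetic the question alludes to, as a pure-Swift sketch (BT.601 full-range; the function name is mine). A real planar conversion additionally needs 4:2:0 chroma subsampling and, as the question notes, is better done on the GPU or with Accelerate/vImage than pixel-by-pixel on the CPU.

```swift
// BT.601 full-range RGB -> YCbCr, the math behind
// kCVPixelFormatType_420YpCbCr8Planar-style conversions.
func rgbToYCbCr(r: Double, g: Double, b: Double) -> (y: Double, cb: Double, cr: Double) {
    let y  =         0.299    * r + 0.587    * g + 0.114    * b
    let cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b
    let cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return (y, cb, cr)
}
```

For example, pure white (255, 255, 255) maps to luma 255 with both chroma channels at the neutral value 128.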

Render dynamic text onto CVPixelBufferRef while recording video

Submitted by 喜夏-厌秋 on 2019-12-21 01:12:51
Question: I'm recording video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method I want to draw text onto each individual sample buffer I'm receiving from the video connection. The text changes with about every frame (it's a stopwatch label), and I want it recorded on top of the captured video data. Here's what I've been able to come up with so far: //1. CVPixelBufferRef pixelBuffer =…
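A hedged Swift sketch of where such per-frame drawing usually happens (the question's own code is Objective-C and truncated above): lock the pixel buffer, wrap its base address in a CGContext, and draw into that. It assumes a 32-bit BGRA pixel buffer; the actual text drawing, e.g. via Core Text, is left to the caller's closure.

```swift
import CoreGraphics
import CoreVideo

// Sketch: expose a frame's pixels as a CGContext so arbitrary drawing
// (such as a stopwatch label) lands directly in the buffer.
func draw(onto pixelBuffer: CVPixelBuffer, using drawText: (CGContext) -> Void) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: CVPixelBufferGetWidth(pixelBuffer),
                                  height: CVPixelBufferGetHeight(pixelBuffer),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                      | CGBitmapInfo.byteOrder32Little.rawValue)
    else { return }
    drawText(context)   // e.g. CTLineDraw of the current stopwatch string
}
```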