avfoundation

Help me understand CMTime in AVAssetWriter

大憨熊 submitted on 2019-12-01 01:35:20
I'm having a hard time understanding how to convert a 30 fps motion JPEG stream to a video file using AVAssetWriter. The part I'm not getting is the [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime] method. How do I calculate the withPresentationTime value if I want to output 30 fps MPEG-4 video? The video source is a camera that streams 30 fps motion JPEG in real time. Appreciate any idea. Thanks. Steve McFarlin: You will need to generate a CMTime structure using CMTimeMake, incrementing the time by 1/30 of a second for each frame. Here is a sketch: CMTime
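A minimal sketch of that frame-timing bookkeeping, assuming the writer input and pixel buffer adaptor from the question are already configured and writing has started (frameCount, writerInput and buffer are illustrative names):

// Each appended frame advances the presentation time by 1/30 of a second.
int64_t frameCount = 0;   // incremented once per appended frame
int32_t fps = 30;         // timescale: 30 units per second

CMTime presentTime = CMTimeMake(frameCount, fps);   // frameCount/30 seconds

if (writerInput.readyForMoreMediaData) {
    [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    frameCount++;
}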

Error Domain=AVFoundationErrorDomain Code=-11821 “Cannot Decode”

纵饮孤独 submitted on 2019-12-01 01:26:31
There's a strange behaviour I've found when trying to merge videos with AVFoundation. I'm pretty sure I've made a mistake somewhere, but I'm too blind to see it. My goal is just to merge 4 videos (later there will be a crossfade transition between them). Every time I try to export the video I get this error: Error Domain=AVFoundationErrorDomain Code=-11821 "Cannot Decode" UserInfo=0x7fd94073cc30 {NSLocalizedDescription=Cannot Decode, NSLocalizedFailureReason=The media data could not be decoded. It may be damaged.} The funniest thing is that if I don't provide AVAssetExportSession with
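For reference, a bare-bones sketch of the kind of merge being attempted, with assets inserted back to back into a composition and exported; assetArray, outputURL and the preset are assumptions, and error handling is omitted:

AVMutableComposition *composition = [AVMutableComposition composition];
CMTime cursor = kCMTimeZero;

for (AVAsset *asset in assetArray) {
    NSError *error = nil;
    // Insert the whole asset (video + audio tracks) at the current cursor.
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                         ofAsset:asset
                          atTime:cursor
                           error:&error];
    cursor = CMTimeAdd(cursor, asset.duration);
}

AVAssetExportSession *exporter =
    [[AVAssetExportSession alloc] initWithAsset:composition
                                     presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputURL;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"status: %ld, error: %@", (long)exporter.status, exporter.error);
}];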

Split CMSampleBufferRef containing Audio

做~自己de王妃 submitted on 2019-12-01 00:59:26
I'm splitting the recording into different files while recording... The problem is that the video and audio sample buffers delivered to captureOutput don't correspond 1:1 (which is logical):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

AUDIO START: 36796.833236847 | DURATION: 0.02321995464852608 | END: 36796.856456802
VIDEO START: 36796.842089239 | DURATION: nan | END: nan
AUDIO START: 36796.856456805 | DURATION: 0.02321995464852608 | END: 36796.87967676
AUDIO START: 36796.879676764 | DURATION: 0
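A small sketch of how those START/DURATION/END values are read off each buffer inside the delegate callback, which is the information needed to decide which file a given buffer belongs to (the routing decision itself is up to you):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime start    = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    CMTime duration = CMSampleBufferGetDuration(sampleBuffer); // may be invalid (NaN seconds) for video
    CMTime end      = CMTimeAdd(start, duration);

    // An audio buffer whose time range straddles the split point belongs partly
    // to the old file and partly to the new one, so it may need to be trimmed
    // or re-stamped rather than routed wholesale.
    NSLog(@"START: %f | DURATION: %f | END: %f",
          CMTimeGetSeconds(start), CMTimeGetSeconds(duration), CMTimeGetSeconds(end));
}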

Switching AVCaptureSession preset when capturing a photo

落爺英雄遲暮 submitted on 2019-12-01 00:53:13
My current setup is as follows (based on the ColorTrackingCamera project from Brad Larson): I'm using an AVCaptureSession set to AVCaptureSessionPreset640x480, whose output I run through an OpenGL scene as a texture. This texture is then manipulated by a fragment shader. I need this "lower quality" preset because I want to preserve a high framerate while the user is previewing. I then want to switch to a higher quality output when the user captures a still photo. First I
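A hedged sketch of the preset switch being described, wrapped in begin/commitConfiguration so the session reconfigures atomically; captureSession is assumed to be the running AVCaptureSession:

// Switch from the fast preview preset to a higher-quality one for the still.
[captureSession beginConfiguration];

if ([captureSession canSetSessionPreset:AVCaptureSessionPresetPhoto]) {
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto;
}

[captureSession commitConfiguration];

// ...capture the photo, then switch back to AVCaptureSessionPreset640x480
// the same way to restore the high-framerate preview.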

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

不羁的心 submitted on 2019-12-01 00:49:10
I am trying to capture app sound and pass it to AVAssetWriter as input. I am setting a callback on the audio unit to get an AudioBufferList. The problem starts with converting the AudioBufferList to a CMSampleBufferRef: it always returns error -12731, which indicates that a required parameter is missing. Thanks, Karol

-(OSStatus)recordingCallbackWithRef:(void *)inRefCon flags:(AudioUnitRenderActionFlags *)flags timeStamp:(const AudioTimeStamp *)timeStamp busNumber:(UInt32)busNumber framesNumber:(UInt32)numberOfFrames data:(AudioBufferList *)data {
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList
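Error -12731 is kCMSampleBufferError_RequiredParameterMissing, which typically means the sample buffer was created without a format description and timing info before CMSampleBufferSetDataBufferFromAudioBufferList was called. A sketch under that assumption, where asbd is assumed to be the AudioStreamBasicDescription of the Audio Unit's stream format and timeStamp/numberOfFrames/data come from the callback above:

CMAudioFormatDescriptionRef formatDesc = NULL;
CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd,
                               0, NULL, 0, NULL, NULL, &formatDesc);

CMSampleTimingInfo timing;
timing.duration = CMTimeMake(1, (int32_t)asbd.mSampleRate);
timing.presentationTimeStamp = CMTimeMake((int64_t)timeStamp->mSampleTime,
                                          (int32_t)asbd.mSampleRate);
timing.decodeTimeStamp = kCMTimeInvalid;

CMSampleBufferRef sampleBuffer = NULL;
CMSampleBufferCreate(kCFAllocatorDefault, NULL, NO, NULL, NULL,
                     formatDesc, numberOfFrames, 1, &timing, 0, NULL,
                     &sampleBuffer);

// Only once the buffer has a format description and timing info can the
// AudioBufferList data be attached to it.
OSStatus status = CMSampleBufferSetDataBufferFromAudioBufferList(
    sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, data);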

How to separate y-planar, u-planar and uv-planar from yuv bi-planar in iOS?

☆樱花仙子☆ submitted on 2019-12-01 00:11:50
In my application I used AVCaptureVideo. I got video in kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format. Now I am getting the Y plane and UV plane from the image buffer:

CVPlanarPixelBufferInfo_YCbCrBiPlanar *planar = CVPixelBufferGetBaseAddress(imageBuffer);
size_t y_offset  = NSSwapBigLongToHost(planar->componentInfoY.offset);
size_t uv_offset = NSSwapBigLongToHost(planar->componentInfoCbCr.offset);

Here YUV is a bi-planar format (Y + UV). What is the UV plane? Is it uuuu,vvvv format or uvuvuvuv format? How do I get the U plane and Y plane separately? Can anyone please help me? The Y plane represents the
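In kCVPixelFormatType_420YpCbCr8BiPlanarFullRange the second plane is interleaved CbCr (u,v,u,v,...), not separate U and V planes. A sketch of reading the planes with the per-plane accessors and de-interleaving the chroma yourself, using the imageBuffer from the question (buffer management and error checks omitted):

CVPixelBufferLockBaseAddress(imageBuffer, 0);

// Plane 0 is Y, plane 1 is interleaved CbCr.
uint8_t *yPlane  = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
uint8_t *uvPlane = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);

size_t uvWidth       = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);
size_t uvHeight      = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);
size_t uvBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);

// De-interleave into separate U and V buffers if they are needed separately.
uint8_t *uPlane = malloc(uvWidth * uvHeight);
uint8_t *vPlane = malloc(uvWidth * uvHeight);
for (size_t row = 0; row < uvHeight; row++) {
    uint8_t *src = uvPlane + row * uvBytesPerRow;
    for (size_t col = 0; col < uvWidth; col++) {
        uPlane[row * uvWidth + col] = src[2 * col];      // Cb
        vPlane[row * uvWidth + col] = src[2 * col + 1];  // Cr
    }
}

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);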

How to play audio sample buffers from AVCaptureAudioDataOutput

与世无争的帅哥 submitted on 2019-11-30 23:58:46
The main goal of the app I'm trying to make is peer-to-peer video streaming (sort of like FaceTime over Bluetooth/WiFi). Using AVFoundation, I was able to capture video/audio sample buffers. Then I'm sending the video/audio sample buffer data. Now the problem is processing the sample buffer data on the receiving side. For the video sample buffers, I was able to get a UIImage from the sample buffer. But for the audio sample buffers, I don't know how to process them so I can play the audio. So the question is: how can I process/play the audio sample buffers? Right now I'm just plotting the
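One hedged approach on the receiving side is to pull the raw PCM out of each audio CMSampleBuffer and hand it to whatever playback API you prefer (an AudioQueue, or an AVAudioPlayerNode fed with AVAudioPCMBuffers). The extraction part, assuming sampleBuffer is a received audio buffer, looks roughly like this:

CMBlockBufferRef blockBuffer = NULL;
AudioBufferList audioBufferList;

// Copies the sample buffer's audio data into an AudioBufferList we can read.
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    NULL,                      // bufferListSizeNeededOut
    &audioBufferList,
    sizeof(audioBufferList),
    NULL, NULL,                // default allocators
    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
    &blockBuffer);

for (UInt32 i = 0; i < audioBufferList.mNumberBuffers; i++) {
    AudioBuffer buffer = audioBufferList.mBuffers[i];
    // buffer.mData / buffer.mDataByteSize is raw PCM: enqueue it on an
    // AudioQueue, or wrap it in an AVAudioPCMBuffer for an AVAudioPlayerNode.
}

CFRelease(blockBuffer);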

Output from AVAssetWriter (UIImages written to video) distorted

巧了我就是萌 submitted on 2019-11-30 23:45:18
I am using an AVAssetWriter to encode a series of images to a movie file, following Zoul's answer here: How do I export UIImage array as a movie? In short, my process is: create a UIImage from a .png file, get the CGImage from the UIImage, convert the CGImage to a CVPixelBuffer (using Zoul's function pixelBufferFromCGImage exactly), and write the frames to a .mov using an AVAssetWriterInputPixelBufferAdaptor and AVAssetWriter. This works fine in most cases; however, sometimes the encoded .mov file is distorted (see picture below). I was wondering if this type of distorted output is familiar to
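Distortion like this is often a row-stride problem: the CGBitmapContext that draws the image is created with a bytesPerRow that does not match what the CVPixelBuffer actually allocated, so each row is sheared. A hedged variant of a pixelBufferFromCGImage-style helper that reads the stride back from the buffer instead of assuming width * 4 (ARC assumed; not necessarily Zoul's original code):

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    CVPixelBufferRef pixelBuffer = NULL;
    NSDictionary *options = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                               (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };

    CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                        kCVPixelFormatType_32ARGB,
                        (__bridge CFDictionaryRef)options, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Use the stride the buffer was actually allocated with; Core Video may
    // pad rows, and a mismatch distorts the encoded frames.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 size.width, size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return pixelBuffer;
}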

Add custom metadata to video using AVFoundation

扶醉桌前 submitted on 2019-11-30 23:27:56
I want to add some info (metadata) to a video. I have found a way to retrieve metadata, but no solution to set or modify it. I am using AVURLAsset and AVAssetWriter, and also AVMutableComposition for creating the video. Use -[AVAssetWriter setMetadata:]. It takes an NSArray of AVMutableMetadataItems. Note that you cannot set the value after writing has started. Metadata keys and keyspaces are listed in AVMetadataFormat.h. When using AVMutableComposition, you can set the metadata property on the AVAssetExportSession when you go to write it out, rather than setting it on
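A short sketch of that suggestion; assetWriter and exportSession are assumed to exist, the key/value pairs are only examples, and the array must be set before writing/exporting starts:

AVMutableMetadataItem *titleItem = [AVMutableMetadataItem metadataItem];
titleItem.keySpace = AVMetadataKeySpaceCommon;
titleItem.key      = AVMetadataCommonKeyTitle;
titleItem.value    = @"My custom title";

AVMutableMetadataItem *descItem = [AVMutableMetadataItem metadataItem];
descItem.keySpace = AVMetadataKeySpaceCommon;
descItem.key      = AVMetadataCommonKeyDescription;
descItem.value    = @"Arbitrary description text";

// AVAssetWriter path: must be set before -startWriting is called.
assetWriter.metadata = @[titleItem, descItem];

// AVMutableComposition path: set the same array on the export session instead.
exportSession.metadata = @[titleItem, descItem];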

AVPlayer not playing MP3 audio file in documents directory iOS

十年热恋 submitted on 2019-11-30 23:20:05
I'm using AVPlayer to play an MP3 located in the documents directory (the file is confirmed to be there). I load the AVPlayerItem, wait for AVPlayerItemStatusReadyToPlay, then instantiate the AVPlayer with the item and play. AVPlayerItemStatusReadyToPlay does get reached, but no audio actually plays. Anyone have an idea why?

- (void)checkFileExists {
    if (![[NSFileManager defaultManager] fileExistsAtPath:[self.urlForEightBarAudioFile path]]) {
        [self beginEightBarDownload];
    } else {
        // set up audio
        [self setupAudio];
        //NSLog(@"file %@ already exists", [self.urlForEightBarAudioFile path]);
    }
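For comparison, a minimal sketch of the load-observe-play flow described above. The urlForEightBarAudioFile name is taken from the question; the strong playerItem/player properties are an assumption worth checking, since an AVPlayer held only in a local variable is deallocated before any audio is heard:

// Assumes strong properties: @property (strong) AVPlayerItem *playerItem;
//                            @property (strong) AVPlayer *player;
- (void)setupAudio
{
    self.playerItem = [AVPlayerItem playerItemWithURL:self.urlForEightBarAudioFile];
    [self.playerItem addObserver:self
                      forKeyPath:@"status"
                         options:NSKeyValueObservingOptionNew
                         context:NULL];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context
{
    if ([keyPath isEqualToString:@"status"] &&
        self.playerItem.status == AVPlayerItemStatusReadyToPlay) {
        self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
        [self.player play];
        // Remember to remove the observer when the item is released.
    }
}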