AVFoundation

Reading video frame-by-frame under iOS

纵饮孤独 submitted on 2019-12-02 17:43:59
I'm looking for a way to retrieve the individual frames of a video using the iOS API. I tried AVAssetImageGenerator, but it seems to provide frames only to the nearest second, which is a bit too coarse for my usage. From what I understand of the documentation, a pipeline of AVAssetReader, AVAssetReaderOutput, and CMSampleBufferGetImageBuffer should get me somewhere, but I'm stuck with a CVImageBufferRef. From there I'm looking for a way to get a CGImageRef or a UIImage, but I haven't found it. Real time is not needed, and the more I can stick to the provided API the better. Thanks a lot! Edit:
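One way to bridge from the CVImageBufferRef to a UIImage is through Core Image. A minimal sketch (modern Swift syntax; the file URL is a placeholder, and error handling is trimmed):

```swift
import AVFoundation
import UIKit

// Sketch: walk a video track frame by frame with AVAssetReader and convert
// each CMSampleBuffer into a UIImage via Core Image.
func readFrames(from url: URL) throws -> [UIImage] {
    let asset = AVAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return [] }

    let reader = try AVAssetReader(asset: asset)
    // Ask for BGRA pixel buffers so Core Image can consume them directly.
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    reader.startReading()

    var frames: [UIImage] = []
    let context = CIContext()
    while let sampleBuffer = output.copyNextSampleBuffer() {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            frames.append(UIImage(cgImage: cgImage))
        }
    }
    return frames
}
```

Note that if only a few exact frames are needed, AVAssetImageGenerator can also be made frame-accurate by setting its requestedTimeToleranceBefore and requestedTimeToleranceAfter to .zero, which avoids the reader pipeline entirely.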

How do I add a still image to an AVComposition?

半世苍凉 submitted on 2019-12-02 17:43:11
I have an AVMutableComposition with a video track, and I would like to add a still image into the video track, to be displayed for some given time. The still image is simply a PNG. I can load the image as an asset, but that's about it, because the resulting asset does not have any tracks and therefore cannot simply be inserted using the insertTimeRange… methods. Is there a way to add still images to a composition? It looks like the answer is somewhere in Core Animation, but the whole thing seems to be a bit above my head, and I would appreciate a code sample or some pointers.
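The Core Animation route usually means putting the PNG in a CALayer and compositing it over the video at export time with AVVideoCompositionCoreAnimationTool. A hedged sketch, assuming `composition` and `videoSize` come from your existing setup (timing the image's visibility via an opacity animation is omitted):

```swift
import AVFoundation
import UIKit

// Sketch: overlay a still PNG on a composition for export using
// AVVideoCompositionCoreAnimationTool.
func videoComposition(for composition: AVMutableComposition,
                      overlaying image: UIImage,
                      videoSize: CGSize) -> AVMutableVideoComposition {
    let imageLayer = CALayer()
    imageLayer.contents = image.cgImage
    imageLayer.frame = CGRect(origin: .zero, size: videoSize)
    // To show the image only for a given time range, animate this layer's
    // opacity with a CABasicAnimation (not shown here).

    let videoLayer = CALayer()
    videoLayer.frame = CGRect(origin: .zero, size: videoSize)
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: videoSize)
    parentLayer.addSublayer(videoLayer)   // video is rendered into this layer
    parentLayer.addSublayer(imageLayer)   // PNG sits on top

    let videoComposition = AVMutableVideoComposition(propertiesOf: composition)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    return videoComposition
}
```

The returned video composition is then assigned to an AVAssetExportSession's videoComposition property before exporting.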

Audio playback progress as UISlider in Swift

随声附和 submitted on 2019-12-02 17:38:57
I've seen some posts about accomplishing this in Objective-C, but I've been unable to do the same in Swift. Specifically, I can't figure out how to implement addPeriodicTimeObserverForInterval in the code below.

```swift
var player: AVAudioPlayer! = nil

@IBAction func playAudio(sender: AnyObject) {
    playButton.selected = !(playButton.selected)
    if playButton.selected {
        let fileURL = NSURL(string: toPass)
        player = AVAudioPlayer(contentsOfURL: fileURL, error: nil)
        player.numberOfLoops = -1 // play indefinitely
        player.prepareToPlay()
        player.delegate = self
        player.play()
        startTime.text = "\(player.currentTime)"
    }
}
```
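One point worth noting: addPeriodicTimeObserver(forInterval:queue:using:) is an AVPlayer API; AVAudioPlayer has no equivalent. A sketch of the AVPlayer approach in current Swift, where `slider` and `fileURL` stand in for your outlet and audio file:

```swift
import AVFoundation
import UIKit

// Sketch: drive a UISlider from AVPlayer's periodic time observer.
final class PlaybackController {
    let player: AVPlayer
    private var timeObserver: Any?

    init(fileURL: URL, slider: UISlider) {
        player = AVPlayer(url: fileURL)
        // Fire roughly twice a second on the main queue.
        let interval = CMTime(seconds: 0.5, preferredTimescale: 600)
        timeObserver = player.addPeriodicTimeObserver(forInterval: interval,
                                                      queue: .main) { [weak player] time in
            guard let duration = player?.currentItem?.duration,
                  duration.isNumeric, duration.seconds > 0 else { return }
            slider.value = Float(time.seconds / duration.seconds)
        }
        player.play()
    }

    deinit {
        if let observer = timeObserver { player.removeTimeObserver(observer) }
    }
}
```

If you must stay with AVAudioPlayer, the usual substitute is a Timer or CADisplayLink that polls player.currentTime and updates the slider.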

AVFoundation - Reverse an AVAsset and output video file

爷，独闯天下 submitted on 2019-12-02 17:34:06
I've seen this question asked a few times, but none of the answers seem to work. The requirement is to reverse a video and output it to a file (not just play it in reverse), keeping the same compression, format, and frame rate as the source video. Ideally, the solution would do this all in memory or in a buffer, avoiding writing the frames out as image files (for example with AVAssetImageGenerator) and then recompiling them, which is resource-intensive, gives unreliable timing, and changes the frame/image quality from the original. -- My contribution: this is still not working, but the best I've
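A buffer-based sketch of the approach usually suggested: decode every frame with AVAssetReader, then feed the pixel buffers to AVAssetWriter in reverse order while reusing the forward presentation timestamps so pacing matches the source. Audio, error handling, and chunking for long clips are omitted; `sourceURL`/`outputURL` are placeholders, and note that a re-encode is unavoidable here, so "same compression" really means "same codec and comparable settings":

```swift
import AVFoundation

// Sketch: reverse a video track by buffering decoded frames and rewriting
// them in reverse order with the original (forward) timestamps.
func reverseVideo(at sourceURL: URL, to outputURL: URL) throws {
    let asset = AVAsset(url: sourceURL)
    guard let track = asset.tracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(readerOutput)
    reader.startReading()

    // Memory-heavy: all frames are held at once. Processing the asset in
    // time-range chunks is the usual refinement for long clips.
    var samples: [CMSampleBuffer] = []
    while let sample = readerOutput.copyNextSampleBuffer() { samples.append(sample) }

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: track.naturalSize.width,
        AVVideoHeightKey: track.naturalSize.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, sample) in samples.reversed().enumerated() {
        // The index-th output frame reuses the index-th forward timestamp.
        let time = CMSampleBufferGetPresentationTimeStamp(samples[index])
        if let buffer = CMSampleBufferGetImageBuffer(sample) {
            while !input.isReadyForMoreMediaData { usleep(10_000) }
            adaptor.append(buffer, withPresentationTime: time)
        }
    }
    input.markAsFinished()
    writer.finishWriting {}
}
```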

Can I use AVCaptureSession to encode an AAC stream to memory?

只谈情不闲聊 submitted on 2019-12-02 17:21:31
I'm writing an iOS app that streams video and audio over the network. I am using AVCaptureSession to grab raw video frames using AVCaptureVideoDataOutput and encoding them in software with x264. That works great. I wanted to do the same for audio, only I don't need as much control on the audio side, so I wanted to use the built-in hardware encoder to produce an AAC stream. That meant using an Audio Converter from the Audio Toolbox layer. In order to do so I put in a handler for AVCaptureAudioDataOutput's audio frames:

```objc
- (void)captureOutput:(AVCaptureOutput *)captureOutput
```
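The capture side of that pipeline can be sketched as follows: AVCaptureAudioDataOutput delivers raw PCM sample buffers on a serial queue, which are then fed into an Audio Toolbox AudioConverter configured for kAudioFormatMPEG4AAC. The converter plumbing itself (AudioConverterNew, AudioConverterFillComplexBuffer) is omitted here; this only shows the wiring, in Swift:

```swift
import AVFoundation

// Sketch: capture raw PCM audio buffers, ready to hand off to an
// AudioConverter for in-memory AAC encoding.
final class AudioCapture: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let queue = DispatchQueue(label: "audio.capture")

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .audio) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // PCM arrives here. Extract the AudioBufferList from the sample
        // buffer and pass it to AudioConverterFillComplexBuffer.
        let numSamples = CMSampleBufferGetNumSamples(sampleBuffer)
        _ = numSamples
    }
}
```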

how to control orientation of video assembled with AVMutableComposition

寵の児 submitted on 2019-12-02 17:21:15
I am assembling a bunch of video clips filmed on the iPhone in portrait mode. To assemble them I am taking a straightforward approach: I use AVURLAsset to get hold of the different videos, shove them into an AVMutableCompositionTrack, put that into an AVMutableComposition, and export it to a file with AVAssetExportSession. My problem is that when I come to display the video in a UIWebView, it appears in landscape mode. However, if I view any of the component clips, they appear in portrait. Does anyone know how to sort out the orientation? I tried messing around
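The usual culprit is that an AVMutableCompositionTrack does not inherit the source track's preferredTransform, so the rotation metadata recorded by the camera is lost and portrait footage plays back as landscape. A sketch of copying it across, assuming all clips were shot in the same orientation (`clipURLs` is a placeholder):

```swift
import AVFoundation

// Sketch: concatenate clips into one composition track, preserving the
// source track's rotation metadata via preferredTransform.
func assemble(clipURLs: [URL]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let compTrack = composition.addMutableTrack(
        withMediaType: .video,
        preferredTrackID: kCMPersistentTrackID_Invalid) else { return composition }

    var cursor = CMTime.zero
    for url in clipURLs {
        let asset = AVURLAsset(url: url)
        guard let srcTrack = asset.tracks(withMediaType: .video).first else { continue }
        let range = CMTimeRange(start: .zero, duration: asset.duration)
        try compTrack.insertTimeRange(range, of: srcTrack, at: cursor)
        // Copy the rotation metadata; without this the export is landscape.
        compTrack.preferredTransform = srcTrack.preferredTransform
        cursor = CMTimeAdd(cursor, asset.duration)
    }
    return composition
}
```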

AVAssetExportSession using AVAssetExportPresetPassthrough breaking output

浪尽此生 submitted on 2019-12-02 16:59:37
Question: I'm using AVAssetExportSession in combination with AVAssetExportPresetPassthrough to stitch multiple videos together. Everything works quite well, except that once my first sub-clip should have finished, its picture "freezes" on the last frame and the second clip does not play. I made sure to set the layer opacity to 0.0f once each clip has finished; if I use another preset type, everything works. Any hints? Answer 1: I got in touch with Apple; they told me this is a bug. Please find the
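A workaround often suggested for this freeze is to stitch all clips into a single AVMutableCompositionTrack (rather than relying on layered playback), then export that one composition with the passthrough preset. A hedged sketch, with `clipAssets` standing in for your loaded sub-clips:

```swift
import AVFoundation

// Sketch: stitch clips into one composition track, then export with
// AVAssetExportPresetPassthrough.
func exportStitched(clipAssets: [AVAsset], to outputURL: URL) {
    let composition = AVMutableComposition()
    let track = composition.addMutableTrack(withMediaType: .video,
                                            preferredTrackID: kCMPersistentTrackID_Invalid)
    var cursor = CMTime.zero
    for asset in clipAssets {
        guard let src = asset.tracks(withMediaType: .video).first else { continue }
        try? track?.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration),
                                    of: src, at: cursor)
        cursor = CMTimeAdd(cursor, asset.duration)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetPassthrough) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        // Inspect export.status / export.error here.
    }
}
```

Passthrough only works when the stitched clips share the same codec and format, since no re-encoding takes place.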

AVVideoCompositionCoreAnimationTool and CALayer in portrait mode?

点点圈 submitted on 2019-12-02 15:59:44
I'm trying to bake a CALayer into portrait-mode video (on export) using an AVMutableComposition, an AVMutableVideoComposition, and an AVVideoCompositionCoreAnimationTool on iOS 4.3. This all works in landscape. If I capture video in portrait, however, the AVVideoCompositionCoreAnimationTool ignores the transform on the video track. That is, for portrait-mode video I am setting AVMutableCompositionTrack.preferredTransform to the preferredTransform value from the original asset's video track. As long as I don't use an AVVideoCompositionCoreAnimationTool this works, and the video comes out in
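The fix usually given for this is to stop relying on preferredTransform once an animation tool is attached, and instead apply the rotation explicitly through an AVMutableVideoCompositionLayerInstruction, with the renderSize swapped for portrait. A sketch under those assumptions:

```swift
import AVFoundation

// Sketch: apply the capture rotation via a layer instruction, since the
// animation tool ignores the track's preferredTransform.
func portraitComposition(for composition: AVMutableComposition,
                         track: AVMutableCompositionTrack,
                         sourceTrack: AVAssetTrack) -> AVMutableVideoComposition {
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)

    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    // Rotate explicitly using the source track's transform.
    layerInstruction.setTransform(sourceTrack.preferredTransform, at: .zero)
    instruction.layerInstructions = [layerInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = [instruction]
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    // Swap width and height so the render target is portrait.
    let size = sourceTrack.naturalSize
    videoComposition.renderSize = CGSize(width: size.height, height: size.width)
    return videoComposition
}
```

The animation tool's layers are then added to this same video composition before export.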

Getting Error when trying to download a m3u8 video using AVAssetDownloadURLSession in iOS

心不动则不痛 submitted on 2019-12-02 15:19:36
Question: I'm trying to download a .m3u8 video file (http://devimages.apple.com/iphone/samples/bipbop/bipbopall.m3u8) using AVAssetDownloadURLSession. When I run the code in Xcode, I get an error: "Error Domain=AVFoundationErrorDomain Code=-11800 \"The operation could not be completed\" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed}". The code that I have used:

```swift
import UIKit
import AVFoundation

class ViewController:
```
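Two common causes of this failure are running on the iOS Simulator (AVAssetDownloadURLSession only works on a real device) and using a non-background session configuration. A sketch of a setup that satisfies both constraints; the identifier and asset title are placeholders:

```swift
import AVFoundation

// Sketch: download an HLS asset with AVAssetDownloadURLSession using a
// background session configuration, as the API requires.
final class Downloader: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!

    func start(with url: URL) {
        let config = URLSessionConfiguration.background(withIdentifier: "hls.download")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
        let asset = AVURLAsset(url: url)
        let task = session.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "bipbop",
                                                 assetArtworkData: nil,
                                                 options: nil)
        task?.resume()
    }

    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist `location` (a bookmark or relative path) for later playback.
        print("Downloaded to \(location)")
    }
}
```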

CoreAnimation, AVFoundation and ability to make Video export

老子叫甜甜 submitted on 2019-12-02 14:58:44
I'm looking for the correct way to export my picture sequence into a QuickTime video. I know that AV Foundation has the ability to merge or recombine videos and also to add an audio track, building a single video asset. Now… my goal is a little bit different. I want to create a video from scratch: I have a set of UIImage and I need to render all of them into a single video. I read the Apple documentation about AV Foundation and found the AVVideoCompositionCoreAnimationTool class, which can take a Core Animation layer and re-encode it as a video. I also checked the AVEditDemo
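For writing a video from scratch out of still images, the more direct tool than AVVideoCompositionCoreAnimationTool is AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor: each UIImage is drawn into a CVPixelBuffer and appended at a fixed frame duration. A sketch, with one image per frame at 30 fps and error handling trimmed:

```swift
import AVFoundation
import UIKit

// Sketch: render a UIImage array into a .mov with AVAssetWriter.
func writeMovie(from images: [UIImage], size: CGSize, to outputURL: URL) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB
        ])
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, image) in images.enumerated() {
        while !input.isReadyForMoreMediaData { usleep(10_000) }
        guard let pool = adaptor.pixelBufferPool else { continue }
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
        guard let buffer = pixelBuffer, let cgImage = image.cgImage else { continue }

        // Draw the image into the pixel buffer's memory.
        CVPixelBufferLockBaseAddress(buffer, [])
        let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.draw(cgImage, in: CGRect(origin: .zero, size: size))
        CVPixelBufferUnlockBaseAddress(buffer, [])

        // Each image occupies exactly one frame at 30 fps.
        let time = CMTime(value: CMTimeValue(index), timescale: 30)
        adaptor.append(buffer, withPresentationTime: time)
    }
    input.markAsFinished()
    writer.finishWriting {}
}
```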