avfoundation

Add chapter information to existing video with AVFoundation

Submitted by 女生的网名这么多〃 on 2019-12-23 01:54:31
Question: I am trying to add chapter markers (text + images) to an existing video in iOS. Reading them is trivially easy with the built-in functions:

    NSLocale *locale = [chapterLocalizations lastObject];
    NSLog(@"Locale: %@", [locale localeIdentifier]);
    NSArray *keys = @[AVMetadataCommonKeyTitle, AVMetadataCommonKeyArtwork];
    NSArray *chapters = [asset chapterMetadataGroupsWithTitleLocale:locale
                                      containingItemsWithCommonKeys:keys];
    for (AVTimedMetadataGroup *metadataGroup in chapters) {
        NSArray *items = …
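For comparison, a minimal Swift sketch of the same read path; the logChapters helper and the already-loaded asset parameter are placeholders rather than code from the question:

    import AVFoundation

    // Sketch: list chapter titles of an asset, assuming `asset` is an
    // already-loaded AVAsset (e.g. an AVURLAsset for the movie file).
    func logChapters(of asset: AVAsset) {
        guard let locale = asset.availableChapterLocales.last else { return }
        print("Locale: \(locale.identifier)")
        let keys: [AVMetadataKey] = [.commonKeyTitle, .commonKeyArtwork]
        let chapters = asset.chapterMetadataGroups(withTitleLocale: locale,
                                                   containingItemsWithCommonKeys: keys)
        for group in chapters {
            let titles = AVMetadataItem.metadataItems(from: group.items,
                                                      withKey: AVMetadataKey.commonKeyTitle,
                                                      keySpace: .common)
            print("Chapter at \(group.timeRange.start.seconds)s: \(titles.first?.stringValue ?? "?")")
        }
    }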

AVFoundation decode prores4444 movie with alpha channel

Submitted by 六眼飞鱼酱① on 2019-12-23 01:27:11
Question: I'm trying to decode a ProRes 4444 video with an alpha channel on iOS with Swift, to overlay it as a complex animation over a user video and export it to their library. The AVFoundation documentation is not that great and I'm struggling to find any code examples. When I try to use the code below with AVAssetReaderTrackOutput to decode the video, I get an "AVAssetReaderOutput does not currently support compressed output" error.

    let avAssetReaderVideoCompositionOutput = …
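That error typically means the reader output has been asked for compressed samples; a hedged sketch that requests decoded pixel buffers instead (the makeReader helper, the file URL parameter, and the choice of 32-bit BGRA to keep the alpha channel are assumptions, not code from the question):

    import AVFoundation
    import CoreVideo

    // Sketch: set up an AVAssetReader that delivers decoded frames with alpha.
    func makeReader(for url: URL) throws -> (AVAssetReader, AVAssetReaderTrackOutput)? {
        let asset = AVAsset(url: url)
        guard let track = asset.tracks(withMediaType: .video).first else { return nil }
        let reader = try AVAssetReader(asset: asset)
        // Ask for uncompressed BGRA pixel buffers; BGRA carries an alpha component,
        // so the ProRes 4444 alpha channel survives decoding.
        let settings: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
        guard reader.canAdd(output) else { return nil }
        reader.add(output)
        return (reader, output)
    }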

How to use layer instructions with the videoCompositionWithAsset:applyingCIFiltersWithHandler: method

Submitted by 天大地大妈咪最大 on 2019-12-22 18:26:34
Question: Engineering has provided the following information regarding this issue: Core Image filtering and layer-instruction-based composition can't be used simultaneously. Layer instructions won't be run when added to an AVMutableVideoComposition that is initialized with +[videoCompositionWithAsset:applyingCIFiltersWithHandler:]. To use layer instructions in this case, move the functionality into the handler instead of adding the layer instructions to the AVMutableVideoComposition. This is what I …
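A hedged Swift sketch of what "move the functionality into the handler" can look like: the transform a layer instruction would normally apply is applied to the request's source image instead (the asset parameter and the scale/translation values are placeholders):

    import AVFoundation
    import CoreImage
    import CoreGraphics

    // Sketch: reproduce a layer instruction's transform inside the CIFilter handler.
    func makeComposition(for asset: AVAsset) -> AVMutableVideoComposition {
        return AVMutableVideoComposition(asset: asset) { request in
            // What setTransform(_:at:) on a layer instruction would have done:
            let transform = CGAffineTransform(scaleX: 0.5, y: 0.5)
                .translatedBy(x: 100, y: 100)
            let transformed = request.sourceImage.transformed(by: transform)
            // Composite over a black background sized to the render frame.
            let background = CIImage(color: .black)
                .cropped(to: CGRect(origin: .zero, size: request.renderSize))
            request.finish(with: transformed.composited(over: background), context: nil)
        }
    }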

Fake long exposure on iOS

Submitted by 落花浮王杯 on 2019-12-22 14:58:10
Question: I have to add long-exposure photo capability to an app. Since I know this is not really possible, I have to fake it. It should work like "Slow Shutter" or "Magic Shutter". Sadly, I have no clue how to achieve this. I know how to take images with the camera (through AVFoundation), but I'm stuck at merging them to fake long shutter times. Possibly I need to manipulate and combine all the images with Core Graphics, but I'm not sure about this (or how). Maybe there's a better …
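One common way to fake a long exposure is to blend successive frames into a running average; a minimal Core Image sketch, assuming frames holds same-sized CIImages captured in sequence (the function name is illustrative):

    import CoreImage
    import CoreGraphics

    // Sketch: fake a long exposure by averaging frames.
    // The i-th frame (1-based) is blended in with weight 1/i, which keeps a true
    // running average without holding every frame in memory at once.
    func fakeLongExposure(frames: [CIImage]) -> CIImage? {
        guard var accumulated = frames.first else { return nil }
        for (index, frame) in frames.enumerated().dropFirst() {
            let weight = 1.0 / CGFloat(index + 1)
            // Scale RGBA by the weight so source-over blending yields
            // weight * frame + (1 - weight) * accumulated.
            let faded = frame.applyingFilter("CIColorMatrix", parameters: [
                "inputRVector": CIVector(x: weight, y: 0, z: 0, w: 0),
                "inputGVector": CIVector(x: 0, y: weight, z: 0, w: 0),
                "inputBVector": CIVector(x: 0, y: 0, z: weight, w: 0),
                "inputAVector": CIVector(x: 0, y: 0, z: 0, w: weight)
            ])
            accumulated = faded.composited(over: accumulated)
        }
        return accumulated
    }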

Objective-C: No matter what I do, CIDetector is always nil

Submitted by 孤人 on 2019-12-22 14:04:15
Question: I'm trying to get a simple proof of concept going with Apple's face-detection API. I've looked at a couple of other examples, including Apple's SquareCam and https://github.com/jeroentrappers/FaceDetectionPOC. Based on these, it seems like I am following the correct pattern to get the APIs going, but I am stuck. No matter what I do, the CIDetector for my face detector is always nil! I would seriously appreciate any help, clues, hints, or suggestions!

    -(void)initCamera{
        session = [ …
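For comparison, a hedged Swift sketch of creating a face CIDetector; the initializer returns nil when the type string or options are malformed, so pinning those down is a reasonable first check (an assumption about common causes, not a confirmed diagnosis of this code):

    import CoreImage

    // Sketch: create a face detector and run it on a CIImage.
    // Passing nil for the context lets Core Image create its own.
    func detectFaces(in image: CIImage) -> [CIFaceFeature] {
        let options: [String: Any] = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
        guard let detector = CIDetector(ofType: CIDetectorTypeFace,
                                        context: nil,
                                        options: options) else {
            return []
        }
        return detector.features(in: image).compactMap { $0 as? CIFaceFeature }
    }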

AVSynchronizedLayer not synchronizing animation

Submitted by [亡魂溺海] on 2019-12-22 12:50:36
Question: I'm having issues making an animation use the AVPlayer time instead of the system time. The synchronized layer does not work properly: animations stay synchronized to the system time instead of the player time. I know the player does play, and if I pass CACurrentMediaTime() to beginTime, the animation starts right away, as it should when it is not synchronized. EDIT: I can see the red square in its final state from the beginning, which means the animation has reached its end at the start …
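For reference, a hedged sketch of the usual AVSynchronizedLayer wiring; the details that usually matter (an assumption, since the full code is not shown) are that beginTime is expressed on the player item's timeline via AVCoreAnimationBeginTimeAtZero and that the animated layer is a sublayer of the synchronized layer:

    import AVFoundation
    import UIKit

    // Sketch: animate a layer on the AVPlayerItem's timeline rather than system time.
    // `playerItem` and `parentLayer` are placeholders for the caller's objects.
    func addSynchronizedAnimation(to parentLayer: CALayer, playerItem: AVPlayerItem) {
        let squareLayer = CALayer()
        squareLayer.backgroundColor = UIColor.red.cgColor
        squareLayer.frame = CGRect(x: 0, y: 0, width: 50, height: 50)

        let animation = CABasicAnimation(keyPath: "position.x")
        animation.fromValue = 0
        animation.toValue = 300
        animation.duration = 3
        // Core Animation treats beginTime == 0 as "start now"; this constant
        // means "time zero of the player item" instead.
        animation.beginTime = AVCoreAnimationBeginTimeAtZero
        animation.isRemovedOnCompletion = false
        animation.fillMode = .forwards
        squareLayer.add(animation, forKey: "slide")

        // Only sublayers of the synchronized layer follow the item's timing.
        let syncLayer = AVSynchronizedLayer(playerItem: playerItem)
        syncLayer.frame = parentLayer.bounds
        syncLayer.addSublayer(squareLayer)
        parentLayer.addSublayer(syncLayer)
    }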

Short-circuiting of audio in VoIP app with CallKit

Submitted by 荒凉一梦 on 2019-12-22 10:53:08
Question: I'm using the SpeakerBox app as a basis for my VoIP app. I have managed to get everything working, but I can't seem to get rid of the "short-circuiting" of the audio from the mic to the speaker of the device. In other words, when I make a call, I can hear myself in the speaker as well as the other person's voice. How can I change this? AVAudioSession setup:

    AVAudioSession *sessionInstance = [AVAudioSession sharedInstance];
    NSError *error = nil;
    [sessionInstance setCategory …
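For reference, a Swift sketch of the session setup the excerpt begins; note that the category alone should not route mic audio back to the speaker, so the self-monitoring is more likely introduced in the audio-unit render path (an assumption, since the rest of the code is not shown):

    import AVFoundation

    // Sketch: typical AVAudioSession configuration for a VoIP call.
    func configureVoIPSession() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .voiceChat,
                                options: [.allowBluetooth])
        try session.setPreferredIOBufferDuration(0.005)
        try session.setPreferredSampleRate(44_100)
    }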

Callback when phone call ends? (to resume AVCaptureSession)

Submitted by 谁说胖子不能爱 on 2019-12-22 10:33:52
Question: I have a video camera app, and I would like it to allow users to capture content while on the phone. I can do this by disconnecting the audio capture when the phone call is received and the session is interrupted, but because the session is then no longer interrupted, I have no way of knowing when the phone call ends and it is OK to reconnect the audio device. If I use these callbacks for AVCaptureSessionWasInterruptedNotification and AVCaptureSessionInterruptionEndedNotification:

    - (void …
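One option (an assumption, not necessarily the approach accepted for this question) is to watch call state directly with CallKit's CXCallObserver and reattach the audio input once no calls remain:

    import CallKit
    import Foundation

    // Sketch: get a callback when the phone call ends, independently of the
    // AVCaptureSession interruption notifications. `reconnectAudioInput` is a
    // placeholder for re-adding the audio AVCaptureDeviceInput.
    final class CallEndWatcher: NSObject, CXCallObserverDelegate {
        private let observer = CXCallObserver()
        private let reconnectAudioInput: () -> Void

        init(reconnectAudioInput: @escaping () -> Void) {
            self.reconnectAudioInput = reconnectAudioInput
            super.init()
            observer.setDelegate(self, queue: .main)
        }

        func callObserver(_ callObserver: CXCallObserver, callChanged call: CXCall) {
            // hasEnded flips to true when the call finishes; once every call has
            // ended it should be safe to reattach the audio capture device.
            if call.hasEnded && callObserver.calls.allSatisfy({ $0.hasEnded }) {
                reconnectAudioInput()
            }
        }
    }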

Processing all frames in an AVAsset

Submitted by 泄露秘密 on 2019-12-22 10:30:29
Question: I am trying to go through each frame in an AVAsset and process each frame as if it were an image. I have not been able to find anything in my searches. The task I am trying to accomplish would look like this in pseudo-code:

    for each frame in asset
        take the frame as an image and convert it to a cvMat
        process and store the data of the center points
        store the center points in an array

The only part of that pseudo-code I do not know how to write is going through each frame and capturing it as an image. Can …
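The missing piece, iterating frames, can be done with AVAssetReader; a hedged sketch that hands each decoded frame to a caller-supplied closure (the closure, the forEachFrame name, and the BGRA format chosen to ease a cv::Mat conversion are assumptions):

    import AVFoundation
    import CoreMedia
    import CoreVideo

    // Sketch: walk every video frame in an asset and pass the decoded pixel
    // buffer (plus its timestamp) to a processing closure.
    func forEachFrame(in asset: AVAsset,
                      process: (CVPixelBuffer, CMTime) -> Void) throws {
        guard let track = asset.tracks(withMediaType: .video).first else { return }
        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        reader.add(output)
        guard reader.startReading() else { return }
        while let sampleBuffer = output.copyNextSampleBuffer() {
            guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { continue }
            process(pixelBuffer, CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
    }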