AVFoundation

Movement by a single frame in CMTime and AVFoundation

一笑奈何 submitted on 2019-12-03 07:39:12
I'm attempting to play a video with AVFoundation. I am using the following code for a button that advances the playback by one frame. It works intermittently: on some executions it will do the right thing and advance one frame, but most times I have to press the button 3 or 4 times before it will advance a frame. This makes me think it is some kind of precision issue, but I can't figure out what it is. Each time it is run, the new CMTime appears to be advancing by the same amount. My other theory is that it could be caused by the currentTime not being set to an exact frame boundary at my
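A minimal Swift sketch of two ways to make this reliable, assuming an AVPlayer named `player` whose current item is ready (names are illustrative, not from the question): AVPlayerItem's step(byCount:) lets AVFoundation land on the exact frame boundary itself, and the fallback seek uses zero tolerance so the player does not snap to the nearest keyframe and appear to ignore a one-frame advance.

```swift
import AVFoundation

// Illustrative sketch: advance playback by exactly one frame.
func advanceOneFrame(of player: AVPlayer) {
    guard let item = player.currentItem else { return }

    if item.canStepForward {
        // AVFoundation works out the exact frame boundary itself.
        item.step(byCount: 1)
    } else if let track = item.asset.tracks(withMediaType: .video).first,
              track.nominalFrameRate > 0 {
        // Fallback: add one frame's duration and seek with zero tolerance,
        // so the seek is not rounded to a keyframe or coalesced away.
        let frameDuration = CMTime(value: 1,
                                   timescale: CMTimeScale(track.nominalFrameRate.rounded()))
        let target = CMTimeAdd(player.currentTime(), frameDuration)
        player.seek(to: target, toleranceBefore: .zero, toleranceAfter: .zero)
    }
}
```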

AVMutableComposition of a Solid Color with No AVAsset

点点圈 submitted on 2019-12-03 07:36:03
Question: Here's my end goal: I'd like to use AVVideoCompositionCoreAnimationTool to create a video from Core Animation. I will not be using an existing AVAsset in this composition. My question is, how can I use AVMutableComposition to make a video with a static solid color for a given amount of time? After I figure that out, I can add the animation. Here's my code: - (void)exportVideo { AVMutableComposition *mixComposition = [AVMutableComposition composition]; CMTimeRange timeRange = CMTimeRangeMake
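AVMutableComposition normally needs at least one source track to draw time from, so one commonly used workaround, sketched here in Swift rather than the question's Objective-C, is to write solid-color pixel buffers with AVAssetWriter and then run the Core Animation pass over the resulting asset. The output URL, size, frame rate, and color below are assumptions for illustration.

```swift
import AVFoundation
import CoreGraphics
import CoreVideo
import Foundation

// Sketch: write a fixed-length movie of a single solid color with AVAssetWriter.
func writeSolidColorMovie(to url: URL, seconds: Double = 3, fps: Int32 = 30) throws {
    let size = CGSize(width: 1280, height: 720)
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height,
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    // One BGRA pixel buffer filled with a solid color (0xFF everywhere = opaque white).
    var pb: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                        kCVPixelFormatType_32BGRA, nil, &pb)
    guard let buffer = pb else { return }
    CVPixelBufferLockBaseAddress(buffer, [])
    memset(CVPixelBufferGetBaseAddress(buffer), 0xFF, CVPixelBufferGetDataSize(buffer))
    CVPixelBufferUnlockBaseAddress(buffer, [])

    // Append the same buffer once per frame for the requested duration.
    for frame in 0..<Int(seconds * Double(fps)) {
        while !input.isReadyForMoreMediaData { Thread.sleep(forTimeInterval: 0.01) }
        adaptor.append(buffer,
                       withPresentationTime: CMTime(value: CMTimeValue(frame), timescale: fps))
    }
    input.markAsFinished()
    writer.finishWriting { }
}
```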

AVAudioSession AVAudioSessionCategoryPlayAndRecord glitch

耗尽温柔 submitted on 2019-12-03 07:30:24
Question: I would like to record videos with audio using AVCaptureSession. For this I need the audio session category AVAudioSessionCategoryPlayAndRecord, since my app also plays back video with sound. I want audio to be audible from the default speaker and I want it to mix with other audio, so I need the options AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers. If I do the following while other audio is playing, there is a clear audible glitch in the audio from
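One mitigation that is often suggested (an assumption here, not something stated in the question) is to configure the shared audio session once, up front, and only touch it again if the category or options actually need to change; calling setCategory repeatedly while other audio is playing is a frequent source of the glitch. A Swift sketch:

```swift
import AVFoundation

// Sketch: configure the session once and skip redundant setCategory calls.
func configurePlayAndRecordSession() throws {
    let session = AVAudioSession.sharedInstance()
    let wanted: AVAudioSession.CategoryOptions = [.defaultToSpeaker, .mixWithOthers]

    // Only reconfigure if the session is not already in the desired state;
    // a no-op here avoids interrupting whatever audio is currently playing.
    if session.category != .playAndRecord || !session.categoryOptions.contains(wanted) {
        try session.setCategory(.playAndRecord, mode: .default, options: wanted)
    }
    try session.setActive(true)
}
```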

How to transform Vision framework coordinate system into ARKit?

家住魔仙堡 submitted on 2019-12-03 06:53:15
Question: I am using ARKit (with SceneKit) to add a virtual object (e.g. a ball). I am tracking a real-world object (e.g. a foot) using the Vision framework and receiving its updated position in the vision request completion handler method. let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation, completionHandler: self.handleVisionRequestUpdate) I want to replace the tracked real-world object with a virtual one (for example, replace the foot with a cube), but I am not sure how to replace the
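A Swift sketch of the usual conversion path, assuming an ARSCNView named `sceneView` and the observation delivered to the completion handler: Vision's boundingBox is normalized with its origin at the bottom-left, so it has to be flipped and scaled into view points before hit-testing into the ARKit world. This ignores any aspect-fill cropping of the camera image, which a real implementation would also account for.

```swift
import ARKit
import Vision

// Sketch: map a Vision observation to a 3D position where a virtual node can go.
func worldPosition(for observation: VNDetectedObjectObservation,
                   in sceneView: ARSCNView) -> SCNVector3? {
    let box = observation.boundingBox   // normalized, origin at bottom-left

    // Flip the y axis and scale to the view's point coordinate system.
    let viewPoint = CGPoint(x: box.midX * sceneView.bounds.width,
                            y: (1 - box.midY) * sceneView.bounds.height)

    // Hit-test from that screen point into the AR scene.
    guard let hit = sceneView.hitTest(viewPoint,
                                      types: [.existingPlaneUsingExtent, .featurePoint]).first
    else { return nil }

    let t = hit.worldTransform.columns.3
    return SCNVector3(t.x, t.y, t.z)
}
```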

iPhone 7 Plus - AVFoundation dual camera

こ雲淡風輕ζ submitted on 2019-12-03 06:42:31
Question: I'm actively researching this at the moment, but now that the iPhone 7 Plus has a dual camera system, will AVFoundation allow you to handle video frames from each specific camera simultaneously? I am thinking/hoping that I'll be able to handle output from two AVCaptureDevice instances at the same time given a certain position. Answer 1: In the updated AVFoundation documentation (AVCaptureDeviceType) there are new device types: builtInWideAngleCamera and builtInTelephotoCamera. Hence, it should be
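For reference, a short Swift sketch of discovering both back cameras by those device types; actually receiving sample buffers from both at the same time requires AVCaptureMultiCamSession, which only arrived in iOS 13, so on earlier systems only one camera can deliver frames at a time.

```swift
import AVFoundation

// Sketch: find the wide-angle and telephoto cameras on a dual-camera phone.
func discoverBackCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera],
        mediaType: .video,
        position: .back)
    return discovery.devices   // each device can back its own AVCaptureDeviceInput
}
```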

Xcode: How to save audio file after recording audio using AVFoundation

主宰稳场 submitted on 2019-12-03 06:26:18
Question: I browsed through all kinds of posts related to this topic but the answers just do not really help. I used this tutorial to implement recording of audio files and playback. What seems to be missing is how to save the recording permanently. When I exit my app the sound file is there but nothing is in it. I don't even know if it is saving the recording or just creating the file. Here is a code sample: NSArray *dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
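An empty file usually means the recorder was never stopped: prepareToRecord/record create the file, and stop() flushes and closes it. A Swift sketch of the usual pattern, with the file name and settings as assumptions:

```swift
import AVFoundation
import Foundation

// Sketch: record to Documents and make sure stop() runs so the file is written out.
final class Recorder {
    private var recorder: AVAudioRecorder?

    func start() throws {
        let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let url = docs.appendingPathComponent("sound.m4a")
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44_100,
            AVNumberOfChannelsKey: 1,
        ]
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)

        recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder?.record()
    }

    func finish() {
        recorder?.stop()   // Without this, the file stays on disk but empty.
    }
}
```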

What is the best/fastest way to convert CMSampleBufferRef to OpenCV IplImage?

拟墨画扇 submitted on 2019-12-03 06:25:44
Question: I am writing an iPhone app that does some sort of real-time image detection with OpenCV. What is the best way to convert a CMSampleBufferRef image from the camera (I'm using AVCaptureVideoDataOutputSampleBufferDelegate of AVFoundation) into an IplImage that OpenCV understands? The conversion needs to be fast enough that it can run in real time. - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *
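Wrapping the bytes in an IplImage or cv::Mat still needs an Objective-C++ shim, but the AVFoundation half of the conversion, getting at the raw pixel bytes without copying, looks roughly like this in Swift. It assumes the capture output's videoSettings request kCVPixelFormatType_32BGRA.

```swift
import AVFoundation
import CoreVideo

// Sketch: hand the locked BGRA bytes of a sample buffer to a callback
// (e.g. an Objective-C++ shim that builds a cv::Mat/IplImage header over them).
func withPixelData(of sampleBuffer: CMSampleBuffer,
                   _ body: (_ base: UnsafeMutableRawPointer,
                            _ width: Int, _ height: Int, _ bytesPerRow: Int) -> Void) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    body(base,
         CVPixelBufferGetWidth(pixelBuffer),
         CVPixelBufferGetHeight(pixelBuffer),
         CVPixelBufferGetBytesPerRow(pixelBuffer))
}
```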

Add GIF watermark on a video in iOS

我们两清 submitted on 2019-12-03 06:25:44
Question: I need to accomplish this function: there is a GIF overlay on a video, and I want to composite the video and the GIF into a new video. I'm using the following code, but the result is only the video without the GIF: - (void)mixVideoAsset:(AVAsset *)videoAsset { LLog(@"Begining"); NSDate * begin = [NSDate date]; // 2 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances. AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init]; // 3 - Video
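One thing that commonly causes exactly this symptom: a GIF assigned to a layer's contents does not animate during an export. A workaround, sketched here in Swift rather than the question's Objective-C, is to decode the GIF frames with ImageIO and drive the layer's contents with a CAKeyframeAnimation whose beginTime is AVCoreAnimationBeginTimeAtZero, then hand that layer to AVVideoCompositionCoreAnimationTool. The GIF URL and per-loop duration are assumptions.

```swift
import AVFoundation
import ImageIO
import UIKit

// Sketch: build an overlay layer whose contents cycle through the GIF's frames.
func gifOverlayLayer(gifURL: URL, frame: CGRect) -> CALayer? {
    guard let source = CGImageSourceCreateWithURL(gifURL as CFURL, nil) else { return nil }

    var frames: [CGImage] = []
    for index in 0..<CGImageSourceGetCount(source) {
        if let image = CGImageSourceCreateImageAtIndex(source, index, nil) {
            frames.append(image)
        }
    }

    let layer = CALayer()
    layer.frame = frame

    let animation = CAKeyframeAnimation(keyPath: "contents")
    animation.values = frames
    animation.duration = 1.0            // assumed loop length; read it from the GIF metadata in real code
    animation.repeatCount = .infinity
    animation.beginTime = AVCoreAnimationBeginTimeAtZero   // required on AVFoundation export timelines
    animation.isRemovedOnCompletion = false
    layer.add(animation, forKey: "gifFrames")
    return layer
}
```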

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

折月煮酒 submitted on 2019-12-03 06:18:16
Question: I'm looking for help with a performance issue in an Objective-C based iOS app. I have an iOS application that captures the screen's contents using CALayer's renderInContext method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for research purposes on usability. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data,
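One mitigation, offered as a sketch rather than a drop-in fix: keep the main-thread work down to the renderInContext snapshot itself and push the pixel-buffer conversion and the AVAssetWriter append onto a serial background queue, dropping frames when the writer is not ready instead of stalling the run loop. The class, queue label, and adaptor wiring below are assumptions.

```swift
import AVFoundation
import UIKit

// Sketch: snapshot on the main thread, encode off it, drop frames under pressure.
final class ScreenFrameEncoder {
    private let encodeQueue = DispatchQueue(label: "screen.encode")

    func captureFrame(from layer: CALayer, at time: CMTime,
                      into adaptor: AVAssetWriterInputPixelBufferAdaptor) {
        // Main thread: only rasterize the layer.
        let renderer = UIGraphicsImageRenderer(bounds: layer.bounds)
        let image = renderer.image { ctx in layer.render(in: ctx.cgContext) }

        // Background queue: convert and append without blocking the UI.
        encodeQueue.async {
            guard let buffer = ScreenFrameEncoder.pixelBuffer(from: image),
                  adaptor.assetWriterInput.isReadyForMoreMediaData else { return } // drop the frame
            adaptor.append(buffer, withPresentationTime: time)
        }
    }

    private static func pixelBuffer(from image: UIImage) -> CVPixelBuffer? {
        guard let cg = image.cgImage else { return nil }
        var pb: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, cg.width, cg.height,
                            kCVPixelFormatType_32BGRA, nil, &pb)
        guard let buffer = pb else { return nil }
        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }
        let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                                width: cg.width, height: cg.height,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                          | CGBitmapInfo.byteOrder32Little.rawValue)
        context?.draw(cg, in: CGRect(x: 0, y: 0, width: cg.width, height: cg.height))
        return buffer
    }
}
```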

Play video on Android recorded from the iPhone

送分小仙女□ submitted on 2019-12-03 06:10:51
I am writing a video-based social app for iOS and Android (Windows Phone support is pending). I recorded video in MOV format using the AVFoundation framework on the iPhone and uploaded it to the server. It can be downloaded and played on the iPhone client, but on an Android device the downloaded video cannot be played, since its format is not supported on Android. What is the best solution for video recording and playback that supports multiple mobile platforms? Answer from Comradsky: Blog post @ Why Apple Is Winning the Mobile Video Format War...For Now. Android uses the Flash plugin, and Apple uses HLS. Today's
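On the iOS side, the usual fix is to re-export the recorded .mov as H.264 in an .mp4 container before uploading, which both Android and iOS players handle. A Swift sketch, with the URLs and preset as placeholders:

```swift
import AVFoundation

// Sketch: transcode a recorded .mov into an H.264 .mp4 suitable for Android playback.
func exportForAndroid(sourceURL: URL, destinationURL: URL,
                      completion: @escaping (Bool) -> Void) {
    let asset = AVAsset(url: sourceURL)
    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetMediumQuality)
    else { completion(false); return }

    export.outputURL = destinationURL
    export.outputFileType = .mp4              // MPEG-4 container with H.264 video
    export.shouldOptimizeForNetworkUse = true
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```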