avfoundation

What is the best/fastest way to convert CMSampleBufferRef to OpenCV IplImage?

Submitted by 落花浮王杯 on 2019-12-02 19:52:15
I am writing an iPhone app that does some sort of real-time image detection with OpenCV. What is the best way to convert a CMSampleBufferRef image from the camera (I'm using AVCaptureVideoDataOutputSampleBufferDelegate of AVFoundation) into an IplImage that OpenCV understands? The conversion needs to be fast enough to run in real time. - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; // Convert CMSampleBufferRef into
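A minimal Swift sketch (the question's code is Objective-C) of the first half of that conversion: locking the CVPixelBuffer behind the sample buffer and exposing its raw bytes, width, height, and stride, which is what an Objective-C++ wrapper would hand to OpenCV as an IplImage header or cv::Mat. It assumes the video data output is configured for kCVPixelFormatType_32BGRA; the function name and callback shape are illustrative, not from the original post.

```swift
import AVFoundation
import CoreVideo

// Sketch: expose the raw pixel data of a CMSampleBuffer without copying.
// The closure receives (baseAddress, width, height, bytesPerRow) in that order;
// those four values are all OpenCV needs to wrap the frame without a copy
// (e.g. via cvCreateImageHeader + cvSetData in an Objective-C++ shim).
func withPixelData(of sampleBuffer: CMSampleBuffer,
                   _ body: (UnsafeMutableRawPointer, Int, Int, Int) -> Void) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

    // Lock the buffer so the base address stays valid while it is read.
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }

    body(baseAddress,
         CVPixelBufferGetWidth(pixelBuffer),
         CVPixelBufferGetHeight(pixelBuffer),
         CVPixelBufferGetBytesPerRow(pixelBuffer))
}
```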

Add GIF watermark on a video in iOS

Submitted by China☆狼群 on 2019-12-02 19:51:50
I need to implement the following: there is a GIF overlaid on a video, and I want to composite the video and the GIF into a new video. I'm using the following code, but the result is only the video without the GIF: - (void)mixVideoAsset:(AVAsset *)videoAsset { LLog(@"Begining"); NSDate * begin = [NSDate date]; // 2 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances. AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init]; // 3 - Video track AVMutableCompositionTrack *videoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
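A hedged Swift sketch (not the poster's Objective-C) of the piece that is usually missing when "only the video" comes out: the animated overlay has to be drawn through an AVVideoCompositionCoreAnimationTool attached to the video composition, not merely added as a layer somewhere. It assumes the GIF has already been decoded into an array of CGImage frames plus a total duration; names such as gifFrames and the watermark frame are placeholders.

```swift
import AVFoundation
import UIKit

// Builds a video composition whose animation tool renders the GIF frames on top
// of the composition's video track.
func makeVideoComposition(for compositionVideoTrack: AVCompositionTrack,
                          duration: CMTime,
                          renderSize: CGSize,
                          gifFrames: [CGImage],
                          gifDuration: CFTimeInterval) -> AVMutableVideoComposition {
    // Layer tree: parentLayer holds the video layer plus the animated overlay.
    let videoLayer = CALayer()
    let overlayLayer = CALayer()
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: renderSize)
    videoLayer.frame = parentLayer.frame
    overlayLayer.frame = CGRect(x: 20, y: 20, width: 200, height: 200) // watermark position (assumed)
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    // Drive the GIF with a keyframe animation over the layer's contents.
    let animation = CAKeyframeAnimation(keyPath: "contents")
    animation.values = gifFrames
    animation.calculationMode = .discrete
    animation.duration = gifDuration
    animation.repeatCount = .infinity
    animation.beginTime = AVCoreAnimationBeginTimeAtZero // required inside a video composition
    animation.isRemovedOnCompletion = false
    overlayLayer.add(animation, forKey: "gif")

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = renderSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    // An instruction still has to cover the composition's video track.
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: duration)
    instruction.layerInstructions = [AVMutableVideoCompositionLayerInstruction(assetTrack: compositionVideoTrack)]
    videoComposition.instructions = [instruction]
    return videoComposition
}
```

The returned composition would then be assigned to the videoComposition property of the AVAssetExportSession that writes the new file; without that assignment the export ignores the overlay entirely.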

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

Submitted by 痞子三分冷 on 2019-12-02 19:41:27
I'm looking for help with a performance issue in an Objective-C based iOS app. I have an iOS application that captures the screen's contents using CALayer's renderInContext method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for usability research. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc. The content of the web view is not under my control - it is arbitrary content from the Web. This
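A sketch of the usual structural split for this kind of problem: snapshot on the main thread at a deliberately reduced frame rate, then push everything else to a background queue so encoding never blocks the UI or the web view. Note the snapshot here uses UIView's drawHierarchy(in:afterScreenUpdates:), a generally faster stand-in for the question's renderInContext:, and the frame handler (e.g. an AVAssetWriterInputPixelBufferAdaptor append) is assumed to exist elsewhere.

```swift
import UIKit
import AVFoundation

final class ScreenRecorder {
    private let encodingQueue = DispatchQueue(label: "screen-recorder.encoding")
    private var displayLink: CADisplayLink?

    /// Called off the main thread with each captured frame and its timestamp,
    /// e.g. to append it through an AVAssetWriterInputPixelBufferAdaptor (not shown).
    var frameHandler: ((UIImage, CMTime) -> Void)?

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.preferredFramesPerSecond = 10   // capture well below 60 fps (assumed budget)
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        guard let window = UIApplication.shared.keyWindow else { return }

        // The snapshot itself must happen on the main thread; keep it as cheap
        // as possible (scale 1, no afterScreenUpdates).
        UIGraphicsBeginImageContextWithOptions(window.bounds.size, true, 1.0)
        window.drawHierarchy(in: window.bounds, afterScreenUpdates: false)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        guard let frame = image else { return }
        let time = CMTime(seconds: link.timestamp, preferredTimescale: 600)

        // Everything after the snapshot happens off the main thread.
        encodingQueue.async { self.frameHandler?(frame, time) }
    }
}
```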

Show camera stream while AVCaptureSession's running

Submitted by ℡╲_俬逩灬. on 2019-12-02 19:39:14
I was able to capture video frames from the camera using AVCaptureSession, as described in http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html . However, it seems that AVCaptureSession captures frames from the camera without showing the camera stream on the screen. I would also like to show the camera stream, just as UIImagePickerController does, so that the user knows the camera is on and can see what it is pointed at. Any help or pointers would be appreciated! AVCaptureVideoPreviewLayer is exactly what you're looking for. The code fragment Apple uses to demonstrate how to use it
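A minimal Swift sketch of attaching the AVCaptureVideoPreviewLayer mentioned in the answer to an already-configured, running AVCaptureSession so the live camera feed becomes visible; the host view is an assumption.

```swift
import AVFoundation
import UIKit

// Adds a live preview of `session` to `view` and returns the layer so the caller
// can update its frame on rotation or layout changes.
func attachPreview(of session: AVCaptureSession, to view: UIView) -> AVCaptureVideoPreviewLayer {
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = view.bounds
    previewLayer.videoGravity = .resizeAspectFill   // fill the view, cropping if necessary
    view.layer.addSublayer(previewLayer)
    return previewLayer
}
```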

Using Apple's new AudioEngine to change Pitch of AudioPlayer sound

Submitted by 烂漫一生 on 2019-12-02 19:38:40
I am currently trying to get Apple's new audio engine working with my current audio setup. Specifically, I am trying to change the pitch with the audio engine, which is apparently possible according to this post. I have also looked into other pitch-changing solutions, including Dirac and ObjectAL, but unfortunately both seem to be pretty messed up in terms of working with Swift, which I am using. My question is: how do I change the pitch of an audio file using Apple's new audio engine? I am able to play sounds using AVAudioPlayer, but I am not clear on how the file is referenced in audioEngine. In
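A minimal Swift sketch of the AVAudioEngine route: instead of AVAudioPlayer, the file is loaded into an AVAudioPlayerNode and routed through an AVAudioUnitTimePitch node, which is the unit that actually shifts pitch. The class and method names are illustrative, not from the original post.

```swift
import AVFoundation

final class PitchPlayer {
    private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()
    private let timePitch = AVAudioUnitTimePitch()

    /// Plays the file at `url` with the pitch shifted by `pitchCents`
    /// (roughly -2400...+2400, i.e. up to two octaves either way).
    func play(url: URL, pitchCents: Float) throws {
        let file = try AVAudioFile(forReading: url)

        engine.attach(playerNode)
        engine.attach(timePitch)
        timePitch.pitch = pitchCents

        // player -> time/pitch effect -> speaker
        engine.connect(playerNode, to: timePitch, format: file.processingFormat)
        engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

        playerNode.scheduleFile(file, at: nil, completionHandler: nil)
        try engine.start()
        playerNode.play()
    }
}
```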

Can AVFoundation be coerced into playing a local .ts file?

Submitted by 落爺英雄遲暮 on 2019-12-02 19:36:05
Clearly, AVFoundation (and QuickTime X) can demux and play properly encoded .ts containers, because .ts containers underlie HTTP Live Streaming. Short of setting up a local web service to serve the .m3u8 and associated .ts files, I'd really like to be able to either: convince AVURLAsset and/or URLAssetWithURL to accept a local file .m3u8 URI as if it were an HTTP URI, or better yet, be able to use AVQueuePlayer to load and play a sequence of .ts files without jumping through the live-streaming hoops. The reason I want to do this is that I need to locally generate movie assets on-the-fly
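One commonly suggested alternative to a local web server is an AVAssetResourceLoaderDelegate behind a custom URL scheme, so the player asks the app for the playlist and segment bytes instead of insisting on an HTTP origin. The sketch below uses a made-up "local-hls" scheme and simply maps requests back onto files on disk; whether AVFoundation's HLS pipeline accepts this on a given OS version is something that still has to be verified on device.

```swift
import AVFoundation
import Foundation

final class LocalHLSLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = loadingRequest.request.url,
              var components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
            return false
        }

        // local-hls:///path/playlist.m3u8 -> file:///path/playlist.m3u8
        components.scheme = "file"
        guard let fileURL = components.url, let data = try? Data(contentsOf: fileURL) else {
            loadingRequest.finishLoading(with: NSError(domain: NSURLErrorDomain,
                                                       code: NSURLErrorFileDoesNotExist,
                                                       userInfo: nil))
            return true
        }

        loadingRequest.dataRequest?.respond(with: data)
        loadingRequest.finishLoading()
        return true
    }
}

// Usage sketch: keep the loader alive and hand the asset a custom-scheme URL.
// let loader = LocalHLSLoader()
// let asset = AVURLAsset(url: URL(string: "local-hls:///path/to/playlist.m3u8")!)
// asset.resourceLoader.setDelegate(loader, queue: DispatchQueue.main)
// let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
```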

How to transform vision framework coordinate system into ARKit?

Submitted by 耗尽温柔 on 2019-12-02 19:29:47
I am using ARKit (with SceneKit) to add a virtual object (e.g. a ball). I am tracking a real-world object (e.g. a foot) using the Vision framework and receiving its updated position in the Vision request completion handler method. let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation, completionHandler: self.handleVisionRequestUpdate) I want to replace the tracked real-world object with a virtual one (for example, replace the foot with a cube), but I am not sure how to convert the boundingBox rect (which we receive in the Vision request completion) into a SceneKit node position, as the coordinate systems are
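A hedged Swift sketch of the usual conversion, assuming `sceneView` is the ARSCNView the camera frames come from: Vision bounding boxes are normalized (0...1) with a bottom-left origin, so flip the y axis, scale to view points, then hit-test into the AR scene to get a world-space position for the SceneKit node. If the camera image is cropped or rotated relative to the view, the point should additionally go through the current ARFrame's displayTransform first; that step is omitted here.

```swift
import ARKit
import SceneKit
import Vision

func worldPosition(for observation: VNDetectedObjectObservation,
                   in sceneView: ARSCNView) -> SCNVector3? {
    let box = observation.boundingBox
    let viewSize = sceneView.bounds.size

    // Center of the tracked object in UIKit view coordinates (top-left origin).
    let center = CGPoint(x: box.midX * viewSize.width,
                         y: (1 - box.midY) * viewSize.height)

    // Project the 2D point into the 3D scene; feature points are the fallback
    // when no detected plane lies under the object.
    guard let hit = sceneView.hitTest(center, types: [.existingPlaneUsingExtent, .featurePoint]).first
    else { return nil }

    let t = hit.worldTransform.columns.3
    return SCNVector3(t.x, t.y, t.z)
}
```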

Swift 3 sound play

Submitted by 对着背影说爱祢 on 2019-12-02 19:18:25
OK, I have looked into this and have tried many different ways to play a sound when a button is clicked. How would I play a sound when a button is clicked in Swift 3? I have my sound in a folder named Sounds and the file name is ClickSound.mp3. Use the function below: //MARK:- PLAY SOUND func playSound() { let url = Bundle.main.url(forResource: "ClickSound", withExtension: "mp3")! do { player = try AVAudioPlayer(contentsOf: url) guard let player = player else { return } player.prepareToPlay() player.play() } catch let error as NSError { print(error.description) } } first import AudioToolbox import
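A self-contained version of the playSound() approach from that answer, for context: the key point is that the AVAudioPlayer must be stored in a property so ARC does not release it before the sound finishes. "ClickSound.mp3" has to be part of the app bundle (the Sounds group in Xcode does not change the resource name).

```swift
import AVFoundation
import UIKit

class SoundButtonController: UIViewController {
    // Keep a strong reference; a local variable would be deallocated immediately.
    var player: AVAudioPlayer?

    @IBAction func buttonTapped(_ sender: UIButton) {
        playSound()
    }

    func playSound() {
        guard let url = Bundle.main.url(forResource: "ClickSound", withExtension: "mp3") else { return }
        do {
            player = try AVAudioPlayer(contentsOf: url)
            player?.prepareToPlay()
            player?.play()
        } catch {
            print("Could not play sound: \(error)")
        }
    }
}
```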

iPhone 7 Plus - AVFoundation dual camera

Submitted by 心已入冬 on 2019-12-02 19:16:22
I'm actively researching this at the moment, but now that the iPhone 7 Plus has a dual-camera system, will AVFoundation allow you to handle video frames from each specific camera simultaneously? I am thinking/hoping that I'll be able to handle output from two AVCaptureDevice instances at the same time for a given position. In the updated AVFoundation documentation ( AVCaptureDeviceType ) there are new device types: builtInWideAngleCamera and builtInTelephotoCamera . Hence, it should be possible to create multiple capture sessions and get the feedback from both of them at the same time. You
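A minimal Swift sketch of looking up the two back cameras by the device types the answer mentions. Whether two separate sessions can actually stream from both physical cameras at the same time is hardware- and OS-version-dependent and needs to be tested on device; this only shows the discovery step.

```swift
import AVFoundation

// Returns the wide-angle and telephoto back cameras, if the device has them.
func backCameras() -> (wide: AVCaptureDevice?, tele: AVCaptureDevice?) {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera],
        mediaType: .video,
        position: .back)

    let wide = discovery.devices.first { $0.deviceType == .builtInWideAngleCamera }
    let tele = discovery.devices.first { $0.deviceType == .builtInTelephotoCamera }
    return (wide, tele)
}
```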

First frame of a video using AVFoundation

Submitted by 馋奶兔 on 2019-12-02 19:16:14
I'm trying to get the first frame of a video using the classes in AVFoundation, but it does not appear to be getting an image at all. My code currently looks like this: AVURLAsset* asset = [AVURLAsset URLAssetWithURL:[NSURL URLWithString:videoPath] options:nil]; AVAssetImageGenerator* imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset]; UIImage* image = [UIImage imageWithCGImage:[imageGenerator copyCGImageAtTime:CMTimeMake(0, 1) actualTime:nil error:nil]]; [videoFrame setImage:image]; The value of videoPath is /var/mobile/Applications/02F42CBF-D8BD-4155-85F2-8CF1E55B5023
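A minimal Swift sketch of the same idea. Note that a local path must be turned into a file URL with URL(fileURLWithPath:); using URLWithString: on a plain filesystem path, as in the Objective-C code above, is the usual reason no image comes back. The function name is illustrative.

```swift
import AVFoundation
import UIKit

// Extracts the frame at time zero from a local video file.
func firstFrame(ofVideoAt path: String) -> UIImage? {
    let asset = AVURLAsset(url: URL(fileURLWithPath: path))
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true  // respect rotation metadata

    do {
        let cgImage = try generator.copyCGImage(at: CMTime(value: 0, timescale: 1), actualTime: nil)
        return UIImage(cgImage: cgImage)
    } catch {
        print("Could not generate frame: \(error)")
        return nil
    }
}
```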