AVCaptureSession

Video saved in the wrong orientation with AVCaptureSession

假如想象 submitted on 2019-11-30 08:30:19
I'm trying to record a video (without displaying the camera) and save it, but the saved video does not have the right orientation. I've tried forcing the UIViewController into a specific orientation, but that didn't help: all videos are recorded in portrait. My code is below:

    session = [[AVCaptureSession alloc] init];
    [session beginConfiguration];
    session.sessionPreset = AVCaptureSessionPresetHigh;
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *cam in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (cam
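The common fix is to set the video connection's orientation before recording starts. A minimal sketch, not the asker's code: movieOutput and outputURL are assumed names for an AVCaptureMovieFileOutput and a destination file URL.

    AVCaptureConnection *connection = [movieOutput connectionWithMediaType:AVMediaTypeVideo];
    if ([connection isVideoOrientationSupported]) {
        // Assumption: landscape-right is the orientation actually wanted.
        connection.videoOrientation = AVCaptureVideoOrientationLandscapeRight;
    }
    [movieOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];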

Captured photo is stretched with AVCaptureSession sessionPreset = AVCaptureSessionPresetPhoto

五迷三道 submitted on 2019-11-30 07:31:45
Question: IMPORTANT: if I use session.sessionPreset = AVCaptureSessionPresetHigh; my preview image is NOT stretched! And if I save the photo to the device with UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil); the image is normal; it is stretched only in the preview. I am using AVFoundation to capture the photo:

    session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetHigh;
    CALayer *viewLayer = vImagePreview.layer;
    NSLog(@"viewLayer = %@", viewLayer);
    AVCaptureVideoPreviewLayer
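Stretching that appears only in the preview usually comes from how the preview layer scales its content, not from the session preset itself. A hedged sketch, assuming vImagePreview is the host view as in the excerpt: an aspect-preserving videoGravity keeps the preview from distorting.

    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    previewLayer.frame = vImagePreview.bounds;
    // Aspect-fill scales without distortion (crops instead of stretching).
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [vImagePreview.layer addSublayer:previewLayer];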

How to get the Y component from a CMSampleBuffer produced by an AVCaptureSession?

荒凉一梦 submitted on 2019-11-30 06:24:14
Question: Hey there, I am trying to access raw data from the iPhone camera using AVCaptureSession, following the guide provided by Apple (link here). The raw data from the sample buffer is in YUV format (am I correct about the raw video frame format?). How do I directly obtain the data for the Y component out of the raw data stored in the sample buffer?

Answer 1: When setting up the AVCaptureVideoDataOutput that returns the raw camera frames, you can set the format of the frames using code like the following:
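The answer's code is cut off in this excerpt; a hedged reconstruction of the standard approach (videoOutput and sampleBuffer are assumed names): request a biplanar YCbCr pixel format, then read plane 0, which holds the Y (luma) samples.

    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
        @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) };

    // Inside captureOutput:didOutputSampleBuffer:fromConnection:
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    uint8_t *yPlane    = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0); // plane 0 = Y
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    size_t height      = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
    // yPlane[row * bytesPerRow + col] is the luma value of one pixel.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);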

How to capture frame-by-frame images from iPhone video recording in real time

点点圈 submitted on 2019-11-30 05:27:21
I am trying to measure the saturation of a selected color in real time, like this: I am following this guide from Apple. I updated the code to work with ARC and of course made my view controller an AVCaptureVideoDataOutputSampleBufferDelegate, but I don't know how to actually start capturing the data, i.e. how to start up the camera and get actual input. Here is my code:

    #import "ViewController.h"

    @interface ViewController ()
    @property (nonatomic, strong) AVCaptureSession *session;
    @property (nonatomic, strong) AVCaptureVideoPreviewLayer *previewLayer;
    @end

    @implementation ViewController
    -
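A hedged sketch of the missing wiring (the queue label is illustrative): attach a camera input and a video data output to the session, then call startRunning; frames then arrive in the delegate callback.

    self.session = [[AVCaptureSession alloc] init];
    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (input) [self.session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("frames", DISPATCH_QUEUE_SERIAL)];
    [self.session addOutput:output];

    [self.session startRunning];
    // captureOutput:didOutputSampleBuffer:fromConnection: now fires once per frame.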

Performance issues when using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput

穿精又带淫゛_ submitted on 2019-11-30 05:25:03
I'm having lag issues when recording audio+video using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput. Sometimes the video blocks for a few milliseconds; sometimes the audio is not in sync with the video. I inserted some logs and observed that first I get a lot of video buffers in the captureOutput callback, and only after some time do I get the audio buffers (sometimes I don't receive the audio buffers at all, and the resulting video has no sound). If I comment out the code that handles the video buffers, I get the audio buffers without problems. This is the code I'm using:

    -(void
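A hedged sketch of the usual mitigation (videoOutput and audioOutput are assumed names): deliver each output on its own serial queue, keep the callbacks cheap, and let the video output drop late frames so heavy video processing cannot starve the audio path.

    dispatch_queue_t videoQueue = dispatch_queue_create("capture.video", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t audioQueue = dispatch_queue_create("capture.audio", DISPATCH_QUEUE_SERIAL);

    videoOutput.alwaysDiscardsLateVideoFrames = YES; // drop frames instead of queueing them
    [videoOutput setSampleBufferDelegate:self queue:videoQueue];
    [audioOutput setSampleBufferDelegate:self queue:audioQueue];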

Applying Effect to iPhone Camera Preview “Video”

不羁的心 submitted on 2019-11-30 04:34:25
My goal is to write a custom camera view controller that:

1. Can take photos in all four interface orientations with both the back and, when available, front camera.
2. Properly rotates and scales the preview "video" as well as the full-resolution photo.
3. Allows a (simple) effect to be applied to BOTH the preview "video" and the full-resolution photo.

Implementation (on iOS 4.2 / Xcode 3.2.5): Due to requirement (3), I needed to drop down to AVFoundation. I started with Technical Q&A QA1702 and made these changes: Changed the sessionPreset to AVCaptureSessionPresetPhoto. Added an
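For requirement (3), the QA1702 approach processes each frame in the video-data-output callback. A hedged, modern sketch of the same idea using Core Image; the original post predates this API, so treat it as an updated illustration rather than the poster's code.

    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *frame = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        // Apply the same filter to preview frames and to the full-resolution still.
        CIImage *filtered = [frame imageByApplyingFilter:@"CISepiaTone"
                                     withInputParameters:@{kCIInputIntensityKey : @0.8}];
        // Render `filtered` into a CIContext-backed view on the main thread...
    }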

AVCaptureSession stopRunning method causes a terrible hang

孤街浪徒 submitted on 2019-11-30 04:13:14
Using Ray Wenderlich's QRCode reader from Chapter 22 of iOS 7 Tutorials, I am successfully reading QRCodes in my current app. I am now extending it so that, upon successfully reading a QRCode, I store the stringValue of the AVMetadataMachineReadableCodeObject that was read, segue to a new view, and use that data on the new view, more or less exactly how most QRCode reader apps (like RedLaser, etc.) process barcodes and QRCodes. However, I call [captureSession stopRunning] (so that it does not read any more QRCodes and trigger additional segues), and there is a 10+ second hang. I have
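A hedged sketch of the usual remedy: stopRunning (like startRunning) is a blocking call, so dispatching it off the main queue avoids freezing the UI. The segue identifier below is illustrative.

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self.captureSession stopRunning]; // blocking; safe off the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            [self performSegueWithIdentifier:@"showResult" sender:self];
        });
    });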

Alternatives to creating an OpenGL texture from a captured video frame to overlay an OpenGL view over video? (iPhone)

為{幸葍}努か submitted on 2019-11-30 04:05:27
This is mostly relevant for augmented-reality-type applications. Apple provides information on how to capture video frames (and save them as images if need be) with AVCaptureSession here: http://developer.apple.com/library/ios/#qa/qa2010/qa1702.html I know it is possible to create an OpenGL texture out of a captured video frame and then use it as a background in the OpenGL view over which other graphics are overlaid. I am wondering if there are any alternatives to this method? The method mentioned above may be the best (I don't know if it is), but if there are alternatives to try, it would be
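One widely used alternative (a hedged sketch; eaglContext and sampleBuffer are assumed to already exist): CVOpenGLESTextureCache, added in iOS 5, maps a camera pixel buffer straight into a GL texture with no CPU copy, avoiding a per-frame glTexImage2D upload.

    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)CVPixelBufferGetWidth(pixelBuffer),
        (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
        GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));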

Run multiple AVCaptureSessions or add multiple inputs

社会主义新天地 submitted on 2019-11-30 03:29:51
I want to display the streams of the front- and back-facing cameras of an iPad 2 in two UIViews next to each other. To stream the image of one device I use the following code:

    AVCaptureDeviceInput *captureInputFront =
        [AVCaptureDeviceInput deviceInputWithDevice:
            [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:nil];
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session addInput:captureInputFront];
    [session setSessionPreset:AVCaptureSessionPresetMedium];
    [session startRunning];
    AVCaptureVideoPreviewLayer *prevLayer = [AVCaptureVideoPreviewLayer layerWithSession
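For reference, a hedged completion of that preview setup (frontView is an assumed host view; the excerpt above is cut off). Note that on this generation of hardware only one camera can capture at a time, so truly simultaneous front/back streams are not possible until AVCaptureMultiCamSession on much later iOS versions.

    AVCaptureVideoPreviewLayer *prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    prevLayer.frame = self.frontView.bounds;
    prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.frontView.layer addSublayer:prevLayer];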

How can I extract an AVMetadataObject from a UIImage?

£可爱£侵袭症+ submitted on 2019-11-30 02:14:00
I'd like to use iOS 7's new barcode-scanning functionality with a UIImage instead of live capture from one of the device's cameras. I already have detection working fine with an AVCaptureDeviceInput. The best way I can think of to do this would be to create a concrete subclass of AVCaptureInput that provides media data to an AVCaptureSession from a UIImage. However, I can't find any documentation or examples on how to subclass AVCaptureInput, so I'm at a loss. An alternative would be to override the media stream from an existing AVCaptureDeviceInput, but since those APIs are private and I'd
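Rather than subclassing AVCaptureInput, a hedged alternative (available from iOS 8) is to run detection on the still image directly with Core Image's CIDetector; this yields CIFeature objects rather than AVMetadataObjects, but exposes the same decoded string. uiImage below stands in for the source UIImage.

    CIImage *ciImage = [CIImage imageWithCGImage:uiImage.CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
    NSArray *features = [detector featuresInImage:ciImage];
    for (CIQRCodeFeature *feature in features) {
        NSLog(@"Decoded payload: %@", feature.messageString);
    }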