AVCaptureSession

Why is my audio AVCaptureConnection not active for highest-resolution formats?

随声附和 submitted on 2019-12-05 07:27:05
I'm working on an iOS project which uses AVAssetWriter and AVAssetWriterInput to record audio and video to file. Everything seemed to work fine when the video resolution was limited to 720x1280. I'm now trying to take advantage of the AVCaptureDeviceFormats for higher resolutions available on newer iOS devices. The video continues to work fine using any of the AVCaptureDeviceFormats available on the device. However, the audio does not work. I've tracked this down to the active property of my audio AVCaptureConnection, which is NO for the highest-resolution formats, looking like this when I log
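For reference, a minimal diagnostic sketch in modern Swift (the function name and the idea of logging the connection's isActive flag after configuration are illustrative, not taken from the question):

import AVFoundation

// Sketch: 'session' already has its video input configured; after the video
// device's activeFormat has been set, check whether the session kept an
// active audio connection for the audio output.
func configureAudio(for session: AVCaptureSession) {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    guard let audioDevice = AVCaptureDevice.default(for: .audio),
          let audioInput = try? AVCaptureDeviceInput(device: audioDevice),
          session.canAddInput(audioInput) else { return }
    session.addInput(audioInput)

    let audioOutput = AVCaptureAudioDataOutput()
    if session.canAddOutput(audioOutput) {
        session.addOutput(audioOutput)
    }

    // Log whether the session actually activated the audio connection.
    if let connection = audioOutput.connection(with: .audio) {
        print("audio connection active:", connection.isActive)
    }
}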

AVCaptureSession barcode scan

送分小仙女□ submitted on 2019-12-05 07:06:57
I'm currently working with AVCaptureSession and AVCaptureMetadataOutput. It works perfectly, but I just want to know how to tell it to scan and analyze metadata objects only in a specific region of the AVCaptureVideoPreviewLayer. Here is a sample of code from a project I have that may help you on the right track:

// where 'self.session' is a previously set up AVCaptureSession
// setup metadata capture
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[self.session addOutput:metadataOutput];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
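The standard way to limit scanning to a region is AVCaptureMetadataOutput's rectOfInterest, which is expressed in normalized capture coordinates. A short Swift sketch (the names previewLayer, metadataOutput, and scanRegion are assumed to already exist):

import AVFoundation
import UIKit

// Sketch: convert a rect in the preview layer's coordinate space to the
// normalized (0...1) coordinates that rectOfInterest expects.
func restrictScanning(to scanRegion: CGRect,
                      previewLayer: AVCaptureVideoPreviewLayer,
                      metadataOutput: AVCaptureMetadataOutput) {
    let normalized = previewLayer.metadataOutputRectConverted(fromLayerRect: scanRegion)
    metadataOutput.rectOfInterest = normalized
}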

setPreferredHardwareSampleRate doesn't work

守給你的承諾、 submitted on 2019-12-05 04:01:34
Question: I'm using the code that can be found here. I've tried to change the sample rate using:

[[AVAudioSession sharedInstance] setPreferredHardwareSampleRate:SAMPLE_RATE error:nil];

inside the init function in the SoundRecoder.m file (SAMPLE_RATE is 16000.0). When I check the file, the metadata still says the sample rate is 44100. I've also tried (as suggested here):

AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareSampleRate, sizeof(F64sampleRate), &F64sampleRate)
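For comparison, a hedged Swift sketch of the current API: setPreferredHardwareSampleRate has been superseded by setPreferredSampleRate, the preference is only a request the hardware may not honour, and the rate written to the file is governed by the recorder/writer settings rather than the session (the settings dictionary below is an illustrative assumption, not the asker's code):

import AVFoundation

func configureSampleRate() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setPreferredSampleRate(16_000)   // a request, not a guarantee
        try session.setActive(true)
    } catch {
        print("audio session configuration failed: \(error)")
    }
    // Read back what the hardware actually granted.
    print("hardware sample rate:", session.sampleRate)

    // What ends up in the file is governed by the recorder/writer settings:
    let fileSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 16_000,
        AVNumberOfChannelsKey: 1
    ]
    _ = fileSettings
}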

Capturing volume levels with AVCaptureAudioDataOutputSampleBufferDelegate in Swift

こ雲淡風輕ζ submitted on 2019-12-05 02:48:34
Question: I'm trying to read live volume levels using AVCaptureDevice etc. It compiles and runs, but the values just seem to be random and I keep getting overflow errors as well. EDIT: also, is it normal for the RMS range to be 0 to about 20000?

if let audioCaptureDevice: AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio) {
    try audioCaptureDevice.lockForConfiguration()
    let audioInput = try AVCaptureDeviceInput(device: audioCaptureDevice)
    audioCaptureDevice.unlockForConfiguration()
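One hedged sketch of how the RMS could be computed inside captureOutput(_:didOutput:from:), assuming the usual 16-bit signed linear PCM format from AVCaptureAudioDataOutput and normalizing the samples so the result stays in 0...1 rather than 0...20000 (modern Swift, not the asker's code):

import AVFoundation
import Foundation

// Sketch: assumes 16-bit signed linear PCM samples in the block buffer.
func rmsLevel(from sampleBuffer: CMSampleBuffer) -> Float {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return 0 }
    var length = 0
    var dataPointer: UnsafeMutablePointer<Int8>?
    guard CMBlockBufferGetDataPointer(blockBuffer,
                                      atOffset: 0,
                                      lengthAtOffsetOut: nil,
                                      totalLengthOut: &length,
                                      dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
          let bytes = dataPointer else { return 0 }

    let sampleCount = length / MemoryLayout<Int16>.size
    var sum: Float = 0
    bytes.withMemoryRebound(to: Int16.self, capacity: sampleCount) { samples in
        for i in 0..<sampleCount {
            let s = Float(samples[i]) / Float(Int16.max)   // normalize to -1...1
            sum += s * s
        }
    }
    return sqrt(sum / Float(max(sampleCount, 1)))          // RMS in 0...1
}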

iPhone Camera Focussing

蓝咒 submitted on 2019-12-05 02:25:43
Question: I used the code below to focus the iPhone camera, but it is not working. I took this code from Apple's AVCam sample code. Am I doing anything wrong? Is there any way to detect whether the iPhone has finished focusing?

-(void) focusAtPoint:(CGPoint)point {
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (device != nil) {
        if ([device isFocusPointOfInterestSupported] && [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
            NSError *error;
            if (
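For reference, a Swift sketch of the same idea: lock the device, set the focus point, then observe isAdjustingFocus to detect when focusing has finished. The function names are illustrative, and the point must be in normalized device coordinates (0...1), not view coordinates:

import AVFoundation
import CoreGraphics

// Sketch: 'device' is the current video AVCaptureDevice.
func focus(_ device: AVCaptureDevice, at normalizedPoint: CGPoint) {
    guard device.isFocusPointOfInterestSupported,
          device.isFocusModeSupported(.autoFocus) else { return }
    do {
        try device.lockForConfiguration()
        device.focusPointOfInterest = normalizedPoint
        device.focusMode = .autoFocus
        device.unlockForConfiguration()
    } catch {
        print("focus configuration failed: \(error)")
    }
}

// One way to tell when focusing finishes: key-value observe isAdjustingFocus
// (keep the returned observation alive for as long as you need callbacks).
func observeFocus(on device: AVCaptureDevice) -> NSKeyValueObservation {
    device.observe(\.isAdjustingFocus, options: [.new]) { _, change in
        if change.newValue == false { print("focus finished") }
    }
}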

AVCaptureSession rotate | orientation while video transmitting

无人久伴 submitted on 2019-12-05 01:43:21
Question: I am developing a video streaming application in which I need to capture front-camera video frames, encode them, and then transfer them to the other end. A typical flow looks like this: AVCaptureSession -> AVCaptureDeviceInput -> AVCaptureVideoDataOutput -> capture frame -> encode frame -> send frame to the other end. It works fine; I have set kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange as the frame format, and a preview layer is used to show the preview. The problem comes when the device orientation gets
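A common approach, sketched below in Swift under assumed names, is to update videoOrientation on the video data output's connection whenever the device orientation changes, so the delivered frames are already rotated before encoding (note that UIDeviceOrientation and AVCaptureVideoOrientation swap landscape left and right):

import AVFoundation
import UIKit

// Sketch: 'videoDataOutput' is the AVCaptureVideoDataOutput already attached
// to the session; call this from an orientation-change notification handler.
func updateVideoOrientation(for videoDataOutput: AVCaptureVideoDataOutput) {
    guard let connection = videoDataOutput.connection(with: .video),
          connection.isVideoOrientationSupported else { return }

    switch UIDevice.current.orientation {
    case .portrait:            connection.videoOrientation = .portrait
    case .portraitUpsideDown:  connection.videoOrientation = .portraitUpsideDown
    case .landscapeLeft:       connection.videoOrientation = .landscapeRight
    case .landscapeRight:      connection.videoOrientation = .landscapeLeft
    default:                   break
    }
}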

Most efficient/realtime way to get pixel values from iOS camera feed in Swift

我们两清 submitted on 2019-12-04 19:26:33
There are some discussions on here about similar questions, like this one, but they seem quite outdated, so I thought I'd ask here. I want to get near-realtime RGB pixel values, or even better, a full-image RGB histogram from a camera feed in Swift 2.0. I want this to be as quick and up to date as possible (~30 fps or higher, ideally). Can I get this directly from an AVCaptureVideoPreviewLayer, or do I need to capture each frame (asynchronously, I assume, if the process takes significant time) and then extract pixel values from the JPEG/PNG render? Some example code, taken from jquave but modified for Swift 2.0
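Pixels cannot be read directly from an AVCaptureVideoPreviewLayer; the usual route is an AVCaptureVideoDataOutput delivering BGRA pixel buffers to a delegate, with no intermediate JPEG/PNG step. A sketch in current Swift (rather than the Swift 2.0 of the question), with illustrative names:

import AVFoundation

// Sketch: a delegate that receives BGRA frames and reads raw pixel bytes
// straight from each CVPixelBuffer.
final class FrameReader: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func attach(to session: AVCaptureSession) {
        let output = AVCaptureVideoDataOutput()
        output.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let pixels = base.assumingMemoryBound(to: UInt8.self)
        // BGRA layout: pixel (x, y) starts at y * bytesPerRow + x * 4.
        let b = pixels[0], g = pixels[1], r = pixels[2]
        _ = (r, g, b, bytesPerRow) // accumulate into a histogram here
    }
}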

iOS: adding a UIProgressView to AVFoundation AVCaptureMovieFileOutput

元气小坏坏 submitted on 2019-12-04 18:53:22
I am using AVCaptureMovieFileOutput to record videos, and I want to add a UIProgressView to represent how much time is left before the video stops recording. I set a max duration of 15 seconds:

CMTime maxDuration = CMTimeMakeWithSeconds(15, 50);
[[self movieFileOutput] setMaxRecordedDuration:maxDuration];

I can't seem to find whether AVCaptureMovieFileOutput has a callback for while the video is recording or for when recording begins. My question is: how can I get updates on the progress of the recording? Or, if that isn't available, how can I tell when recording begins in
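One possible approach, sketched in Swift with assumed names: implement AVCaptureFileOutputRecordingDelegate to learn when recording actually starts, then poll recordedDuration on a timer to drive the progress bar. The 0.1 s interval and the movieOutput/progressView properties are illustrative:

import AVFoundation
import UIKit

final class RecordingProgress: NSObject, AVCaptureFileOutputRecordingDelegate {
    let movieOutput = AVCaptureMovieFileOutput()
    let progressView = UIProgressView(progressViewStyle: .default)
    private var timer: Timer?

    func fileOutput(_ output: AVCaptureFileOutput,
                    didStartRecordingTo fileURL: URL,
                    from connections: [AVCaptureConnection]) {
        // Recording has actually begun: start driving the progress bar.
        DispatchQueue.main.async {
            self.timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
                guard let self = self else { return }
                let elapsed = CMTimeGetSeconds(self.movieOutput.recordedDuration)
                let maxSecs = CMTimeGetSeconds(self.movieOutput.maxRecordedDuration)
                self.progressView.progress = Float(elapsed / maxSecs)
            }
        }
    }

    func fileOutput(_ output: AVCaptureFileOutput,
                    didFinishRecordingTo outputFileURL: URL,
                    from connections: [AVCaptureConnection],
                    error: Error?) {
        timer?.invalidate()
    }
}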

AVCaptureSession preset creates a photo that is too big

ε祈祈猫儿з submitted on 2019-12-04 15:49:06
I have a photo-taking app that uses AVFoundation. Once an image is captured, I have two Core Graphics methods that I have to run in order to properly rotate and crop the image. When testing this using the typical AVFoundation setup for capturing camera photos with a session preset of AVCaptureSessionPresetPhoto, I was always receiving a ton of memory warnings; I spent two weeks trying to solve them and finally gave up. I ended up ditching the typical captureStillImageAsynchronouslyFromConnection setup and using another setup that just "captures" the video frames. This new method works
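Not the asker's code, but one way to keep peak memory down when rotating and cropping a full-resolution capture is to run the Core Graphics work inside an explicit autoreleasepool so intermediate images are released promptly; a minimal Swift sketch:

import UIKit

// Sketch: crop inside an autoreleasepool so temporaries don't pile up while
// processing several full-resolution photos in a row.
func cropped(_ image: UIImage, to rect: CGRect) -> UIImage? {
    return autoreleasepool { () -> UIImage? in
        guard let cgImage = image.cgImage?.cropping(to: rect) else { return nil }
        return UIImage(cgImage: cgImage,
                       scale: image.scale,
                       orientation: image.imageOrientation)
    }
}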

Converting BGRA to ARGB

◇◆丶佛笑我妖孽 submitted on 2019-12-04 14:31:21
Question: I'm reading this tutorial on getting pixel data from the iPhone camera. While I have no issue running and using this code, I need to take the camera output (which comes in BGRA) and convert it to ARGB so that I can use it with an external library. How do I do this?

Answer 1: If you're on iOS 5.0, you can use vImage within the Accelerate framework to do a NEON-optimized color component swap using code like the following (drawn from Apple's WebCore source code):

vImage_Buffer src;
src
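A Swift sketch of the same vImage idea, under the assumption that the frames arrive as a BGRA CVPixelBuffer; the permute map [3, 2, 1, 0] reorders B,G,R,A into A,R,G,B. The conversion is done in place here for brevity; if in-place operation is a concern, point the destination at a separately allocated buffer:

import Accelerate
import CoreVideo

func convertBGRAtoARGB(_ pixelBuffer: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    var buffer = vImage_Buffer(data: base,
                               height: vImagePixelCount(CVPixelBufferGetHeight(pixelBuffer)),
                               width: vImagePixelCount(CVPixelBufferGetWidth(pixelBuffer)),
                               rowBytes: CVPixelBufferGetBytesPerRow(pixelBuffer))
    let permuteMap: [UInt8] = [3, 2, 1, 0]  // destination A,R,G,B taken from source B,G,R,A
    vImagePermuteChannels_ARGB8888(&buffer, &buffer, permuteMap, vImage_Flags(kvImageNoFlags))
}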