AVFoundation

Transform wrong when using both AVComposition and AVVideoComposition

故事扮演 submitted on 2021-02-11 07:38:04
Question: I am creating an AVMutableComposition. I need the asset to be flipped horizontally, so I am setting the transform of the composition track like this: compositionTrack.preferredTransform = assetTrack.preferredTransform.scaledBy(x: -1, y: 1). If I export this (I use AVAssetExportPreset960x540 as my preset), it works as expected. However, I also need to add an AVMutableVideoComposition overlay to be rendered with this composition. This overlay shouldn't be flipped horizontally. I specify this like …
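
A minimal sketch of the usual workaround: when a video composition is attached, the flip can be expressed through a layer instruction's transform instead of preferredTransform, so overlays drawn on top are not mirrored. The names composition, compositionTrack, videoComposition, and renderSize stand in for the asker's existing objects.

import AVFoundation

let instruction = AVMutableVideoCompositionInstruction()
instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)

let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionTrack)
// Flip horizontally around x = 0, then translate back into the render rectangle.
let flip = CGAffineTransform(scaleX: -1, y: 1).translatedBy(x: -renderSize.width, y: 0)
layerInstruction.setTransform(flip, at: .zero)

instruction.layerInstructions = [layerInstruction]
videoComposition.instructions = [instruction]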

How to interleave a non-interleaved AudioBufferList inside a render callback?

孤街醉人 submitted on 2021-02-11 06:53:34
Question: I'm working on a project that involves streaming audio from an AVPlayer video player object into libpd using an MTAudioProcessingTap. For the process loop of the tap, I used PdAudioUnit's render callback code as a guide; but I realized recently that the audio format expected by libpd is not the same as the audio coming from the tap — that is, the tap is providing two buffers of non-interleaved audio data in the incoming AudioBufferList, whereas libpd expects interleaved samples. I don't …
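
For reference, a minimal interleaving sketch under the stated assumptions (non-interleaved Float32 buffers in, frame-major interleaved samples out, as libpd expects). The function name and the preallocated output pointer are illustrative, not the asker's code.

import AudioToolbox

// Copy each channel's samples into interleaved L R L R ... order.
func interleave(_ abl: UnsafeMutablePointer<AudioBufferList>,
                frameCount: Int,
                into out: UnsafeMutablePointer<Float32>) {
    let buffers = UnsafeMutableAudioBufferListPointer(abl)
    let channelCount = buffers.count
    for (channel, buffer) in buffers.enumerated() {
        guard let data = buffer.mData?.assumingMemoryBound(to: Float32.self) else { continue }
        for frame in 0..<frameCount {
            out[frame * channelCount + channel] = data[frame]
        }
    }
}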

When should I call CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress?

一世执手 submitted on 2021-02-10 19:52:30
Question: In iOS 6, I'm using OpenGL to do some rendering on AVFoundation video frames. I've seen a lot of example code that makes use of CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress, but it's unclear to me when exactly I should perform the lock and unlock, or why I'm doing it at all. Should I be locking the address when the CPU is modifying the memory? Or should I lock it when the GPU is reading from it? When should I unlock? Why would I ever even want to unlock? I've seen this Stack Overflow …
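
In short, the lock brackets CPU access to the pixel data (the GPU path does not need it). A minimal sketch of the usual pattern in today's Swift spelling, assuming read-only CPU access; the helper name is illustrative.

import CoreVideo

// Hold the lock only while the CPU touches the pixel data; .readOnly tells
// CoreVideo it can skip any write-back work when unlocking.
func withPixelData(_ pixelBuffer: CVPixelBuffer, body: (UnsafeMutableRawPointer?) -> Void) {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    body(CVPixelBufferGetBaseAddress(pixelBuffer))
    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
}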

CoreMediaIO: kCMIODevicePropertyDeviceIsRunningSomewhere property not updated correctly

南笙酒味 submitted on 2021-02-10 07:38:11
Question: I need to receive an event when some process starts using the camera. I did this through CMIOObjectGetPropertyData, but it does not work correctly: the value is only correct the first time it is accessed. I also tried to use CMIOObjectAddPropertyListenerBlock, but it did not work for me either. Please tell me, what am I doing wrong? I will be very grateful. while (1) { UInt32 value = 0; UInt32 valuePropertySize = sizeof(value); CMIOObjectPropertyAddress opa = …
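
A hedged sketch of the listener-based approach; deviceID is assumed to be a valid CMIOObjectID obtained elsewhere, and the body of the block is illustrative.

import Foundation
import CoreMediaIO

func observeCameraUse(deviceID: CMIOObjectID) {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIODevicePropertyDeviceIsRunningSomewhere),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeWildcard),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementWildcard))

    // Fire on the main queue whenever the property changes; re-read the
    // current value with CMIOObjectGetPropertyData here if it is needed.
    CMIOObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        print("kCMIODevicePropertyDeviceIsRunningSomewhere changed")
    }
}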

Error Domain=AVFoundationErrorDomain Code=-11814 “Cannot Record”

坚强是说给别人听的谎言 submitted on 2021-02-08 12:31:31
Question: It keeps on giving me the error: Error Domain=AVFoundationErrorDomain Code=-11814 "Cannot Record". I am not sure what the problem is. I am trying to record the sound right when the counter reaches 1, after a picture is taken. static int counter; // counter will always be zero, I think, unless it is assigned if (counter == 0) { dispatch_async([self sessionQueue], ^{ // Update the orientation on the still image output video connection before capturing. [[[self stillImageOutput] …
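
One common cause of Code=-11814 is an audio session that is not configured for recording; a minimal sketch of that setup, which may or may not be the asker's actual bug.

import AVFoundation

// Put the shared session in a record-capable category before starting the
// recorder; "Cannot Record" is often thrown when this is missing.
func configureSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)
}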

Get used exposure duration and ISO values after the capture is complete from the AVCapturePhotoOutput

↘锁芯ラ submitted on 2021-02-08 10:01:01
Question: Background: I am using AVCaptureSession with AVCapturePhotoOutput to save captures as JPEG images. let captureSession = AVCaptureSession() let stillImageOutput = AVCapturePhotoOutput() var captureDevice: AVCaptureDevice? ... func setupCamera() { captureDevice = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back) if (captureDevice != nil) { captureSession.addInput(try AVCaptureDeviceInput(device: captureDevice!)) if captureSession …
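
A sketch of one way to read the values back after the fact, assuming the EXIF metadata carried by the finished AVCapturePhoto is acceptable; the delegate method is shown in isolation.

import AVFoundation
import ImageIO

// AVCapturePhotoCaptureDelegate callback: pull exposure time and ISO out of
// the photo's EXIF dictionary once processing finishes.
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil,
          let exif = photo.metadata[kCGImagePropertyExifDictionary as String] as? [String: Any]
    else { return }
    let exposure = exif[kCGImagePropertyExifExposureTime as String]
    let iso = exif[kCGImagePropertyExifISOSpeedRatings as String]
    print("exposure: \(String(describing: exposure)), ISO: \(String(describing: iso))")
}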

Connecting AVAudioSourceNode to AVAudioSinkNode does not work

…衆ロ難τιáo~ submitted on 2021-02-08 08:26:19
Question: Context: I am writing a signal interpreter using AVAudioEngine which will analyse microphone input. During development, I want to use a default input buffer so I don't have to make noises for the microphone to test my changes. I am developing using Catalyst. Problem: I am using AVAudioSinkNode to get the sound buffer (the performance is allegedly better than using .installTap). I am using (a subclass of) AVAudioSourceNode to generate a sine wave. When I connect these two together, I expect the …
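
For context, a known-good AVAudioSourceNode sketch that renders a sine wave into the engine's mixer; the 440 Hz frequency is assumed, and this sidesteps the source-to-sink wiring the asker is debugging.

import AVFoundation

let engine = AVAudioEngine()
let outputFormat = engine.outputNode.inputFormat(forBus: 0)
let sampleRate = Float(outputFormat.sampleRate)
var phase: Float = 0

// Fill every requested buffer with a 440 Hz sine wave.
let source = AVAudioSourceNode { _, _, frameCount, audioBufferList in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    let increment = 2 * Float.pi * 440 / sampleRate
    for frame in 0..<Int(frameCount) {
        let sample = sinf(phase)
        phase += increment
        if phase > 2 * Float.pi { phase -= 2 * Float.pi }
        for buffer in buffers {
            UnsafeMutableBufferPointer<Float>(buffer)[frame] = sample
        }
    }
    return noErr
}

engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: outputFormat)
try engine.start()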

AVFoundation take picture every second SWIFT

泄露秘密 submitted on 2021-02-07 10:38:19
Question: I'm trying to use the AVFoundation framework to take a picture and analyze it in my app. I want it to take a picture every second automatically; how do I do that? Here is my current code; right now it takes a picture only when I call capturePhoto(). func setupSession() { session = AVCaptureSession() session.sessionPreset = AVCaptureSessionPresetPhoto let camera = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) do { input = try AVCaptureDeviceInput(device: camera) } catch { return } …
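
A straightforward sketch: drive the existing capture method from a repeating Timer. capturePhoto() here is the asker's hypothetical method, and the session is assumed to be already running.

import Foundation

// Fire capturePhoto() once per second; invalidate the timer to stop.
var captureTimer: Timer?

func startPeriodicCapture() {
    captureTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
        capturePhoto()  // the asker's existing capture routine
    }
}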

iOS 10 breaks custom CIFilter

和自甴很熟 submitted on 2021-02-07 08:29:16
Question: I have written a chroma-key filter for making the backgrounds of MPEG movies transparent so that you can use a movie file for longer animations without the need for lengthy sequences of PNGs (as is commonly done for some types of iOS animations). I am using AVPlayer, AVVideoComposition, and a custom CIFilter to render the video over a background image. The background image can be changed dynamically by the user interacting with the app. This used to work just fine until iOS 10 came out and …
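
For reference, a generic chroma-key color kernel sketch (a green-screen key, not the asker's exact filter), written in the Core Image kernel language that custom CIFilters of that era used.

import CoreImage

// Key out pixels near pure green; alpha ramps up with distance from the key
// color so edges stay soft.
let chromaKernel = CIColorKernel(source:
    """
    kernel vec4 chromaKey(__sample s) {
        float d = distance(s.rgb, vec3(0.0, 1.0, 0.0));
        float alpha = smoothstep(0.3, 0.5, d);
        return vec4(s.rgb * alpha, alpha);
    }
    """)

// Usage: chromaKernel?.apply(extent: frame.extent, arguments: [frame])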