AVFoundation

AVPlayerLayer animates frame changes

Submitted by 吃可爱长大的小学妹 on 2019-12-20 10:23:45
Question: Whenever I change the frame of my AVPlayerLayer, the video is not resized immediately but animated to the new size. For example, if I change the frame from (0, 0, 100, 100) to (0, 0, 400, 400), the view's frame changes immediately, but the video's size animates to the new size. Has anyone encountered this issue? And if so, does anyone know a way to disable the default animation? Thanks!

Answer 1: You can try disabling implicit actions and using zero-length animations: CALayer *videolayer = <#
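
The answer's code snippet is cut off in the scrape. A minimal sketch of the approach it describes, wrapping the frame change in a CATransaction with implicit actions disabled; videoLayer stands in for the truncated variable:

```objc
#import <QuartzCore/QuartzCore.h>

[CATransaction begin];
[CATransaction setDisableActions:YES];  // suppress the implicit resize animation
// (alternatively: [CATransaction setAnimationDuration:0];)
videoLayer.frame = CGRectMake(0, 0, 400, 400);
[CATransaction commit];
```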

Using CIFilter with AVFoundation (iOS)

Submitted by 放肆的年华 on 2019-12-20 10:09:50
Question: I am trying to apply filters to a video composition created with AVFoundation on iOS (filters could be, e.g., blur, pixelate, sepia, etc.). I need to both apply the effects in real time and be able to render the composited video out to disk, but I'm happy to start with just one or the other. Unfortunately, I can't seem to figure this one out. Here's what I can do: I can add a layer for animation to the UIView that's playing the movie, but it's not clear to me if I can process the incoming video
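
No answer is included in the scrape. One route that covers both playback and export is the Core Image handler on AVMutableVideoComposition; a minimal sketch, assuming iOS 9+ and an AVAsset named asset:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>

AVMutableVideoComposition *composition =
    [AVMutableVideoComposition videoCompositionWithAsset:asset
        applyingCIFiltersWithHandler:^(AVAsynchronousCIImageFilteringRequest *request) {
            // Apply a sepia filter to each frame; any CIFilter chain works here.
            CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
            [sepia setValue:request.sourceImage forKey:kCIInputImageKey];
            [request finishWithImage:sepia.outputImage context:nil];
        }];

// For real-time playback: playerItem.videoComposition = composition;
// For rendering to disk:  exportSession.videoComposition = composition;
```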

How to decode a H.264 frame on iOS by hardware decoding?

Submitted by 拜拜、爱过 on 2019-12-20 09:57:18
Question: I have been using ffmpeg to decode every single frame that I receive from my IP cam. The brief code looks like this: -(void) decodeFrame:(unsigned char *)frameData frameSize:(int)frameSize{ AVFrame frame; AVPicture picture; AVPacket pkt; AVCodecContext *context; int got_picture; pkt.data = frameData; pkt.size = frameSize; avcodec_get_frame_defaults(&frame); avpicture_alloc(&picture, PIX_FMT_RGB24, targetWidth, targetHeight); avcodec_decode_video2(context, &frame, &got_picture, &pkt); } The code works fine,
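
The question asks for hardware decoding, but no answer survives in the scrape. A minimal sketch of the VideoToolbox route (iOS 8+); sps, pps, their sizes, and an AVCC-wrapped sampleBuffer are assumed to be supplied by the caller:

```objc
#import <VideoToolbox/VideoToolbox.h>

// Called by VideoToolbox with each hardware-decoded frame.
static void didDecompress(void *outputRefCon, void *sourceFrameRefCon,
                          OSStatus status, VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime pts, CMTime duration) {
    if (status == noErr && imageBuffer != NULL) {
        // Render or convert the decoded CVImageBufferRef here.
    }
}

static VTDecompressionSessionRef MakeDecoderSession(const uint8_t *sps, size_t spsSize,
                                                    const uint8_t *pps, size_t ppsSize) {
    // Build a format description from the stream's SPS/PPS NAL units.
    CMVideoFormatDescriptionRef formatDesc = NULL;
    const uint8_t *paramSets[2] = { sps, pps };
    const size_t paramSizes[2] = { spsSize, ppsSize };
    CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
        2, paramSets, paramSizes, 4 /* AVCC length-header size */, &formatDesc);

    VTDecompressionOutputCallbackRecord callback = { didDecompress, NULL };
    VTDecompressionSessionRef session = NULL;
    VTDecompressionSessionCreate(kCFAllocatorDefault, formatDesc,
                                 NULL, NULL, &callback, &session);
    CFRelease(formatDesc);
    return session;
}

// Per frame: wrap the NAL data in a CMSampleBuffer (4-byte length prefixes,
// not Annex-B start codes) and hand it to the session:
// VTDecompressionSessionDecodeFrame(session, sampleBuffer, 0, NULL, NULL);
```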

Knowing resolution of AVCaptureSession's session presets

Submitted by 泪湿孤枕 on 2019-12-20 09:56:30
Question: I'm accessing the camera in iOS and using session presets like so: captureSession.sessionPreset = AVCaptureSessionPresetMedium; Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting with this preset (especially since it differs by device). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to be able to
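
No answer is included in the scrape. One programmatic route, sketched under the assumption that captureSession is already configured and running: the input port's format description carries the dimensions the preset resolved to on the current device.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>

AVCaptureInput *input = captureSession.inputs.firstObject;
AVCaptureInputPort *port = input.ports.firstObject;
CMFormatDescriptionRef desc = port.formatDescription;
if (desc != NULL) {
    // Actual pixel dimensions delivered by the chosen preset on this device.
    CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(desc);
    NSLog(@"Preset resolved to %d x %d", dims.width, dims.height);
}
```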

Cropping a captured image exactly to how it looks in AVCaptureVideoPreviewLayer

Submitted by ♀尐吖头ヾ on 2019-12-20 09:39:07
Question: I have a photo app that is using AVFoundation. I have set up a preview layer using AVCaptureVideoPreviewLayer that takes up the top half of the screen, so when the user is composing their photo, all they can see is what the top half of the screen sees. This works great, but when the user actually takes the photo and I try to set the photo as the layer's contents, the image is distorted. I did some research and realized that I would need to crop the image. All I want to do is crop the full
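
No answer survives in the scrape. A minimal sketch of one common approach (iOS 7+), assuming previewLayer is the AVCaptureVideoPreviewLayer and capturedImage is the full-resolution UIImage; orientation handling is omitted:

```objc
// Map the preview layer's visible rect into normalized image coordinates.
CGRect normalized = [previewLayer metadataOutputRectOfInterestForRect:previewLayer.bounds];

// Scale the normalized rect up to pixel coordinates and crop.
CGImageRef full = capturedImage.CGImage;
size_t w = CGImageGetWidth(full), h = CGImageGetHeight(full);
CGRect cropRect = CGRectMake(normalized.origin.x * w, normalized.origin.y * h,
                             normalized.size.width * w, normalized.size.height * h);
CGImageRef croppedRef = CGImageCreateWithImageInRect(full, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);
```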

Show camera stream while AVCaptureSession's running

Submitted by 时间秒杀一切 on 2019-12-20 09:31:56
Question: I was able to capture video frames from the camera using AVCaptureSession according to http://developer.apple.com/iphone/library/qa/qa2010/qa1702.html. However, it seems that AVCaptureSession captures frames from the camera without showing the camera stream on the screen. I would like to also show the camera stream, just like in UIImagePicker, so that the user knows the camera is on and can see what it is pointed at. Any help or pointers would be appreciated!

Answer 1:
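
The answer body is cut off in the scrape. The standard approach is to attach an AVCaptureVideoPreviewLayer to the running session; a minimal sketch, assuming this runs inside a view controller with captureSession already configured:

```objc
#import <AVFoundation/AVFoundation.h>

AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];  // live preview alongside frame capture
```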

Compositing 2 videos on top of each other with alpha

Submitted by 不羁岁月 on 2019-12-20 09:25:36
Question: AVFoundation allows you to "compose" two assets (two videos) as two "tracks", just like in Final Cut Pro, for example. In theory I can have two videos on top of each other, with alpha, and see both. Either I'm doing something wrong, or there's a bug somewhere, because the following test code, although a bit messy, clearly states I should see two videos, yet I only see one, as seen here: http://lockerz.com/s/172403384 -- the "blue" square is IMG_1388.m4v. For whatever reason, IMG_1383.MOV is never
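
The poster's test code is cut off in the scrape. For reference, a minimal sketch of stacking two composition tracks with per-layer opacity; topTrack, bottomTrack, and duration are assumed to be set up elsewhere:

```objc
AVMutableVideoCompositionLayerInstruction *topLayer =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:topTrack];
[topLayer setOpacity:0.5 atTime:kCMTimeZero];  // make the top video semi-transparent

AVMutableVideoCompositionLayerInstruction *bottomLayer =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:bottomTrack];

AVMutableVideoCompositionInstruction *instruction =
    [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, duration);
// The first layer instruction is the topmost layer in the z-order.
instruction.layerInstructions = @[ topLayer, bottomLayer ];
```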

AVCapture appendSampleBuffer

Submitted by 半世苍凉 on 2019-12-20 09:24:41
Question: I am going insane with this one - I have looked everywhere and tried anything and everything I can think of. I am making an iPhone app that uses AVFoundation - specifically AVCapture to capture video using the iPhone camera. I need a custom image overlaid on the video feed and included in the recording. So far I have the AVCapture session set up, can display the feed, access each frame, save it as a UIImage, and merge the overlay image onto it. Then I convert this new UIImage into a
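
The post is truncated here; the usual next step is converting the composited UIImage back into a CVPixelBuffer so it can be appended through an AVAssetWriterInputPixelBufferAdaptor. A minimal sketch, assuming image, width, height, adaptor, and presentationTime exist:

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

CVPixelBufferRef pixelBuffer = NULL;
NSDictionary *attrs = @{ (id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                         (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32ARGB,
                    (__bridge CFDictionaryRef)attrs, &pixelBuffer);

// Draw the composited UIImage into the pixel buffer's memory.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                         width, height, 8,
                                         CVPixelBufferGetBytesPerRow(pixelBuffer),
                                         colorSpace, kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image.CGImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// Append the frame to the writer at its presentation time.
[adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
CVPixelBufferRelease(pixelBuffer);
```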

Rotating Video with AVMutableVideoCompositionLayerInstruction

Submitted by 被刻印的时光 ゝ on 2019-12-20 09:20:14
Question: I'm shooting video on an iPhone 4 with the front camera and combining the video with some other media assets. I'd like this video to be in portrait orientation - the default orientation for all video is landscape, and in some circumstances you have to manage this manually. I'm using AVFoundation, specifically AVAssetExportSession with an AVMutableVideoComposition. Based on the WWDC videos, it's clear that I have to handle 'fixing' the orientation myself when I'm combining videos into a new
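
The post is cut off here. For reference, a minimal sketch of rotating a landscape track to portrait with a layer instruction; videoTrack is assumed, and the composition's renderSize would need its width and height swapped to match:

```objc
AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];

// Rotate 90 degrees, then translate so the rotated frame lands back at the origin.
CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI_2);
CGAffineTransform translate = CGAffineTransformMakeTranslation(videoTrack.naturalSize.height, 0);
[layerInstruction setTransform:CGAffineTransformConcat(rotate, translate) atTime:kCMTimeZero];
```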