video-toolbox

Referencing `self` from within a VTCompressionOutputCallback [duplicate]

Submitted by 。_饼干妹妹 on 2021-02-16 21:07:30
Question: This question already has answers here: How to use instance method as callback for function which takes only func or literal closure (2 answers). Closed 1 year ago. I'm currently trying to use VideoToolbox to encode video data from an AVCaptureVideoDataOutput, but I'm having an issue referencing self from within the VTCompressionOutputCallback. My code is as follows: ... var sessionRef: VTCompressionSession? let outputCallback: VTCompressionOutputCallback = { _, _, status, _, sampleBuffer …
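
The root cause is that VTCompressionOutputCallback is a C function pointer (@convention(c)), so the closure literal cannot capture self. The usual workaround is to pass self through the session's refcon parameter and recover it inside the callback with Unmanaged. A minimal sketch of that pattern using current Swift API names; the VideoEncoder class and handleEncoded method are hypothetical:

    import VideoToolbox

    final class VideoEncoder {
        private var sessionRef: VTCompressionSession?

        // A @convention(c) callback cannot capture `self`, so the instance is
        // smuggled through the session's refcon and recovered here instead.
        private let outputCallback: VTCompressionOutputCallback = { refcon, _, status, _, sampleBuffer in
            guard let refcon = refcon else { return }
            let encoder = Unmanaged<VideoEncoder>.fromOpaque(refcon).takeUnretainedValue()
            encoder.handleEncoded(status: status, sampleBuffer: sampleBuffer)
        }

        func startSession(width: Int32, height: Int32) {
            // passUnretained: the session does not keep the encoder alive, so
            // the owner must ensure the encoder outlives the session.
            let status = VTCompressionSessionCreate(
                allocator: kCFAllocatorDefault,
                width: width,
                height: height,
                codecType: kCMVideoCodecType_H264,
                encoderSpecification: nil,
                imageBufferAttributes: nil,
                compressedDataAllocator: nil,
                outputCallback: outputCallback,
                refcon: Unmanaged.passUnretained(self).toOpaque(),
                compressionSessionOut: &sessionRef)
            if status != noErr { print("VTCompressionSessionCreate failed: \(status)") }
        }

        private func handleEncoded(status: OSStatus, sampleBuffer: CMSampleBuffer?) {
            // Hypothetical handler: a real implementation would forward the
            // encoded sample buffer (e.g. packetize and send) from here.
        }
    }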

Decode h264 video stream to get image buffer

Submitted by 一笑奈何 on 2020-02-02 07:14:45
Question: I followed this post to decode my H.264 video stream frames. My data frames are as below. My code: NSString * const naluTypesStrings[] = { @"0: Unspecified (non-VCL)", @"1: Coded slice of a non-IDR picture (VCL)", // P frame @"2: Coded slice data partition A (VCL)", @"3: Coded slice data partition B (VCL)", @"4: Coded slice data partition C (VCL)", @"5: Coded slice of an IDR picture (VCL)", // I frame @"6: Supplemental enhancement information (SEI) (non-VCL)", @"7: Sequence parameter set (non-VCL) …
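
Once the SPS and PPS NAL units (types 7 and 8) have been located using a table like the one above, the usual next step is to build a CMVideoFormatDescription from them and feed the remaining NALUs to a VTDecompressionSession, whose callback hands back the decoded image buffer. A hedged sketch in Swift (the original post is Objective-C); the makeDecoder name is hypothetical and error handling is minimal:

    import VideoToolbox

    // Builds a format description from raw SPS/PPS NAL units (no start codes),
    // then a decompression session whose callback receives decoded image buffers.
    func makeDecoder(sps: [UInt8], pps: [UInt8]) -> VTDecompressionSession? {
        var formatDesc: CMVideoFormatDescription?
        let status = sps.withUnsafeBufferPointer { spsPtr in
            pps.withUnsafeBufferPointer { ppsPtr -> OSStatus in
                let pointers = [spsPtr.baseAddress!, ppsPtr.baseAddress!]
                let sizes = [sps.count, pps.count]
                return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                    allocator: kCFAllocatorDefault,
                    parameterSetCount: 2,
                    parameterSetPointers: pointers,
                    parameterSetSizes: sizes,
                    nalUnitHeaderLength: 4,            // 4-byte AVCC length prefixes
                    formatDescriptionOut: &formatDesc)
            }
        }
        guard status == noErr, let desc = formatDesc else { return nil }

        var callback = VTDecompressionOutputCallbackRecord(
            decompressionOutputCallback: { _, _, status, _, imageBuffer, _, _ in
                // imageBuffer is the decoded CVImageBuffer for one frame.
                guard status == noErr, let imageBuffer = imageBuffer else { return }
                print("decoded frame:", imageBuffer)
            },
            decompressionOutputRefCon: nil)

        var session: VTDecompressionSession?
        guard VTDecompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            formatDescription: desc,
            decoderSpecification: nil,
            imageBufferAttributes: nil,
            outputCallback: &callback,
            decompressionSessionOut: &session) == noErr else { return nil }
        return session
    }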

Why does AVSampleBufferDisplayLayer fail with Operation Interrupted (-11847)?

Submitted by 非 Y 不嫁゛ on 2019-12-22 06:56:41
Question: I'm using an AVSampleBufferDisplayLayer to decode and display H.264 video streamed from a server. When my app goes into the background and then returns to the foreground, the decoding process gets screwed up and the AVSampleBufferDisplayLayer fails. The error I'm seeing is: H.264 decoding layer has failed: Error Domain=AVFoundationErrorDomain Code=-11847 "Operation Interrupted" UserInfo=0x17426c500 {NSUnderlyingError=0x17805fe90 "The operation couldn’t be completed. (OSStatus error -12084.)", …
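
Code -11847 is AVError.operationInterrupted: hardware video decoding is typically interrupted while the app is in the background, and the layer stays in the failed state afterwards. A common recovery sketch, assuming a UIKit app; the VideoView class is hypothetical, and on older systems recreating the layer entirely may be necessary if flush() does not clear the failed state:

    import AVFoundation
    import UIKit

    final class VideoView: UIView {
        let displayLayer = AVSampleBufferDisplayLayer()

        override init(frame: CGRect) {
            super.init(frame: frame)
            layer.addSublayer(displayLayer)
            // -11847 surfaces after backgrounding; check on return to foreground.
            NotificationCenter.default.addObserver(
                self,
                selector: #selector(didBecomeActive),
                name: UIApplication.didBecomeActiveNotification,
                object: nil)
        }

        required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

        @objc private func didBecomeActive() {
            // A failed layer keeps rejecting buffers until it is flushed
            // (or, on older systems, recreated).
            if displayLayer.status == .failed {
                displayLayer.flush()
                // Resume enqueueing from the next IDR frame so decoding can
                // restart from a clean reference point.
            }
        }
    }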

Set rate at which AVSampleBufferDisplayLayer renders sample buffers

Submitted by 做~自己de王妃 on 2019-12-19 04:23:43
Question: I am using an AVSampleBufferDisplayLayer to display CMSampleBuffers which arrive over a network connection in the H.264 format. Video playback is smooth and working correctly; however, I cannot seem to control the frame rate. Specifically, if I enqueue 60 frames per second in the AVSampleBufferDisplayLayer, it displays those 60 frames, even though the video is being recorded at 30 FPS. When creating sample buffers, it is possible to set the presentation time stamp by passing a timing info …
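
AVSampleBufferDisplayLayer only honors presentation timestamps when a controlTimebase is attached; without one, buffers are shown as fast as they are enqueued. A sketch of attaching a host-clock-driven timebase, assuming the sample buffers already carry correct timestamps (CMTimebaseCreateWithSourceClock is the current name, iOS 15+; earlier systems use the equivalent CMTimebaseCreateWithMasterClock):

    import AVFoundation

    // Attach a timebase so the layer honors each buffer's presentation time
    // instead of displaying frames as fast as they are enqueued.
    func attachControlTimebase(to displayLayer: AVSampleBufferDisplayLayer) {
        var timebase: CMTimebase?
        guard CMTimebaseCreateWithSourceClock(
            allocator: kCFAllocatorDefault,
            sourceClock: CMClockGetHostTimeClock(),
            timebaseOut: &timebase) == noErr,
            let timebase = timebase else { return }
        CMTimebaseSetTime(timebase, time: .zero)   // start of the stream's PTS range
        CMTimebaseSetRate(timebase, rate: 1.0)     // 1.0 = real time; 0.5 = half speed
        displayLayer.controlTimebase = timebase
    }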

VideoToolbox does not create an encoder session for mpeg4 in Swift 3.0

Submitted by 半腔热情 on 2019-12-06 15:36:45
Question: I've run into a problem creating a compression session for the MPEG4 encoder with VideoToolbox after migrating to Swift 3.0. Before the migration it worked fine. Here is my upgraded code: let imageAttributes:[NSString: AnyObject] = [ kCVPixelBufferPixelFormatTypeKey: Int(colorScheme) as AnyObject, kCVPixelBufferIOSurfacePropertiesKey: [:] as AnyObject, kCVPixelBufferOpenGLESCompatibilityKey: true as AnyObject, kCVPixelBufferWidthKey: outputWidth as AnyObject, kCVPixelBufferHeightKey: outputHeight as AnyObject, ] let imgeAttributesDictionary: CFDictionary = imageAttributes as CFDictionary let …
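
For comparison, here is a hedged sketch of the same setup in current Swift, with the attribute keys bridged explicitly and the returned OSStatus actually checked; the pixel format constant and dimensions are placeholder assumptions, not the asker's values. Checking the status is the first diagnostic step: -12908 (kVTCouldNotFindVideoEncoderErr) means no MPEG-4 encoder is available for these parameters on the device at all, which no amount of dictionary tweaking will fix.

    import VideoToolbox

    func makeMPEG4Session(width: Int32, height: Int32) -> VTCompressionSession? {
        // Bridge CFString keys through String so the literal compiles cleanly.
        let imageAttributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String:
                NSNumber(value: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
            kCVPixelBufferIOSurfacePropertiesKey as String: [:] as [String: Any],
            kCVPixelBufferWidthKey as String: NSNumber(value: width),
            kCVPixelBufferHeightKey as String: NSNumber(value: height),
        ]

        var session: VTCompressionSession?
        let status = VTCompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            width: width,
            height: height,
            codecType: kCMVideoCodecType_MPEG4Video,
            encoderSpecification: nil,          // nil: let VideoToolbox choose any encoder
            imageBufferAttributes: imageAttributes as CFDictionary,
            compressedDataAllocator: nil,
            outputCallback: nil,                // nil: encode with the output-handler variant
            refcon: nil,
            compressionSessionOut: &session)
        guard status == noErr else {
            // -12908 (kVTCouldNotFindVideoEncoderErr): no MPEG-4 encoder exists
            // for these parameters on this device.
            print("VTCompressionSessionCreate failed: \(status)")
            return nil
        }
        return session
    }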

How to extract motion vectors from H.264 AVC CMBlockBufferRef after VTCompressionSessionEncodeFrame

Submitted by 一世执手 on 2019-12-06 13:51:53
Question: I'm trying to read and understand the CMBlockBufferRef representation of an H.264 AVC 1/30 frame. The buffer and the encapsulating CMSampleBufferRef are created using a VTCompressionSessionRef. https://gist.github.com/petershine/de5e3d8487f4cfca0a1d The H.264 data is represented as an AVC memory buffer, a CMBlockBufferRef from the compressed sample. Without fully decompressing again, I'm trying to extract motion vectors or predictions from this CMBlockBufferRef. I believe that for the fastest performance, …
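
VideoToolbox exposes no public API for motion vectors; they live inside the entropy-coded slice data, so recovering them means parsing the bitstream (CAVLC/CABAC) yourself or using a software decoder such as FFmpeg. What can be done cheaply is walking the AVCC payload to classify NAL units, which is the first step of any such parser. A sketch, assuming the format description reports 4-byte length prefixes; this only classifies NAL units, it does not extract motion vectors:

    import CoreMedia
    import Foundation

    // Walks the AVCC payload of one encoded sample: each NAL unit is preceded
    // by a 4-byte big-endian length (no Annex B start codes in this format).
    func enumerateNALUnits(in sampleBuffer: CMSampleBuffer) {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
        var totalLength = 0
        var dataPointer: UnsafeMutablePointer<CChar>?
        guard CMBlockBufferGetDataPointer(blockBuffer,
                                          atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &totalLength,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let base = dataPointer else { return }

        var offset = 0
        while offset + 4 <= totalLength {
            var naluLength: UInt32 = 0
            memcpy(&naluLength, base + offset, 4)
            naluLength = CFSwapInt32BigToHost(naluLength)   // lengths are big-endian
            offset += 4
            guard offset + Int(naluLength) <= totalLength else { break }
            let naluType = UInt8(bitPattern: base[offset]) & 0x1F  // low 5 bits of NAL header
            print("NAL unit type \(naluType), \(naluLength) bytes")
            offset += Int(naluLength)
        }
    }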

How to set MaxH264SliceBytes property of VTCompressionSession

Submitted by 点点圈 on 2019-12-06 09:22:11
Question: iOS VTCompressionSession has a property named kVTCompressionPropertyKey_MaxH264SliceBytes. However, I cannot set the kVTCompressionPropertyKey_MaxH264SliceBytes property of my VTCompressionSession. It returns a -12900 error code (kVTPropertyNotSupportedErr), and the description in the VTCompressionProperties.h file says "If supported by an H.264 encoder, the value limits the size in bytes of slices produced by the encoder, where possible." So I understand that usage of this property is supported …
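
The operative phrase in the header comment is "if supported": the property is optional for encoders to implement, and -12900 means the encoder chosen for this session (typically the hardware one) simply does not support slice-size limits, so there is nothing to fix on the caller's side. A sketch of setting it defensively, assuming `session` is an H.264 VTCompressionSession:

    import Foundation
    import VideoToolbox

    func setMaxSliceBytes(_ session: VTCompressionSession, bytes: Int32) {
        let status = VTSessionSetProperty(
            session,
            key: kVTCompressionPropertyKey_MaxH264SliceBytes,
            value: NSNumber(value: bytes))
        if status == kVTPropertyNotSupportedErr {
            // This encoder does not implement slice-size limits; the property
            // cannot be forced, so fall back gracefully.
            print("MaxH264SliceBytes not supported by this encoder")
        }
    }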