How to color manage AVAssetWriter output


This is quite a confusing subject, and the Apple docs really do not help all that much. I am going to describe the solution I have settled on, based on the BT.709 colorspace. I am sure someone will object on grounds of colorimetric correctness and the quirks of the various video standards, but this is a complex topic.

First off, don't use kCVPixelFormatType_32ARGB as the pixel type. Always pass kCVPixelFormatType_32BGRA instead, since BGRA is the native pixel layout on both macOS and iPhone hardware, and BGRA is simply faster.

Next, when you create a CGBitmapContext to render into, use the BT.709 colorspace (kCGColorSpaceITUR_709). Also, don't render into a malloc() buffer; render directly into the CoreVideo pixel buffer by creating the bitmap context over that same memory. CoreGraphics will handle the colorspace and gamma conversion from whatever your input image is into BT.709 and its associated gamma.

Then you need to tell AVFoundation the colorspace of the pixel buffer. Do that by making a copy of the ICC profile and setting it as the kCVImageBufferICCProfileKey attachment on the CoreVideo pixel buffer. That takes care of your issues 1 and 2: with this approach you do not need the input images to already be in that colorspace.

Now, this is of course complex, and actual working source code (yes, actually working) is hard to come by. Here is a GitHub link to a small project that does these exact steps; the code is BSD licensed, so feel free to use it. Note specifically the H264Encoder class, which wraps all this horror up into a reusable module. You can find calling code in encode_h264.m, a little macOS command line util that encodes PNG to M4V. A short code sketch of the same steps also follows the link below. Also attached are three key Apple docs related to this subject: 1, 2, 3.

MetalBT709Decoder
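
To make the steps above concrete, here is a minimal sketch in Objective-C, not taken from the linked project: it draws a CGImage into a BGRA CVPixelBuffer through a BT.709 bitmap context, then attaches a copy of the ICC profile (plus the matching BT.709 color attachments) so AVFoundation knows the buffer's colorspace. The helper name renderImageIntoBT709PixelBuffer is illustrative only; CGColorSpaceCopyICCData needs macOS 10.12 / iOS 10 or later (older code used the now-deprecated CGColorSpaceCopyICCProfile).

#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>
#import <CoreVideo/CoreVideo.h>

// Illustrative helper: render a CGImage into a BGRA CVPixelBuffer in BT.709
// and tag the buffer so AVFoundation knows its colorspace.
static CVPixelBufferRef renderImageIntoBT709PixelBuffer(CGImageRef image) {
  size_t width = CGImageGetWidth(image);
  size_t height = CGImageGetHeight(image);

  // BGRA, not ARGB: BGRA is the native pixel layout on macOS and iPhone hardware.
  NSDictionary *attrs = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
  };
  CVPixelBufferRef pixelBuffer = NULL;
  if (CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                          kCVPixelFormatType_32BGRA,
                          (__bridge CFDictionaryRef)attrs,
                          &pixelBuffer) != kCVReturnSuccess) {
    return NULL;
  }

  CVPixelBufferLockBaseAddress(pixelBuffer, 0);

  // Bitmap context created directly over the pixel buffer memory, in BT.709.
  // CoreGraphics converts from the input image's colorspace and gamma as it draws.
  CGColorSpaceRef bt709 = CGColorSpaceCreateWithName(kCGColorSpaceITUR_709);
  CGContextRef ctx = CGBitmapContextCreate(
      CVPixelBufferGetBaseAddress(pixelBuffer), width, height, 8,
      CVPixelBufferGetBytesPerRow(pixelBuffer), bt709,
      kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // BGRA byte order
  CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
  CGContextRelease(ctx);

  CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

  // Tell AVFoundation what colorspace the pixels are in: attach a copy of the
  // ICC profile plus the standard BT.709 color attachments.
  NSData *icc = (__bridge_transfer NSData *)CGColorSpaceCopyICCData(bt709);
  CVBufferSetAttachment(pixelBuffer, kCVImageBufferICCProfileKey,
                        (__bridge CFDataRef)icc, kCVAttachmentMode_ShouldPropagate);
  CVBufferSetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey,
                        kCVImageBufferColorPrimaries_ITU_R_709_2,
                        kCVAttachmentMode_ShouldPropagate);
  CVBufferSetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey,
                        kCVImageBufferTransferFunction_ITU_R_709_2,
                        kCVAttachmentMode_ShouldPropagate);
  CVBufferSetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey,
                        kCVImageBufferYCbCrMatrix_ITU_R_709_2,
                        kCVAttachmentMode_ShouldPropagate);
  CGColorSpaceRelease(bt709);

  return pixelBuffer; // caller releases with CVPixelBufferRelease()
}

In a real encoder you would typically grab buffers from the AVAssetWriterInputPixelBufferAdaptor's pixel buffer pool rather than calling CVPixelBufferCreate directly, then append each one with appendPixelBuffer:withPresentationTime: and release it afterwards.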
