Simplified screen capture: record video of only what appears within the layers of a UIView?


It sounds like your "fast" merge doesn't involve (re-)encoding frames, i.e. it's trivial, basically a glorified file concatenation, which is why it achieves 60x realtime. I asked about that because your "very slow" export runs at 3-6x realtime, which actually isn't that terrible (at least it wasn't on older hardware).

Encoding frames with an AVAssetWriter should give you an idea of the fastest possible non-trivial export, and it may reveal that on modern hardware you could halve or quarter your export times.
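Here is a minimal sketch of that kind of throughput benchmark, assuming a 1080p/30 fps timeline. The function name, output path, frame count, and the spin-wait on `isReadyForMoreMediaData` are all illustrative placeholders, not part of the original answer; a real app would render its layer into each pixel buffer and use `requestMediaDataWhenReady(on:using:)` instead of spinning.

```swift
import AVFoundation
import CoreVideo

// Hypothetical benchmark: push blank frames through AVAssetWriter as fast as
// the encoder will accept them (no real-time pacing) to measure raw encode speed.
func benchmarkAssetWriter(outputURL: URL,
                          width: Int = 1920,
                          height: Int = 1080,
                          frameCount: Int = 300) throws {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = false   // offline export: let the encoder run flat out

    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: width,
            kCVPixelBufferHeightKey as String: height
        ])

    writer.add(input)
    guard writer.startWriting() else {
        throw writer.error ?? CocoaError(.fileWriteUnknown)
    }
    writer.startSession(atSourceTime: .zero)

    let frameDuration = CMTime(value: 1, timescale: 30)   // 30 fps timeline
    let start = Date()
    var frameIndex = 0

    while frameIndex < frameCount {
        // A real app would use requestMediaDataWhenReady(on:using:) rather than spinning.
        guard input.isReadyForMoreMediaData, let pool = adaptor.pixelBufferPool else { continue }

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(nil, pool, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { continue }

        // In a real capture you would render your UIView/CALayer into `buffer` here,
        // e.g. via CALayer.render(in:) into a CGContext backed by the buffer.

        let time = CMTimeMultiply(frameDuration, multiplier: Int32(frameIndex))
        if !adaptor.append(buffer, withPresentationTime: time) { break }
        frameIndex += 1
    }

    input.markAsFinished()
    writer.finishWriting {
        // Completion fires asynchronously once the file is fully written.
        print("Encoded \(frameIndex) frames in \(Date().timeIntervalSince(start))s")
    }
}
```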

This is a long way of saying that there might not be much more performance to be had. If you think about the typical iOS video encoding use case, which would probably be recording 1080p (1920×1080) at 120 fps or 240 fps, then your encoding at ~6x realtime at 30 fps, roughly 180 frames per second of throughput, is in the ballpark of what a typical iOS device "needs" to be able to do.

There are optimisations available to you (like lower or variable framerates), but these may cost you the convenience of being able to capture CALayers.
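To illustrate the variable-frame-rate idea, one approach is to append a frame only when the layer content actually changed, stamping it with the real elapsed time so playback speed stays correct. This is a hedged sketch; `contentChanged` and the caller that renders into `buffer` are hypothetical and not from the original answer.

```swift
import AVFoundation

// Append a frame only when something changed, using wall-clock time as the
// presentation timestamp so irregular frame intervals still play back correctly.
func appendIfChanged(adaptor: AVAssetWriterInputPixelBufferAdaptor,
                     buffer: CVPixelBuffer,
                     contentChanged: Bool,
                     elapsed: TimeInterval) {
    guard contentChanged, adaptor.assetWriterInput.isReadyForMoreMediaData else { return }
    let time = CMTime(seconds: elapsed, preferredTimescale: 600)
    _ = adaptor.append(buffer, withPresentationTime: time)
}
```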
