Within our iOS app, we are applying custom filters implemented with Metal (CIKernel/CIColorKernel wrappers). Let's assume we have a 4K video and a custom video composition with a lower-resolution target size.
As warrenm says, you could use a CILanczosScaleTransform
filter to downsample the video frames before processing. However, this would still cause AVFoundation to allocate buffers at full resolution.
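For illustration, a minimal sketch of that downsampling step (the helper name and the targetHeight parameter are assumptions for this example, not from the question):

    import CoreImage

    // Hypothetical helper: downsample a frame with CILanczosScaleTransform
    // before handing it to the custom filter chain.
    func downsampled(_ image: CIImage, toHeight targetHeight: CGFloat) -> CIImage {
        let scale = targetHeight / image.extent.height
        let filter = CIFilter(name: "CILanczosScaleTransform")!
        filter.setValue(image, forKey: kCIInputImageKey)
        filter.setValue(scale, forKey: kCIInputScaleKey)        // uniform scale factor
        filter.setValue(1.0, forKey: kCIInputAspectRatioKey)    // keep aspect ratio
        return filter.outputImage!
    }

Note that even with this, the source frames arriving from AVFoundation are still full-size 4K buffers; only the subsequent filtering happens at the smaller size.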
I assume you use an AVMutableVideoComposition
to do the filtering? In that case you can simply set the renderSize
of the composition to the target size. From the docs:
"The size at which the video composition should render."
This tells AVFoundation to resample the frames efficiently before handing them to your filter pipeline.
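A minimal sketch of that setup, assuming an existing asset (AVAsset) and filter (CIFilter); the 1920×1080 target size is just an example value:

    import AVFoundation
    import CoreImage

    // Build a composition that runs each frame through the custom filter.
    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        filter.setValue(request.sourceImage, forKey: kCIInputImageKey)
        request.finish(with: filter.outputImage ?? request.sourceImage, context: nil)
    })

    // With renderSize set, AVFoundation delivers already-downscaled frames
    // to the handler instead of full-resolution 4K buffers.
    composition.renderSize = CGSize(width: 1920, height: 1080)

Hand this composition to your AVPlayerItem or AVAssetExportSession as usual; the filtering handler then operates on frames at the render size.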