FFmpeg: How to convert vertical video with black sides, to video 16:9, with blurred background sides

Posted by 心已入冬 on 2019-11-27 18:02:45

I solved it!

ffmpeg -i input.mp4 -lavfi '[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16' -vb 800K output.webm
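To see what this chain does to the frame geometry, here is a quick shell-arithmetic sketch. The 608x1080 portrait input size is purely a hypothetical example; the filter expressions adapt to whatever the real input is.

```shell
# Trace the frame sizes through the filter chain above, assuming a
# hypothetical 608x1080 portrait input (substitute your own dimensions).
in_w=608; in_h=1080

bg_w=$(( in_h * 16 / 9 ))        # scale=ih*16/9 -> background width
bg_h=$(( in_h * bg_w / in_w ))   # the -1 keeps the aspect ratio
out_h=$(( bg_w * 9 / 16 ))       # crop=h=iw*9/16 -> final 16:9 height

echo "blurred background: ${bg_w}x${bg_h}"
echo "final frame:        ${bg_w}x${out_h}"
```

So the scaled, blurred copy is cropped back down to a 16:9 canvas (1920x1080 here), and the untouched original is overlaid in the centre.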

Input: https://www.youtube.com/watch?v=17uHCHfgs60
Output: http://www.youtube.com/watch?v=CgZsDLfzrTs

The accepted answer here takes forever to execute because it does a great deal of unnecessary computation: there is no need to blur pixels that we already know will fall outside the viewport of the output video.

So, a better solution would be to first crop the part of the video which will be visible in the output. We then scale this part to "fill" the viewport. Finally, we overlay the original video on top of it.

The example below assumes that the input video has a greater aspect ratio than the output video.
                    ┌─────────────┐
┌─────────────┐     │             │
│ Input video │     │   Output    │
│             │     │   video     │
└─────────────┘     │             │
                    │             │
                    └─────────────┘

We will use a filter graph to achieve this. Our filter graph looks like this in dot-style notation:

                [original]
 input --> split -------------------------------> overlay --> output
        │                                          ^
        │[copy]                           [blurred]│
        └──────> crop ──> scale ──> gblur ─────────┘

Assuming the resolution of the input video is 1280 x 720, the command looks like this:

ffmpeg -i input.mp4 -vf 'split [original][copy]; [copy] crop=ih*9/16:ih:iw/2-ow/2:0, scale=1280:2282, gblur=sigma=20[blurred]; [blurred][original]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2' output.mp4
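The numbers in the command can be sanity-checked with a little shell arithmetic (using the stated 1280x720 input). Note that exact aspect-preserving scaling of the 405-pixel-wide crop gives a height of about 2276, so the 2282 in the command appears to be a hand-rounded value:

```shell
# Check the crop/scale geometry for a 1280x720 input.
# Integer shell arithmetic only; ffmpeg evaluates the real expressions itself.
in_w=1280; in_h=720

crop_w=$(( in_h * 9 / 16 ))           # crop=ih*9/16 -> 9:16 slice width
crop_x=$(( in_w / 2 - crop_w / 2 ))   # iw/2-ow/2 -> centred crop offset
sc_h=$(( in_h * in_w / crop_w ))      # aspect-preserving height at width 1280

echo "crop: ${crop_w}x${in_h} at x=${crop_x}, scaled height: ${sc_h}"
```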

You can try overlaying the video on a blurred image, like this:

ffmpeg -i input_video -loop 1 -i input_image -t 10 -filter_complex "
[0:v]scale=-1:720[scaled_video];
[1:v]scale=1280:720,boxblur=50[blur_image];
[blur_image][scaled_video]overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2[outv]" -c:v libx264 -aspect 1280/720 -map [outv] -map 0:a -c:a copy output_video

The input image is looped for the duration of the output video by -loop 1, and -t 10 limits the output duration to 10 seconds. In this example I used 1280:720 as the output video resolution and scaled the inputs to match that ratio. 0:v refers to the input video; it is scaled to a height of 720, with the width adjusted accordingly.
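As a sketch of where the scaled video lands on the canvas, assuming a hypothetical 1080x1920 portrait input (any input size works; only the 1280x720 canvas is fixed by the command):

```shell
# Position of the scaled video on the 1280x720 blurred canvas,
# assuming a hypothetical 1080x1920 portrait input.
in_w=1080; in_h=1920
canvas_w=1280; sc_h=720

sc_w=$(( in_w * sc_h / in_h ))    # scale=-1:720 keeps the aspect ratio
x=$(( (canvas_w - sc_w) / 2 ))    # (main_w-overlay_w)/2 -> centred overlay

echo "scaled video: ${sc_w}x${sc_h}, overlaid at x=${x}"
```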

Here I used the boxblur filter; there are several others, such as sab, smartblur, and unsharp. -map takes the specified processed input streams and maps them to the output streams accordingly.

Hope this helps!

I couldn't get either of the previous solutions provided to work using ffmpeg 3.4.2 on Windows.

However this did work:

ffmpeg -i <input_file> -filter_complex "[0:v]scale=ih*16/9:-1,boxblur=luma_radius=min(h\,w)/20:luma_power=1:chroma_radius=min(cw\,ch)/20:chroma_power=1[bg];[bg][0:v]overlay=(W-w)/2:(H-h)/2,crop=h=iw*9/16" <output_file>

Don't forget to replace <input_file> and <output_file> with the appropriate file names.
