What's the theory behind computing variance of an image?

孤城傲影 · 2021-02-14 21:41

I am trying to compute the blurriness of an image by using LaplacianFilter.

According to this article: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-open

2 Answers
  •  天命终不由人 · 2021-02-14 22:42

    First things first: if you look at the tutorial you linked, they convert the image to greyscale, so it has only one channel and therefore a single variance. You could compute a variance per channel and combine them with a more elaborate formula, or simply take the variance over all the pixel values. I think the author converts to greyscale because it is a convenient way of fusing the information, and one of the papers the author cites actually says that

    A well focused image is expected to have a high variation in grey levels.
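
    Concretely, both options take only a few lines. Here is a minimal sketch, assuming OpenCV (cv2) is installed and using a hypothetical input file "photo.jpg":

        import cv2

        image = cv2.imread("photo.jpg")                 # BGR, 3 channels
        grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # fuse to 1 channel

        print(grey.var())                               # one variance overall
        for name, channel in zip("BGR", cv2.split(image)):
            print(name, channel.var())                  # one variance per channel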

    The author of the tutorial actually explains it in a simple way. First, think about what the Laplacian filter does: it highlights the well-defined edges. The grid of example pictures in the tutorial shows this after filtering.
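
    A small sketch of that step, again assuming the hypothetical "photo.jpg": apply the Laplacian and save the response, so that strong edges show up as bright pixels.

        import cv2

        grey = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)
        lap = cv2.Laplacian(grey, cv2.CV_64F)               # signed edge response
        cv2.imwrite("edges.png", cv2.convertScaleAbs(lap))  # |response| as 8-bit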

    In the filtered images, the blurry ones barely show any edges, while the focused ones produce a lot of responses. Now, what happens when you compute the variance? Imagine white is 255 and black is 0. If everything is black (the blurry case), the variance is low; if the pixels are roughly half black and half white, the variance is high, as the toy example below shows.
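
    A toy illustration of that arithmetic with NumPy: an all-black "edge map" versus one that is half black, half white.

        import numpy as np

        all_black = np.zeros(100)                    # blurry: no edge response
        half_half = np.array([0] * 50 + [255] * 50)  # sharp: strong responses

        print(all_black.var())   # 0.0      -> low variance
        print(half_half.var())   # 16256.25 -> high variance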

    However, as the author already said, the threshold is domain-dependent: a picture of the sky may have low variance even when it is in focus, since the scene is quite uniform and does not have very well-defined edges...
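
    Putting it together, a minimal sketch of the variance-of-Laplacian check as a hypothetical helper is_blurry, with a placeholder threshold of 100.0 that you would tune for your domain:

        import cv2

        def is_blurry(path, threshold=100.0):
            grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            score = cv2.Laplacian(grey, cv2.CV_64F).var()
            return score < threshold, score

        blurry, score = is_blurry("photo.jpg")
        print(blurry, score)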

    I hope this answers your doubts :)
