What's the theory behind computing variance of an image?


Question


I am trying to compute the blurriness of an image by using a Laplacian filter.

According to this article: https://www.pyimagesearch.com/2015/09/07/blur-detection-with-opencv/ I have to compute the variance of the output image. The problem is I don't understand, conceptually, how to compute the variance of an image.

Every pixel has 4 values, one for each color channel, so I can compute the variance of every channel, but then I get 4 values (or even 16 by computing a variance-covariance matrix), yet according to the OpenCV example they end up with only 1 number.

After computing that number, they just compare it against a threshold to make a binary decision about whether the image is blurry or not.

PS: I am by no means an expert on this topic, so my statements may not make sense. If so, please feel free to edit the question.


Answer 1:


First things first: if you look at the tutorial you linked, they convert the image to greyscale, so it has only 1 channel and therefore only 1 variance. You could compute it for each channel and combine the results with a more complicated formula, or simply take the variance over all the numbers. However, I think the author converts to greyscale because it is a nice way of fusing the information, and one of the papers the author cites actually says that

A well focused image is expected to have a high variation in grey levels.
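As a minimal sketch of those two options, assuming OpenCV-Python (the file name is just a placeholder):

import cv2

image = cv2.imread("photo.jpg")  # placeholder path

# Option 1: fuse the channels by converting to greyscale -> a single variance
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
focus_measure = cv2.Laplacian(gray, cv2.CV_64F).var()

# Option 2: one variance per colour channel (B, G, R), which you would still
# have to combine somehow, e.g. by averaging
per_channel = [cv2.Laplacian(image[:, :, c], cv2.CV_64F).var() for c in range(3)]

print(focus_measure, per_channel)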

The author of the tutorial actually explains it in a simple way. First, think about what the Laplacian filter does: it brings out the well-defined edges. The original answer illustrated this with the tutorial's grid of example pictures.

As you can see, the blurry images barely have any edges, while the focused ones have a lot of responses. Now, what happens if you calculate the variance? Imagine that white is 255 and black is 0. If everything is black (as in the blurry cases) the variance is low, but if the image is roughly half black and half white, the variance is high.
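A tiny numeric illustration of that intuition (the values are made up):

import numpy as np

all_black = np.zeros(100)              # no edge responses, like a very blurry image
half_half = np.array([0, 255] * 50)    # strong edge responses, like a sharp image

print(all_black.var())   # 0.0       -> low variance
print(half_half.var())   # 16256.25  -> high variance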

However, as the author already said, the threshold is domain dependent: a picture of the sky may have low variance even if it is in focus, since it is quite uniform and does not have very well-defined edges...

I hope this answers your doubts :)




Answer 2:


One-sentence description:

The blurred image's edges are smoothed, so the variance is small.


1. How the variance is calculated

The core function of the post is:

def variance_of_laplacian(image):
    # compute the Laplacian of the image and then return the focus
    # measure, which is simply the variance of the Laplacian
    return cv2.Laplacian(image, cv2.CV_64F).var()
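
To see it in use, here is a minimal sketch along the lines of the post (the file name and the threshold value of 100 are placeholders; the tutorial takes the threshold as a command-line argument):

import cv2

image = cv2.imread("photo.jpg")                   # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # greyscale, as in the tutorial
fm = variance_of_laplacian(gray)                  # the function defined above

threshold = 100.0                                 # illustrative value, domain dependent
print("Blurry" if fm < threshold else "Not blurry", fm)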

Since OpenCV-Python uses numpy.ndarray to represent images, let's have a look at numpy.var:

Help on function var in module numpy.core.fromnumeric:

var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=<class 'numpy._globals._NoValue'>)
    Compute the variance along the specified axis.

    Returns the variance of the array elements, a measure of the spread of a distribution.
    The variance is computed for the flattened array by default, otherwise over the specified axis.

2. Applying it to an image

That is to say, the variance is calculated over the flattened Laplacian image, i.e. a 1-D array.

The variance of an array x is calculated as:

var = mean(abs(x - x.mean())**2)


For example:

>>> x = np.array([[1, 2], [3, 4]])
>>> x.var()
1.25
>>> np.mean(np.abs(x - x.mean())**2)
1.25

The Laplacian image is an edge image. Make images with GaussianBlur using different radii r, apply the Laplacian filter to them, and calculate the variances:
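A minimal sketch of that experiment (the file name and kernel sizes are placeholders):

import cv2

image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder path

# increasing kernel size ~ increasing blur radius r
for ksize in (1, 3, 9, 15, 25):
    blurred = cv2.GaussianBlur(image, (ksize, ksize), 0)
    print(ksize, cv2.Laplacian(blurred, cv2.CV_64F).var())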

The blurred image's edges are smoothed, so the variance is small.



Source: https://stackoverflow.com/questions/48319918/whats-the-theory-behind-computing-variance-of-an-image
