Best threshold for converting grayscale to black and white

大城市里の小女人 submitted on 2019-12-04 19:33:20


What's the best way to automatically figure out the best threshold for converting a grayscale image to black and white? I can figure out pretty good threshold values by hand, but I would like to automate choosing the threshold value.

Edit: I've been reading a bit about this problem, and it seems that looking at the image's histogram can help, e.g. if the image has a bimodal histogram then choosing a threshold between the modes seems reasonable. However, for multimodal or flat histograms, it appears more complicated. So I think I have some more reading to do. Thanks to everyone who replied!


0.5 usually ends up losing a lot of information unless the original image is extremely bright. In fact, any absolute threshold will mess up one kind of image or another.

A better method would be to make a histogram of luminosities and choose a threshold near the mode. This should work better on most images than any absolute threshold.
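As a rough sketch of that idea, here's what finding the histogram mode and thresholding near it could look like. Note that cutting exactly at the mode is an assumption for illustration; in practice you'd tune an offset or search for the valley between two modes instead.

```python
import numpy as np

def mode_threshold(gray):
    """Threshold an 8-bit grayscale image near its histogram mode.

    Sketch only: the most frequent intensity is usually the background,
    so everything brighter than it is kept as white. Where exactly
    "near the mode" should fall is left as an assumption here.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    mode = int(np.argmax(hist))            # most frequent intensity
    threshold = mode                       # hypothetical choice: cut at the mode
    return (gray > threshold).astype(np.uint8) * 255
```

This assumes the background is darker than the foreground; for dark-on-light images you would flip the comparison.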


I would look into an adaptive thresholding algorithm. One such, which is not very hard to implement, is Otsu's method.

It works by assuming that you have foreground pixels and background pixels and attempts to find the best separation of them.
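A minimal implementation sketch of Otsu's method: try every threshold and keep the one that maximizes the between-class variance of the two pixel classes (which is equivalent to minimizing the within-class variance).

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image.

    Exhaustively evaluates all 256 candidate thresholds and keeps
    the one maximizing w0 * w1 * (mu0 - mu1)^2, the between-class
    variance of background vs. foreground.
    """
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()                      # intensity probabilities
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue                           # one class empty: skip
        mu0 = (np.arange(t) * p[:t]).sum() / w0        # class means
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels with intensity >= the returned value are foreground. Real libraries (e.g. OpenCV's `cv2.threshold` with `THRESH_OTSU`) compute this incrementally rather than with a full rescan per threshold.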


The K-Means Clustering Method works great if you do the following:

  1. Partition the image into sub-blocks.
  2. Apply k-means clustering to each sub-block. The result is a binary image (let's assume what you want is '1' and the rest '0').
  3. Repeat step 2, this time on overlapping blocks.
  4. Apply an 'AND' operation between the two binary images (for the overlapping sub-blocks).

It's really easy to do in Matlab.
If needed, I can share the code.
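The steps above can be sketched in Python rather than Matlab. The block size, the half-block overlap, and the fallback for uniform blocks (comparing against the global mean, since k-means can't split a single value) are all assumptions made for illustration.

```python
import numpy as np

def kmeans_binarize(block, ref, iters=10):
    """1-D k-means with k=2 on pixel intensities; the brighter cluster
    becomes 1 (assuming, as in the answer, the objects are bright)."""
    vals = block.astype(float).ravel()
    if vals.max() == vals.min():
        # Uniform block: k-means can't split it. Fall back to a
        # comparison against the global mean 'ref' (an assumption).
        return np.full(block.shape, vals[0] > ref, dtype=np.uint8)
    c0, c1 = vals.min(), vals.max()            # initial cluster centers
    for _ in range(iters):
        to_c1 = np.abs(vals - c0) > np.abs(vals - c1)
        if to_c1.any() and (~to_c1).any():
            c0, c1 = vals[~to_c1].mean(), vals[to_c1].mean()
    d = block.astype(float)
    return (np.abs(d - c0) > np.abs(d - c1)).astype(np.uint8)

def blockwise_kmeans(gray, size=32):
    """Steps 1-4: k-means on a non-overlapping tiling, again on a
    tiling shifted by half a block, then AND the two binary maps."""
    h, w = gray.shape
    ref = gray.mean()
    out1 = np.zeros((h, w), dtype=np.uint8)
    out2 = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, size):                # step 1 + 2
        for x in range(0, w, size):
            out1[y:y+size, x:x+size] = kmeans_binarize(
                gray[y:y+size, x:x+size], ref)
    half = size // 2
    for y in range(-half, h, size):            # step 3: shifted grid
        for x in range(-half, w, size):
            y0, x0 = max(y, 0), max(x, 0)
            out2[y0:y+size, x0:x+size] = kmeans_binarize(
                gray[y0:y+size, x0:x+size], ref)
    return out1 & out2                         # step 4
```

The AND step suppresses pixels that only one of the two tilings labeled as foreground, which helps with blocking artifacts at tile boundaries.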


What are your criteria for a "good" threshold? You might want to start with the average grayscale intensity of the image...


I would think that the threshold depends on the average darkness (or distribution of colors) of each image independently. If you go with an arbitrary value, you'll end up losing a lot of data if the image started out pretty washed out.

Also, you can emulate some of the grayscales by sparsely populating an area with black and white: 50% gray is an every-other checkerboard; for 75% you color in half the remaining white squares; for 25% you invert black and white; etc.
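One way to sketch those patterns programmatically is with a tiny ordered-dither threshold matrix. This uses a 2x2 Bayer matrix rather than the hand-drawn checkerboards described above (an implementation choice, not something from the answer), but it produces the same 25%/50%/75% fill patterns.

```python
import numpy as np

def checker_pattern(level, size=8):
    """Approximate a flat gray level in [0, 1] with a black/white
    pixel pattern, using a tiled 2x2 Bayer threshold matrix."""
    bayer = np.array([[0, 2], [3, 1]]) / 4.0 + 1/8   # thresholds 1/8..7/8
    tile = np.tile(bayer, (size // 2, size // 2))
    return (level > tile).astype(np.uint8)           # 1 = white pixel
```

For `level=0.5` this yields exactly the every-other checkerboard; `0.25` and `0.75` fill one and three pixels per 2x2 cell respectively.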

I don't think there's a fixed answer for this question without considering each image individually.


Threshold-based halftoning usually results in a lot of information loss. Depending on the purpose, you may want to consider dithering.

I like the look of the Stucki filter, since it's sharp and preserves detail. There's a C# project that implements the algorithm; you can download the source if you're interested.
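For reference, here is a straightforward (unoptimized) Python sketch of Stucki error diffusion: each pixel is thresholded at mid-gray and the quantization error is distributed to not-yet-visited neighbors using the standard Stucki weights (8, 4 / 2, 4, 8, 4, 2 / 1, 2, 4, 2, 1, all divided by 42).

```python
import numpy as np

def stucki_dither(gray):
    """Halftone an 8-bit grayscale image with Stucki error diffusion."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # (dy, dx, weight) entries of the Stucki kernel; divisor is 42.
    kernel = [(0, 1, 8), (0, 2, 4),
              (1, -2, 2), (1, -1, 4), (1, 0, 8), (1, 1, 4), (1, 2, 2),
              (2, -2, 1), (2, -1, 2), (2, 0, 4), (2, 1, 2), (2, 2, 1)]
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0   # threshold at mid-gray
            out[y, x] = int(new)
            err = old - new
            for dy, dx, wgt in kernel:           # push error forward
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err * wgt / 42.0
    return out
```

Because the error is conserved (apart from what falls off the image edges), a flat mid-gray input comes out roughly half black and half white rather than all one color, which is exactly what a fixed threshold would get wrong.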