How to reduce the number of colors in an image with OpenCV?


Question


I have a set of image files and I want to reduce their number of colors to 64. How can I do this with OpenCV?

I need this so I can work with a 64-bin image histogram; I'm implementing CBIR techniques.

What I want is color quantization to a 6-bit (64-color) palette.


Answer 1:


There are many ways to do it. The methods suggested by jeff7 are OK, but they have some drawbacks:

  • Method 1 has parameters N and M that you must choose, and it requires converting to another color space.
  • Method 2 can be very slow, since you would have to compute a 16.7-million-bin histogram and sort it by frequency (to obtain the 64 most frequent values).

I like to use an algorithm based on the most significant bits of each RGB channel to convert the image to a 64-color image. If you're using C/OpenCV, you can use something like the function below.

If you're working with gray-level images, I recommend using the LUT() function of OpenCV 2.3, since it is faster. There is a tutorial on how to use LUT to reduce the number of colors; see: Tutorial: How to scan images, lookup tables... However, I find that approach more complicated if you're working with RGB images.

void reduceTo64Colors(IplImage *img, IplImage *img_quant) {
    int i,j;
    int height   = img->height;   
    int width    = img->width;    
    int step     = img->widthStep;

    uchar *data = (uchar *)img->imageData;
    int step2 = img_quant->widthStep;
    uchar *data2 = (uchar *)img_quant->imageData;

    for (i = 0; i < height ; i++)  {
        for (j = 0; j < width; j++)  {

          // keep only the 2 most significant bits of each channel:
          // (x & 192) masks with 11000000, then the shift packs the
          // three 2-bit values into a single 6-bit index
          uchar C1 = (data[i*step+j*3+0] & 192)>>2; // bits 5-4
          uchar C2 = (data[i*step+j*3+1] & 192)>>4; // bits 3-2
          uchar C3 = (data[i*step+j*3+2] & 192)>>6; // bits 1-0

          data2[i*step2+j] = C1 | C2 | C3; // merged 6-bit value in 0..63
        }     
    }
}
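
For reference, the same idea can be written with the C++ cv::Mat API. Below is a minimal sketch (the single-channel output of palette indices mirrors the IplImage version above; it assumes an 8-bit, 3-channel input):

#include <opencv2/core/core.hpp>

// Keep the 2 MSBs of each channel and pack them into a 6-bit index (0..63).
cv::Mat reduceTo64Colors(const cv::Mat& img)
{
    cv::Mat quant(img.size(), CV_8UC1);
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++)
        {
            cv::Vec3b p = img.at<cv::Vec3b>(i, j);
            uchar c1 = (p[0] & 192) >> 2;
            uchar c2 = (p[1] & 192) >> 4;
            uchar c3 = (p[2] & 192) >> 6;
            quant.at<uchar>(i, j) = (uchar)(c1 | c2 | c3);
        }
    return quant;
}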



Answer 2:


This subject is well covered in the book OpenCV 2 Computer Vision Application Programming Cookbook:

Chapter 2 shows a few reduction operations, one of them demonstrated here in C++:

#include <iostream>
#include <vector>

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>


void colorReduce(cv::Mat& image, int div=64)
{    
    int nl = image.rows;                    // number of lines
    int nc = image.cols * image.channels(); // number of elements per line

    for (int j = 0; j < nl; j++)
    {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);

        for (int i = 0; i < nc; i++)
        {
            // quantize each value to the center of its bin; with div=64
            // each channel ends up with one of 4 values: 32, 96, 160, 224
            data[i] = data[i] / div * div + div / 2;
        }
    }
}

int main(int argc, char* argv[])
{   
    // Load input image (colored, 3-channel, BGR)
    cv::Mat input = cv::imread(argv[1]);
    if (input.empty())
    {
        std::cout << "!!! Failed imread()" << std::endl;
        return -1;
    } 

    colorReduce(input);

    cv::imshow("Color Reduction", input);   
    cv::imwrite("output.jpg", input);   
    cv::waitKey(0);

    return 0;
}

Below you can find the input image (left) and the output of this operation (right):




Answer 3:


You might consider k-means, yet in this case it will most likely be extremely slow. A better approach might be doing this "manually" on your own. Let's say you have an image of type CV_8UC3, i.e. an image where each pixel is represented by 3 RGB values from 0 to 255 (Vec3b). You might "map" these 256 values to only 4 specific values, which would yield 4 x 4 x 4 = 64 possible colors.

I had a dataset where I needed to make sure that dark = black and light = white, and to reduce the number of colors of everything in between. This is what I did (C++):

#include <opencv2/core/core.hpp>

using namespace cv;

inline uchar reduceVal(const uchar val)
{
    if (val < 64) return 0;
    if (val < 128) return 64;
    return 255;
}

void processColors(Mat& img)
{
    uchar* pixelPtr = img.data; // assumes a continuous CV_8UC3 matrix
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            const int pi = i*img.cols*3 + j*3;
            pixelPtr[pi + 0] = reduceVal(pixelPtr[pi + 0]); // B
            pixelPtr[pi + 1] = reduceVal(pixelPtr[pi + 1]); // G
            pixelPtr[pi + 2] = reduceVal(pixelPtr[pi + 2]); // R
        }
    }
}

causing [0,64) to become 0, [64,128) to become 64, and [128,255] to become 255, i.e. 3 values per channel and thus 3 x 3 x 3 = 27 colors:

To me this seems neat, perfectly clear, and faster than anything else mentioned in the other answers.

You might also consider rounding these values to the nearest multiple of some number, say 64:

inline uchar reduceVal(const uchar val)
{
    if (val < 192) return uchar(val / 64.0 + 0.5) * 64;
    return 255;
}

which would yield a set of 5 possible values per channel: {0, 64, 128, 192, 255}, i.e. 5 x 5 x 5 = 125 colors.




Answer 4:


The answers suggested here are really good. I thought I would add my idea as well. I follow the formulation of several comments here, which note that 64 colors can be represented by 2 bits per channel in an RGB image.

The function in the code below takes an image and the number of bits required for quantization as input. It uses bit manipulation to 'drop' the least significant bits and keep only the required number of bits. The result is a flexible method that can quantize the image to any number of bits.

#include "include\opencv\cv.h"
#include "include\opencv\highgui.h"

// quantize the image to numBits 
cv::Mat quantizeImage(const cv::Mat& inImage, int numBits)
{
    cv::Mat retImage = inImage.clone();

    uchar maskBit = 0xFF;

    // set the top numBits bits to 1 and the lower (8 - numBits) bits to 0
    maskBit = maskBit << (8 - numBits);

    for (int j = 0; j < retImage.rows; j++)
    {
        for (int i = 0; i < retImage.cols; i++)
        {
            cv::Vec3b valVec = retImage.at<cv::Vec3b>(j, i);
            valVec[0] = valVec[0] & maskBit;
            valVec[1] = valVec[1] & maskBit;
            valVec[2] = valVec[2] & maskBit;
            retImage.at<cv::Vec3b>(j, i) = valVec;
        }
    }

    return retImage;
}


int main ()
{
    cv::Mat inImage = cv::imread("testImage.jpg");
    if (inImage.empty())
        return -1;

    char buffer[30];
    for(int i = 1; i <= 8; i++)
    {
        cv::Mat quantizedImage = quantizeImage(inImage, i);
        sprintf(buffer, "%d Bit Image", i);
        cv::imshow(buffer, quantizedImage);

        sprintf(buffer, "%d Bit Image.png", i);
        cv::imwrite(buffer, quantizedImage);
    }

    cv::waitKey(0);
    return 0;
}

Here is an image that is used in the above function call:

Image quantized to 2 bits for each RGB channel (Total 64 Colors):

3 bits for each channel:

4 bits ...




Answer 5:


There is the k-means clustering algorithm, which is already available in the OpenCV library. In short, it determines the best centroids around which to cluster your data for a user-defined value of k (= number of clusters). So in your case you could find the centroids around which to cluster your pixel values for a given value of k = 64. The details are there if you google around. Here's a short intro to k-means.

Something similar to what you are probably trying to do was asked here on SO using k-means; hope it helps.
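
As an illustration, a minimal sketch of 64-color quantization with cv::kmeans might look like the function below (the function name and termination criteria are my own choices; it assumes a continuous 8-bit, 3-channel input and, as said, can be slow on large images):

#include <opencv2/core/core.hpp>

cv::Mat kmeansQuantize(const cv::Mat& src, int k = 64)
{
    // one row per pixel, 3 float columns (assumes a continuous matrix)
    cv::Mat samples;
    src.reshape(1, src.rows * src.cols).convertTo(samples, CV_32F);

    cv::Mat labels, centers;
    cv::kmeans(samples, k, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // paint every pixel with the center of its cluster
    cv::Mat quantized(src.size(), src.type());
    cv::Vec3b* out = quantized.ptr<cv::Vec3b>();
    for (int i = 0; i < samples.rows; i++)
    {
        const float* c = centers.ptr<float>(labels.at<int>(i));
        out[i] = cv::Vec3b((uchar)c[0], (uchar)c[1], (uchar)c[2]);
    }
    return quantized;
}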

Another approach would be to use the pyramid mean-shift filter function in OpenCV. It yields somewhat "flattened" images, i.e. images with fewer colors, so it might be able to help you.
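
A quick usage sketch (the window radii 20 and 40, and the file names, are illustrative placeholders):

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat src = cv::imread("input.jpg");
    cv::Mat dst;
    cv::pyrMeanShiftFiltering(src, dst, 20, 40); // spatial radius, color radius
    cv::imwrite("flattened.jpg", dst);
    return 0;
}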




Answer 6:


Assuming that you want to use the same 64 colors for all images (i.e. a palette not optimized per image), there are at least a couple of choices I can think of:

1) Convert to Lab or YCrCb color space and quantize using N bits for luminance and M bits for each color channel; N should be greater than M.

2) Compute a 3D histogram of color values over all your training images, then choose the 64 colors with the largest bin counts. Quantize your images by assigning each pixel the color of the closest bin from the training set (a sketch follows below).

Method 1 is the most generic and easiest to implement, while method 2 can be better tailored to your specific dataset.
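
To make method 2 concrete, here is a rough sketch under assumed simplifications: the 3D histogram is coarsened to 8 x 8 x 8 bins rather than the full 256^3, each palette color is the center of a surviving bin, all names are mine, and the final closest-color assignment per pixel is left out for brevity:

#include <opencv2/core/core.hpp>

#include <algorithm>
#include <vector>

std::vector<cv::Vec3b> buildPalette(const std::vector<cv::Mat>& training,
                                    int paletteSize = 64)
{
    const int bins = 8, step = 256 / bins;  // 8x8x8 = 512 candidate bins
    std::vector<std::pair<int, int> > hist(bins * bins * bins); // (count, bin id)
    for (int b = 0; b < (int)hist.size(); b++)
        hist[b] = std::make_pair(0, b);

    // accumulate the 3D histogram over all training images (CV_8UC3 assumed)
    for (size_t k = 0; k < training.size(); k++)
        for (int i = 0; i < training[k].rows; i++)
            for (int j = 0; j < training[k].cols; j++)
            {
                cv::Vec3b p = training[k].at<cv::Vec3b>(i, j);
                int b = (p[0] / step) * bins * bins
                      + (p[1] / step) * bins
                      + (p[2] / step);
                hist[b].first++;
            }

    // keep the paletteSize most frequent bins; palette color = bin center
    std::sort(hist.rbegin(), hist.rend());
    std::vector<cv::Vec3b> palette;
    for (int n = 0; n < paletteSize; n++)
    {
        int b = hist[n].second;
        palette.push_back(cv::Vec3b((uchar)((b / (bins * bins)) * step + step / 2),
                                    (uchar)(((b / bins) % bins) * step + step / 2),
                                    (uchar)((b % bins) * step + step / 2)));
    }
    return palette;
}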

Update: For example, 32 colors is 5 bits, so assign 3 bits to the luminance channel and 1 bit to each color channel. To do this quantization, do integer division of the luminance channel by 2^8/2^3 = 32 and of each color channel by 2^8/2^1 = 128. Now there are only 8 different luminance values and 2 different values per color channel. Recombine these values into a single integer by bit shifting or math (quantized color value = luminance*4 + color1*2 + color2), as sketched below.
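
A minimal sketch of that 32-color example, assuming a BGR input and YCrCb as the color space (the function name is mine):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// 3 bits of luminance (Y) and 1 bit each for Cr and Cb: 8*2*2 = 32 colors
cv::Mat quantizeYCrCb32(const cv::Mat& bgr)
{
    cv::Mat ycrcb;
    cv::cvtColor(bgr, ycrcb, cv::COLOR_BGR2YCrCb);

    cv::Mat indices(bgr.size(), CV_8UC1);
    for (int i = 0; i < ycrcb.rows; i++)
        for (int j = 0; j < ycrcb.cols; j++)
        {
            cv::Vec3b p = ycrcb.at<cv::Vec3b>(i, j);
            int y  = p[0] / 32;   // 2^8 / 2^3 = 32: 8 luminance levels
            int cr = p[1] / 128;  // 2^8 / 2^1 = 128: 2 levels
            int cb = p[2] / 128;  // 2 levels
            indices.at<uchar>(i, j) = (uchar)(y * 4 + cr * 2 + cb);
        }
    return indices; // one of 32 palette indices per pixel
}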




Answer 7:


Why don't you just do matrix division and multiplication? Values will be rounded automatically.

Pseudocode:

1. Convert your channels to unsigned char (CV_8UC3).
2. Divide by total colors / desired colors: Mat = Mat / (256/64).
3. Multiply by the same number: Mat = Mat * 4.

Done. Each channel now contains only 64 values.
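
A minimal sketch of the above (the file names are placeholders):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg"); // loads as CV_8UC3
    img = img / 4;   // 256/64 = 4: each channel drops to 64 levels
    img = img * 4;   // scale back to the 0-255 range
    cv::imwrite("quantized.jpg", img);
    return 0;
}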



Source: https://stackoverflow.com/questions/5906693/how-to-reduce-the-number-of-colors-in-an-image-with-opencv
