OpenCV: how to categorize GMM calculated probs

Question


I am using the OpenCV EM algorithm to obtain GMM models, with the help of the example code in the OpenCV documentation, as follows:

cv::Mat capturedFrame;
const int N = 5; 
const int N1 = (int)sqrt((double)N); // used below to lay the cluster means out on a grid
int nsamples = 100;
cv::Mat samples ( nsamples, 2, CV_32FC1 );
samples = samples.reshape ( 2, 0 );
cv::Mat sample ( 1, 2, CV_32FC1 );
CvEM em_model;
CvEMParams params;

for ( int i = 0; i < N; i++ )
{           
// form the training samples
cv::Mat samples_part = samples.rowRange ( i*nsamples/N, (i+1)*nsamples/N);
cv::Scalar mean (((i%N)+1)*capturedFrame.rows/(N1+1),((i/N1)+1)*capturedFrame.rows/(N1+1));
cv::Scalar sigma (30,30);
cv::randn(samples_part,mean,sigma);                     

}
samples = samples.reshape ( 1, 0 );
//initialize model parameters
params.covs         = NULL;
params.means        = NULL;
params.weights      = NULL;
params.probs        = NULL;
params.nclusters    = N;
params.cov_mat_type = CvEM::COV_MAT_SPHERICAL;
params.start_step   = CvEM::START_AUTO_STEP;
params.term_crit.max_iter = 300;
params.term_crit.epsilon  = 0.1;
params.term_crit.type   = CV_TERMCRIT_ITER|CV_TERMCRIT_EPS;     
//cluster the data
cv::Mat labels;
em_model.train ( samples, cv::Mat(), params, &labels );

Being new to GMM and OpenCV, I now have some questions:

Firstly, after running the above code, I can get the probs like this:

cv::Mat probs = em_model.getProbs();

How can I then find the models that contain the most and the fewest elements, that is, the biggest and smallest models?

Secondly, I use only 100 samples here, as in the OpenCV example code, but I am reading a frame of size 600x800 and want to sample every pixel in it, which is 480000 samples. Training already takes about 10 ms for these 100 samples, so it would be far too slow if I set:

int nsamples = 480000;

Am I on the right track here?


Answer 1:


If I understand your question correctly, what you call your "biggest" and "smallest" models refers to the weights of each Gaussian in the mixture. You can get the weights associated with the Gaussians using EM::getWeights.
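
For instance, here is a minimal sketch of picking out the heaviest and lightest components, assuming the legacy CvEM wrapper used in the question also exposes a getWeights() accessor (the non-legacy cv::EM class has the equivalent EM::getWeights):

cv::Mat weights = em_model.getWeights();   // 1 x nclusters row of mixture weights
double minW, maxW;
cv::Point minLoc, maxLoc;
cv::minMaxLoc ( weights, &minW, &maxW, &minLoc, &maxLoc );
int smallestModel = minLoc.x;   // index of the gaussian with the lowest weight
int biggestModel  = maxLoc.x;   // index of the gaussian with the highest weight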

Concerning the second question: if you train your model using 480000 samples instead of 100, yes, it will definitely take longer. Whether it is "too slow" depends on your requirements. EM is a classification model, so what is usually done is to train the model once on a sufficient number of samples. This is a long process, but it is usually done offline. Then you can use the model to predict new samples, i.e. get the probabilities associated with new input samples. When you call the getProbs() function, you get the probabilities associated with your training samples. If you want probabilities for unknown samples, typically the pixels of your video frame, call the predict function.
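
As a rough sketch, assuming the legacy CvEM::predict overload that takes a cv::Mat sample plus an optional output matrix of per-component probabilities, and assuming the model was trained on some 2-D feature per pixel (swap in whatever feature you actually train on):

cv::Mat sample ( 1, 2, CV_32FC1 );
cv::Mat pixelProbs;
for ( int y = 0; y < capturedFrame.rows; y++ )
{
    for ( int x = 0; x < capturedFrame.cols; x++ )
    {
        // hypothetical 2-D feature: the pixel position
        sample.at<float> ( 0, 0 ) = (float)x;
        sample.at<float> ( 0, 1 ) = (float)y;
        float component = em_model.predict ( sample, &pixelProbs );
        // 'component' is the index of the most probable gaussian for this sample,
        // 'pixelProbs' its posterior probability under each of the N components
    }
}

Predicting each pixel this way only evaluates the already trained mixture, so it is far cheaper than re-training on 480000 samples; that is the point of keeping the slow training stage offline.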



Source: https://stackoverflow.com/questions/12909343/opencv-how-to-categorize-gmm-calculated-probs
