feature-extraction

How to save feature value from histogram of LBP image in Matlab?

…衆ロ難τιáo~ posted on 2019-12-02 12:24:55
I'm using Local Binary Patterns (LBP) to extract features from a group of images (500 images in a Training folder and 100 images in a Test folder). I have extracted these features successfully, but I'm not sure whether they are saved correctly. Here is the part of the code that extracts the features:

```matlab
for x = 1:total_images
    % Specify image names with full path and extension
    full_name = fullfile(test_set, filenames(x).name);
    % Read images from Training folder
    I2 = imread(full_name);
    I3 = I2;
    m = size(I2,1);
    n = size(I2,2);
    for i = 2:m-1
        for j = 2:n-1
            c = I2(i,j);
            I3(i-1,j-1) = I2(i-1,j-1) > c;
            I3(i-1,j)
```
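The same basic 3x3 LBP pass and its per-image histogram can be sketched in NumPy (a hedged equivalent of the MATLAB loop, not the asker's code; the full 8-neighbour bit packing is assumed, since the snippet is cut off):

```python
import numpy as np

def lbp_histogram(img):
    """3x3 LBP: threshold the 8 neighbours of each interior pixel against
    the centre pixel, pack the bits into a 0-255 code, then histogram it."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # centre pixels
    # neighbour offsets in clockwise order; position in the list = bit weight
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (di, dj) in enumerate(shifts):
        nb = img[1 + di:img.shape[0] - 1 + di, 1 + dj:img.shape[1] - 1 + dj]
        code += (nb >= c).astype(np.int32) << bit
    # fixed-length 256-bin feature vector for this image
    return np.bincount(code.ravel(), minlength=256)
```

Stacking one such 256-bin row per image (e.g. with np.vstack) gives a total_images-by-256 feature matrix, which is the shape a classifier or a save-to-disk step would normally expect.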

How to improve the LBP operator by reducing feature dimension

大城市里の小女人 posted on 2019-12-02 10:14:09
I am using LBP with MATLAB for feature extraction, but the accuracy is too low. How can I reduce the number of feature bins in LBP? Many thanks.

Use the pcares function to do that. pcares stands for PCA residuals:

```matlab
[residuals, reconstructed] = pcares(X, ndim);
```

residuals returns the residuals obtained by retaining ndim principal components of the n-by-p matrix X. X is the data matrix, i.e. the matrix that contains your data; rows of X correspond to observations and columns to variables. ndim is a scalar and must be less than or equal to p. residuals is a matrix of the same size as X. reconstructed will
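For reference, pcares's behaviour can be mirrored in plain NumPy (a sketch under the assumption that, like pcares, the data are mean-centred before the decomposition):

```python
import numpy as np

def pca_residuals(X, ndim):
    """Return (residuals, reconstructed), mimicking MATLAB's pcares:
    rebuild X from its first ndim principal components and report
    what is left over."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)                 # centre the observations (rows)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # rank-ndim reconstruction of the centred data, mean added back
    recon = (U[:, :ndim] * s[:ndim]) @ Vt[:ndim, :] + mu
    return X - recon, recon
```

With each image's LBP histogram as a row of X, the scores U[:, :ndim] * s[:ndim] are the reduced ndim-dimensional features, which is one common way to cut the bin count.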

Extracting rows containing a specific value using MapReduce and Hadoop

别来无恙 posted on 2019-12-02 09:37:43
Question: I'm new to developing map-reduce functions. Suppose I have a CSV file containing four columns of data, for example:

```
101,87,65,67
102,43,45,40
103,23,56,34
104,65,55,40
105,87,96,40
```

Now I want to extract, say:

```
40 102
40 104
40 105
```

since those rows contain 40 in the fourth column. How do I write the map-reduce function?

Answer 1: Basically, the WordCount example resembles very well what you are trying to achieve. Instead of initializing the count for each word, you should have a condition to check whether the tokenized String has
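A hedged sketch of the same filtering idea, written as a Hadoop Streaming-style mapper in Python rather than the Java WordCount variant the answer describes (the target value 40 and the fourth-column position are taken from the question):

```python
TARGET = "40"      # value to look for in the fourth column
COLUMN = 3         # zero-based index of that column

def map_line(line):
    """Emit 'value<TAB>row_id' for rows whose fourth column matches."""
    fields = line.strip().split(",")
    if len(fields) > COLUMN and fields[COLUMN] == TARGET:
        return f"{fields[COLUMN]}\t{fields[0]}"
    return None

SAMPLE = ["101,87,65,67", "102,43,45,40", "103,23,56,34",
          "104,65,55,40", "105,87,96,40"]

# in a real Hadoop Streaming job this loop would read sys.stdin instead
for line in SAMPLE:
    out = map_line(line)
    if out is not None:
        print(out)
```

With an identity reducer (or no reducer at all) this yields the 40 102 / 40 104 / 40 105 output; dropping the same condition into the Java WordCount Mapper's map() and writing to the context only on a match achieves the identical effect.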

Why is this Deprecation Warning halting code execution?

自作多情 posted on 2019-12-02 09:36:02
I tried to use the TfidfVectorizer and CountVectorizer from the scikit-learn package, but when I import them:

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
```

I get the following warning message:

```
/anaconda3/lib/python3.7/site-packages/sklearn/feature_extraction/text.py:17: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Mapping, defaultdict
```

My code stops running after this even though the message is just a warning, indicating that an error
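One plausible explanation (an assumption, since the rest of the post is cut off) is that a warnings filter somewhere in the environment escalates warnings to errors, which really does halt execution. The behaviour is easy to reproduce and to undo with the standard-library warnings module:

```python
import warnings

# with an "error" filter installed, a mere DeprecationWarning raises
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        warnings.warn("old ABC import", DeprecationWarning)
        halted = False
    except DeprecationWarning:
        halted = True          # execution stops, as described in the question

# resetting the filter lets the same warning pass harmlessly
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    warnings.warn("old ABC import", DeprecationWarning)  # silently ignored
```

Checking for a stray warnings.simplefilter("error") call, a PYTHONWARNINGS=error environment variable, or a -W error interpreter flag would be the first diagnostic step under this assumption.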

Black line in GLCM result

最后都变了- posted on 2019-12-02 05:21:20
It is the result of a GLCM matrix. What is the meaning of the black horizontal and vertical lines in the GLCM image? Are they a problem?

```matlab
N = numel(unique(img)); % img is uint8
glcm = graycomatrix(img, 'NumLevels', N);
imshow(glcm)
```

I suspect this is the problem: for the function graycomatrix, you have supplied a 'NumLevels' argument which is larger than the number of unique gray levels in your image. For instance, a 256-level (8-bit) image has at most 256 gray levels; asking for 1000 levels in the output means at least 744 levels will have no data! I.e. yes, this is a problem. You can check how many gray levels
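The diagnosis can be verified with a tiny NumPy co-occurrence counter (a sketch, not the MATLAB graycomatrix itself): any gray level that never occurs in the image produces an all-zero row and column, which is exactly what renders as a black line.

```python
import numpy as np

def glcm(img, num_levels, offset=(0, 1)):
    """Count co-occurrences of gray-level pairs (i, j) at the given offset
    (the default (0, 1) is the horizontal right-neighbour relation)."""
    m = np.zeros((num_levels, num_levels), dtype=int)
    di, dj = offset
    h, w = img.shape
    for i in range(h - di):
        for j in range(w - dj):
            m[img[i, j], img[i + di, j + dj]] += 1
    return m

# image that uses levels 0, 1 and 3 but never level 2
img = np.array([[0, 1, 3],
                [3, 1, 0]])
g = glcm(img, num_levels=4)
missing = ~np.any(g, axis=0) & ~np.any(g, axis=1)   # levels with no data
```

Here missing is True only at level 2, the empty row/column; choosing NumLevels as numel(unique(img)), as the snippet in the question does, removes those empty rows and columns.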

Extracting rows containing a specific value using MapReduce and Hadoop

流过昼夜 posted on 2019-12-02 03:26:17
I'm new to developing map-reduce functions. Suppose I have a CSV file containing four columns of data, for example:

```
101,87,65,67
102,43,45,40
103,23,56,34
104,65,55,40
105,87,96,40
```

Now I want to extract, say:

```
40 102
40 104
40 105
```

since those rows contain 40 in the fourth column. How do I write the map-reduce function?

Serhiy: Basically, the WordCount example resembles very well what you are trying to achieve. Instead of initializing the count for each word, you should have a condition to check whether the tokenized String has the required value, and only in that case do you write to the context. This will work, since the Mapper will receive

Confusion in different HOG codes

天大地大妈咪最大 posted on 2019-12-01 18:38:43
I have downloaded three different HOG codes, using an image of 64x128.

1) Using the MATLAB function extractHOGFeatures:

```matlab
[hog, vis] = extractHOGFeatures(img, 'CellSize', [8 8]);
```

The size of hog is 3780. How it is calculated: the HOG feature length, N, is based on the image size and the function parameter values:

```matlab
N = prod([BlocksPerImage, BlockSize, NumBins])
BlocksPerImage = floor((size(I)./CellSize - BlockSize)./(BlockSize - BlockOverlap) + 1)
```

2) The second HOG function is downloaded from here. The same image is used:

```matlab
H = hog( double(rgb2gray(img)), 8, 9 );
% I - [mxn] color or grayscale input image (must
```
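The 3780 figure can be checked by evaluating the documented formula for a 64x128 image; BlockSize [2 2], BlockOverlap [1 1] and 9 bins are assumptions taken from the extractHOGFeatures defaults:

```python
import math

image_size = (128, 64)      # rows, cols of the 64x128 image
cell_size = (8, 8)
block_size = (2, 2)         # cells per block (assumed MATLAB default)
block_overlap = (1, 1)      # cells of overlap (assumed MATLAB default)
num_bins = 9

# BlocksPerImage = floor((size(I)./CellSize - BlockSize)./(BlockSize - BlockOverlap) + 1)
blocks = [
    math.floor((dim / cell - blk) / (blk - ov) + 1)
    for dim, cell, blk, ov in zip(image_size, cell_size, block_size, block_overlap)
]
# blocks == [15, 7]: 15 block positions down the image, 7 across
n = blocks[0] * blocks[1] * block_size[0] * block_size[1] * num_bins
print(n)  # 3780, matching size(hog)
```

So the length is 105 blocks x 4 cells per block x 9 bins per cell = 3780 under these defaults.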

Choosing/Normalizing HoG parameters for object detection?

廉价感情. posted on 2019-11-30 23:47:36
I'm using HoG features for object detection via classification. I'm confused about how to deal with HoG feature vectors of different lengths. I've trained my classifier using training images that all have the same size. Now, I'm extracting regions from my image on which to run the classifier - say, using the sliding windows approach. Some of the windows that I extract are a lot bigger than the size of images the classifier was trained on. (It was trained on the smallest possible size of the object that might be expected in test images). The problem is, when the windows I need to classify are
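A common way out of this mismatch (an assumption about the intended fix, since the question is cut off) is to resize every extracted window to the classifier's training size before computing HOG, so the descriptor length is always the same. A minimal sketch with a nearest-neighbour resize in NumPy; the training size (128, 64) is a hypothetical choice:

```python
import numpy as np

TRAIN_SIZE = (128, 64)   # rows, cols the classifier was trained on (assumed)

def resize_nn(window, out_shape):
    """Nearest-neighbour resize; crude, but enough to normalise geometry."""
    rows = np.arange(out_shape[0]) * window.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * window.shape[1] // out_shape[1]
    return window[np.ix_(rows, cols)]

def sliding_windows(img, win_shape, step):
    """Yield windows already normalised to TRAIN_SIZE, so every HOG vector
    computed from them has the same length."""
    h, w = win_shape
    for i in range(0, img.shape[0] - h + 1, step):
        for j in range(0, img.shape[1] - w + 1, step):
            yield resize_nn(img[i:i + h, j:j + w], TRAIN_SIZE)
```

Every yielded window then has shape (128, 64) regardless of the size of the region it came from, which sidesteps the variable-length feature-vector problem entirely.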

Problems during skeletonization of an image for extracting contours

南笙酒味 posted on 2019-11-30 23:06:54
I found this code to get a skeletonized image. I have a circle image ( https://docs.google.com/file/d/0ByS6Z5WRz-h2RXdzVGtXUTlPSGc/edit?usp=sharing ).

```python
import cv2
import numpy as np

img = cv2.imread(nomeimg, 0)
size = np.size(img)
skel = np.zeros(img.shape, np.uint8)
ret, img = cv2.threshold(img, 127, 255, 0)
element = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
done = False
while not done:
    eroded = cv2.erode(img, element)
    temp = cv2.dilate(eroded, element)
    temp = cv2.subtract(img, temp)
    skel = cv2.bitwise_or(skel, temp)
    img = eroded.copy()
    zeros = size - cv2.countNonZero(img)
    if zeros == size:
        done = True
print("skel")
print
```
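The loop in that snippet computes a classic morphological skeleton: at each step it keeps the pixels of img that vanish under an opening (erode then dilate), accumulates them into skel, then replaces img with its erosion, until nothing is left. The same algorithm in plain NumPy booleans (a sketch with a hand-rolled 3x3 cross erosion standing in for the cv2 calls):

```python
import numpy as np

def erode(b):
    # 3x3 cross erosion: a pixel survives only if it and its 4 neighbours
    # are all set; the border is treated as background via padding
    p = np.pad(b, 1, constant_values=False)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def dilate(b):
    # 3x3 cross dilation: a pixel is set if it or any 4-neighbour is set
    p = np.pad(b, 1, constant_values=False)
    return (p[1:-1, 1:-1] | p[:-2, 1:-1] | p[2:, 1:-1]
            | p[1:-1, :-2] | p[1:-1, 2:])

def skeletonize(img):
    """Morphological skeleton: accumulate img minus its opening while
    eroding img away, same loop structure as the OpenCV version."""
    img = img.copy()
    skel = np.zeros_like(img)
    while img.any():
        eroded = erode(img)
        opened = dilate(eroded)           # opening of the current img
        skel |= img & ~opened             # pixels removed by the opening
        img = eroded
    return skel
```

Running it on a filled square, for example, leaves a sparse skeleton that is a strict subset of the original foreground, which is the property the contour-extraction step relies on.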

Choosing/Normalizing HoG parameters for object detection?

喜夏-厌秋 posted on 2019-11-30 17:22:01
Question: I'm using HoG features for object detection via classification. I'm confused about how to deal with HoG feature vectors of different lengths. I've trained my classifier using training images that all have the same size. Now, I'm extracting regions from my image on which to run the classifier - say, using the sliding-windows approach. Some of the windows that I extract are a lot bigger than the size of images the classifier was trained on. (It was trained on the smallest possible size of the