feature-extraction

Neighboring gray-level dependence matrix (NGLDM) in MATLAB

馋奶兔 submitted on 2019-11-30 15:50:20
I would like to calculate a couple of texture features (namely: small/large number emphasis, number non-uniformity, second moment, and entropy). These can be computed from the neighboring gray-level dependence matrix (NGLDM). I'm struggling with understanding and implementing this, and there is very little publicly available information on the method. According to this paper: this matrix takes the form of a two-dimensional array Q, where Q(i,j) can be considered as frequency counts of grayness variation of a processed image. It has a similar meaning to a histogram of an image. This array is N_g × N_r, where N_g is…
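A minimal Python sketch of the construction (the question asks for MATLAB, but the logic transfers directly). The neighborhood distance d, the coarseness threshold a, and the j+1 shift in the feature formulas are assumptions here, since indexing conventions vary between papers:

import numpy as np

def ngldm(img, n_levels=8, d=1, a=0):
    """NGLDM sketch: Q[i, j] counts pixels of quantized gray level i
    having exactly j neighbors (within Chebyshev distance d) whose
    level differs from the center by at most a."""
    q = np.floor(img.astype(float) / (img.max() + 1) * n_levels).astype(int)
    rows, cols = q.shape
    max_dep = (2 * d + 1) ** 2 - 1              # neighbors per pixel
    Q = np.zeros((n_levels, max_dep + 1), dtype=int)
    for r in range(d, rows - d):                # interior pixels only
        for c in range(d, cols - d):
            win = q[r - d:r + d + 1, c - d:c + d + 1]
            dep = np.sum(np.abs(win - q[r, c]) <= a) - 1  # exclude center
            Q[q[r, c], dep] += 1
    return Q

def ngldm_features(Q):
    """The features the question asks about; j is shifted by one to
    avoid dividing by zero on the zero-dependence column."""
    S = Q.sum()
    j1 = np.arange(Q.shape[1]) + 1.0
    sne = np.sum(Q / j1 ** 2) / S               # small number emphasis
    lne = np.sum(Q * j1 ** 2) / S               # large number emphasis
    nnu = np.sum(Q.sum(axis=0) ** 2) / S        # number non-uniformity
    sm = np.sum(Q.astype(float) ** 2) / S ** 2  # second moment
    p = Q[Q > 0] / S
    ent = -np.sum(p * np.log2(p))               # entropy
    return {'SNE': sne, 'LNE': lne, 'NNU': nnu,
            'second_moment': sm, 'entropy': ent}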

How to use SIFT/SURF as features for a machine learning algorithm?

无人久伴 submitted on 2019-11-30 07:14:13
I'm working on an automatic image annotation problem in which I'm trying to associate tags with images. For that I'm trying to use SIFT features for learning. The problem is that the SIFT features are a set of keypoints, each of which has a descriptor array, and the number of keypoints is also huge. How many should I use, and how do I feed them to my learning algorithm, which typically accepts only one-dimensional features?

You can represent a single SIFT descriptor as a "visual word", which is one number, and use it as SVM input; I think that is what you need. It is usually done by k-means clustering. This method is called "bag-of-words" and…
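A minimal sketch of that bag-of-visual-words pipeline, assuming descriptor_sets is a list of per-image SIFT descriptor arrays (e.g. from OpenCV's cv2.SIFT_create().detectAndCompute); the vocabulary size n_words=100 is an arbitrary choice:

import numpy as np
from sklearn.cluster import KMeans

def bow_features(descriptor_sets, n_words=100):
    """Cluster all local descriptors into n_words 'visual words', then
    describe each image by a normalized histogram of word counts,
    giving one fixed-length 1-D vector per image."""
    all_desc = np.vstack(descriptor_sets)        # stack (n_i, 128) arrays
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    hists = []
    for desc in descriptor_sets:
        words = kmeans.predict(desc)             # one word id per keypoint
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / h.sum())                # normalize by keypoint count
    return np.array(hists), kmeans               # features + codebook

The resulting array can be fed straight to an SVM, since every image now has the same feature length regardless of how many keypoints it produced.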

Combining feature extraction classes in scikit-learn

六眼飞鱼酱① submitted on 2019-11-30 06:40:45
I'm using sklearn.pipeline.Pipeline to chain feature extractors and a classifier. Is there a way to combine multiple feature extraction classes (for example the ones from sklearn.feature_extraction.text) in parallel and join their output? My code right now looks as follows:

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', SGDClassifier()),
])

It results in the following: vect -> tfidf -> clf. I want to be able to specify a pipeline that looks as follows:

vect1 -> tfidf1 \
                 -> clf
vect2 -> tfidf2 /

This has been implemented recently in the master branch of…
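FeatureUnion is the scikit-learn class that implements this topology; a minimal sketch of the fan-in pipeline above, with the ngram_range settings chosen arbitrarily so the two branches actually differ:

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier

# The two vectorizer -> tfidf branches run in parallel; FeatureUnion
# concatenates their outputs before the classifier sees them.
pipeline = Pipeline([
    ('features', FeatureUnion([
        ('branch1', Pipeline([
            ('vect1', CountVectorizer(ngram_range=(1, 1))),
            ('tfidf1', TfidfTransformer()),
        ])),
        ('branch2', Pipeline([
            ('vect2', CountVectorizer(ngram_range=(2, 2))),
            ('tfidf2', TfidfTransformer()),
        ])),
    ])),
    ('clf', SGDClassifier()),
])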

Feature Selection and Reduction for Text Classification

*爱你&永不变心* submitted on 2019-11-29 18:41:46
I am currently working on a project: a simple sentiment analyzer, with 2 and 3 classes in separate cases. I am using a corpus that is pretty rich in unique words (around 200,000). I used the bag-of-words method for feature selection and, to reduce the number of unique features, eliminated those below a threshold frequency of occurrence. The final set of features includes around 20,000 features, which is actually a 90% decrease, but not enough for the intended test-prediction accuracy. I am using LibSVM and SVM-light in turn for training and…
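One common next step beyond a raw frequency cutoff is supervised feature selection, e.g. chi-squared scoring against the class labels. A minimal scikit-learn sketch; the question uses LibSVM/SVM-light directly, so the LinearSVC stand-in and the min_df and k values here are assumptions:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# min_df drops rare terms (the frequency threshold the question already
# applies); chi2 then keeps the k terms most associated with the labels.
pipeline = Pipeline([
    ('vect', CountVectorizer(min_df=5)),
    ('select', SelectKBest(chi2, k=2000)),
    ('clf', LinearSVC()),
])
# pipeline.fit(train_texts, train_labels)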

Getting feature names from within a FeatureUnion + Pipeline

五迷三道 submitted on 2019-11-29 16:56:33
Question: I am using a FeatureUnion to join features found from the title and description of events:

union = FeatureUnion(
    transformer_list=[
        # Pipeline for pulling features from the event's title
        ('title', Pipeline([
            ('selector', TextSelector(key='title')),
            ('count', CountVectorizer(stop_words='english')),
        ])),
        # Pipeline for standard bag-of-words model for description
        ('description', Pipeline([
            ('selector', TextSelector(key='description_snippet')),
            ('count', TfidfVectorizer(stop_words='english')),
        ])),
…
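Pipeline objects do not expose their inner vectorizer's vocabulary directly, which is the usual sticking point here. A hedged sketch of one way to collect the names after fitting; TextSelector is the asker's custom transformer, so the stand-in below is a guess at its behavior, and get_feature_names_out is the modern scikit-learn spelling (older versions used get_feature_names):

from sklearn.base import BaseEstimator, TransformerMixin

class TextSelector(BaseEstimator, TransformerMixin):
    """Minimal stand-in: pick one text column of a DataFrame."""
    def __init__(self, key):
        self.key = key
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.key]

def union_feature_names(union):
    """Walk the fitted union's branches and prefix each vectorizer's
    vocabulary with its branch name."""
    names = []
    for branch_name, branch in union.transformer_list:
        vect = branch.named_steps['count']      # the vectorizer step above
        names += ['%s__%s' % (branch_name, t)
                  for t in vect.get_feature_names_out()]
    return names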

OpenCV - Detect hand-drawn shapes

丶灬走出姿态 submitted on 2019-11-28 22:08:11
Could OpenCV detect geometric shapes that are drawn by hand, as below? The shape can be a rectangle, triangle, circle, curve, arc, polygon, ... I am going to develop an Android application which detects these shapes.

Well, I tried it in a hurry. Normally you would need to skeletonize the input. Anyway, you can reason about the shapes based on their points: a square normally has 4, a triangle 3, etc. (Images in the original answer: effort results, Canny results, polygonal approximation.) Console output:

contour points: 11
contour points: 6
contour points: 4
contour points: 5

Here is the code:

Mat src = imread("WyoKM.png");
Mat src_gray…
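A minimal Python translation of that approach (the answer's own code is C++); it assumes OpenCV 4's two-value findContours return, and the filename and thresholds are placeholders:

import cv2

img = cv2.imread('drawing.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                 # edge map of the sketch
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # Tolerance proportional to perimeter; 0.02 is a common default
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, True), True)
    n = len(approx)
    if n == 3:
        label = 'triangle'
    elif n == 4:
        label = 'rectangle'
    elif n > 8:
        label = 'circle-ish'                     # many vertices: curved shape
    else:
        label = '%d-gon' % n
    print('contour points: %d -> %s' % (n, label))

Hand-drawn strokes are rarely clean, so in practice the Canny thresholds and the 0.02 approximation factor need tuning per input.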

Convolutional Neural Network (CNN) for Audio [closed]

雨燕双飞 submitted on 2019-11-28 15:48:50
I have been following the tutorials on DeepLearning.net to learn how to implement a convolutional neural network that extracts features from images. The tutorials are well explained, easy to understand and follow. I want to extend the same CNN to extract multi-modal features from videos (images + audio) at the same time. I understand that video input is nothing but a sequence of images (pixel intensities) displayed over a period of time (e.g. 30 FPS) associated with audio. However, I don't really understand what audio is, how it works, or how it is broken down to be fed into the network. I have…
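The usual answer is to convert the waveform into a time-frequency representation (a spectrogram, or a mel-scaled one) and treat it as a single-channel image for the CNN. A minimal sketch using librosa, which the question does not mention; the filename and all parameters are placeholders:

import librosa

# Audio on disk is a 1-D waveform: amplitude sampled over time.
y, sr = librosa.load('clip.wav', sr=22050)       # samples, sample rate
mel = librosa.feature.melspectrogram(y=y, sr=sr,
                                     n_fft=2048,      # STFT window
                                     hop_length=512,  # window step
                                     n_mels=64)       # frequency bins
log_mel = librosa.power_to_db(mel)               # 2-D array: (64, n_frames)
print(log_mel.shape)  # feed this to the CNN exactly like a grayscale image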