Simplest feature selection algorithm


Question


I am trying to create my own simple feature selection algorithm. The data set that I am going to work with is here (a very famous data set). Can someone give me a pointer on how to do this?

I am planning to write a feature ranking algorithm for text classification. This is for sentiment analysis of movie reviews, classifying them as either positive or negative.

So my question is how to write a simple feature selection method for a text data set.


Answer 1:


Feature selection methods are a big topic. You can start with the following:

  1. Chi square

  2. Mutual information

  3. Term frequency

etc. If you have time, read the paper "A Comparative Study on Feature Selection in Text Categorization"; it will help you a lot.

The actual implementation depends on how you pre-process the data. Basically it comes down to keeping counts, whether in a hash table or a database.
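As a concrete starting point, here is a minimal sketch (not from the original answer) that ranks bag-of-words features by chi-square and mutual information with scikit-learn; the tiny corpus and labels are placeholders:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import chi2, mutual_info_classif

    docs = ["a great movie", "a terrible movie", "great acting", "terrible plot"]
    labels = [1, 0, 1, 0]   # 1 = positive review, 0 = negative review

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)            # term-frequency counts
    terms = vectorizer.get_feature_names_out()

    chi2_scores, _ = chi2(X, labels)              # chi-square statistic per term
    mi_scores = mutual_info_classif(X, labels, discrete_features=True)

    # Print the terms from most to least informative according to chi-square.
    ranked = sorted(zip(terms, chi2_scores, mi_scores), key=lambda t: -t[1])
    for term, c, m in ranked:
        print(f"{term}: chi2={c:.3f}  mutual_info={m:.3f}")

In a real setting you would keep only the top-k terms per criterion (e.g. with SelectKBest) before training the classifier.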




Answer 2:


Random feature subsets work well when you are then building ensembles on top of them. This is known as feature bagging.
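For illustration, a minimal sketch of feature bagging using scikit-learn's BaggingClassifier, where each ensemble member is trained on a random subset of the features (the random-subspace method); the synthetic data is a placeholder:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=50, random_state=0)

    ensemble = BaggingClassifier(
        DecisionTreeClassifier(),
        n_estimators=25,
        max_features=0.3,   # each tree sees a random 30% of the features
        random_state=0,
    )
    ensemble.fit(X, y)
    print("training accuracy:", ensemble.score(X, y))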




Answer 3:


Here's one option: Use pointwise mutual information. Your features will be tokens, and the information should be measured against the sentiment label. Be careful with frequent words (stop words), because in this type of task they may actually be useful.
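A minimal sketch of this idea, computing document-level pointwise mutual information between each token and the positive class over a toy placeholder corpus:

    import math
    from collections import Counter

    docs = [("a great movie", 1), ("a terrible movie", 0),
            ("great acting", 1), ("terrible plot", 0)]

    n_docs = len(docs)
    token_counts = Counter()    # number of documents containing the token
    joint_counts = Counter()    # documents containing the token AND labeled positive
    n_pos = sum(1 for _, label in docs if label == 1)

    for text, label in docs:
        for token in set(text.split()):
            token_counts[token] += 1
            if label == 1:
                joint_counts[token] += 1

    def pmi(token):
        p_token = token_counts[token] / n_docs
        p_pos = n_pos / n_docs
        p_joint = joint_counts[token] / n_docs
        if p_joint == 0:
            return float("-inf")    # token never appears in a positive review
        return math.log2(p_joint / (p_token * p_pos))

    for token in sorted(token_counts, key=pmi, reverse=True):
        print(token, round(pmi(token), 3))

Tokens with high PMI toward one class are kept as features; a symmetric score against the negative class can be computed the same way.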




Answer 4:


I currently use this approach:

Calculate the mean value and variance of the data for each class. A good feature candidate should have a small variance, and its mean value should differ from the mean values of the other classes.

Since I currently have fewer than 50 features, I select them manually. To automate this process, one could calculate the variance of the per-class mean values across all classes and give higher priority to features with a larger variance. Then, among those, prefer the features with smaller variance within each class.

Of course, this doesn't remove redundant features.
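A minimal sketch of the heuristic described above, ranking features by the variance of the per-class means divided by the average within-class variance (a Fisher-score-like ratio); the numeric data is a synthetic placeholder:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))        # 100 samples, 8 numeric features
    y = rng.integers(0, 2, size=100)     # two classes

    classes = np.unique(y)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    class_vars = np.array([X[y == c].var(axis=0) for c in classes])

    between = class_means.var(axis=0)    # spread of class means per feature
    within = class_vars.mean(axis=0)     # average within-class variance
    scores = between / (within + 1e-12)  # higher = more discriminative

    ranking = np.argsort(scores)[::-1]
    print("features ranked best to worst:", ranking)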



Source: https://stackoverflow.com/questions/5222731/simplest-feature-selection-algorithm
