XGBoost difference in train and test features after converting to DMatrix

我寻月下人不归 2021-01-03 05:19

Just wondering how the following case is possible:

    def fit(self, train, target):
        # Convert the training data to an XGBoost DMatrix; np.nan marks missing values
        xgtrain = xgb.DMatrix(train, label=target, missing=np.nan)
        # The snippet is truncated here in the original post; presumably the
        # model is trained next, e.g. something like xgb.train(params, xgtrain)
        self.model = xg
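
A quick way to see such a train/test difference is to compare the feature names each DMatrix records; a minimal standalone sketch (the toy data and the xgtest variable are hypothetical, not from the original post):

    import numpy as np
    import pandas as pd
    import xgboost as xgb

    train = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]})
    test = pd.DataFrame({"a": [5.0, 6.0]})  # column "b" is absent here

    xgtrain = xgb.DMatrix(train, label=[0, 1], missing=np.nan)
    xgtest = xgb.DMatrix(test, missing=np.nan)

    # A DMatrix built from a DataFrame keeps that DataFrame's column names
    print(xgtrain.feature_names)  # ['a', 'b']
    print(xgtest.feature_names)   # ['a']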


        
3 Answers
  •  日久生厌
    2021-01-03 05:59

    Another possibility is that a feature level appears exclusively in the training data and not in the test data. This happens most often after one-hot encoding, which produces a large matrix with one column per level of each categorical feature. In your case it looks like "f5232" exists either only in the training data or only in the test data. In either case, scoring the model will likely throw an error (in most implementations of ML packages; a concrete sketch follows this list) because:

    1. If exclusive to training: the model object will reference this feature in the model equation. While scoring, it will throw an error saying it cannot find this column.
    2. If exclusive to test (less likely, since test data is usually smaller than training data): the model object will NOT reference this feature in the model equation. While scoring, it will throw an error saying the data contains a column the model equation does not know about. This is also less likely because most implementations are cognizant of this case.
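
    To make case 1 concrete, here is a minimal sketch assuming the one-hot encoding was done separately per split with pd.get_dummies (the toy data and column names are hypothetical):

        import numpy as np
        import pandas as pd
        import xgboost as xgb

        train = pd.DataFrame({"city": ["london", "paris", "tokyo"],
                              "x": [1.0, 2.0, 3.0]})
        test = pd.DataFrame({"city": ["london", "paris"], "x": [4.0, 5.0]})

        # Encoding each split on its own is the mistake: "city_tokyo"
        # ends up only in the training matrix.
        train_ohe = pd.get_dummies(train, dtype=float)
        test_ohe = pd.get_dummies(test, dtype=float)

        booster = xgb.train({"objective": "binary:logistic"},
                            xgb.DMatrix(train_ohe, label=[0, 1, 0], missing=np.nan),
                            num_boost_round=2)

        try:
            booster.predict(xgb.DMatrix(test_ohe, missing=np.nan))
        except ValueError as err:
            print(err)  # feature_names mismatch: the model expects "city_tokyo"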

    Solutions:

    1. The best "automated" solution is to keep only those columns that are common to both the training and test data after one-hot encoding, as sketched after this list.
    2. For ad-hoc analysis, if you cannot afford to drop the feature level because of its importance, use stratified sampling to ensure that every level of the feature is distributed across both the training and test data.
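
    A minimal sketch of solution 1, continuing the hypothetical frames from the sketch above: keep the column intersection, or reindex the test set onto the training schema so absent levels become explicit zero columns:

        # Option A: keep only the columns common to both splits
        common = train_ohe.columns.intersection(test_ohe.columns)
        train_aligned, test_aligned = train_ohe[common], test_ohe[common]

        # Option B: force the test set onto the training columns; levels never
        # seen in the test data become all-zero columns
        test_reindexed = test_ohe.reindex(columns=train_ohe.columns, fill_value=0)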
