No speedup using XGBClassifier with GPU support

Submitted by 删除回忆录丶 on 2021-02-19 08:19:06

Question


In the following code, I try to search over different hyper-parameters of xgboost.

from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

param_test1 = {
 'max_depth': list(range(3, 10, 2)),
 'min_child_weight': list(range(1, 6, 2))
}
predictors = [x for x in train_data.columns if x not in ['target', 'id']]
gsearch1 = GridSearchCV(estimator=XGBClassifier(learning_rate =0.1, n_estimators=100, max_depth=5,
                                                min_child_weight=1, gamma=0, subsample=0.8, colsample_bytree=0.8,
                                                objective= 'binary:logistic', n_jobs=4, scale_pos_weight=1, seed=27, 
                                                kvargs={'tree_method':'gpu_hist'}),
                    param_grid=param_test1, scoring='roc_auc', n_jobs=4, iid=False, cv=5, verbose=2)
gsearch1.fit(train_data[predictors], train_data['target'])

Even though I pass kvargs={'tree_method':'gpu_hist'}, I get no speedup. According to nvidia-smi, the GPU is barely involved in the computation:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 0000:01:00.0      On |                  N/A |
|  0%   39C    P8    10W / 200W |    338MiB /  8112MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0       961    G   /usr/lib/xorg/Xorg                             210MiB |
|    0      1675    G   compiz                                         124MiB |
|    0      2359    G   /usr/lib/firefox/firefox                         2MiB |
+-----------------------------------------------------------------------------+

I have installed the GPU-supported xgboost using the following commands on Ubuntu:

$ git clone --recursive https://github.com/dmlc/xgboost
$ cd xgboost
$ mkdir build
$ cd build
$ cmake .. -DUSE_CUDA=ON
$ make -j

What could be the reason?


Answer 1:


I'd like to point out two things. First, for the installation of xgboost on Ubuntu, build with an explicit job count:

make -j4

Second, following up on Vivek's comment, check the 'tree_method' parameter against the documentation (a short sketch follows):

http://xgboost.readthedocs.io/en/latest/parameter.html
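
For illustration, a minimal sketch of that advice, assuming X_train and y_train are placeholders for the asker's training data: tree_method is passed as an ordinary keyword argument of XGBClassifier rather than wrapped in a kvargs dict.

from xgboost import XGBClassifier

# tree_method as a plain keyword argument; 'gpu_hist' selects the
# GPU histogram algorithm.
clf = XGBClassifier(
    n_estimators=100,
    learning_rate=0.1,
    objective='binary:logistic',
    tree_method='gpu_hist',
    seed=27,
)
clf.fit(X_train, y_train)  # X_train, y_train: assumed training data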




Answer 2:


Try adding the single parameter updater='grow_gpu' instead (see the sketch below).
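
A hedged sketch of that suggestion, assuming an older xgboost build in which the 'grow_gpu' updater exists and the sklearn wrapper forwards extra keyword arguments to the booster (X_train and y_train again stand in for the training data):

from xgboost import XGBClassifier

# 'grow_gpu' was the GPU tree updater in older xgboost releases; whether it
# is accepted here depends on the installed version, so verify it against
# the documentation of your build.
clf = XGBClassifier(n_estimators=100, updater='grow_gpu')
clf.fit(X_train, y_train)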




Answer 3:


I know it's a bit late, but if CUDA is installed correctly, the following code should work:

Without GridSearch:

import xgboost

xgb = xgboost.XGBClassifier(n_estimators=200, tree_method='gpu_hist', predictor='gpu_predictor')
xgb.fit(X_train, y_train)

With GridSearch:

from sklearn.model_selection import GridSearchCV

params = {
        'max_depth': [3, 4, 5, 6, 7, 8, 10],
        'learning_rate': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3],
        'n_estimators': [50, 100, 200, 300, 500, 1000],
        # ... whatever other parameters ...
}
xgb = xgboost.XGBClassifier(tree_method='gpu_hist', predictor='gpu_predictor')
tuner = GridSearchCV(xgb, param_grid=params)
tuner.fit(X_train, y_train)

# Alternatively, tree_method and predictor can be included in the param grid.
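
A hedged sketch of that alternative, assuming the installed xgboost exposes tree_method and predictor through the sklearn set_params interface that GridSearchCV relies on:

# GPU settings searched as one-element lists inside the grid.
params = {
        'max_depth': [3, 5, 7],
        'tree_method': ['gpu_hist'],
        'predictor': ['gpu_predictor'],
}
tuner = GridSearchCV(xgboost.XGBClassifier(), param_grid=params)
tuner.fit(X_train, y_train)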


Source: https://stackoverflow.com/questions/46842708/no-speedup-using-xgbclassifier-with-gpu-support
