The simplest way to install and verify LightGBM with GPU support

Submitted anonymously (unverified) on 2019-12-03 00:22:01

The following was installed and tested successfully on Ubuntu 16.04 with Python 3.6.5.

1. Install system dependencies

sudo apt-get install --no-install-recommends git cmake build-essential libboost-dev libboost-system-dev libboost-filesystem-dev

2. Install the Python libraries

pip install setuptools wheel numpy scipy scikit-learn -U

3. Install LightGBM with GPU support

sudo pip3.6 install lightgbm --install-option=--gpu --install-option="--opencl-include-dir=/usr/local/cuda/include/" --install-option="--opencl-library=/usr/local/cuda/lib64/libOpenCL.so"
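
Before running the full benchmark in step 4, a quick smoke test can confirm that the build really has GPU support. The snippet below is a minimal sketch on synthetic data (the dataset and parameter choices are illustrative, not from the original post); a CPU-only build will raise a LightGBMError saying the GPU tree learner was not enabled.

import lightgbm as lgb
import numpy as np

# Tiny random binary-classification problem, just to exercise the GPU code path
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)
dtrain = lgb.Dataset(X, label=y)

params = {'objective': 'binary', 'device': 'gpu'}
gbm = lgb.train(params, dtrain, num_boost_round=5)
print('GPU build works')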

4. Test

First, download the test data and convert it to LibSVM format:

git clone https://github.com/guolinke/boosting_tree_benchmarks.git
cd boosting_tree_benchmarks/data
wget "https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz"
gunzip HIGGS.csv.gz
python higgs2libsvm.py
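
higgs2libsvm.py splits HIGGS.csv into LibSVM-format train and test files; the training script below reads data/higgs.train. If you want to verify the conversion first, here is a small sketch (the path is taken from the training script and may differ depending on where you run it):

from sklearn.datasets import load_svmlight_file

# HIGGS has 11,000,000 rows and 28 features in total; loading the full
# training split takes a while and several GB of memory.
X, y = load_svmlight_file('data/higgs.train')
print(X.shape, y.shape)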

Next, write the test script:

import lightgbm as lgb
import time

# GPU run
params = {'max_bin': 63,
          'num_leaves': 255,
          'learning_rate': 0.1,
          'tree_learner': 'serial',
          'task': 'train',
          'is_training_metric': 'false',
          'min_data_in_leaf': 1,
          'min_sum_hessian_in_leaf': 100,
          'ndcg_eval_at': [1, 3, 5, 10],
          'sparse_threshold': 1.0,
          'device': 'gpu',
          'gpu_platform_id': 0,
          'gpu_device_id': 0}

dtrain = lgb.Dataset('data/higgs.train')

t0 = time.time()
gbm = lgb.train(params, train_set=dtrain, num_boost_round=10,
                valid_sets=None, valid_names=None,
                fobj=None, feval=None, init_model=None,
                feature_name='auto', categorical_feature='auto',
                early_stopping_rounds=None, evals_result=None,
                verbose_eval=True,
                keep_training_booster=False, callbacks=None)
t1 = time.time()

print('gpu version elapsed time: {}'.format(t1 - t0))

# CPU run with the same settings, minus the GPU-specific keys
params = {'max_bin': 63,
          'num_leaves': 255,
          'learning_rate': 0.1,
          'tree_learner': 'serial',
          'task': 'train',
          'is_training_metric': 'false',
          'min_data_in_leaf': 1,
          'min_sum_hessian_in_leaf': 100,
          'ndcg_eval_at': [1, 3, 5, 10],
          'sparse_threshold': 1.0,
          'device': 'cpu'}

t0 = time.time()
gbm = lgb.train(params, train_set=dtrain, num_boost_round=10,
                valid_sets=None, valid_names=None,
                fobj=None, feval=None, init_model=None,
                feature_name='auto', categorical_feature='auto',
                early_stopping_rounds=None, evals_result=None,
                verbose_eval=True,
                keep_training_booster=False, callbacks=None)
t1 = time.time()

print('cpu version elapsed time: {}'.format(t1 - t0))
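
In the params above, gpu_platform_id and gpu_device_id select an OpenCL platform and device; 0/0 works on most single-GPU machines. If training picks the wrong device, you can list what OpenCL sees. A minimal sketch, assuming pyopencl is installed (pip install pyopencl):

import pyopencl as cl

# Print every OpenCL platform and device, with the indices LightGBM expects
for pid, platform in enumerate(cl.get_platforms()):
    print('platform', pid, platform.name)
    for did, device in enumerate(platform.get_devices()):
        print('  device', did, device.name)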

The test results show that the GPU version is indeed faster than the CPU version. (The original post included a screenshot of the timing output.)
