Question
I am trying to implement a custom objective function with XGBoost (in R, but I also use Python, so any feedback about Python is welcome too).
I wrote a function that returns the gradient and Hessian (it works properly), but when I run xgb.train it does not work. I then decided to print, at each round, the predictions, the gradient, and the Hessian, in that order. This is the output (it keeps repeating for as long as I let it run):
[1] 0 0 0 0 0 0 0 0 0 0
[1] -0.034106908 -0.017049339 -0.034106908 -0.034106908 -0.034106908 -0.034106908 -0.034106908 -0.004256162 -0.034106908 -0.008520554
[1] 0.003836107 0.004272548 0.003836107 0.003836107 0.003836107 0.003836107 0.003836107 0.004408935 0.003836107 0.004381658
[0] train-score:0 val-score:0
[1] 0 0 0 0 0 0 0 0 0 0
[1] -0.034106908 -0.017049339 -0.034106908 -0.034106908 -0.034106908 -0.034106908 -0.034106908 -0.004256162 -0.034106908 -0.008520554
[1] 0.003836107 0.004272548 0.003836107 0.003836107 0.003836107 0.003836107 0.003836107 0.004408935 0.003836107 0.004381658
[1] train-score:0 val-score:0
We can see that even though the gradient and Hessian look fine, the predictions do not change from one round to the next! I don't understand why that is the case. If anybody has run into the same problem or has an idea, please share.
Here is the code I use, although I don't think it is very telling:
library(xgboost)

reg <- xgb.train(data = xgb.DMatrix(data.matrix(train[1:10, feature.names]),
                                    label = train$Response[1:10]),
                 nrounds = 1000,
                 obj = custom_obj,        # custom objective: returns gradient and Hessian
                 feval = evalerror,       # custom evaluation metric
                 early.stop.round = 20,
                 maximize = TRUE,
                 watchlist = list(train = xgb.DMatrix(data.matrix(train[1:10, feature.names]),
                                                      label = train$Response[1:10]),
                                  val = xgb.DMatrix(data.matrix(cv[, feature.names]),
                                                    label = cv$Response)),
                 param = list(eta = 0.5,
                              max_depth = 10,
                              colsample_bytree = 0.7,
                              min_child_weight = 50,
                              subsample = 0.7,
                              base_score = 4))
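In case the definitions matter: custom_obj and evalerror follow the standard interface that xgb.train expects for obj and feval. Below is a simplified sketch with a stand-in squared-error objective; it is not my actual functions (those compute a problem-specific gradient and Hessian, matching the values printed above), it just shows the required signatures.

# Stand-in squared-error objective to illustrate the obj signature:
# takes the current predictions and the DMatrix, returns grad and hess.
custom_obj <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  grad <- preds - labels            # dL/dpred for L = 0.5 * (pred - label)^2
  hess <- rep(1, length(labels))    # d^2L/dpred^2
  list(grad = grad, hess = hess)
}

# Stand-in evaluation metric to illustrate the feval signature:
# returns a named metric value; negated MSE so that larger is better,
# which matches maximize = TRUE in the xgb.train call above.
evalerror <- function(preds, dtrain) {
  labels <- getinfo(dtrain, "label")
  list(metric = "score", value = -mean((preds - labels)^2))
}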
Source: https://stackoverflow.com/questions/34840960/implement-xgboost-custom-objective-function