Why am I getting good accuracy but low ROC AUC for multiple models?

Submitted by 别说谁变了你拦得住时间么 on 2020-04-18 05:39:08

Question


My dataset is 42542 x 14, and I am trying to build several models (logistic regression, KNN, random forest, decision trees) and compare their accuracies.

I get a high accuracy but low ROC AUC for every model.

About 85% of the samples have target variable = 1 and 15% have target variable = 0. I tried resampling to handle this imbalance, but I still get the same results.
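For reference, one common way to do that sampling in R is to down-sample the majority class before refitting; a minimal sketch, where lc_train and loan_status come from the glm call below and everything else (the seed, the exact down-sampling approach) is an assumption:

# Down-sample the majority class (loan_status == 1) so both classes have equal size.
set.seed(42)
pos <- lc_train[lc_train$loan_status == 1, ]
neg <- lc_train[lc_train$loan_status == 0, ]

# Keep only as many majority-class rows as there are minority-class rows.
pos_down <- pos[sample(nrow(pos), nrow(neg)), ]
lc_balanced <- rbind(pos_down, neg)

# Refit the logistic regression on the balanced training set.
fit_bal <- glm(loan_status ~ ., family = "binomial", data = lc_balanced)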

The coefficients for the glm are as follows:

glm(formula = loan_status ~ ., family = "binomial", data = lc_train)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.7617   0.3131   0.4664   0.6129   1.6734  

Coefficients:
                                     Estimate Std. Error z value Pr(>|z|)    
(Intercept)                        -8.264e+00  8.338e-01  -9.911  < 2e-16 ***
annual_inc                          5.518e-01  3.748e-02  14.721  < 2e-16 ***
home_own                            4.938e-02  3.740e-02   1.320 0.186780    
inq_last_6mths1                    -2.094e-01  4.241e-02  -4.938 7.88e-07 ***
inq_last_6mths2-5                  -3.805e-01  4.187e-02  -9.087  < 2e-16 ***
inq_last_6mths6-10                 -9.993e-01  1.065e-01  -9.380  < 2e-16 ***
inq_last_6mths11-15                -1.448e+00  3.510e-01  -4.126 3.68e-05 ***
inq_last_6mths16-20                -2.323e+00  7.946e-01  -2.924 0.003457 ** 
inq_last_6mths21-25                -1.399e+01  1.970e+02  -0.071 0.943394    
inq_last_6mths26-30                 1.039e+01  1.384e+02   0.075 0.940161    
inq_last_6mths31-35                -1.973e+00  1.230e+00  -1.604 0.108767    
loan_amnt                          -1.838e-05  3.242e-06  -5.669 1.43e-08 ***
purposecredit_card                  3.286e-02  1.130e-01   0.291 0.771169    
purposedebt_consolidation          -1.406e-01  1.032e-01  -1.362 0.173108    
purposeeducational                 -3.591e-01  1.819e-01  -1.974 0.048350 *  
purposehome_improvement            -2.106e-01  1.189e-01  -1.771 0.076577 .  
purposehouse                       -3.327e-01  1.917e-01  -1.735 0.082718 .  
purposemajor_purchase              -7.310e-03  1.288e-01  -0.057 0.954732    
purposemedical                     -4.955e-01  1.530e-01  -3.238 0.001203 ** 
purposemoving                      -4.352e-01  1.636e-01  -2.661 0.007800 ** 
purposeother                       -3.858e-01  1.105e-01  -3.493 0.000478 ***
purposerenewable_energy            -8.150e-01  3.036e-01  -2.685 0.007263 ** 
purposesmall_business              -9.715e-01  1.186e-01  -8.191 2.60e-16 ***
purposevacation                    -4.169e-01  2.012e-01  -2.072 0.038294 *  
purposewedding                      3.909e-02  1.557e-01   0.251 0.801751    
open_acc                           -1.408e-04  4.147e-03  -0.034 0.972923    
gradeB                             -4.377e-01  6.991e-02  -6.261 3.83e-10 ***
gradeC                             -5.858e-01  8.340e-02  -7.024 2.15e-12 ***
gradeD                             -7.636e-01  9.558e-02  -7.990 1.35e-15 ***
gradeE                             -7.832e-01  1.115e-01  -7.026 2.13e-12 ***
gradeF                             -9.730e-01  1.325e-01  -7.341 2.11e-13 ***
gradeG                             -1.031e+00  1.632e-01  -6.318 2.65e-10 ***
verification_statusSource Verified  6.340e-02  4.435e-02   1.429 0.152898    
verification_statusVerified         6.864e-02  4.400e-02   1.560 0.118739    
dti                                -4.683e-03  2.791e-03  -1.678 0.093373 .  
fico_range_low                      6.705e-03  9.292e-04   7.216 5.34e-13 ***
term                                5.773e-01  4.499e-02  12.833  < 2e-16 ***
emp_length2-4 years                 6.341e-02  4.911e-02   1.291 0.196664    
emp_length5-9 years                -3.136e-02  5.135e-02  -0.611 0.541355    
emp_length10+ years                -2.538e-01  5.185e-02  -4.895 9.82e-07 ***
delinq_2yrs2+                       5.919e-02  9.701e-02   0.610 0.541754    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 25339  on 29779  degrees of freedom
Residual deviance: 23265  on 29739  degrees of freedom
AIC: 23347

Number of Fisher Scoring iterations: 10

The confusion matrix for logistic regression is below (a sketch of the thresholding that typically produces such a matrix follows the output):

Confusion Matrix and Statistics

          Reference
Prediction     0     1
         0    32    40
         1  1902 10788

               Accuracy : 0.8478         
                 95% CI : (0.8415, 0.854)
    No Information Rate : 0.8485         
    P-Value [Acc > NIR] : 0.5842         

                  Kappa : 0.0213         

 Mcnemar's Test P-Value : <2e-16         

            Sensitivity : 0.016546       
            Specificity : 0.996306       
         Pos Pred Value : 0.444444       
         Neg Pred Value : 0.850118       
             Prevalence : 0.151544       
         Detection Rate : 0.002507       
   Detection Prevalence : 0.005642       
      Balanced Accuracy : 0.506426       

       'Positive' Class : 0    
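A matrix like the one above comes from hard 0/1 predictions. As a minimal sketch of that step, assuming the fitted model is stored in fit, a held-out set lc_test exists, and the cutoff was 0.5 (all assumptions; only the positive class "0" matches the output above):

library(caret)

# Predicted probability of loan_status == 1 from the fitted glm,
# turned into a hard 0/1 label with an assumed 0.5 cutoff.
prob <- predict(fit, newdata = lc_test, type = "response")
pred <- ifelse(prob >= 0.5, 1, 0)

# 'Positive' class set to "0" to match the output shown above.
confusionMatrix(factor(pred, levels = c(0, 1)),
                factor(lc_test$loan_status, levels = c(0, 1)),
                positive = "0")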

Is there any way I can improve the AUC?


Answer 1:


If someone presents a confusion matrix and talks about low ROC AUC, it usually means they have thresholded the predicted probabilities into hard 0/1 labels. ROC AUC does not require that: it is computed from the raw predicted probabilities, and evaluating it on those gives a much better picture of the model's ranking performance. If the aim is to obtain the best AUC, it also helps to use AUC as the evaluation metric during training and tuning, which generally gives better results than optimizing a different metric such as accuracy.
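In R, for the glm above, that looks something like the sketch below; pROC is one common choice, and the object names fit and lc_test are assumptions:

library(pROC)

# Raw predicted probabilities -- no 0/1 thresholding.
prob <- predict(fit, newdata = lc_test, type = "response")

# ROC AUC is computed directly from the probabilities.
roc_obj <- roc(response = lc_test$loan_status, predictor = prob)
auc(roc_obj)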



Source: https://stackoverflow.com/questions/59398046/why-am-i-getting-good-accuracy-but-low-roc-auc-for-multiple-models
