auc

How to calculate ROC_AUC score having 3 classes

帅比萌擦擦* submitted on 2021-02-11 13:30:04
Question: I have data with 3 class labels (0, 1, 2). I tried to make a ROC curve and did it by using the pos_label parameter: fpr, tpr, thresholds = metrics.roc_curve(Ytest, y_pred_prob, pos_label=0). By changing pos_label to 0, 1, 2 I get 3 graphs. Now I am having an issue calculating the AUC score. How can I average the 3 graphs, plot 1 graph from them, and then calculate the ROC AUC score? I get an error from metrics.roc_auc_score(Ytest, y_pred_prob): ValueError: multiclass format is not supported
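One way around that error: roc_auc_score needs the full (n_samples, 3) probability matrix together with multi_class="ovr"; averaging the three per-class curves by hand gives the same macro AUC. A minimal sketch, assuming the classifier exposes predict_proba and the classes are 0, 1, 2; the dataset and classifier below are placeholders, not the asker's model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import auc, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize

# Placeholder 3-class problem standing in for the asker's data.
X, y = make_classification(n_samples=500, n_classes=3, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)          # shape (n_samples, 3)

# Macro-averaged one-vs-rest AUC over the three classes in one call.
print(roc_auc_score(y_test, proba, multi_class="ovr", average="macro"))

# Equivalent per-class view: binarize the labels, compute one ROC per class,
# then average the three AUCs.
y_bin = label_binarize(y_test, classes=[0, 1, 2])
aucs = []
for k in range(3):
    fpr, tpr, _ = roc_curve(y_bin[:, k], proba[:, k])
    aucs.append(auc(fpr, tpr))
print(np.mean(aucs))
```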

Plot ROC curve using Keras

左心房为你撑大大i submitted on 2021-02-10 20:51:57
Question: I have a neural network model, and I am using KerasClassifier and then KFold for cross-validation. Now I am having issues plotting the ROC curve. I have tried a few snippets, but most of them give me an error that the multi-label format is not supported. I have the following code up to the point where my neural network produces the accuracy; I would be thankful if anyone could help me with the later part of the code. import numpy as np import pandas as pd from keras.layers import Dense, Input from keras.models
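One way to get per-fold ROC curves is to fit the Keras model inside the cross-validation loop yourself and call roc_curve on each held-out fold. A minimal sketch, assuming a binary target, numpy arrays X and y, and an illustrative build_model; this is not the asker's network.

```python
import matplotlib.pyplot as plt
from keras.layers import Dense
from keras.models import Sequential
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import StratifiedKFold

def build_model(n_features):
    # Placeholder binary classifier with a sigmoid output.
    model = Sequential([
        Dense(16, activation="relu", input_shape=(n_features,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def plot_cv_roc(X, y, n_splits=5):
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for i, (tr, te) in enumerate(cv.split(X, y)):
        model = build_model(X.shape[1])
        model.fit(X[tr], y[tr], epochs=20, batch_size=32, verbose=0)
        # Predicted probabilities for the positive class on the held-out fold.
        proba = model.predict(X[te]).ravel()
        fpr, tpr, _ = roc_curve(y[te], proba)
        plt.plot(fpr, tpr, label=f"fold {i} (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], "k--")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()
```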

Computing AUC and ROC curve from multi-class data in scikit-learn (sklearn)?

冷暖自知 submitted on 2021-02-07 14:25:18
Question: I am trying to use the scikit-learn module to compute AUC and plot ROC curves for the output of three different classifiers to compare their performance. I am very new to this topic, and I am struggling to understand how my data should be fed into the roc_curve and auc functions. For each item in the test set, I have the true value and the output of each of the three classifiers. The classes are ['N', 'L', 'W', 'T']. In addition, I have a confidence score for each value output
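For string labels like ['N', 'L', 'W', 'T'], one common approach is to one-hot encode the true labels with label_binarize and feed each class's column of confidence scores to roc_curve, then macro-average the per-class AUCs. A minimal sketch, assuming y_true holds the string labels and scores is an (n_samples, 4) array aligned with the class order for one classifier; both names are illustrative.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize

classes = ["N", "L", "W", "T"]

def per_class_auc(y_true, scores):
    # One-hot encode the true labels so each class gets its own binary column.
    y_bin = label_binarize(y_true, classes=classes)
    result = {}
    for k, cls in enumerate(classes):
        fpr, tpr, _ = roc_curve(y_bin[:, k], scores[:, k])
        result[cls] = auc(fpr, tpr)
    # Macro average over the four one-vs-rest AUCs.
    result["macro"] = float(np.mean([result[c] for c in classes]))
    return result
```

Repeating this for each of the three classifiers gives directly comparable per-class and macro AUCs.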

Why does roc_auc produce weird results in sklearn?

半世苍凉 submitted on 2021-01-29 05:44:56
Question: I have a binary classification problem where I use the following code to get my weighted average precision, weighted average recall, weighted average F-measure, and roc_auc. df = pd.read_csv(input_path+input_file) X = df[features] y = df[["gold_standard"]] clf = RandomForestClassifier(random_state=42, class_weight="balanced") k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0) scores = cross_validate(clf, X, y, cv=k_fold, scoring=('accuracy', 'precision_weighted',
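A frequent source of surprising roc_auc numbers in this setup is comparing the scorer's output with an AUC computed from hard predictions: the built-in "roc_auc" scorer in cross_validate ranks continuous scores (decision_function or predict_proba), not the predicted labels. Passing y as a 1-D array rather than a one-column DataFrame also avoids conversion warnings. A minimal sketch with a synthetic dataset standing in for the CSV; the scorer list follows the excerpt above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

# Synthetic imbalanced binary problem standing in for the original CSV;
# in the asker's code this would be X = df[features], y = df["gold_standard"].values.
X, y = make_classification(n_samples=400, weights=[0.8, 0.2], random_state=42)

clf = RandomForestClassifier(random_state=42, class_weight="balanced")
k_fold = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

scoring = ("accuracy", "precision_weighted", "recall_weighted",
           "f1_weighted", "roc_auc")
scores = cross_validate(clf, X, y, cv=k_fold, scoring=scoring)
for name in scoring:
    print(name, scores[f"test_{name}"].mean())
```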