precision-recall

Precision/recall for multiclass-multilabel classification

Submitted by 喜夏-厌秋 on 2019-11-29 22:58:30
I'm wondering how to calculate precision and recall for multiclass multilabel classification, i.e. classification where there are more than two labels and each instance can have multiple labels.

For multi-label classification you have two ways to go. First, consider the following notation: n is the number of examples, Y_i is the ground-truth label assignment of the i-th example, x_i is the i-th example, and h(x_i) is the set of labels predicted for the i-th example.

Example based: The metrics are computed in a per-datapoint manner. For each data point a score is computed, and then these scores are aggregated over
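The example-based averaging described above can be sketched as follows. This is a minimal illustration, assuming each example's labels are represented as a Python set; the function names are my own, not from a library:

```python
# Example-based precision/recall for multilabel classification (a sketch).
# y_true[i] is the ground-truth label set Y_i, y_pred[i] the predicted set h(x_i).

def example_based_precision(y_true, y_pred):
    # Per example: |Y_i ∩ h(x_i)| / |h(x_i)|, then averaged over all examples.
    scores = [len(t & p) / len(p) if p else 0.0 for t, p in zip(y_true, y_pred)]
    return sum(scores) / len(scores)

def example_based_recall(y_true, y_pred):
    # Per example: |Y_i ∩ h(x_i)| / |Y_i|, then averaged over all examples.
    scores = [len(t & p) / len(t) if t else 0.0 for t, p in zip(y_true, y_pred)]
    return sum(scores) / len(scores)

y_true = [{0, 1}, {2}, {0, 2}]
y_pred = [{0}, {1, 2}, {0, 2}]
print(example_based_precision(y_true, y_pred))  # (1/1 + 1/2 + 2/2) / 3 = 5/6
print(example_based_recall(y_true, y_pred))     # (1/2 + 1/1 + 2/2) / 3 = 5/6
```

The other way to go is label-based averaging (micro/macro over labels rather than over examples), which weights frequent and rare labels differently.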

How to interpret almost perfect accuracy and AUC-ROC but zero f1-score, precision and recall

Submitted by 半城伤御伤魂 on 2019-11-29 20:12:37
I am training a logistic classifier to separate two classes using Python scikit-learn. The data are extremely imbalanced (about 14300:1). I'm getting almost 100% accuracy and ROC-AUC, but 0% precision, recall, and F1 score. I understand that accuracy is usually not useful on very imbalanced data, but why is the ROC-AUC measure close to perfect as well?

from sklearn.metrics import roc_curve, auc
# Get ROC
y_score = classifierUsed2.decision_function(X_test)
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_score)
roc_auc = auc(false_positive_rate, true
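The core of this situation can be reproduced in a few lines. The sketch below uses illustrative numbers matching the ~14300:1 imbalance: a classifier that always predicts the majority class gets near-perfect accuracy while precision, recall, and F1 are all zero, because it never predicts a positive. ROC-AUC is different: it is computed from the ranking of continuous scores (like `decision_function` output), not from the 0.5-threshold labels, so it can remain high even when no positive is ever predicted.

```python
# Illustration: majority-class predictions on a 14300:1 imbalanced set.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0] * 14300 + [1]   # a single positive among 14301 samples
y_pred = [0] * 14301         # classifier always predicts the majority class

print(accuracy_score(y_true, y_pred))                    # ~0.99993
print(precision_score(y_true, y_pred, zero_division=0))  # 0.0 (no predicted positives)
print(recall_score(y_true, y_pred))                      # 0.0 (the positive is missed)
print(f1_score(y_true, y_pred, zero_division=0))         # 0.0
```

In other words, a high AUC says the model ranks positives above most negatives; it says nothing about whether the default threshold ever fires on a positive.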

How to calculate precision and recall in Keras

Submitted by 和自甴很熟 on 2019-11-28 17:27:25
I am building a multi-class classifier with Keras 2.02 (with TensorFlow backend), and I do not know how to calculate precision and recall in Keras. Please help me.

Yasha Bubnov: The Python package keras-metrics could be useful for this (I'm the package's author).

import keras
import keras_metrics

model = models.Sequential()
model.add(keras.layers.Dense(1, activation="sigmoid", input_dim=2))
model.add(keras.layers.Dense(1, activation="softmax"))
model.compile(optimizer="sgd",
              loss="binary_crossentropy",
              metrics=[keras_metrics.precision(), keras_metrics.recall()])

As of Keras 2.0, precision and recall
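A common alternative to in-training metrics is to compute precision and recall after prediction with scikit-learn. This is only a sketch: the random `y_test` and `y_prob` arrays below are stand-ins for your true labels and for the output of `model.predict(X_test)`:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
y_test = rng.integers(0, 3, size=100)   # stand-in: true labels for 3 classes
y_prob = rng.random((100, 3))           # stand-in for model.predict(X_test)
y_pred = np.argmax(y_prob, axis=1)      # predicted class = highest probability

# Macro averaging treats all classes equally; use average="micro" or
# "weighted" if class frequencies should matter.
print(precision_score(y_test, y_pred, average="macro"))
print(recall_score(y_test, y_pred, average="macro"))
```

This sidesteps a known pitfall of batch-wise metrics in Keras: precision and recall computed per batch and then averaged do not equal the metrics computed over the whole test set.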

Custom macro for recall in keras

Submitted by 北城余情 on 2019-11-28 05:24:51
Question: I am trying to create a custom macro for recall = (recall of class 1 + recall of class 2) / 2. I came up with the following code, but I am not sure how to calculate the true positives of class 0.

def unweightedRecall():
    def recall(y_true, y_pred):
        # recall of class 1
        true_positives1 = K.sum(K.round(K.clip(y_pred * y_true, 0, 1)))
        possible_positives1 = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall1 = true_positives1 / (possible_positives1 + K.epsilon())
        # --- get true positive of class 0 in true
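The missing piece is that class 0's positives are just the inverted labels: a true positive for class 0 is a sample where both the (rounded) prediction and the truth are 0, i.e. `(1 - y_pred) * (1 - y_true)`. A NumPy sketch of the complete unweighted (macro) recall, which would translate to the Keras backend by replacing `np` with `K` and the rounding with `K.round(K.clip(...))`:

```python
import numpy as np

def unweighted_recall(y_true, y_pred, eps=1e-7):
    # Binarize predicted probabilities at 0.5.
    y_pred = np.round(np.clip(y_pred, 0, 1))
    # Recall of class 1: TP1 / all actual positives.
    tp1 = np.sum(y_pred * y_true)
    recall1 = tp1 / (np.sum(y_true) + eps)
    # Recall of class 0: invert labels and predictions, same formula.
    tp0 = np.sum((1 - y_pred) * (1 - y_true))
    recall0 = tp0 / (np.sum(1 - y_true) + eps)
    return (recall0 + recall1) / 2

y_true = np.array([1, 1, 0, 0])
y_pred = np.array([0.9, 0.2, 0.1, 0.8])
print(unweighted_recall(y_true, y_pred))  # (0.5 + 0.5) / 2 = 0.5
```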

How to get precision, recall and f-measure from confusion matrix in Python

Submitted by 不羁岁月 on 2019-11-28 00:26:30
I'm using Python and have some confusion matrices. I'd like to calculate precision, recall, and F-measure from confusion matrices in multiclass classification. My result logs don't contain y_true and y_pred, just the confusion matrix. Could you tell me how to get these scores from a confusion matrix in multiclass classification?

Let's consider the case of MNIST data classification (10 classes), where for a test set of 10,000 samples we get the following confusion matrix cm (NumPy array):

array([[ 963, 0, 0, 1, 0, 2, 11, 1, 2, 0],
       [ 0, 1119, 3, 2, 1, 0, 4, 1, 4, 1],
       [ 12, 3, 972, 9, 6, 0,
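In a confusion matrix laid out as scikit-learn produces it (rows = true classes, columns = predicted classes), the per-class true positives sit on the diagonal, so precision and recall fall out of column and row sums. A sketch on a small illustrative 3-class matrix:

```python
import numpy as np

# Illustrative 3-class confusion matrix: rows are true, columns are predicted.
cm = np.array([[5, 1, 0],
               [2, 6, 2],
               [0, 1, 7]])

tp = np.diag(cm)                  # true positives per class
precision = tp / cm.sum(axis=0)   # column sum = everything predicted as that class
recall = tp / cm.sum(axis=1)      # row sum = everything truly of that class
f1 = 2 * precision * recall / (precision + recall)

print(precision)  # [5/7, 6/8, 7/9]
print(recall)     # [5/6, 6/10, 7/8]
```

Averaging these per-class values gives the macro scores; summing TP, FP, FN over classes before dividing gives the micro scores.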

How to calculate precision and recall in Keras

Submitted by 岁酱吖の on 2019-11-27 10:28:34
Question: I am building a multi-class classifier with Keras 2.02 (with TensorFlow backend), and I do not know how to calculate precision and recall in Keras. Please help me.

Answer 1: The Python package keras-metrics could be useful for this (I'm the package's author).

import keras
import keras_metrics

model = models.Sequential()
model.add(keras.layers.Dense(1, activation="sigmoid", input_dim=2))
model.add(keras.layers.Dense(1, activation="softmax"))
model.compile(optimizer="sgd",
              loss="binary_crossentropy",

F1 Score vs ROC AUC

Submitted by 旧街凉风 on 2019-11-27 09:20:09
Question: I have the F1 and AUC scores below for two different models:

Model 1: Precision: 85.11, Recall: 99.04, F1: 91.55, AUC: 69.94
Model 2: Precision: 85.1, Recall: 98.73, F1: 91.41, AUC: 71.69

My main goal is to predict the positive cases correctly, i.e., to reduce the number of false negatives (FN). Should I use the F1 score and choose Model 1, or use AUC and choose Model 2? Thanks.

Answer 1: Introduction. As a rule of thumb, every time you want to compare ROC AUC vs F1 Score, think about it as if you are
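As a sanity check on the quoted numbers: F1 is the harmonic mean of precision and recall, and plugging in the scores above reproduces both models' F1 values exactly.

```python
# F1 as the harmonic mean of precision and recall.
def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f1(85.11, 99.04), 2))  # Model 1 -> 91.55
print(round(f1(85.1, 98.73), 2))   # Model 2 -> 91.41
```

Note that F1 depends only on precision and recall at one fixed decision threshold, while ROC AUC summarizes ranking quality across all thresholds; that difference is why the two metrics can disagree on which model is better.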