How to write a confusion matrix in Python?

前端 未结 14 1928
太阳男子
太阳男子 2020-12-04 06:48

I wrote a confusion matrix calculation code in Python:

def conf_mat(prob_arr, input_arr):
        # confusion matrix
        conf_arr = [[0, 0], [0, 0]]

            


        
相关标签:
14条回答
  • 2020-12-04 07:17

    In a general sense, you're going to need to change your probability array. Instead of having one number for each instance and classifying based on whether or not it is greater than 0.5, you're going to need a list of scores (one for each class), then take the largest of the scores as the class that was chosen (a.k.a. argmax).

    You could use a dictionary to hold the probabilities for each classification:

    prob_arr = [{classification_id: probability}, ...]
    

    Choosing a classification would be something like:

    for instance_scores in prob_arr :
        predicted_classes = [cls for (cls, score) in instance_scores.iteritems() if score = max(instance_scores.values())]
    

    This handles the case where two classes have the same scores. You can get one score, by choosing the first one in that list, but how you handle that depends on what you're classifying.

    Once you have your list of predicted classes and a list of expected classes you can use code like Torsten Marek's to create the confusion array and calculate the accuracy.

    0 讨论(0)
  • 2020-12-04 07:17

    I wrote a simple class to build a confusion matrix without the need to depend on a machine learning library.

    The class can be used such as:

    labels = ["cat", "dog", "velociraptor", "kraken", "pony"]
    confusionMatrix = ConfusionMatrix(labels)
    
    confusionMatrix.update("cat", "cat")
    confusionMatrix.update("cat", "dog")
    ...
    confusionMatrix.update("kraken", "velociraptor")
    confusionMatrix.update("velociraptor", "velociraptor")
    
    confusionMatrix.plot()
    

    The class ConfusionMatrix:

    import pylab
    import collections
    import numpy as np
    
    
    class ConfusionMatrix:
        def __init__(self, labels):
            self.labels = labels
            self.confusion_dictionary = self.build_confusion_dictionary(labels)
    
        def update(self, predicted_label, expected_label):
            self.confusion_dictionary[expected_label][predicted_label] += 1
    
        def build_confusion_dictionary(self, label_set):
            expected_labels = collections.OrderedDict()
    
            for expected_label in label_set:
                expected_labels[expected_label] = collections.OrderedDict()
    
                for predicted_label in label_set:
                    expected_labels[expected_label][predicted_label] = 0.0
    
            return expected_labels
    
        def convert_to_matrix(self, dictionary):
            length = len(dictionary)
            confusion_dictionary = np.zeros((length, length))
    
            i = 0
            for row in dictionary:
                j = 0
                for column in dictionary:
                    confusion_dictionary[i][j] = dictionary[row][column]
                    j += 1
                i += 1
    
            return confusion_dictionary
    
        def get_confusion_matrix(self):
            matrix = self.convert_to_matrix(self.confusion_dictionary)
            return self.normalize(matrix)
    
        def normalize(self, matrix):
            amin = np.amin(matrix)
            amax = np.amax(matrix)
    
            return [[(((y - amin) * (1 - 0)) / (amax - amin)) for y in x] for x in matrix]
    
        def plot(self):
            matrix = self.get_confusion_matrix()
    
            pylab.figure()
            pylab.imshow(matrix, interpolation='nearest', cmap=pylab.cm.jet)
            pylab.title("Confusion Matrix")
    
            for i, vi in enumerate(matrix):
                for j, vj in enumerate(vi):
                    pylab.text(j, i+.1, "%.1f" % vj, fontsize=12)
    
            pylab.colorbar()
    
            classes = np.arange(len(self.labels))
            pylab.xticks(classes, self.labels)
            pylab.yticks(classes, self.labels)
    
            pylab.ylabel('Expected label')
            pylab.xlabel('Predicted label')
            pylab.show()
    
    0 讨论(0)
  • 2020-12-04 07:19

    Update

    Since writing this post, I've updated my library implementation to include a few other nice features. As with the code below, no third-party dependencies are required. The class can also output a nice tabulation table, similar to many commonly used statistical packages. See this Gist.

    Example usage of the above Gist

    # Example Usage
    actual      = ["A", "B", "C", "C", "B", "C", "C", "B", "A", "A", "B", "A", "B", "C", "A", "B", "C"]
    predicted   = ["A", "B", "B", "C", "A", "C", "A", "B", "C", "A", "B", "B", "B", "C", "A", "A", "C"]
    
    # Initialize Performance Class
    performance = Performance(actual, predicted)
    
    # Print Confusion Matrix
    performance.tabulate()
    

    Here's an example of the output:

    ===================================
            Aᴬ      Bᴬ      Cᴬ
    
    Aᴾ      3       2       1
    
    Bᴾ      1       4       1
    
    Cᴾ      1       0       4
    
    Note: classᴾ = Predicted, classᴬ = Actual
    ===================================
    

    In addition to raw counts, we can output a normalized confusion matrix (i.e. with proportions)

    # Print Normalized Confusion Matrix
    performance.tabulate(normalized = True)
    
    ===================================
            Aᴬ      Bᴬ      Cᴬ
    
    Aᴾ      17.65%  11.76%  5.88%
    
    Bᴾ      5.88%   23.53%  5.88%
    
    Cᴾ      5.88%   0.00%   23.53%
    
    Note: classᴾ = Predicted, classᴬ = Actual
    ===================================
    

    A Simple Multiclass Implementation

    A multi-class confusion matrix can be computed incredibly simply with vanilla Python in roughly O(N) time. All we need to do is pair up the unique classes found in the actual vector into a 2-dimensional list. From there, we simply iterate through the zipped actual and predicted vectors and populate the counts.

    # A Simple Confusion Matrix Implementation
    def confusionmatrix(actual, predicted, normalize = False):
        """
        Generate a confusion matrix for multiple classification
        @params:
            actual      - a list of integers or strings for known classes
            predicted   - a list of integers or strings for predicted classes
            normalize   - optional boolean for matrix normalization
        @return:
            matrix      - a 2-dimensional list of pairwise counts
        """
        unique = sorted(set(actual))
        matrix = [[0 for _ in unique] for _ in unique]
        imap   = {key: i for i, key in enumerate(unique)}
        # Generate Confusion Matrix
        for p, a in zip(predicted, actual):
            matrix[imap[p]][imap[a]] += 1
        # Matrix Normalization
        if normalize:
            sigma = sum([sum(matrix[imap[i]]) for i in unique])
            matrix = [row for row in map(lambda i: list(map(lambda j: j / sigma, i)), matrix)]
        return matrix
    

    Usage

    # Input Below Should Return: [[2, 1, 0], [0, 2, 1], [1, 2, 1]]
    cm = confusionmatrix(
        [1, 1, 2, 0, 1, 1, 2, 0, 0, 1], # actual
        [0, 1, 1, 0, 2, 1, 2, 2, 0, 2]  # predicted
    )
    
    # And The Output
    print(cm)
    [[2, 1, 0], [0, 2, 1], [1, 2, 1]]
    

    Note: the actual classes are along the columns and the predicted classes are along the rows.

    # Actual
    # 0  1  2
      #  #  #   
    [[2, 1, 0], # 0
     [0, 2, 1], # 1  Predicted
     [1, 2, 1]] # 2
    

    Class Names Can be Strings or Integers

    # Input Below Should Return: [[2, 1, 0], [0, 2, 1], [1, 2, 1]]
    cm = confusionmatrix(
        ["B", "B", "C", "A", "B", "B", "C", "A", "A", "B"], # actual
        ["A", "B", "B", "A", "C", "B", "C", "C", "A", "C"]  # predicted
    )
    
    # And The Output
    print(cm)
    [[2, 1, 0], [0, 2, 1], [1, 2, 1]]
    

    You Can Also Return The Matrix With Proportions (Normalization)

    # Input Below Should Return: [[0.2, 0.1, 0.0], [0.0, 0.2, 0.1], [0.1, 0.2, 0.1]]
    cm = confusionmatrix(
        ["B", "B", "C", "A", "B", "B", "C", "A", "A", "B"], # actual
        ["A", "B", "B", "A", "C", "B", "C", "C", "A", "C"], # predicted
        normalize = True
    )
    
    # And The Output
    print(cm)
    [[0.2, 0.1, 0.0], [0.0, 0.2, 0.1], [0.1, 0.2, 0.1]]
    

    Extracting Statistics From a Multiple Classification Confusion Matrix

    Once you have the matrix, you can compute a bunch of statistics to assess your classifier. That said, extracting the values out of a confusion matrix setup for multiple classification can be a bit of a headache. Here's a function that returns both the confusion matrix and statistics by class:

    # Not Required, But Nice For Legibility
    from collections import OrderedDict
    
    # A Simple Confusion Matrix Implementation
    def confusionmatrix(actual, predicted, normalize = False):
        """
        Generate a confusion matrix for multiple classification
        @params:
            actual      - a list of integers or strings for known classes
            predicted   - a list of integers or strings for predicted classes
        @return:
            matrix      - a 2-dimensional list of pairwise counts
            statistics  - a dictionary of statistics for each class
        """
        unique = sorted(set(actual))
        matrix = [[0 for _ in unique] for _ in unique]
        imap   = {key: i for i, key in enumerate(unique)}
        # Generate Confusion Matrix
        for p, a in zip(predicted, actual):
            matrix[imap[p]][imap[a]] += 1
        # Get Confusion Matrix Sum
        sigma = sum([sum(matrix[imap[i]]) for i in unique])
        # Scaffold Statistics Data Structure
        statistics = OrderedDict(((i, {"counts" : OrderedDict(), "stats" : OrderedDict()}) for i in unique))
        # Iterate Through Classes & Compute Statistics
        for i in unique:
            loc = matrix[imap[i]][imap[i]]
            row = sum(matrix[imap[i]][:])
            col = sum([row[imap[i]] for row in matrix])
            # Get TP/TN/FP/FN
            tp  = loc
            fp  = row - loc
            fn  = col - loc
            tn  = sigma - row - col + loc
            # Populate Counts Dictionary
            statistics[i]["counts"]["tp"]   = tp
            statistics[i]["counts"]["fp"]   = fp
            statistics[i]["counts"]["tn"]   = tn
            statistics[i]["counts"]["fn"]   = fn
            statistics[i]["counts"]["pos"]  = tp + fn
            statistics[i]["counts"]["neg"]  = tn + fp
            statistics[i]["counts"]["n"]    = tp + tn + fp + fn
            # Populate Statistics Dictionary
            statistics[i]["stats"]["sensitivity"]   = tp / (tp + fn) if tp > 0 else 0.0
            statistics[i]["stats"]["specificity"]   = tn / (tn + fp) if tn > 0 else 0.0
            statistics[i]["stats"]["precision"]     = tp / (tp + fp) if tp > 0 else 0.0
            statistics[i]["stats"]["recall"]        = tp / (tp + fn) if tp > 0 else 0.0
            statistics[i]["stats"]["tpr"]           = tp / (tp + fn) if tp > 0 else 0.0
            statistics[i]["stats"]["tnr"]           = tn / (tn + fp) if tn > 0 else 0.0
            statistics[i]["stats"]["fpr"]           = fp / (fp + tn) if fp > 0 else 0.0
            statistics[i]["stats"]["fnr"]           = fn / (fn + tp) if fn > 0 else 0.0
            statistics[i]["stats"]["accuracy"]      = (tp + tn) / (tp + tn + fp + fn) if (tp + tn) > 0 else 0.0
            statistics[i]["stats"]["f1score"]       = (2 * tp) / ((2 * tp) + (fp + fn)) if tp > 0 else 0.0
            statistics[i]["stats"]["fdr"]           = fp / (fp + tp) if fp > 0 else 0.0
            statistics[i]["stats"]["for"]           = fn / (fn + tn) if fn > 0 else 0.0
            statistics[i]["stats"]["ppv"]           = tp / (tp + fp) if tp > 0 else 0.0
            statistics[i]["stats"]["npv"]           = tn / (tn + fn) if tn > 0 else 0.0
        # Matrix Normalization
        if normalize:
            matrix = [row for row in map(lambda i: list(map(lambda j: j / sigma, i)), matrix)]
        return matrix, statistics
    

    Computed Statistics

    Above, the confusion matrix is used to tabulate statistics for each class, which are returned in an OrderedDict with the following structure:

    OrderedDict(
        [
            ('A', {
                'stats' : OrderedDict([
                    ('sensitivity', 0.6666666666666666), 
                    ('specificity', 0.8571428571428571), 
                    ('precision', 0.6666666666666666), 
                    ('recall', 0.6666666666666666), 
                    ('tpr', 0.6666666666666666), 
                    ('tnr', 0.8571428571428571), 
                    ('fpr', 0.14285714285714285), 
                    ('fnr', 0.3333333333333333), 
                    ('accuracy', 0.8), 
                    ('f1score', 0.6666666666666666), 
                    ('fdr', 0.3333333333333333), 
                    ('for', 0.14285714285714285), 
                    ('ppv', 0.6666666666666666), 
                    ('npv', 0.8571428571428571)
                ]), 
                'counts': OrderedDict([
                    ('tp', 2), 
                    ('fp', 1), 
                    ('tn', 6), 
                    ('fn', 1), 
                    ('pos', 3), 
                    ('neg', 7), 
                    ('n', 10)
                ])
            }), 
            ('B', {
                'stats': OrderedDict([
                    ('sensitivity', 0.4), 
                    ('specificity', 0.8), 
                    ('precision', 0.6666666666666666), 
                    ('recall', 0.4), 
                    ('tpr', 0.4), 
                    ('tnr', 0.8), 
                    ('fpr', 0.2), 
                    ('fnr', 0.6), 
                    ('accuracy', 0.6), 
                    ('f1score', 0.5), 
                    ('fdr', 0.3333333333333333), 
                    ('for', 0.42857142857142855), 
                    ('ppv', 0.6666666666666666), 
                    ('npv', 0.5714285714285714)
                ]), 
                'counts': OrderedDict([
                    ('tp', 2), 
                    ('fp', 1), 
                    ('tn', 4), 
                    ('fn', 3), 
                    ('pos', 5), 
                    ('neg', 5), 
                    ('n', 10)
                ])
            }), 
            ('C', {
                'stats': OrderedDict([
                    ('sensitivity', 0.5), 
                    ('specificity', 0.625), 
                    ('precision', 0.25), 
                    ('recall', 0.5), 
                    ('tpr', 0.5), 
                    ('tnr', 0.625), (
                    'fpr', 0.375), (
                    'fnr', 0.5), 
                    ('accuracy', 0.6), 
                    ('f1score', 0.3333333333333333), 
                    ('fdr', 0.75), 
                    ('for', 0.16666666666666666), 
                    ('ppv', 0.25), 
                    ('npv', 0.8333333333333334)
                ]), 
                'counts': OrderedDict([
                    ('tp', 1), 
                    ('fp', 3), 
                    ('tn', 5), 
                    ('fn', 1), 
                    ('pos', 2), 
                    ('neg', 8), 
                    ('n', 10)
                ])
            })
        ]
    )
    
    0 讨论(0)
  • 2020-12-04 07:21

    You can make your code more concise and (sometimes) to run faster using numpy. For example, in two-classes case your function can be rewritten as (see mply.acc()):

    def accuracy(actual, predicted):
        """accuracy = (tp + tn) / ts
    
        , where:    
    
            ts - Total Samples
            tp - True Positives
            tn - True Negatives
        """
        return (actual == predicted).sum() / float(len(actual))
    

    , where:

    actual    = (numpy.array(input_arr) == 2)
    predicted = (numpy.array(prob_arr) < 0.5)
    
    0 讨论(0)
  • 2020-12-04 07:21

    Here's a confusion matrix class that supports pretty-printing, etc:

    http://nltk.googlecode.com/svn/trunk/doc/api/nltk.metrics.confusionmatrix-pysrc.html

    0 讨论(0)
  • 2020-12-04 07:21

    You should map from classes to a row in your confusion matrix.

    Here the mapping is trivial:

    def row_of_class(classe):
        return {1: 0, 2: 1}[classe]
    

    In your loop, compute expected_row, correct_row, and increment conf_arr[expected_row][correct_row]. You'll even have less code than what you started with.

    0 讨论(0)
提交回复
热议问题