How to implement a custom metric in Keras?

迷失自我 · asked 2020-12-05 04:37 · 3 answers · 743 views

I get this error :

sum() got an unexpected keyword argument 'out'

when I run this code:

import pandas as pd,          


        
3 Answers
  • 2020-12-05 05:15

    The problem is that y_pred and y_true are not NumPy arrays but Theano or TensorFlow tensors. That's why you got this error.

    You can define your custom metrics, but you have to remember that their arguments are those tensors, not NumPy arrays.
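
    For instance, here is a minimal sketch of such a metric (assuming the TensorFlow backend; `rmse` is a hypothetical metric name, not part of the original question), built entirely from backend ops so it works on tensors:

    ```python
    import tensorflow as tf
    import tensorflow.keras.backend as K

    # hypothetical RMSE metric: every operation is a backend/tensor op,
    # so Keras can evaluate it on symbolic y_true / y_pred
    def rmse(y_true, y_pred):
        return K.sqrt(K.mean(K.square(y_pred - y_true)))

    # quick eager check with constant tensors
    y_true = tf.constant([[0.0], [1.0]])
    y_pred = tf.constant([[0.5], [0.5]])
    print(float(rmse(y_true, y_pred)))  # 0.5
    ```

    Using `np.sqrt`/`np.mean` here instead would fail, because NumPy functions expect arrays (and pass keyword arguments like `out` that tensors don't accept).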

  • 2020-12-05 05:35

    Here I'm answering the OP's title question rather than the exact error, since this question shows up near the top when you google the topic.

    You can implement a custom metric in two ways.

    1. As described in the Keras docs.

      import keras.backend as K
      
      def mean_pred(y_true, y_pred):
          return K.mean(y_pred)
      
      model.compile(optimizer='sgd',
                    loss='binary_crossentropy',
                    metrics=['accuracy', mean_pred])
      

      But here you have to remember, as mentioned in Marcin Możejko's answer, that y_true and y_pred are tensors. So in order to calculate the metric correctly you need to use keras.backend functionality. See this SO question for details: How to calculate F1 Macro in Keras?
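
      As one concrete illustration (a sketch, not code from the original answer; `precision` is a hypothetical name, modeled on the metric Keras 1.x used to ship), a batch-wise precision metric written only with backend ops might look like:

      ```python
      import tensorflow as tf
      import tensorflow.keras.backend as K

      # hypothetical batch-wise precision built only from backend ops;
      # K.round thresholds the predicted probabilities at 0.5
      def precision(y_true, y_pred):
          true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
          predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
          return true_positives / (predicted_positives + K.epsilon())

      y_true = tf.constant([1.0, 0.0, 1.0, 1.0])
      y_pred = tf.constant([0.9, 0.8, 0.2, 0.6])
      print(float(precision(y_true, y_pred)))  # ~0.667 (2 TP / 3 predicted positives)
      ```

      Note that Keras averages such a metric over batches, so a batch-wise precision is not the same as precision over the whole epoch; that pitfall is exactly what the linked F1-macro question discusses.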

    2. Or you can implement it in a hacky way, as mentioned in a Keras GitHub issue. For that you need to use the callbacks argument of model.fit.

      import keras
      import numpy as np
      from keras.optimizers import SGD
      from sklearn.metrics import roc_auc_score
      
      model = keras.models.Sequential()
      # ...
      sgd = SGD(lr=0.001, momentum=0.9)
      model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])
      
      
      class Metrics(keras.callbacks.Callback):
          def on_train_begin(self, logs={}):
              self._data = []
      
          def on_epoch_end(self, epoch, logs={}):
              X_val, y_val = self.validation_data[0], self.validation_data[1]
              # use self.model so the callback does not depend on a global `model`
              y_predict = np.asarray(self.model.predict(X_val))

              # argmax turns probabilities into hard class labels, so this is
              # AUC over class decisions rather than over predicted scores
              y_val = np.argmax(y_val, axis=1)
              y_predict = np.argmax(y_predict, axis=1)

              self._data.append({
                  'val_rocauc': roc_auc_score(y_val, y_predict),
              })
      
          def get_data(self):
              return self._data
      
      metrics = Metrics()
      history = model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val), callbacks=[metrics])
      metrics.get_data()
      
  • You can call model.predict() in your AUC metric function. This will iterate over batches, so you might be better off using model.predict_on_batch(). Assuming you have something like a softmax layer as output (something that outputs probabilities), you can use that together with sklearn.metrics to get the AUC.

    from sklearn.metrics import roc_curve, auc
    

    from here

    def sklearnAUC(test_labels,test_prediction):
        n_classes = 2
        # Compute ROC curve and ROC area for each class
        fpr = dict()
        tpr = dict()
        roc_auc = dict()
        for i in range(n_classes):
            # ( actual labels, predicted probabilities )
            fpr[i], tpr[i], _ = roc_curve(test_labels[:, i], test_prediction[:, i])
            roc_auc[i] = auc(fpr[i], tpr[i])
    
        return round(roc_auc[0],3) , round(roc_auc[1],3)
    

    Now compute the metric:

    # gives a numpy array like [[0.3, 0.7], [0.2, 0.8], ...]
    Y_pred = model.predict_on_batch(X_test)
    # Y_test looks something like [[0, 1], [1, 0], ...]
    # auc1 and auc2 should be equal
    auc1, auc2 = sklearnAUC(Y_test, Y_pred)
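
    As a quick standalone check of the "auc1 and auc2 should be equal" claim, here is a sketch with synthetic one-hot labels and softmax-style probabilities (sklearnAUC copied from above; the data values are made up for illustration):

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, auc

    # same function as above, copied so this snippet runs on its own
    def sklearnAUC(test_labels, test_prediction):
        n_classes = 2
        fpr, tpr, roc_auc = dict(), dict(), dict()
        for i in range(n_classes):
            fpr[i], tpr[i], _ = roc_curve(test_labels[:, i], test_prediction[:, i])
            roc_auc[i] = auc(fpr[i], tpr[i])
        return round(roc_auc[0], 3), round(roc_auc[1], 3)

    # synthetic one-hot labels and probability rows that sum to 1
    Y_test = np.array([[1, 0], [0, 1], [1, 0], [0, 1]])
    Y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7], [0.7, 0.3]])

    auc1, auc2 = sklearnAUC(Y_test, Y_pred)
    print(auc1, auc2)  # 0.75 0.75
    ```

    The two values agree because with two classes the second column of probabilities is just one minus the first, so ranking by either column induces the same ROC curve.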
    