How to save best model in Keras based on AUC metric?

Asked by 走远了吗 on 2021-02-18 18:00:33

Question


I would like to save the best model in Keras based on auc and I have this code:

def MyMetric(yTrue, yPred):
    auc = tf.metrics.auc(yTrue, yPred)
    return auc

best_model = [ModelCheckpoint(filepath='best_model.h5', monitor='MyMetric', save_best_only=True)]

train_history = model.fit([train_x], [train_y],
                          batch_size=batch_size, epochs=epochs,
                          validation_split=0.05,
                          callbacks=best_model, verbose=2)

So my model runs, but I get this warning:

RuntimeWarning: Can save best model only with MyMetric available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)

It would be great if anyone could tell me whether this is the right way to do it, and if not, what I should do.


Answer 1:


You have to pass the metric you want to monitor to model.compile.

https://keras.io/metrics/#custom-metrics

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=[MyMetric])

Also, tf.metrics.auc returns a tuple containing the tensor and update_op. Keras expects the custom metric function to return only a tensor.

def MyMetric(yTrue, yPred):
    import tensorflow as tf
    # tf.metrics.auc returns (auc_value, update_op); Keras wants only the tensor
    auc, update_op = tf.metrics.auc(yTrue, yPred)
    return auc

After this step, you will get errors about uninitialized values. Please see these threads:

https://github.com/keras-team/keras/issues/3230

How to compute Receiving Operating Characteristic (ROC) and AUC in keras?
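
A minimal sketch of the usual workaround from those threads, assuming TF 1.x with the standalone Keras backend: tf.metrics.auc allocates local variables, so run the local-variables initializer once after compiling and before fitting.

import tensorflow as tf
from keras import backend as K

# tf.metrics.auc creates local variables (running true/false positive counts);
# initialize them once, or you get "Attempting to use uninitialized value" errors.
K.get_session().run(tf.local_variables_initializer())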




Answer 2:


You can define a custom metric that calls TensorFlow to compute AUROC in the following way:

def as_keras_metric(method):
    import functools
    from keras import backend as K
    import tensorflow as tf
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        """ Wrapper for turning tensorflow metrics into keras metrics """
        value, update_op = method(*args, **kwargs)
        # tf.metrics.* create local variables; initialize them once here
        K.get_session().run(tf.local_variables_initializer())
        # tie the running update to the returned value so it runs each step
        with tf.control_dependencies([update_op]):
            value = tf.identity(value)
        return value
    return wrapper

@as_keras_metric
def AUROC(y_true, y_pred, curve='ROC'):
    return tf.metrics.auc(y_true, y_pred, curve=curve)

You then need to compile your model with this metric:

model.compile(loss=train_loss, optimizer='adam', metrics=['accuracy', AUROC])

Finally, checkpoint the model in the following way:

model_checkpoint = keras.callbacks.ModelCheckpoint(path_to_save_model, monitor='val_AUROC', 
                                                   verbose=0, save_best_only=True, 
                                                   save_weights_only=False, mode='auto', period=1)

Be careful, though: I believe the validation AUROC is calculated batch-wise and then averaged, so checkpointing may pick a slightly wrong epoch. A good idea is to verify, after training finishes, that the AUROC of the trained model's predictions (computed with sklearn.metrics) matches what TensorFlow reported during training and checkpointing.
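
For example, a minimal post-training sanity check might look like this (assuming a binary classifier and hold-out arrays x_val and y_val, which are not part of the original answer):

# Hedged sketch: recompute AUROC with scikit-learn on hold-out data and
# compare it against the value TensorFlow reported while training.
from sklearn.metrics import roc_auc_score

y_scores = model.predict(x_val).ravel()  # predicted probabilities
print('sklearn AUROC:', roc_auc_score(y_val, y_scores))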




Answer 3:


Assuming you use TensorBoard, you have a historical record, in the form of tfevents files, of all your metric calculations across all epochs, so a custom tf.keras.callbacks.Callback is what you want.

I use tf.keras.callbacks.ModelCheckpoint with save_freq='epoch' to save the weights for each epoch as an h5 or tf file.
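
For instance, a minimal sketch of that per-epoch checkpointing (the models directory and filename pattern are assumptions; the {epoch:04d} pattern matches the filenames DropWorseModels expects below):

import tensorflow as tf

# Save a model file after every epoch, numbered so a cleanup callback can
# map epochs back to files later.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath='models/model-{epoch:04d}.h5',
    save_freq='epoch',
    save_weights_only=False)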

To avoid filling the hard drive with model files, write a new Callback, or extend the ModelCheckpoint class, with an on_epoch_end implementation like this:

def on_epoch_end(self, epoch, logs=None):
    super(DropWorseModels, self).on_epoch_end(epoch, logs)
    if epoch < self._keep_best:
        return

    # All model files currently saved on disk
    model_files = frozenset(
        filter(lambda filename: path.splitext(filename)[1] == SAVE_FORMAT_WITH_SEP,
               listdir(self._model_dir)))

    if len(model_files) < self._keep_best:
        return

    # The top `keep_best` epochs by the monitored metric, read from the
    # tfevents logs for the chosen split
    tf_events_logs = tuple(islice(log_parser(tfevents=path.join(self._log_dir,
                                                                self._split),
                                             tag=self.monitor),
                                  0,
                                  self._keep_best))
    keep_models = frozenset(map(self._filename.format,
                                map(itemgetter(0), tf_events_logs)))

    if len(keep_models) < self._keep_best:
        return

    # Delete every saved model that is not among the best `keep_best`
    it_consumes(map(lambda filename: remove(path.join(self._model_dir, filename)),
                    model_files - keep_models))

Appendix (imports and utility function implementations):

from itertools import islice
from operator import itemgetter
from os import path, listdir, remove
from collections import deque

import tensorflow as tf
from tensorflow.core.util import event_pb2


def log_parser(tfevents, tag):
    """Return (epoch_index, value) pairs for `tag`, sorted best-first."""
    values = []
    for record in tf.data.TFRecordDataset(tfevents):
        event = event_pb2.Event.FromString(tf.get_static_value(record))
        if event.HasField('summary'):
            value = event.summary.value.pop(0)
            if value.tag == tag:
                values.append(value.simple_value)

    return tuple(sorted(enumerate(values), key=itemgetter(1), reverse=True))

# Exhaust an iterator purely for its side effects
it_consumes = lambda it, n=None: deque(it, maxlen=0) if n is None \
                                 else next(islice(it, n, n), None)

SAVE_FORMAT = 'h5'
SAVE_FORMAT_WITH_SEP = '{}{}'.format(path.extsep, SAVE_FORMAT)

For completeness, the rest of the class:

class DropWorseModels(tf.keras.callbacks.Callback):
    """
    Designed around making `save_best_only` work for arbitrary metrics
             and thresholds between metrics
    """

    def __init__(self, model_dir, monitor, log_dir, keep_best=2, split='validation'):
        """
        Args:
            model_dir: directory to save weights. Files will have format
                        '{model_dir}/{epoch:04d}.h5'.
            split: dataset split to analyse, e.g., one of 'train', 'test', 'validation'
            monitor: quantity to monitor.
            log_dir: the path of the directory where to save the log files to be
                        parsed by TensorBoard.
            keep_best: number of models to keep, sorted by monitor value
        """
        super(DropWorseModels, self).__init__()
        self._model_dir = model_dir
        self._split = split
        self._filename = 'model-{:04d}' + SAVE_FORMAT_WITH_SEP
        self._log_dir = log_dir
        self._keep_best = keep_best
        self.monitor = monitor

This has the added advantage of being able to save and delete multiple model files in a single Callback. You can easily extend it with different thresholding support, e.g., keeping every model file whose AUC is within a threshold, or whose TP, FP, TN, and FN counts are within thresholds.
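
A hedged usage sketch tying it together (the directory names and the monitored tag are assumptions; the tag must match what your TensorBoard callback actually writes to the tfevents files):

callbacks = [
    tf.keras.callbacks.TensorBoard(log_dir='logs'),
    tf.keras.callbacks.ModelCheckpoint('models/model-{epoch:04d}.h5',
                                       save_freq='epoch'),
    DropWorseModels(model_dir='models', monitor='epoch_auc',
                    log_dir='logs', keep_best=2),
]
train_history = model.fit([train_x], [train_y], epochs=epochs,
                          validation_split=0.05, callbacks=callbacks)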



Source: https://stackoverflow.com/questions/55153983/how-to-save-best-model-in-keras-based-on-auc-metric
