Question
I am using Keras to build a CNN, and I am confused about what the accuracy metric does exactly.
From my research it appears to return the accuracy of the model. Where is this value stored exactly? Does this metric affect the epoch results?
I cannot find any resources that describe in depth what the accuracy metric does. How are my results affected by using it?
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer="adam",
    metrics=["accuracy"]
)
The Keras documentation does not explain the purpose of this metric.
Answer 1:
For a question like this it is easier to check the Keras source code, because the documentation of any deep learning framework tends to be sparse.
First, find where the string names of metrics are resolved:
if metric in ('accuracy', 'acc'):
    metric_fn = metrics_module.categorical_accuracy
This leads to the metrics module, where the categorical_accuracy function is defined:
def categorical_accuracy(y_true, y_pred):
    return K.cast(K.equal(K.argmax(y_true, axis=-1),
                          K.argmax(y_pred, axis=-1)),
                  K.floatx())
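Outside Keras, the argmax comparison that categorical_accuracy performs can be sketched in plain NumPy (the toy arrays below are made up for illustration):

```python
import numpy as np

# Toy batch: 3 samples, 4 classes (one-hot labels, softmax-like predictions)
y_true = np.array([[0, 0, 1, 0],
                   [1, 0, 0, 0],
                   [0, 1, 0, 0]])
y_pred = np.array([[0.1, 0.1, 0.7, 0.1],   # correct: predicts class 2
                   [0.3, 0.4, 0.2, 0.1],   # wrong:   predicts 1, label is 0
                   [0.2, 0.5, 0.2, 0.1]])  # correct: predicts class 1

# Same logic as categorical_accuracy: compare argmax positions per sample,
# then cast the boolean result to float
matches = (y_true.argmax(axis=-1) == y_pred.argmax(axis=-1)).astype(float)
print(matches)  # [1. 0. 1.]
```

The result is a per-sample tensor of 0s and 1s, not yet a single number, which is why the wrapper below is needed.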
Note that this function returns a tensor, while only a single number appears in the logs, so there is a wrapper function that processes the tensor of comparison results:
weighted_metric_fn = weighted_masked_objective(metric_fn)
This wrapper function contains the logic for calculating the final values. As no weights or masks are defined, a simple average is used:
return K.mean(score_array)
So the resulting equation is: accuracy = (number of samples where argmax(y_true) equals argmax(y_pred)) / (total number of samples).
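The averaging step can be sketched as follows (the per-sample scores are assumed values for illustration):

```python
import numpy as np

# Per-sample 0/1 correctness, as produced by the argmax comparison
score_array = np.array([1.0, 0.0, 1.0, 1.0])

# With no sample weights or masks, the wrapper reduces to a plain mean
accuracy = score_array.mean()
print(accuracy)  # 0.75
```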
P.S. I slightly disagree with @VnC, because accuracy and precision are different terms. Accuracy is the rate of correct predictions in a classification task, while precision is the proportion of predicted positives that are actually positive (more).
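A toy binary example (the labels and predictions are made up) shows that the two metrics can differ:

```python
import numpy as np

# Hypothetical ground truth and predictions for a binary classifier
y_true = np.array([1, 1, 1, 0, 0, 0])
y_pred = np.array([1, 0, 0, 0, 0, 0])

tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # true positives: 1
fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # false positives: 0

accuracy = float(np.mean(y_pred == y_true))  # 4 of 6 correct -> ~0.667
precision = tp / (tp + fp)                   # 1 / (1 + 0) = 1.0
```

Here precision is perfect (every predicted positive is correct) while accuracy is only 2/3, because two positives were missed.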
Answer 2:
It is only used to report your model's performance, e.g. how accurate your predictions are, and shouldn't affect training in any way.
Accuracy basically means precision:
precision = true_positives / (true_positives + false_positives)
I would recommend using f1_score (link) as it combines precision and recall.
Hope that clears it up.
Answer 3:
Any metric is a function of the model's predictions and the ground truth, just like a loss. The accuracy of a model by itself makes no sense; it's not a property of the model alone, but of the model together with the dataset on which it is evaluated.
Accuracy in particular is a metric used for classification; it is simply the ratio between the number of correct predictions (prediction equal to label) and the total number of data points in the dataset.
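That ratio can be computed directly (the labels and predictions below are made up):

```python
# Accuracy = correct predictions / total data points
labels      = [2, 0, 1, 1, 3]
predictions = [2, 1, 1, 1, 0]

correct = sum(p == t for p, t in zip(predictions, labels))
accuracy = correct / len(labels)
print(accuracy)  # 0.6
```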
Any metric that is evaluated during training/evaluation is information for you; it's not used to train the model. Only the loss function is used for the actual training of the weights.
Source: https://stackoverflow.com/questions/55655972/the-accuracy-metric-purpose