How to measure the success and percent accuracy of an image detection algorithm?

You can calculate what is known as the F1 Score (sometimes just F Score) by first calculating the precision and recall performance of your algorithm.

The precision is the number of true positives divided by the number of predicted positives, where predicted positives = (true positives + false positives).

The recall is the number of true positives divided by the number of actual positives, where actual positives = (true positives + false negatives).

In other words, precision means, "Of all objects where we detected a match, what fraction actually does match?" And recall means "Of all objects that actually match, what fraction did we correctly detect as matching?".

Having calculated precision, P, and recall, R, the F1 Score is 2 * (P * R) / (P + R). It gives you a single metric, between 0 and 1, with which to compare the performance of different algorithms.
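
If it helps to see the arithmetic in code, here is a minimal Python sketch, assuming you have already matched your detections against the ground truth and counted true positives, false positives, and false negatives. The function name and the example counts are just placeholders for whatever your evaluation pipeline produces:

    def precision_recall_f1(tp, fp, fn):
        """Compute precision, recall, and F1 from raw detection counts."""
        predicted_positives = tp + fp   # everything the detector flagged
        actual_positives = tp + fn      # everything it should have flagged

        precision = tp / predicted_positives if predicted_positives else 0.0
        recall = tp / actual_positives if actual_positives else 0.0

        if precision + recall == 0:
            return precision, recall, 0.0
        return precision, recall, 2 * precision * recall / (precision + recall)

    # Hypothetical example: 80 correct detections, 20 spurious, 40 missed
    p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=40)
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
    # precision=0.80 recall=0.67 F1=0.73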

The F1 Score is a statistical measure used, among other applications, in machine learning. You can read more about it in the Wikipedia entry on the F-score.

Here are some measures/metrics that you can use to evaluate your model for image segmentation (or object detection):

  • F1 Score
  • Dice
  • Shape similarity

All three of them are described on this page of a segmentation challenge.
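
As a rough sketch of the Dice measure (which, for binary masks, is equivalent to the F1 Score described above): it is defined as 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal Python version, assuming the masks are NumPy arrays of 0s and 1s:

    import numpy as np

    def dice_coefficient(pred_mask, true_mask):
        # Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks
        pred = np.asarray(pred_mask, dtype=bool)
        true = np.asarray(true_mask, dtype=bool)
        intersection = np.logical_and(pred, true).sum()
        total = pred.sum() + true.sum()
        # Two empty masks are treated as a perfect match
        return 2.0 * intersection / total if total else 1.0

    # Toy 3x3 example: 2 overlapping pixels out of 3 predicted and 3 actual
    pred = np.array([[1, 1, 0],
                     [0, 1, 0],
                     [0, 0, 0]])
    true = np.array([[1, 0, 0],
                     [0, 1, 1],
                     [0, 0, 0]])
    print(dice_coefficient(pred, true))  # 2*2 / (3+3) ≈ 0.667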
