Understanding Recall and Precision

Submitted by 陌路散爱 on 2020-08-02 04:39:48

Question


I am currently learning information retrieval and I am rather stuck on an example of recall and precision.

A searcher uses a search engine to look for information. There are 10 documents on the first screen of results and 10 on the second.

Assume there are known to be 10 relevant documents in the search engine's index.

So... there are 20 results altogether, of which 10 are relevant.

Can anyone help me make sense of this?

Thanks


Answer 1:


Recall and precision measure the quality of your result. To understand them, let's first define the types of results. A document in your returned list can either be

  • classified correctly

    • a true positive (TP): a document which is relevant (positive) that was indeed returned (true)
    • a true negative (TN): a document which is not relevant (negative) that was indeed NOT returned (true)
  • misclassified

    • a false positive (FP): a document which is not relevant (negative) but was returned anyway (false)
    • a false negative (FN): a document which is relevant (positive) but was NOT returned (false)

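For illustration, here is a minimal Python sketch of how these four counts could be obtained for one query; the document IDs, the collection, and the retrieved list are made-up assumptions, not part of the question:

collection = {"d1", "d2", "d3", "d4", "d5", "d6", "d7", "d8"}   # every document in the index
relevant   = {"d1", "d2", "d3"}                                 # ground truth: the relevant documents
retrieved  = {"d1", "d4", "d5"}                                 # what the engine returned

tp = len(retrieved & relevant)                # relevant and returned
fp = len(retrieved - relevant)                # returned although not relevant
fn = len(relevant - retrieved)                # relevant but missed
tn = len(collection - retrieved - relevant)   # neither relevant nor returned

print(tp, fp, fn, tn)                         # 1 2 2 3
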
The precision is then:

|TP| / (|TP| + |FP|)

i.e. the fraction of retrieved documents that are indeed relevant.

The recall is then:

|TP| / (|TP| + |FN|)

i.e. the fraction of relevant documents that end up in your result set.

So, in your example, 10 of the 20 returned results are relevant, which gives you a precision of 10/20 = 0.5. Since the index contains no more than these 10 relevant documents, none were missed, so you get a recall of 10/10 = 1.
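
As a quick sanity check, here are the same numbers in Python (the counts come directly from the example: 20 returned documents, 10 of them relevant, and no relevant document missed):

tp = 10                       # relevant documents that were returned
fp = 10                       # returned documents that were not relevant
fn = 0                        # relevant documents that were not returned

precision = tp / (tp + fp)    # 10 / 20 = 0.5
recall    = tp / (tp + fn)    # 10 / 10 = 1.0
print(precision, recall)      # 0.5 1.0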

(When measuring the performance of an information retrieval system, it only makes sense to consider precision and recall together. You can easily get a precision of 100% by returning only a single document you are certain is relevant (no spurious returned instance => no FP), or a recall of 100% by returning every document in the index (no relevant document is missed => no FN).)
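
To see the trade-off numerically, here is a small sketch of the two degenerate strategies for the same query; the index size of 1000 is an assumption made up for the illustration:

relevant_in_index = 10
index_size = 1000                                    # assumed size of the whole index

# Strategy A: return a single document we are certain is relevant
precision_a = 1 / 1                                  # 1.0  -> perfect precision
recall_a    = 1 / relevant_in_index                  # 0.1  -> poor recall

# Strategy B: return every document in the index
precision_b = relevant_in_index / index_size         # 0.01 -> poor precision
recall_b    = relevant_in_index / relevant_in_index  # 1.0  -> perfect recall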




Answer 2:


Well, this is an extension of my answer on recall at https://stackoverflow.com/a/63120204/6907424. Read about precision here first, then go read about recall there. Here I am only explaining precision, using the same example:

ExampleNo        Ground-truth        Model's Prediction
   0                 Cat                   Cat
   1                 Cat                   Dog
   2                 Cat                   Cat
   3                 Dog                   Cat
   4                 Dog                   Dog

For now I am calculating precision for Cat. So Cat is our positive class and the rest of the classes (here only Dog) are the negative classes. Precision measures what percentage of the positive detections were actually positive. Here the model makes 3 detections of Cat. But are all of them correct? No! Only 2 of them are correct (examples 0 and 2) and one is wrong (example 3). So the proportion of correct detections is 2 out of 3, which is (2 / 3) * 100 = 66.67%.
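
The same count can be reproduced with a few lines of Python over the table above (the label strings are taken straight from the example):

ground_truth = ["Cat", "Cat", "Cat", "Dog", "Dog"]
prediction   = ["Cat", "Dog", "Cat", "Cat", "Dog"]

# true positives for Cat: predicted Cat and actually Cat (examples 0 and 2)
tp = sum(1 for gt, pred in zip(ground_truth, prediction) if gt == pred == "Cat")
# all Cat detections made by the model (examples 0, 2 and 3)
detected = sum(1 for pred in prediction if pred == "Cat")

precision_cat = tp / detected                 # 2 / 3 = 0.6667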

Now coming to the formulation, here:

TP (true positive): predicting something as positive when it is actually positive. If Cat is our positive class, this means predicting that something is a cat when it is actually a cat.

FP (false positive): predicting something as positive when it is not actually positive, i.e., labelling something positive "falsely".

Now, the number of correct detections of a certain class is the number of TPs for that class. Besides these, the model also predicted some other examples as positive which were not actually positive; these are the false positives (FP). So, correct or wrong, the total number of positive detections made by the model is TP + FP. The fraction of correct detections among all detections of that class is therefore TP / (TP + FP), which is the precision for that class.

As with recall, we can generalize this formula to any number of classes: take one class at a time, treat it as the positive class and all the other classes as negative classes, and repeat the same process for every class to get a precision for each of them.

You can also calculate precision and recall in another way (basically another way of thinking about the same formulae). For Cat, first count the number of examples that have Cat in both the ground truth and the model's prediction (i.e., the number of TPs). For precision, divide this count by the number of "Cat"s in the model's prediction; for recall, divide it by the number of "Cat"s in the ground truth. This gives exactly the same result as the precision and recall formulae; if it isn't clear why, think for a while about what TP, FP, TN and FN actually mean.
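
Putting the last two paragraphs together, here is a rough sketch that applies this counting view to every class of the toy example; it is just one way to write it, not code from the original answer:

ground_truth = ["Cat", "Cat", "Cat", "Dog", "Dog"]
prediction   = ["Cat", "Dog", "Cat", "Cat", "Dog"]

for cls in sorted(set(ground_truth)):
    # examples where ground truth and prediction both say this class (the TPs)
    matches   = sum(1 for gt, pred in zip(ground_truth, prediction) if gt == pred == cls)
    predicted = sum(1 for pred in prediction if pred == cls)    # TP + FP
    actual    = sum(1 for gt in ground_truth if gt == cls)      # TP + FN

    precision = matches / predicted if predicted else 0.0
    recall    = matches / actual if actual else 0.0
    print(cls, round(precision, 2), round(recall, 2))

# prints: Cat 0.67 0.67
#         Dog 0.5 0.5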




Answer 3:


If you have difficulty understanding precision and recall, consider reading this

https://medium.com/seek-product-management/8-out-of-10-brown-cats-6e39a22b65dc



Source: https://stackoverflow.com/questions/21413256/understanding-recall-and-precision
