One Metric to Measure them All: Localisation Recall Precision (LRP) for Evaluating Visual Detection Tasks


Oksuz K., Cam B. C., Kalkan S., Akbaş E.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Publication Date: 2021
  • DOI: 10.1109/TPAMI.2021.3130188
  • Journal Name: IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Journal Indexes: Science Citation Index Expanded, Scopus, Academic Search Premier, PASCAL, ABI/INFORM, Aerospace Database, Applied Science & Technology Source, Business Source Elite, Business Source Premier, Communication Abstracts, Compendex, Computer & Applied Sciences, EMBASE, INSPEC, MEDLINE, Metadex, zbMATH, Civil Engineering Abstracts
  • Keywords: Localisation Recall Precision, Average Precision, Panoptic Quality, Object Detection, Keypoint Detection, Instance Segmentation, Panoptic Segmentation, Performance Metric, Threshold


Despite being widely used as a performance measure for visual detection tasks, Average Precision (AP) is limited in (i) reflecting localisation quality, (ii) interpretability, (iii) robustness to the design choices regarding its computation, and (iv) applicability to outputs without confidence scores. Panoptic Quality (PQ), a measure proposed for evaluating panoptic segmentation (Kirillov et al., 2019), does not suffer from these limitations but is limited to panoptic segmentation. In this paper, we propose Localisation Recall Precision (LRP) Error as the performance measure for all visual detection tasks. LRP Error, initially proposed only for object detection by Oksuz et al. (2018), does not suffer from the aforementioned limitations and is applicable to all visual detection tasks. We also introduce Optimal LRP (oLRP) Error as the minimum LRP Error obtained over confidence scores, both to evaluate visual detectors and to obtain optimal thresholds for deployment. We provide a detailed comparative analysis of LRP with AP and PQ, and use nearly 100 state-of-the-art visual detectors from seven visual detection tasks (i.e. object detection, keypoint detection, instance segmentation, panoptic segmentation, visual relationship detection, zero-shot detection and generalised zero-shot detection) on ten datasets to empirically show that LRP provides richer and more discriminative information than its counterparts. Code available at:
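To make the abstract's two quantities concrete, the sketch below implements the LRP Error for a single class as defined in the paper, LRP = [Σ_TP (1 − IoU)/(1 − τ) + |FP| + |FN|] / (|TP| + |FP| + |FN|), together with oLRP as the minimum LRP over confidence-score thresholds. It assumes detection-to-ground-truth matching has already been done; the `(score, iou)` input format and function names are illustrative choices, not the authors' reference implementation.

```python
def lrp_error(tp_ious, num_fp, num_fn, tau=0.5):
    """LRP Error for one class, given the result of matching.

    tp_ious : IoUs of true-positive matches (each must exceed tau)
    num_fp  : unmatched detections (false positives)
    num_fn  : unmatched ground truths (false negatives)
    tau     : TP validation threshold (0.5 by convention)
    """
    total = len(tp_ious) + num_fp + num_fn
    if total == 0:
        return 0.0
    # Localisation component: each TP contributes its normalised IoU error.
    loc = sum((1.0 - iou) / (1.0 - tau) for iou in tp_ious)
    return (loc + num_fp + num_fn) / total


def olrp(detections, num_gt, tau=0.5):
    """Optimal LRP: minimum LRP over confidence thresholds.

    detections : list of (score, iou) pairs, one per detection;
                 iou is 0.0 for detections matched to no ground truth
                 (a hypothetical input format for this sketch).
    num_gt     : number of ground-truth instances for this class.
    """
    best = lrp_error([], 0, num_gt, tau)  # keep no detections at all
    for thr in sorted({s for s, _ in detections}, reverse=True):
        kept = [iou for s, iou in detections if s >= thr]
        tp_ious = [iou for iou in kept if iou > tau]
        num_fp = len(kept) - len(tp_ious)
        num_fn = num_gt - len(tp_ious)
        best = min(best, lrp_error(tp_ious, num_fp, num_fn, tau))
    return best
```

For example, with two ground truths and detections `[(0.9, 0.8), (0.8, 0.0)]`, keeping only the first detection gives LRP = ((1−0.8)/0.5 + 0 + 1)/2 = 0.7, which is lower than either keeping both (0.8) or none (1.0), so oLRP = 0.7. The score threshold achieving this minimum is exactly the per-class deployment threshold the paper advocates.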