BenchMetrics: a systematic benchmarking method for binary classification performance metrics


Canbek G., Taşkaya Temizel T., Sağıroğlu Ş.

NEURAL COMPUTING & APPLICATIONS, vol.33, no.21, pp.14623-14650, 2021 (Peer-Reviewed Journal)

  • Publication Type: Article
  • Volume: 33 Issue: 21
  • Publication Date: 2021
  • Doi Number: 10.1007/s00521-021-06103-6
  • Journal Name: NEURAL COMPUTING & APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded, Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Biotechnology Research Abstracts, Compendex, Computer & Applied Sciences, Index Islamicus, INSPEC, zbMATH
  • Page Numbers: pp.14623-14650
  • Keywords: Binary classification, Performance metric, Performance evaluation, Benchmarking, Meta-metric, Accuracy

Abstract

This paper proposes BenchMetrics, a systematic benchmarking method for analyzing and comparing the robustness of binary classification performance metrics derived from the confusion matrix of a crisp classifier. BenchMetrics introduces new concepts such as meta-metrics (metrics about metrics) and metric space, and has been tested on fifteen well-known metrics, including balanced accuracy, normalized mutual information, Cohen's kappa, and the Matthews correlation coefficient (MCC), along with two recently proposed metrics from the literature, optimized precision and index of balanced accuracy. The method formally presents a pseudo-universal metric space in which a metric is calculated over all permutations of confusion matrix elements yielding the same sample size. It evaluates the metrics and their metric spaces in a two-stage benchmark based on eighteen newly proposed criteria and finally ranks the metrics by aggregating the criteria results. The first, mathematical evaluation stage analyzes the metrics' equations, specific confusion matrix variations, and the corresponding metric spaces. The second stage, which includes seven novel meta-metrics, evaluates the robustness aspects of the metric spaces. We interpreted each benchmarking result and comparatively assessed the effectiveness of BenchMetrics against the limited number of comparison studies in the literature. The results demonstrate that widely used metrics have significant robustness issues and that MCC is the most robust and therefore recommended metric for binary classification performance evaluation.
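To make the metric-space idea concrete, the following minimal Python sketch enumerates every confusion matrix (TP, FN, FP, TN) whose elements sum to a fixed sample size n and evaluates a metric, here MCC, on each. This is an illustration of the concept only, not the paper's implementation; the function names `mcc` and `metric_space` and the zero-denominator convention are assumptions for this example.

```python
import math

def mcc(tp, fn, fp, tn):
    """Matthews correlation coefficient; returns 0.0 when any
    marginal sum is zero (a common convention, assumed here)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def metric_space(metric, n):
    """Evaluate `metric` on every confusion matrix (TP, FN, FP, TN)
    with TP + FN + FP + TN == n; the resulting list is an
    illustrative 'metric space' for sample size n."""
    space = []
    for tp in range(n + 1):
        for fn in range(n + 1 - tp):
            for fp in range(n + 1 - tp - fn):
                tn = n - tp - fn - fp
                space.append(metric(tp, fn, fp, tn))
    return space
```

For a sample size n there are C(n+3, 3) such matrices (e.g. 286 for n = 10), so the space grows cubically in n; meta-metrics would then summarize properties of this value distribution.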