Uncertainty as a Fairness Measure


Kuzucu S., Cheong J., Gunes H., KALKAN S.

Journal of Artificial Intelligence Research, vol. 81, pp. 307-335, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 81
  • Publication Date: 2024
  • DOI: 10.1613/jair.1.16041
  • Journal Name: Journal of Artificial Intelligence Research
  • Indexed In: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Applied Science & Technology Source, Compendex, Computer & Applied Sciences, INSPEC, Linguistic Bibliography, zbMATH, Directory of Open Access Journals
  • Pages: pp. 307-335
  • Middle East Technical University Affiliated: Yes

Abstract

Unfair predictions of machine learning (ML) models impede their broad acceptance in real-world settings. Tackling this arduous challenge first necessitates defining what it means for an ML model to be fair. This has been addressed by the ML community with various measures of fairness that depend on the prediction outcomes of the ML models, either at the group level or the individual level. These fairness measures are limited in that they utilize point predictions, neglecting their variances, or uncertainties, making them susceptible to noise, missingness, and shifts in data. In this paper, we first show that an ML model may appear to be fair with existing point-based fairness measures but biased against a demographic group in terms of prediction uncertainties. Then, we introduce new fairness measures based on different types of uncertainties, namely, aleatoric uncertainty and epistemic uncertainty. We demonstrate on many datasets that (i) our uncertainty-based measures are complementary to existing measures of fairness, and (ii) they provide more insights about the underlying issues leading to bias.
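To make the core idea concrete, the sketch below illustrates one plausible way to compare prediction uncertainties across demographic groups. This is not the paper's actual method; all names, the synthetic data, and the choice of uncertainty proxies (ensemble variance for epistemic, mean member entropy for aleatoric) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): an ensemble's predicted
# positive-class probabilities for each sample, plus a binary group attribute.
n_samples, n_models = 200, 10
group = rng.integers(0, 2, size=n_samples)  # demographic group (0 or 1)

# Simulate group 1 receiving noisier (more uncertain) ensemble predictions,
# even though point predictions may look similar across groups.
base = rng.uniform(0.2, 0.8, size=n_samples)
noise_scale = np.where(group == 1, 0.15, 0.03)
probs = np.clip(
    base[:, None] + rng.normal(0.0, noise_scale[:, None], size=(n_samples, n_models)),
    0.0, 1.0,
)

# Epistemic-uncertainty proxy: variance of the ensemble's predictions per sample.
epistemic = probs.var(axis=1)

# Aleatoric-uncertainty proxy: mean binary entropy of individual members.
eps = 1e-12
member_entropy = -(probs * np.log(probs + eps) + (1 - probs) * np.log(1 - probs + eps))
aleatoric = member_entropy.mean(axis=1)

def uncertainty_gap(u, g):
    """Group-level fairness gap: absolute difference of mean uncertainty."""
    return abs(u[g == 0].mean() - u[g == 1].mean())

print(f"epistemic gap: {uncertainty_gap(epistemic, group):.4f}")
print(f"aleatoric gap: {uncertainty_gap(aleatoric, group):.4f}")
```

A large gap for either uncertainty type would signal a disparity that point-prediction-based fairness measures (e.g., demographic parity on the mean prediction) could miss entirely.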