IEEE SIGNAL PROCESSING MAGAZINE, vol. 38, no. 6, pp. 39-49, 2021 (SCI-Expanded)
Given the increasing prevalence of facial analysis technology, bias in these tools has become an ever greater source of concern. Several studies have highlighted the pervasiveness of such discrimination, and many have proposed solutions to mitigate it. Despite these efforts, understanding, investigating, and mitigating bias in facial affect analysis remain understudied problems. In this work, we aim to provide a guide by 1) giving an overview of the various definitions of bias and measures of fairness within the field of facial affective signal processing and 2) categorizing the algorithms and techniques that can be used to investigate and mitigate bias in facial affective signal processing. We present the opportunities and limitations of the current body of work, discuss our findings, and propose areas that call for further research.