7th IEEE Global Conference on Signal and Information Processing (IEEE GlobalSIP), Ottawa, Canada, 11-14 November 2019
A new method for objectively measuring pain using computer vision and machine learning technologies is presented. Our method seeks to detect pain from facial expressions, especially when a patient cannot communicate pain verbally. The approach relies on facial muscle-based Action Units (AUs), defined by the Facial Action Coding System (FACS), that are associated with pain. Because manual FACS coding by human experts is too labor-intensive to be practical in clinical settings, recent research has sought computer-based solutions to the problem. An effective automated system for this task is proposed here: an end-to-end deep learning-based Automated Facial Expression Recognition (AFER) system that jointly detects the complete set of pain-related AUs. A facial video clip is processed frame by frame with a deep convolutional neural network to estimate a vector of AU likelihood values for each frame, and these per-frame vectors are concatenated to form a table of AU values for the clip. Our results show significantly improved performance compared with other known methods.
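To make the frame-by-frame pipeline concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PyTorch/OpenCV setup, and the network architecture, the AU count `NUM_PAIN_AUS`, and the names `AUNet` and `au_table_from_clip` are illustrative assumptions rather than details from the paper. It only shows how per-frame AU likelihood vectors from a CNN can be stacked into a clip-level AU table.

```python
# Illustrative sketch only: per-frame AU likelihood estimation with a CNN,
# concatenated into a clip-level (frames x AUs) table. All names and the
# architecture are assumptions, not the paper's actual model.
import cv2
import numpy as np
import torch
import torch.nn as nn

NUM_PAIN_AUS = 9  # assumed size of the pain-related AU set; the paper defines the actual set


class AUNet(nn.Module):
    """Toy CNN standing in for the deep AU-detection backbone."""

    def __init__(self, num_aus: int = NUM_PAIN_AUS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_aus)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # joint AU likelihoods in [0, 1]


def au_table_from_clip(video_path: str, model: AUNet) -> np.ndarray:
    """Run the model frame by frame and stack the AU vectors into a table."""
    cap = cv2.VideoCapture(video_path)
    rows = []
    model.eval()
    with torch.no_grad():
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            face = cv2.resize(frame, (128, 128))  # placeholder for face detection/alignment
            x = torch.from_numpy(face).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            rows.append(model(x).squeeze(0).numpy())
    cap.release()
    return np.stack(rows) if rows else np.empty((0, NUM_PAIN_AUS))
```

In this sketch the resulting array has one row per frame and one column per pain-related AU, matching the "table of AU values" described above; a downstream classifier would then operate on that table to produce a clip-level pain decision.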