Multi-Task Learning by Leveraging Non-Contact Heart Rate for Robust Facial Emotion Recognition
Citations
Web of Science: 2
Scopus: 4

Abstract

Building a robust facial expression recognition (FER) system remains challenging due to the emotional ambiguity of facial expressions. Recent approaches combine facial expressions with physiological signals to design multi-modal emotion recognition systems. However, these approaches require physical contact with the skin because they rely on wearable sensor modalities. To meet the demand for a non-contact emotion recognition system, we use a convolutional recurrent neural network (CRNN) to extract facial features and utilize these features to estimate the heart rate (HR) from face image sequences. In particular, unlike conventional feature fusion methods, we propose a multi-task learning (MTL) framework that simultaneously predicts emotion and HR from face image sequences using a single model. Experiments on the DEAP and MAHNOB-HCI datasets demonstrate that the proposed multi-task framework improves FER accuracy by up to 6.85% and achieves superior performance compared with state-of-the-art methods.
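The multi-task design described in the abstract can be illustrated with a short sketch: a shared convolutional-recurrent backbone encodes a face image sequence, and two task-specific heads jointly predict an emotion class and a heart-rate value, trained with a combined loss. This is a minimal PyTorch sketch under assumed details; the backbone layers, hidden size, number of emotion classes, and the loss weight lambda_hr are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskCRNN(nn.Module):
    """Minimal CRNN with a shared backbone and two heads:
    emotion classification and heart-rate regression (illustrative sketch)."""
    def __init__(self, num_emotions=4, hidden_size=128):
        super().__init__()
        # Per-frame CNN feature extractor (not the paper's exact backbone)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (B*T, 64)
        )
        # Recurrent layer aggregates per-frame features over time
        self.rnn = nn.GRU(64, hidden_size, batch_first=True)
        # Task-specific heads share the recurrent representation
        self.emotion_head = nn.Linear(hidden_size, num_emotions)
        self.hr_head = nn.Linear(hidden_size, 1)

    def forward(self, frames):                            # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)                            # final hidden state: (1, B, hidden)
        shared = h.squeeze(0)
        return self.emotion_head(shared), self.hr_head(shared).squeeze(-1)

# Joint multi-task loss: cross-entropy for emotion, L1 for heart rate,
# weighted by lambda_hr (an assumed hyperparameter, not from the paper).
def multitask_loss(emotion_logits, hr_pred, emotion_target, hr_target, lambda_hr=0.5):
    ce = nn.functional.cross_entropy(emotion_logits, emotion_target)
    l1 = nn.functional.l1_loss(hr_pred, hr_target)
    return ce + lambda_hr * l1
```

Sharing the recurrent representation is what lets the HR estimation task regularize the emotion branch; in the paper this joint training is reported to improve FER accuracy by up to 6.85%.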

Keywords

Task analysis; Heart rate; Multitasking; Emotion recognition; Face recognition; Feature extraction; Facial animation; Artificial neural networks; Facial expression recognition; multi-task learning; heart rate; deep neural network; NEURAL-NETWORK; VALENCE; AROUSAL
Title
Multi-Task Learning by Leveraging Non-Contact Heart Rate for Robust Facial Emotion Recognition
Authors
Ji, Yerim; Dong, Suh-Yeon
DOI
10.1109/ACCESS.2024.3422403
Publication Date
2024-07
Type
Article
Journal
IEEE Access
Volume
12
Pages
92175–92180