Detail View
- Ji, Yerim
- Dong, Suh-Yeon
Citation counts: Web of Science: 2; Scopus: 4

Abstract
Building a robust facial expression recognition (FER) system remains challenging due to the emotional ambiguity of facial expressions. Recent approaches employ both facial expressions and physiological signals to design multi-modal emotion recognition systems. However, these approaches require physical contact with the skin because they rely on sensor modalities. To meet the demand for a non-contact emotion recognition system, we use a convolutional recurrent neural network (CRNN) to extract facial features and utilize these features to estimate the heart rate (HR) from face image sequences. In particular, unlike the conventional feature fusion method, we propose a multi-task learning (MTL) framework that simultaneously predicts the emotion and HR from face image sequences using a single model. Experiments on the DEAP and MAHNOB-HCI datasets demonstrate that the proposed multi-task framework improves FER accuracy by up to 6.85% and achieves superior performance over state-of-the-art methods.
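The abstract describes hard parameter sharing: a single backbone extracts facial features that feed two task heads, one for emotion classification and one for HR regression, trained jointly. A minimal NumPy sketch of this pattern follows; the layer sizes, the random-projection stand-in for the paper's CRNN backbone, and the task-balancing weight `alpha` are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper)
FEAT_DIM = 64      # shared facial-feature size from the backbone
N_EMOTIONS = 4     # e.g., quadrants of the valence-arousal space

# Shared backbone stand-in: in the paper this is a CRNN over face
# image sequences; here a fixed random projection is a placeholder.
W_shared = rng.normal(size=(128, FEAT_DIM))

# Task-specific heads (hard parameter sharing: both consume the
# same shared representation)
W_emotion = rng.normal(size=(FEAT_DIM, N_EMOTIONS))  # classification head
W_hr = rng.normal(size=(FEAT_DIM, 1))                # HR regression head

def forward(x):
    """Return (emotion logits, HR estimate) from one input vector."""
    feat = np.tanh(x @ W_shared)          # shared representation
    logits = feat @ W_emotion             # emotion task output
    hr = feat @ W_hr                      # heart-rate task output
    return logits, hr

def mtl_loss(logits, hr, y_emotion, y_hr, alpha=0.5):
    """Weighted sum of cross-entropy (emotion) and squared error (HR).
    alpha is an assumed task-balancing weight, not from the paper."""
    z = logits - logits.max()             # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    ce = -log_probs[y_emotion]            # cross-entropy for true class
    se = (hr - y_hr) ** 2                 # squared error for HR
    return alpha * ce + (1 - alpha) * se

x = rng.normal(size=128)                  # stand-in input features
logits, hr = forward(x)
loss = mtl_loss(logits, hr.item(), y_emotion=2, y_hr=72.0)
```

Both heads backpropagate through the shared weights during training, which is how the HR task can regularize the emotion features and yield the reported FER accuracy gain.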
Keywords
- Title
- Multi-Task Learning by Leveraging Non-Contact Heart Rate for Robust Facial Emotion Recognition
- Authors
- Ji, Yerim; Dong, Suh-Yeon
- Publication Date
- 2024-07
- Type
- Article
- Journal
- IEEE Access
- Volume
- 12
- Pages
- 92175-92180