Detail View
- Lee, Seongmin;
- Yoon, Hyunse;
- Kang, Jiwoo;
- Kim, Jungsu;
- Son, Jiwan;
- Huh, Jungwoo;
- Lee, Sanghoon
WEB OF SCIENCE: 0
SCOPUS: 2
Abstract
Existing 3D face alignment methods primarily aim to achieve accurate alignment results for a static facial image. While these methods show strong alignment performance under large poses, occlusion, and extreme lighting conditions, they often produce trembling artifacts in video-based sequential 3D face alignment. Reducing temporal misalignment remains challenging because a single misaligned frame can propagate errors to other frames along the temporal axis. To address this issue, we propose a novel temporal discriminating scheme that learns the distribution gap between face alignment results and ground-truth face animation. By leveraging the discrimination results as a guide, the proposed method effectively aligns 3D faces to the input video while reducing temporal trembling artifacts. To learn the distribution gap effectively, we introduce a multi-discriminating scheme that separately discriminates facial animation based on identity and expression changes, enabling the proposed method to produce stabilized alignment results, especially under dynamic and fast movement. Extensive qualitative and quantitative experiments confirm that our method outperforms state-of-the-art 3D face alignment methods by producing stabilized animation results in video.
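The abstract describes an adversarial idea: discriminators learn to separate ground-truth facial animation from trembling alignment output, and their verdicts guide the aligner toward temporally stable results. The toy sketch below illustrates only that general mechanism, not the paper's actual networks: the 1-D "trajectories", the `threshold` and slope parameters, and the split into an identity- and an expression-sensitive discriminator are all hypothetical stand-ins chosen so the example is self-contained.

```python
import math

def bce(pred, label):
    # Binary cross-entropy for a single probability in (0, 1).
    eps = 1e-7
    p = min(max(pred, eps), 1 - eps)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def temporal_feature(seq):
    # Mean absolute frame-to-frame change of a 1-D landmark trajectory;
    # a trembling alignment yields a larger value than a smooth animation.
    diffs = [abs(b - a) for a, b in zip(seq, seq[1:])]
    return sum(diffs) / len(diffs)

def toy_discriminator(seq, threshold):
    # Maps the temporal feature to a "real animation" probability via a
    # logistic function; `threshold` and the slope 50 are hypothetical.
    return 1.0 / (1.0 + math.exp(50.0 * (temporal_feature(seq) - threshold)))

# Hypothetical trajectories: ground truth is smooth, the aligner's
# prediction trembles around the same path.
gt = [0.1 * t for t in range(10)]
pred = [0.1 * t + (0.1 if t % 2 else -0.1) for t in range(10)]

# Two discriminators with different sensitivities stand in for the
# identity- and expression-level discriminators named in the abstract.
d_identity = lambda s: toy_discriminator(s, threshold=0.15)
d_expression = lambda s: toy_discriminator(s, threshold=0.18)

# Adversarial-style losses: the discriminators should call ground truth
# "real" (label 1) and the trembling prediction "fake" (label 0), while
# the aligner is penalized whenever its output is detected as fake.
disc_loss = (bce(d_identity(gt), 1) + bce(d_identity(pred), 0)
             + bce(d_expression(gt), 1) + bce(d_expression(pred), 0))
align_loss = bce(d_identity(pred), 1) + bce(d_expression(pred), 1)
```

In a real training loop the two losses would alternate gradient updates: `disc_loss` trains the discriminators, and `align_loss` pushes the aligner's temporal statistics toward those of genuine animation, which is the stabilizing guidance the abstract refers to.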
Keywords
- Title
- Video-Based Stabilized 3D Face Alignment Using Temporal Multi-Discrimination
- Authors
- Lee, Seongmin; Yoon, Hyunse; Kang, Jiwoo; Kim, Jungsu; Son, Jiwan; Huh, Jungwoo; Lee, Sanghoon
- Publication Date
- 2023-09
- Type
- Conference Paper
- Journal Name
- 2023 IEEE 25th International Workshop on Multimedia Signal Processing, MMSP 2023
- Volume
- 2023
- Pages
- 1-6