Video-Based Stabilized 3D Face Alignment Using Temporal Multi-Discrimination
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Seongmin | - |
dc.contributor.author | Yoon, Hyunse | - |
dc.contributor.author | Kang, Jiwoo | - |
dc.contributor.author | Kim, Jungsu | - |
dc.contributor.author | Son, Jiwan | - |
dc.contributor.author | Huh, Jungwoo | - |
dc.contributor.author | Lee, Sanghoon | - |
dc.date.accessioned | 2024-01-29T08:00:29Z | - |
dc.date.available | 2024-01-29T08:00:29Z | - |
dc.date.issued | 2023-09 | - |
dc.identifier.issn | 2163-3517 | - |
dc.identifier.issn | 2473-3628 | - |
dc.identifier.uri | https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/159618 | - |
dc.description.abstract | Existing 3D face alignment methods primarily aim to achieve accurate alignment results for a static facial image. While these methods perform strongly under large poses, occlusion, and extreme lighting conditions, they often produce trembling artifacts in video-based sequential 3D face alignment. Reducing temporal misalignment remains a challenging task because a single misaligned frame can propagate errors to other frames along the temporal axis. To address this issue, we propose a novel temporal discriminating scheme that learns the distribution gap between face alignment results and ground-truth face animation. By leveraging the discrimination results as a guide, the proposed method effectively aligns 3D faces to the input video while reducing temporal trembling artifacts. To learn the distribution gap effectively, we introduce a multi-discriminating scheme that discriminates facial animation separately with respect to identity and expression changes. This enables the proposed method to produce stabilized alignment results, especially under dynamic and fast movement. Extensive qualitative and quantitative experiments confirm that our method outperforms state-of-the-art 3D face alignment methods by producing stabilized animation results in video. | - |
dc.format.extent | 6 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Video-Based Stabilized 3D Face Alignment Using Temporal Multi-Discrimination | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/MMSP59012.2023.10337645 | - |
dc.identifier.scopusid | 2-s2.0-85181587265 | - |
dc.identifier.bibliographicCitation | 2023 IEEE 25th International Workshop on Multimedia Signal Processing, MMSP 2023, v.2023, pp 1 - 6 | - |
dc.citation.title | 2023 IEEE 25th International Workshop on Multimedia Signal Processing, MMSP 2023 | - |
dc.citation.volume | 2023 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 6 | - |
dc.type.docType | Conference Paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | 3D face alignment | - |
dc.subject.keywordAuthor | stabilized face alignment | - |
dc.subject.keywordAuthor | temporal discrimination | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10337645/ | - |