DeepFake detection algorithm based on improved vision transformer
Citations (Web of Science): 69
Citations (Scopus): 98

Abstract

A DeepFake is a manipulated video made with generative deep learning technologies, such as generative adversarial networks or autoencoders, that anyone can utilize. With the increase in DeepFakes, classifiers based on convolutional neural networks (CNNs) that can distinguish them have been actively developed. However, CNNs are prone to overfitting and cannot model the relations between local regions as a global feature of the image, resulting in misclassification. In this paper, we propose an efficient vision transformer model for DeepFake detection that extracts both local and global features. We combine vector-concatenated CNN features with patch-based positional embeddings so that all positions interact to localize the artifact region. For the distillation token, the logit is trained using binary cross-entropy through a sigmoid function. Adding this distillation generalizes the proposed model and improves its performance. In experiments, the proposed model outperforms the SOTA model by 0.006 AUC and 0.013 F1 score on the DFDC test dataset. Of 2,500 fake videos, the proposed model correctly predicts 2,313 as fake, whereas the SOTA model predicts at most 2,276. With the ensemble method, the proposed model outperformed the SOTA model by 0.01 AUC. On the Celeb-DF (v2) dataset, the proposed model achieves a high performance of 0.993 AUC and a 0.978 F1 score.
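The abstract states that the distillation token's logit is trained with binary cross-entropy applied through a sigmoid. A minimal numerical sketch of that loss term (the function names are hypothetical illustrations, not taken from the paper's code):

```python
import math

def sigmoid(x: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def distillation_bce(logit: float, label: float) -> float:
    """Binary cross-entropy on the sigmoid of the distillation-token logit.

    label is 1.0 for a fake video and 0.0 for a real one (an assumed
    convention; the paper defines its own labeling).
    """
    p = sigmoid(logit)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

# An uninformative logit (0.0) yields a loss of ln 2 ≈ 0.693,
# while a confident correct logit drives the loss toward 0.
loss_uncertain = distillation_bce(0.0, 1.0)
loss_confident = distillation_bce(10.0, 1.0)
```

In practice this would be computed batch-wise with a numerically stable fused op (e.g. a logits-based BCE in a deep learning framework) rather than element by element.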

Keywords

Deep learning; Deepfake detection; Distillation; Generative adversarial network; Vision transformer
Title
DeepFake detection algorithm based on improved vision transformer
Authors
Heo, Young-Jin; Yeo, Woon-Ha; Kim, Byung-Gyu
DOI
10.1007/s10489-022-03867-9
Publication Date
2023-04
Type
Article; Early Access
Journal
Applied Intelligence
Volume 53, Issue 7
Pages
7512–7527