Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

DeepFake detection algorithm based on improved vision transformer

Full metadata record
dc.contributor.author: Heo, Young-Jin
dc.contributor.author: Yeo, Woon-Ha
dc.contributor.author: Kim, Byung-Gyu
dc.date.accessioned: 2023-11-08T06:45:51Z
dc.date.available: 2023-11-08T06:45:51Z
dc.date.issued: 2023-04-01
dc.identifier.issn: 0924-669X
dc.identifier.issn: 1573-7497
dc.identifier.uri: https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/151897
dc.description.abstract: A DeepFake is a manipulated video created with generative deep learning technologies, such as generative adversarial networks or autoencoders, which anyone can use. With the rise of DeepFakes, classifiers based on convolutional neural networks (CNNs) that can distinguish them have been actively developed. However, CNNs are prone to overfitting and cannot model the relations between local regions as global features of an image, resulting in misclassification. In this paper, we propose an efficient vision transformer model for DeepFake detection that extracts both local and global features. We combine vector-concatenated CNN features with patch-based positional embeddings so that all positions interact to localize the artifact region. For the distillation token, the logit is trained with binary cross-entropy through the sigmoid function. Adding this distillation generalizes the proposed model and improves performance. In experiments, the proposed model outperforms the SOTA model by 0.006 AUC and 0.013 f1 score on the DFDC test dataset. Of 2,500 fake videos, the proposed model correctly predicts 2,313 as fake, whereas the SOTA model predicts at best 2,276. With the ensemble method, the proposed model outperforms the SOTA model by 0.01 AUC. On the Celeb-DF (v2) dataset, the proposed model achieves a high performance of 0.993 AUC and 0.978 f1 score.
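The abstract states that the distillation-token logit is trained with binary cross-entropy through the sigmoid function. A minimal sketch of that objective in plain Python follows; the `combined_loss` helper and the equal weighting of the class-token and distillation-token losses are illustrative assumptions, not details taken from the paper:

```python
import math

def sigmoid(z):
    # squash a raw logit into a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def bce(logit, label):
    # binary cross-entropy on a single logit, applied after the sigmoid
    p = sigmoid(logit)
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def combined_loss(cls_logit, dist_logit, label, alpha=0.5):
    # hypothetical combination: weight the class-token and
    # distillation-token BCE losses by alpha / (1 - alpha)
    return alpha * bce(cls_logit, label) + (1 - alpha) * bce(dist_logit, label)

# example: a fake video (label 1) with fairly confident logits
loss = combined_loss(2.0, 1.5, 1)
```

A confident correct prediction yields a small loss, while an uncertain logit of 0 gives the maximum-entropy value log(2), which is why the sigmoid-plus-BCE pairing is a natural fit for a single binary (real/fake) output token.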
dc.format.extent: 16
dc.language: English
dc.language.iso: ENG
dc.publisher: SPRINGER
dc.title: DeepFake detection algorithm based on improved vision transformer
dc.type: Article
dc.publisher.location: Netherlands
dc.identifier.doi: 10.1007/s10489-022-03867-9
dc.identifier.scopusid: 2-s2.0-85134994902
dc.identifier.wosid: 000830340500005
dc.identifier.bibliographicCitation: APPLIED INTELLIGENCE, v.53, no.7, pp. 7512-7527
dc.citation.title: APPLIED INTELLIGENCE
dc.citation.volume: 53
dc.citation.number: 7
dc.citation.startPage: 7512
dc.citation.endPage: 7527
dc.type.docType: Article; Early Access
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.subject.keywordAuthor: Deep learning
dc.subject.keywordAuthor: Deepfake detection
dc.subject.keywordAuthor: Distillation
dc.subject.keywordAuthor: Generative adversarial network
dc.subject.keywordAuthor: Vision transformer
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Byung Gyu
College of Engineering (Division of Artificial Intelligence Engineering)
