Original scientific paper
https://doi.org/10.32985/ijeces.13.9.9
ResViT: A Framework for Deepfake Videos Detection
Wasim Ahmad
orcid.org/0000-0002-4841-6481
Institute of Information Sciences, Academia Sinica, Taiwan; Department of Computer Science, National Chengchi University, Taiwan
Imad Ali
Department of Computer Science, University of Swat, KP, Pakistan
Adil Shahzad
Institute of Information Sciences, Academia Sinica, Taiwan; Department of Computer Science, National Chengchi University, Taiwan
Ammarah Hashmi
Institute of Information Sciences, Academia Sinica, Taiwan; Institute of Human-Centered Computing, National Tsing Hua University, Taiwan
Faisal Ghaffar
System Design Engineering Department, University of Waterloo, Canada
Abstract
Deepfake techniques make it easy to synthesize videos or images using deep learning, which poses a substantial threat, particularly to well-known public figures. Spreading false news or synthesizing a person's video or image can harm individuals and erode trust in social and electronic media. To identify deepfake images efficiently, we propose ResViT, which uses a ResNet model for feature extraction and a vision transformer for classification. The ResViT architecture extracts features from the frames of input videos and uses them to classify the input as fake or real. Moreover, ResViT places equal emphasis on data pre-processing, as it improves performance. We conducted extensive experiments comparing our results with the baseline model on five widely used datasets: Celeb-DF, Celeb-DFv2, FaceForensics++, FF-Deepfake Detection, and DFDC2. Our analysis revealed that ResViT outperformed the baseline, achieving prediction accuracies of 80.48%, 87.23%, 75.62%, 78.45%, and 84.55% on the Celeb-DF, Celeb-DFv2, FaceForensics++, FF-Deepfake Detection, and DFDC2 datasets, respectively.
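To make the described pipeline concrete, the following is a minimal PyTorch sketch of a ResNet feature extractor feeding a transformer-encoder classifier. The backbone choice (ResNet-50), embedding size, number of attention heads and layers, and the CLS-token arrangement are illustrative assumptions, not details reported in the paper.

import torch
import torch.nn as nn
from torchvision import models

class ResViTSketch(nn.Module):
    # Hypothetical configuration: hyper-parameters below are guesses for
    # illustration only, not the values used by the authors.
    def __init__(self, embed_dim=768, num_heads=8, num_layers=6, num_classes=2):
        super().__init__()
        # ResNet-50 backbone with its pooling and classification head removed;
        # for a 224x224 frame it yields a 7x7 grid of 2048-dim feature vectors.
        backbone = models.resnet50(weights=None)
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-2])
        # Project each spatial feature vector to the transformer embedding size.
        self.proj = nn.Linear(2048, embed_dim)
        # Learnable classification token and positional embeddings (49 tokens + CLS).
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, 50, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Binary head: real vs. fake.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                          # x: (B, 3, 224, 224) face crops
        feats = self.feature_extractor(x)          # (B, 2048, 7, 7)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, 49, 2048)
        tokens = self.proj(tokens)                 # (B, 49, embed_dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])            # classify from the CLS token

if __name__ == "__main__":
    model = ResViTSketch()
    frames = torch.randn(2, 3, 224, 224)           # two dummy video frames
    print(model(frames).shape)                     # torch.Size([2, 2])

In this sketch the transformer attends over the 49 spatial feature tokens produced by the ResNet, and the prediction is read from the CLS token; per-frame predictions would then be aggregated over a video to label it fake or real.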
Keywords
deepfake; detection; vision transformer; GAN;
Hrčak ID:
286300
URI
Publication date:
6.12.2022.