Coot Bird Optimization-Based ESkip-ResNet Classification for Deepfake Detection
DOI: https://doi.org/10.47852/bonviewJCCE42022955

Keywords: Deepfake Detection Challenge, deep learning, image enhancement, Residual Network, Coot Bird Optimization

Abstract
With increasing digitization, threats to data are emerging at a growing pace. Fake images can be difficult to identify, yet creating them requires no specialized memory, computational equipment, or hardware. Consequently, this study uses deep learning to achieve accurate detection. To improve detection performance, the study strengthened the boundary separating the background from the object, and applied an adaptive 2D Wiener filter for preprocessing to attenuate noise unintentionally amplified during image enhancement. This paper proposes an Efficient Skip Connections based Residual Network (ESkip-ResNet) that augments the Residual Network (ResNet) with skip connections. The ESkip-ResNet architecture comprises several stages with progressively deeper residual blocks to enhance the classification process. ESkip-ResNet uses residual blocks with identity mapping through the skip connections of the ResNet architecture, together with effective downsampling techniques and stable batch normalization layers, which jointly yield stable and dependable performance. The Coot Bird Optimization (CBO) method is used to fine-tune the hyperparameters of the proposed classifier. The proposed ESkip-ResNet model achieved 98.9% accuracy and 98.8% precision. Comprehensive test results demonstrate that CBO-based ESkip-ResNet outperforms other approaches in fake detection. The proposed research also considered every kind of facial alteration, improving the model's robustness, lightweight nature, and generalizability; it was able to identify every type of facial manipulation found in images taken from the Deepfake Detection Challenge dataset.
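To make the pipeline described in the abstract concrete, the following is a minimal sketch, assuming PyTorch and SciPy. The Wiener-filter preprocessing step and the residual block with an identity skip connection, batch normalization, and strided downsampling mirror the components named above; the class name ESkipResidualBlock, the layer sizes, and the kernel choice are illustrative assumptions, not the authors' published configuration, and the CBO hyperparameter search is omitted.

```python
import torch
import torch.nn as nn
from scipy.signal import wiener


def preprocess(image_2d, kernel=5):
    """Attenuate noise with an adaptive 2D Wiener filter, as the abstract describes.
    The kernel size is an assumed value."""
    return wiener(image_2d, mysize=kernel)


class ESkipResidualBlock(nn.Module):
    """Hypothetical residual block: two 3x3 convolutions with batch normalization,
    plus an identity skip connection. A strided 1x1 projection on the shortcut
    handles downsampling when the resolution or channel count changes."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # Identity mapping when shapes match; projection shortcut otherwise.
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))  # skip connection
```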
Received: 28 March 2024 | Revised: 10 May 2024 | Accepted: 28 May 2024
Conflicts of Interest
The authors declare that they have no conflicts of interest to this work.
Data Availability Statement
The DFDC data that support the findings of this study are openly available at https://doi.org/10.48550/arXiv.2102.11126, reference number [20]. The Deepfake Detection Challenge datasets that support the findings of this study are openly available at https://www.kaggle.com/c/deepfake-detection-challenge.
Author Contribution Statement
V. Gokula Krishnan: Conceptualization, Methodology, Writing – original draft, Writing – review & editing, Project administration, Funding acquisition. R. Vadivel: Software, Visualization. K. Sankar: Validation, Resources. K. Sathyamoorthy: Formal analysis, Data curation. B. Prathusha Laxmi: Investigation, Supervision.
License
Copyright (c) 2024 Authors
This work is licensed under a Creative Commons Attribution 4.0 International License.