
Available online at www.sciencedirect.com

ScienceDirect

Procedia Computer Science 258 (2025) 3640–3648
www.elsevier.com/locate/procedia

International Conference on Machine Learning and Data Engineering

A Framework for Deepfake Detection using Convolutional Neural Network and Deep Features

Soundarya B C a, Gururaj H L a, Naveen Kumar C M b*

a Department of Information Technology, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India
b Department of Computer Science and Business System, Malnad College of Engineering, Hassan, India

Abstract

With the advancement of Artificial Intelligence, facial recognition has become a crucial biometric feature. Deepfake technology leverages AI and can create hyper-realistic digitally manipulated videos of people appearing to say or do things that never occurred. The emergence of Generative Adversarial Networks (GANs) has further enabled the creation of fake visual content with astonishing realism. This technology has diverse applications, such as in the film industry, where it allows for video recreation without reshooting, creating awareness videos, restoring the voices of those who have lost them, and updating movie scenes at low cost. However, this rapid advancement also presents significant challenges. The proliferation of synthetic images raises severe concerns about their societal impact, particularly in terms of potential misuse for harassment and blackmail. Therefore, developing robust deepfake detection models is imperative. This study evaluates the performance of a proposed ResNet34 model in deepfake detection. We utilize the FaceForensics++ dataset to train and assess the model, incorporating images generated by four popular deepfake techniques. Our experimental results demonstrate that integrating local ternary patterns (LTP) and edge detection-based features with the modified ResNet34 model achieves superior performance, attaining 97.5% accuracy and surpassing other approaches.
© 2025 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0)
Peer-review under responsibility of the scientific committee of the International Conference on Machine Learning and Data Engineering

Keywords: Deepfake; Artificial intelligence; Deep learning

1. Introduction

In recent years, the rapid advancement of artificial intelligence has led to significant developments in media generation technologies, one of the most notable being deepfakes. Deepfakes use AI, particularly GANs, to create highly realistic synthetic images, audio, and video content. These technologies allow media manipulation to depict people saying or doing things they never did, often with near-perfect realism.

1877-0509 © 2025 The Authors. Published by Elsevier B.V.
DOI: 10.1016/j.procs.2025.04.619

While deepfakes have beneficial applications, such as in the entertainment industry for special effects or voice
restoration, they also pose serious threats. The ability to create convincing fake media has raised significant concerns
about misinformation, privacy breaches, and even threats to public safety. Deepfakes have been used to spread fake
news, create non-consensual explicit content, and manipulate public opinion, highlighting the urgent need for effective
detection methods.

Detecting deepfakes is a complex challenge due to the ever-improving quality of synthetic content. As deepfake
technology evolves, the differences between real and fake media become increasingly difficult for humans and
automated systems to detect. This has led to a growing body of research focused on developing reliable and robust
deepfake detection techniques.

In this study, we aim to contribute to this field by exploring advanced deep-learning models for deepfake detection.
We focus on the ResNet34 architecture and utilize the FaceForensics++ dataset, which contains a diverse set of
manipulated media, to train and evaluate our model. Our approach also integrates additional features, such as Linear
Ternary Patterns (LTP) and edge detection, to enhance the detection accuracy.

The results of our research demonstrate promising performance, with our model achieving high accuracy in
distinguishing between authentic and manipulated media. However, the ongoing advancements in deepfake generation
necessitate continuous improvements in detection methods to avoid potential misuse.

The structure of the paper is organized as follows. Section 2 provides an overview of related work in deepfake detection, emphasizing the importance of automated detection methods. Section 3 presents our proposed model and provides a detailed description of the current study. The analysis of results obtained for test images is given in Section 4. Finally, Section 5 concludes the work and outlines potential future directions for research in this area.

2. Literature Survey

In the existing research, various ways to detect deepfakes have been investigated. Nonetheless, deep learning-based
detection algorithms have gained popularity in various real-time applications due to their high accuracy. This section
will delve into important literature relevant to our research, illuminating the evolution and usefulness of these complex
detection approaches.

The rapid advancement of alteration technology has made it increasingly difficult to differentiate between real and altered facial images [7]. The emergence of face-generation tools has enabled anyone to create convincing fake photos, posing significant societal risks [9]. These manipulated images can spread misinformation and deception, leading to serious consequences. In contrast to traditional identification methods, Convolutional Neural Networks (CNNs) provide an effective solution for detecting deepfake content by analyzing visual cues [10]. Computers can detect alterations and patterns in images that may go unnoticed by the human eye.

A study in [11] showcased the effectiveness of CNNs in identifying deepfakes, achieving a precision rate of 70% across three image datasets by employing a CNN with fine-tuning techniques. This highlights the capability of such networks to detect manipulated elements accurately, thereby mitigating the risks associated with the widespread dissemination of manipulated facial images in different settings.

Researchers have employed various methods to identify deepfakes by leveraging the DFDC database [12]. Their findings indicated that the EfficientNet B7 fusion strategy yielded strong outcomes [13]. Sabir et al. [14] proposed a recurrent neural network that examines the sequence of frames in videos to detect deepfake modifications.

The authors proposed a novel way to detect fake face photos utilizing a Variational Autoencoder (VAE) architecture within the framework of one-class anomaly detection [15]. Unlike standard binary classification tasks, their model, OC-FakeDect, is trained solely on real facial photos; it is then used to determine whether unseen face images are potential anomalies, indicating the presence of counterfeits. Furthermore, the DenseNet architecture has been demonstrated to be computationally more efficient due to its feed-forward design, in which each layer is linked to every other layer [16].

The authors in [17] proposed two separate approaches for identifying images produced by Generative Adversarial Networks. Initially, they used statistical analysis to compare raw pixel values and extracted characteristics. They then showed that the properties extracted from fake samples differ from those of genuine data, giving a second method for identification.

Many researchers have investigated different methods for detecting deepfakes. People typically blink once every few seconds, whereas deepfake videos show fewer blinks; Jung et al. [18] exploited this disparity. Another indicator is eye color. Matern et al. [19] discovered that hair and eye hues can change significantly in fake photos, unlike in natural images, and devised a method that focuses on the eyes and looks for discrepancies in eye color. This makes distinguishing deepfake videos or pictures from real ones easier. A summary of existing deep learning-based detection methods for deepfakes is shown in Table 1.

Table 1. Comparative Analysis.

Method                        | Approach                                         | Dataset            | Key Features
CNN                           | Analyzes visual cues in images                   | Various datasets   | Effective in detecting alterations in facial images
EfficientNet B7 Fusion        | Fusion strategy for improved detection           | DFDC               | Combines features for enhanced accuracy
RNN                           | Analyzes sequence of frames in videos            | Video datasets     | Detects temporal inconsistencies in deepfake videos
Variational Autoencoder (VAE) | One-class anomaly detection approach             | Real facial photos | Trained solely on real images to identify anomalies
DenseNet                      | Computationally efficient feed-forward design    | Various datasets   | Each layer connected to every other layer
Statistical Analysis          | Compares raw pixel values and extracted features | Generated samples  | Differentiates between real and generated images
Eye Color Discrepancy         | Focuses on eye color changes in deepfakes        | Various datasets   | Exploits known inconsistencies in eye color

3. Proposed Methodology

The system flow diagram of the proposed approach is depicted in Figure 1. The main objective of this study is to determine whether an image is authentic or has been created using deepfake technology. In the initial stage, the input images undergo preprocessing steps such as resizing, mean subtraction, and normalization. Next, the model is trained using various face images, including natural images and those generated by various GANs [20]. In the third stage, the pre-trained model, augmented with features from the ResNet34 model, distinguishes between real and fake images. Finally, the performance of the model is evaluated using test data. This evaluation involves estimating the likelihood of changes observed in generated images.

Fig. 1. Proposed Architecture.

3.1. Dataset

The datasets included 25,000 photos taken from the Kaggle FaceForensics++ (FF++) dataset [7]. Among them,
15,550 images were designated for training, and 6,450 images were set aside for evaluating the model's functionality.
Furthermore, 3,000 pictures were set aside to confirm the model's efficacy. In Figure 2, the dataset examples are
shown.

Fig. 2. FF++ Dataset.

3.2. Data Pre-Processing

Data preprocessing is a crucial step in developing any machine learning model. It involves transforming raw data into a structure on which the model can be trained. The following preprocessing steps are performed in our proposed approach:
● Image resizing: The detected face region is saved as an image and resized before being stored. A 299 × 299 input image is required by the ResNet34 model used here. Sample facial images extracted from the frames are depicted in Figure 2.
● Mean subtraction: We subtract 0.5 from each scaled pixel value so that the average pixel intensity is centered around zero. In addition to preventing biases in the model, this step centers the pixel values around zero.
● Normalization: For normalization, each pixel value is divided by 255, scaling it to the range [0, 1]; this helps avoid problems during training such as exploding or vanishing gradients. Mean subtraction is then applied to shift the data to a specified range, such as [-0.5, 0.5].
Thus, these preprocessing techniques convert images into a format more suitable for the model to learn from, ensuring that the data is ready for training and that the model can learn and generalize from the images more successfully.
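As a rough illustration, the resize/scale/centre pipeline above can be sketched with NumPy alone. The function name `preprocess` and the nearest-neighbour resize are our own simplifications, not the paper's implementation (which would typically use OpenCV or Keras utilities):

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 299) -> np.ndarray:
    """Resize (crude nearest-neighbour), scale to [0, 1], then centre around zero."""
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size      # source row index for each target row
    cols = np.arange(size) * w // size      # source column index for each target column
    resized = image[rows][:, cols]          # nearest-neighbour resize to size x size
    scaled = resized.astype(np.float32) / 255.0   # normalization to [0, 1]
    return scaled - 0.5                     # mean subtraction: centre around zero
```

After this step every pixel lies in [-0.5, 0.5], matching the centring described above.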

3.3. Model training

● CNN: CNN is an outstanding deep learning method designed especially for examining visual input, like
images. It uses several filter layers and pooling procedures to extract significant features from the input data
[21, 22]. With this architecture, CNNs can efficiently and accurately carry out tasks like object identification,
segmentation, and image classification by effectively capturing hierarchical patterns and structures found in
images.

● Dlib: Dlib possesses algorithms for identifying landmarks that detect key facial features such as the mouth,
nose, and eyes. Utilizing this information enables the analysis of expressions and the detection of anomalies
that may indicate the presence of a deepfake.

● Dropout Regularization: Dropout is a method employed in training neural networks to combat overfitting. It involves randomly deactivating neurons by setting their outputs to zero during training, thereby improving the network's capacity for generalization.
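A minimal NumPy sketch of inverted dropout, for illustration only; in practice a framework layer such as Keras's `Dropout` handles this, and the fixed seed here is just to make the sketch reproducible:

```python
import numpy as np

def dropout(x: np.ndarray, rate: float = 0.5, rng=None) -> np.ndarray:
    """Inverted dropout: zero a fraction of activations, rescale the survivors."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate       # keep each unit with probability 1 - rate
    return x * mask / (1.0 - rate)           # rescale so the expected activation is unchanged
```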

● Local Ternary Patterns (LTP): The model utilizes Local Ternary Patterns (LTP) to capture texture details by encoding intensity changes as patterns. LTP is favored for its robustness to noise and lighting variations, making it practical for detecting differences between authentic and synthetic media content.
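The ternary encoding can be illustrated on a single 3x3 patch: each neighbour is coded +1 if clearly brighter than the centre, -1 if clearly darker, and 0 otherwise. The function name `ltp_codes` and the threshold value are our own illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def ltp_codes(patch: np.ndarray, t: int = 5) -> np.ndarray:
    """Ternary-code the 8 neighbours of a 3x3 patch against its centre pixel."""
    c = int(patch[1, 1])
    neighbours = np.delete(patch.flatten(), 4).astype(int)  # drop the centre pixel
    codes = np.zeros(8, dtype=int)
    codes[neighbours >= c + t] = 1           # clearly brighter than the centre
    codes[neighbours <= c - t] = -1          # clearly darker than the centre
    return codes                             # values within ±t map to 0 (noise tolerance)
```

The ±t dead zone is what gives LTP its tolerance to small noise and lighting shifts compared to binary patterns.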

● Edge Detection: We use edge detection methods like the Canny edge detector to find unexpected changes in pixel intensity gradients. By examining the generated edge maps, we can identify changes or distortions introduced by deepfake manipulation in object boundaries or facial features. Edge detection is a valuable tool for spotting abnormal changes introduced by deepfake techniques, since it draws attention to the boundaries between different sections of an image.
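A gradient-magnitude sketch using Sobel kernels illustrates the core idea; the full Canny detector additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding (in practice one would call OpenCV's `cv2.Canny`):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    """Per-pixel Sobel gradient magnitude; strong values mark intensity edges."""
    h, w = img.shape
    out = np.zeros((h, w))
    f = img.astype(float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = f[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(win * SOBEL_X)       # horizontal intensity change
            gy = np.sum(win * SOBEL_X.T)     # vertical intensity change
            out[y, x] = np.hypot(gx, gy)
    return out
```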

● ResNet34: ResNet34 is a sophisticated neural network design typically employed for image recognition tasks. It is well known for its ability to recognize complex patterns in images, and its 34 layers make it deeper than many other networks. This depth makes ResNet34 very good at learning from examples. It uses skip connections to keep training from stalling, which promotes better learning.
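The skip-connection idea reduces to adding a block's input back to its transformed output before the activation. This NumPy one-liner is a conceptual sketch of `y = ReLU(F(x) + x)`, not the actual ResNet34 layer:

```python
import numpy as np

def residual_block(x: np.ndarray, transform) -> np.ndarray:
    """y = ReLU(F(x) + x): the identity shortcut lets information (and
    gradients) bypass the learned transform F."""
    return np.maximum(transform(x) + x, 0.0)
```

Because the identity path is always present, even an unhelpful transform cannot block signal flow, which is what mitigates the vanishing-gradient problem in deep stacks.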

3.4. Pretrained Model

The ResNet34 model is built to efficiently capture details from input image data, setting it apart from plain CNN architectures by integrating skip connections to tackle challenges associated with training deep networks. The initial segment of our ResNet34 design comprises sets of three convolutional layers with filter sizes of 64, 128, 256, and 512. These convolutional layers are paired with batch normalization and ReLU activation functions to improve learning capacity. Residual blocks, consisting of convolutional layers, batch normalization, ReLU activation, and shortcut connections, are inserted between the layers. These shortcuts facilitate direct information flow across layers, addressing issues like the vanishing

gradient problem. Following the residual blocks, global average pooling is employed to reduce spatial dimensions, followed by flattening to a 1D vector in preparation for the fully connected layers. The fully connected stage consists of two layers, with the final layer containing nodes corresponding to the classification task's class count. A Softmax activation function is applied to produce class probabilities. ReLU activation functions are used throughout the network, except for the output layer, which employs Softmax. The Xavier initialization method ensures stable training, effectively initializing network weights for efficient learning and convergence.
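The pooling-plus-Softmax head described above can be sketched as follows. The `weights` matrix stands in for the learned parameters of the fully connected layer (biases omitted for brevity), so this is an illustrative shape check rather than the trained model:

```python
import numpy as np

def gap_softmax(feature_maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Global-average-pool (H, W, C) feature maps, apply a linear head, then Softmax."""
    pooled = feature_maps.mean(axis=(0, 1))  # one scalar per channel: shape (C,)
    logits = pooled @ weights                # fully connected layer: (C,) @ (C, classes)
    exp = np.exp(logits - logits.max())      # subtract max for numerical stability
    return exp / exp.sum()                   # class probabilities summing to 1
```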

Many deep learning models and frameworks are now available. We chose ResNet34 for our experiments in this paper for several key reasons. First, ResNet34 performs well according to deepfake test-environment benchmarks, which provide a platform where researchers can test their models; Figure 3 shows how different models perform. ResNet34 stood out for its high performance compared to other models, making it a strong choice for our experiments.

1. Optimized Model
Improving the ResNet34 framework for detecting deepfake faces begins by evaluating how well the existing ResNet34 model performs its intended task. Transfer learning is utilized to enhance the model's capabilities. This involves removing the fully connected layers from the original ResNet34 model and using the remaining layers to compute flattened feature vectors, creating concise feature representations for input images. Global Average Pooling (GAP) is then employed to summarize the feature maps, and a Softmax layer is incorporated for classification (real versus fake).
Enhancements have been applied to the ResNet34 design to improve feature extraction, such as integrating dropout layers to address overfitting. Feature extraction techniques focused on detecting deepfakes have also been explored and implemented. These features are merged with the output of the pooling layer of ResNet34, enabling the model to leverage previously learned patterns alongside the distinctive features identified through these methods.

2. Fine-tuning model
The model is adjusted by keeping the ResNet34 layers frozen and training new layers to address the detection of deepfake content. Additionally, supplementary attributes are computed and merged with ResNet34's features into a single representation, which is forwarded to a fresh fully connected layer for classification. Lastly, the model is optimized using the feature vectors derived from ResNet34 and the other feature extractors, with tweaks to the hyperparameters for enhanced effectiveness. This approach focuses on enhancing the effectiveness and reliability of ResNet34 in identifying deepfake images by utilizing pre-trained weights and advanced feature extraction methods. ResNet34 demonstrates strong performance across the FF++ datasets and is particularly preferred due to its accessibility as an open-source model, with comprehensive documentation simplifying model training and adaptation.
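Merging the deep features with the hand-crafted LTP and edge descriptors into one input for the new classification layer amounts to vector concatenation. The helper name and the dimensions below are illustrative, not the paper's exact sizes:

```python
import numpy as np

def fuse_features(resnet_vec: np.ndarray, ltp_vec: np.ndarray,
                  edge_vec: np.ndarray) -> np.ndarray:
    """Concatenate deep and hand-crafted descriptors into one classifier input."""
    return np.concatenate([resnet_vec, ltp_vec, edge_vec])
```

The fused vector is then fed to the new fully connected layer, so the classifier sees both learned and texture/edge evidence at once.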

4. Results and Discussion

The proposed work uses a machine with a high-end processor (1 GHz) and a GPU (4 GB NVIDIA GeForce GTX 1050 Ti). The Python 3.10 programming language is used in the Anaconda environment. The Python libraries NumPy, Pandas, Matplotlib, Scikit-Learn, Streamlit, and OpenCV, and the deep learning frameworks TensorFlow and Keras, are used for implementation.

For each dataset associated with a deepfake image generation method, a subset comprising 20% of the videos was designated and reserved for final evaluation. Additionally, the model trained on the Deepfakes dataset was evaluated on the other three datasets to ascertain whether the detection performance exhibited any overfitting tendencies with respect to the fake image generation methods. Instead of solely focusing on the model's accuracy as a binary classifier, the evaluation process prioritized determining the true positive rate (TPR) and true negative rate (TNR) of the classification outcomes.
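TPR and TNR follow directly from the confusion-matrix counts. A small helper, assuming the hypothetical label convention 1 = fake and 0 = real, might look like:

```python
import numpy as np

def tpr_tnr(y_true: np.ndarray, y_pred: np.ndarray):
    """True positive rate (fakes correctly flagged) and true negative rate
    (reals correctly kept), from binary label arrays."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting TPR and TNR separately exposes a detector that scores high accuracy by over-predicting one class, which plain accuracy would hide.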

Figure 2 displays sample images from various classes. Initially, the pre-trained CNN, ResNet34, and a modified
version of ResNet34 are evaluated for their initial performance. The results obtained from this evaluation are presented
in Table 2 for reference.

Table 2. Performance Analysis.

Model                       | Accuracy (%) | F1-score
CNN                         | 95.2         | 2.63
ResNet34                    | 97.15        | 3.21
ResNet34+LTP+Edge detection | 97.50        | 4.56

Fig. 3. Accuracy comparison of the CNN, ResNet34, and ResNet34+LTP+Edge detection models.

Fig. 4. F1-Score comparison of the CNN, ResNet34, and ResNet34+LTP+Edge detection models.

The tuned ResNet34 model achieved outstanding performance, accurately predicting all images from the dashboard camera with an impressive accuracy of 97.5%, as shown in Figures 3 and 4. Notably, all predictions were made on images generated from the testing dataset, and the outputs were clear and free from noise or blurring. Figure 5 shows the accuracy curve for the proposed model.

Fig. 5. Accuracy Curve.

Comparing ResNet34 with and without the additional features shows that ResNet34 with additional features exhibited a slightly higher detection rate, surpassing plain ResNet34 by approximately 1%. However, the plain ResNet34 experienced a decrease in accuracy of 2-3% compared to the fine-tuned ResNet34 model. Nonetheless, the advantages of employing a significantly smaller model may outweigh these differences, particularly in scenarios where the end application is a mobile application.

5. Conclusion

In conclusion, this research focused on deepfake detection using advanced deep learning techniques, emphasizing
the ResNet34 architecture. We evaluated ResNet34's effectiveness in distinguishing authentic from manipulated
media content generated by various deepfake techniques through meticulous experimentation and analysis. Our
findings indicate that augmenting ResNet34 with additional features slightly improves detection rates for deepfake
images compared to the baseline model. However, detection accuracy for other datasets experienced a minor decrease
of 2-3% compared to the fine-tuned ResNet34 model. Notably, the tuned ResNet34 model demonstrated exceptional
accuracy of 97.50% in predicting dashboard camera images, highlighting its reliability in real-world applications.
In future work, we can enhance the evaluation of the model's performance by employing cross-validation
techniques.

Acknowledgements

There is no conflict of interest.

References

[1] Guarnera, Luca, Oliver Giudice, and Sebastiano Battiato. "Fighting deepfake by exposing the convolutional traces on images." IEEE Access
8 (2020): 165085-165098.
[2] Kohli, Aditi, and Abhinav Gupta. "Detecting deepfake, faceswap and face2face facial forgeries using frequency CNN." Multimedia Tools and Applications 80, no. 12 (2021): 18461-18478.

[3] Ismail, Aya, Marwa Elpeltagy, Mervat S. Zaki, and Kamal Eldahshan. "A new deep learning-based methodology for video deepfake
detection using XGBoost." Sensors 21, no. 16 (2021): 5413.
[4] Taeb, Maryam, and Hongmei Chi. "Comparison of deepfake detection techniques through deep learning." Journal of Cybersecurity and
Privacy 2, no. 1 (2022): 89-106.
[5] Seferbekov, S. DFDC 1st Place Solution. Available online: https://www.kaggle.com/c/deepfake-detection-challenge (accessed on 1 March
2022).
[6] Raza, Ali, Kashif Munir, and Mubarak Almutairi. "A novel deep learning approach for deepfake image detection." Applied Sciences 12, no.
19 (2022): 9820.
[7] Wu, Jian, Kai Feng, Xu Chang, and Tongfeng Yang. "A forensic method for deepfake image based on face recognition." In Proceedings of
the 2020 4th High Performance Computing and Cluster Technologies Conference & 2020 3rd International Conference on Big Data and
Artificial Intelligence, pp. 104-108. 2020.
[8] Chang, Xu, Jian Wu, Tongfeng Yang, and Guorui Feng. "Deepfake face image detection based on improved VGG convolutional neural network." In 2020 39th Chinese Control Conference (CCC), pp. 7252-7256. IEEE, 2020.
[9] Montserrat, Daniel Mas, Hanxiang Hao, Sri K. Yarlagadda, Sriram Baireddy, Ruiting Shao, János Horváth, Emily Bartusiak et al.
"Deepfakes detection with automatic face weighting." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition
workshops, pp. 668-669. 2020.
[10] Güera, David, and Edward J. Delp. "Deepfake video detection using recurrent neural networks." In 2018 15th IEEE international conference
on advanced video and signal based surveillance (AVSS), pp. 1-6. IEEE, 2018.
[11] Kohli, Aditi, and Abhinav Gupta. "Detecting deepfake, faceswap and face2face facial forgeries using frequency CNN." Multimedia Tools and Applications 80, no. 12 (2021): 18461-18478.
[12] Dolhansky, Brian, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. "The deepfake detection
challenge (dfdc) dataset." arXiv preprint arXiv:2006.07397 (2020).
[13] Pokroy, Artem A., and Alexey D. Egorov. "EfficientNets for deepfake detection: Comparison of pretrained models." In 2021 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), pp. 598-600. IEEE, 2021.
[14] Sabir, Ekraam, Jiaxin Cheng, Ayush Jaiswal, Wael AbdAlmageed, Iacopo Masi, and Prem Natarajan. "Recurrent convolutional strategies for face manipulation detection in videos." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
[15] Khalid, Hasam, and Simon S. Woo. "Oc-fakedect: Classifying deepfakes using one-class variational autoencoder." In Proceedings of the
IEEE/CVF conference on computer vision and pattern recognition workshops, pp. 656-657. 2020.
[16] Pasupuleti, Venkat Rao, Prasanth Reddy Tathireddy, Gopi Dontagani, and Shaik Abdul Rahim. "Deepfake Detection Using Custom
Densenet." In 2023 14th International Conference on Computing Communication and Networking Technologies (ICCCNT), pp. 1-5. IEEE,
2023.
[17] Rana, Md Shohel, and Andrew H. Sung. "Deepfakestack: A deep ensemble-based learning technique for deepfake detection." In 2020 7th
IEEE international conference on cyber security and cloud computing (CSCloud)/2020 6th IEEE international conference on edge computing
and scalable cloud (EdgeCom), pp. 70-75. IEEE, 2020.
[18] Jung, Tackhyun, Sangwon Kim, and Keecheon Kim. "Deepvision: Deepfakes detection using human eye blinking pattern." IEEE Access
8 (2020): 83144-83154.
[19] Matern, F.; Riess, C.; Stamminger, M. Exploiting visual artifacts to expose deepfakes and face manipulations. In Proceedings of the 2019
IEEE Winter Applications of Computer Vision Workshops (WACVW), Waikoloa Village, HI, USA, 1–7 January 2019.
[20] Zhao, Hanqing, Wenbo Zhou, Dongdong Chen, Tianyi Wei, Weiming Zhang, and Nenghai Yu. "Multi-attentional deepfake detection." In
Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2185-2194. 2021.
[21] Pan, Deng, Lixian Sun, Rui Wang, Xingjian Zhang, and Richard O. Sinnott. "Deepfake detection through deep learning." In 2020
IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (BDCAT), pp. 134-143. IEEE, 2020.
[22] Nirkin, Yuval, Lior Wolf, Yosi Keller, and Tal Hassner. "Deepfake detection based on the discrepancy between the face and its context."
arXiv preprint arXiv:2008.12262 (2020).
