Deepfake Detection Using Machine Learning Algorithms

Md. Shohel Rana, Beddhu Murali, Andrew H. Sung
School of Computing Sciences and Computer Engineering
The University of Southern Mississippi
Hattiesburg, MS 39406, USA
mdrana@usm.edu, beddhumurali@usm.edu, andrewsung@usm.edu

2021 10th International Congress on Advanced Applied Informatics (IIAI-AAI) | 978-1-6654-2420-2/21/$31.00 ©2021 IEEE | DOI: 10.1109/IIAI-AAI53430.2021.00079
Abstract—Deepfake, a new video manipulation technique, has drawn much attention recently. Among the unlawful or nefarious applications, Deepfake has been used for spreading misinformation, fomenting political discord, smearing opponents, or even blackmailing. As the technology becomes more sophisticated and the apps for creating them ever more available, detecting Deepfake has become a challenging task, and accordingly researchers have proposed various deep learning (DL) methods for detection. Though the DL-based approach can achieve good solutions, this paper presents the results of our study indicating that traditional machine learning (ML) techniques alone can obtain superior performance in detecting Deepfake. The ML-based approach is based on the standard methods of feature development and feature selection, followed by training, tuning, and testing an ML classifier. The advantage of the ML approach is that it allows better understandability and interpretability of the model with reduced computational cost. We present results on several Deepfake datasets that are obtained relatively fast with comparable or superior performance to the state-of-the-art DL-based methods: 99.84% accuracy on FaceForensics++, 99.38% accuracy on DFDC, 99.66% accuracy on VDFD, and 99.43% on Celeb-DF datasets. Our results suggest that an effective system for detecting Deepfakes can be built using traditional ML methods.

Keywords—Deepfake, Deep Learning, Machine Learning, Face Manipulation

I. INTRODUCTION

Deepfake manipulation involves swapping an individual's face and facial expressions with others' and creating photorealistic fake videos called Deepfakes. Recent advances in video editing tools, such as FaceApp [1] or FakeApp [2], artificial intelligence (AI), and social networking make it possible to produce Deepfake videos easily. These counterfeit videos can be used for various malicious purposes; the detection and countermeasures of Deepfakes have thus become a fundamental and challenging problem.

Deep learning (DL)-based approaches minimize the human effort in feature engineering; however, as much is done automatically, it increases the complexity of understanding and interpreting the model. Interpretability is a significant disadvantage of DL-based methods, as the model is too difficult to understand with its high nonlinearity and interactions between inputs. Typically, a tradeoff between accuracy and interpretability occurs when analyzing any classical machine learning (ML) method [3]. DL methods are slow to train and require many computing resources, making them resource intensive because of their complexity, a large number of layers, and large volumes of data. On the other hand, ML methods are easier to evaluate and understand.

The above context motivated us to, in this paper, experiment with and evaluate a classical ML method in detecting Deepfakes. The process and the results are described below:

• Create a unique set of features by combining HOG, Haralick, Hu Moments, and Color Histogram features.
• Lessen the data's complexity and make patterns recognizable by splitting a task into two stages: object detection and object recognition.
  ➢ The object detection phase scans an entire image and identifies all possible objects.
  ➢ The object recognition step identifies relevant objects.
• The advantages of the ML method:
  ➢ Provide better understandability and interpretability of the model.
  ➢ Reduce training time and achieve the same level of performance. Using FaceForensics++, DFDC, Celeb-DF, and VDFD datasets with the same amount of train and test samples, the ML methods take between several seconds and a couple of hours, while the most advanced DL algorithm, for instance, ResNet, takes nearly two weeks.

The rest of the paper is organized as follows: Sect. 2 gives a literature review; Sect. 3 describes our proposed method; Sect. 4 presents results and discussions; Sect. 5 draws conclusions and outlines probable future work.

II. LITERATURE REVIEW

Güera and Delp [4] proposed two end-to-end trainable recurrent systems: a CNN that extracts the most critical features and an LSTM for sequential analysis. In [5], four distinct CNNs were trained using real and fake faces. In [6], the authors introduced another method based on eye-blinking detection, in which the authors believe that eye-blinking disappears in Deepfake videos.
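As a concrete illustration, the combined feature set described in the introduction (HOG, Haralick, Hu Moments, and Color Histogram) can be sketched with NumPy alone. This is a minimal sketch under our own assumptions: the function names are ours, the HOG and Haralick components are deliberately simplified stand-ins (a single global gradient-orientation histogram, and two Haralick texture measures from one co-occurrence matrix), not the full descriptors the paper's pipeline would use via a library such as OpenCV or mahotas.

```python
import numpy as np

def color_histogram(img, bins=8):
    # Per-channel intensity histogram, normalized (stand-in for cv2.calcHist).
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
        for c in range(img.shape[-1])
    ]).astype(float) / img[..., 0].size

def haralick_like(gray, levels=8):
    # Contrast and energy from a horizontal gray-level co-occurrence matrix,
    # two of Haralick's texture features (a full set has 13+ measures).
    q = (gray / 256 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= glcm.sum()
    i, j = np.mgrid[:levels, :levels]
    return np.array([((i - j) ** 2 * glcm).sum(), (glcm ** 2).sum()])

def hu_moments(gray):
    # Seven Hu moment invariants from normalized central moments
    # (stand-in for cv2.HuMoments).
    y, x = np.mgrid[:gray.shape[0], :gray.shape[1]]
    m00 = gray.sum()
    xc, yc = (x * gray).sum() / m00, (y * gray).sum() / m00
    def eta(p, q):
        mu = (((x - xc) ** p) * ((y - yc) ** q) * gray).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
        (n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
        (n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
        + 4 * n11 * (n30 + n12) * (n21 + n03),
        (3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
        - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2),
    ])

def gradient_orientation_histogram(gray, bins=9):
    # Global histogram of gradient orientations, a simplified stand-in for HOG.
    gy, gx = np.gradient(gray.astype(float))
    mag, ang = np.hypot(gx, gy), np.degrees(np.arctan2(gy, gx)) % 180
    return np.histogram(ang, bins=bins, range=(0, 180), weights=mag)[0]

def combined_features(img):
    # Concatenate the four feature families into one vector per image.
    gray = img.mean(axis=-1)
    return np.concatenate([
        gradient_orientation_histogram(gray),
        haralick_like(gray),
        hu_moments(gray),
        color_histogram(img),
    ])
```

The resulting vector (here 9 + 2 + 7 + 24 = 42 values) would then feed the feature-selection and classifier-training stages.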
In ML, feature extraction and feature selection are significant problems. Accurate or discriminating features are mandatory to enhance the model's performance, because adverse computation and accuracy loss may result from irrelevant features. A more comprehensive training dataset will usually get reasonable classification accuracy with a variety of features, and the accuracy of classification tends to improve. In pattern recognition, feature extraction involves obtaining color, shape, texture, etc., and feature selection reduces the input dimensionality.

TABLE II. Q-TEST AND F-TEST

Feature Set | Q-stat   | p-value  | F-stat    | p-value
DFF-109     | 9329.233 | 0.000002 | 722.1539  | 0.000003
DFF-117     | 9703.819 | 0.000002 | 799.0783  | 0.000003
DFF-228     | 7273.790 | 0.000002 | 311.6142  | 0.000002
DFF-2296    | 8290.169 | 0.000002 | 512.2134  | 0.000003
DFF-117     | 2026.625 | 0.000002 | 354.6049  | 0.000003
DFF-117     | 7647.576 | 0.000002 | 385.5058  | 0.000003
DFF-133     | 7184.748 | 0.000002 | 294.8315  | 0.000003
DFF-228     | 7497.939 | 0.000002 | 356.0855  | 0.000002
DFF-717     | 5430.538 | 0.000001 | 959.62038 | 0.000001
DFF-2296    | 6705.140 | 0.000001 | 201.8674  | 0.000002
Authorized licensed use limited to: CENTRE FOR DEVELOPMENT OF ADVANCED COMPUTING - CDAC - PUNE. Downloaded on July 09,2024 at [Link] UTC from IEEE Xplore. Restrictions apply.
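The paper does not spell out which Q-test it applies; assuming it is Cochran's Q test for comparing the per-sample correctness of several classifiers, a minimal NumPy sketch of the statistic looks as follows (the toy correctness matrix is an illustration, not the paper's data):

```python
import numpy as np

def cochrans_q(correct):
    """Cochran's Q statistic for k related binary outcomes.

    correct: (n_samples, n_models) 0/1 matrix, 1 = classifier was right.
    Compare the statistic with a chi-square critical value at df = k - 1
    (e.g. 5.991 for k = 3 at alpha = 0.05); a larger Q rejects the null
    hypothesis that all classifiers have the same error rate. Assumes at
    least one row is neither all-0 nor all-1, else the denominator is zero.
    """
    correct = np.asarray(correct)
    n, k = correct.shape
    col = correct.sum(axis=0)   # successes per classifier
    row = correct.sum(axis=1)   # successes per sample
    return (k - 1) * (k * (col ** 2).sum() - col.sum() ** 2) / (
        k * row.sum() - (row ** 2).sum())

# Toy example: two strong classifiers and one weaker one.
correct = np.array([[1, 1, 0]] * 30 + [[1, 1, 1]] * 60 + [[0, 0, 0]] * 10)
print(cochrans_q(correct))   # 60.0, well above 5.991: reject the null
```

Rejecting the null here mirrors the interpretation of Table 2: a p-value below α = 0.05 indicates the compared feature sets/classifiers do not perform identically.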
Based on the results shown in Table 2, the p-value is less than α (α = 0.05), which leads to rejecting the null hypothesis.

Based on the nested cross-validation test, Table 3 shows the outcomes of classical ML-based classifiers.

TABLE III. PERFORMANCE OF CLASSICAL ML-BASED ALGORITHMS
(per dataset and feature set; columns: Model, Prec., Rec., F1-score, ACC, AUC)

DFF-109
  RF   99.08  97.94  98.51  98.58  98.55
  ERT  99.08  97.85  98.46  98.53  98.51
  DT   93.11  92.59  92.85  93.16  93.14
  SGB  91.04  80.68  85.54  86.93  86.69
  MLP  95.79  97.00  96.39  96.53  96.54
  KNN  96.93  96.47  96.70  96.84  96.83
  SVM  93.27  87.65  90.37  91.05  90.91
DFF-117
  RF   99.11  98.06  98.58  98.65  98.62
  ERT  99.17  98.44  98.80  98.86  98.84
  DT   94.01  94.15  94.08  94.32  94.13
  SGB  91.29  81.74  86.25  87.51  87.28
  MLP  97.01  97.29  97.15  97.27  97.26
  KNN  96.88  96.68  96.78  96.91  96.90
  SVM  93.02  84.32  88.46  89.46  89.25
DFF-133
  RF   99.08  98.32  98.70  98.76  98.74
  ERT  99.35  98.59  98.97  99.01  98.99
  DT   94.35  94.29  94.32  94.56  94.55
  SGB  91.92  83.32  87.41  88.50  88.29
  MLP  97.89  98.41  98.15  98.22  98.23
  KNN  97.20  96.82  97.01  97.14  97.13
  SVM  97.02  93.91  95.44  95.70  95.63
DFF-228
  RF   99.35  98.71  99.03  99.07  99.06
  ERT  99.44  99.26  99.35  99.38  99.37
  DT   94.35  94.38  94.37  94.60  94.59
  SGB  93.97  85.68  89.63  90.50  90.31
  MLP  99.00  99.29  99.15  99.18  99.19
  KNN  98.64  98.12  98.38  98.45  98.44
  SVM  96.60  93.71  95.13  95.41  95.34
DFF-717
  RF   98.99  97.76  98.37  98.45  98.42
  ERT  99.07  97.53  98.30  98.38  98.35
  DT   94.57  93.79  94.18  94.45  94.42
  SGB  93.79  86.65  90.08  90.85  90.67
  MLP  99.32  98.97  99.15  99.18  99.17
  KNN  96.58  96.21  96.39  96.55  96.53
  SVM  99.15  99.38  99.27  99.30  99.29
DFF-2296
  RF   99.20  99.00  99.10  99.14  99.02
  ERT  99.23  98.94  99.09  99.15  99.14
  DT   94.63  94.88  94.76  94.97  95.26
  SGB  95.80  89.32  92.45  93.01  92.86
  MLP  99.44  99.76  99.60  99.62  99.19
  KNN  97.83  98.06  97.94  98.03  98.01
  SVM  92.26  90.33  91.28  91.27  91.28
DFF-109
  RF   98.95  98.19  98.57  98.56  98.56
  ERT  98.95  98.49  98.72  98.70  98.71
  DT   92.18  91.53  91.86  91.80  91.79
  SGB  81.54  77.43  79.43  79.72  79.75
  MLP  95.35  94.91  95.13  95.08  95.08
  KNN  96.39  95.85  96.12  96.09  96.10
  SVM  91.79  88.82  90.28  90.33  90.35
DFF-117
  RF   99.10  98.59  98.85  98.84  98.83
  ERT  99.07  98.52  98.80  98.78  98.80
  DT   92.95  93.34  93.14  93.05  93.04
  SGB  83.92  79.24  81.51  81.42  81.85
  MLP  97.01  96.94  96.97  96.94  96.94
  KNN  96.46  95.53  96.19  96.17  96.16
  SVM  91.13  88.63  89.86  89.89  89.90
DFF-133
  RF   99.19  98.74  98.96  98.95  98.96
  ERT  99.24  98.89  99.06  99.08  99.06
DFDC
  SVM  94.60  93.99  94.29  94.22  94.23
DFF-109
  RF   99.28  99.04  99.16  99.15  99.15
  ERT  99.64  99.46  99.55  99.54  99.55
  DT   92.57  92.32  92.44  92.33  92.34
  SGB  81.41  78.78  80.08  80.10  80.12
  MLP  95.97  97.59  96.77  96.69  96.68
  KNN  98.42  98.82  98.62  98.59  98.59
  SVM  94.35  92.76  93.55  93.51  93.52
DFF-117
  RF   99.44  99.27  99.36  99.35  99.35
  ERT  99.58  99.50  99.54  99.52  99.53
  DT   93.51  93.68  93.60  93.47  93.49
  SGB  81.96  78.17  80.02  80.19  80.22
  MLP  98.34  96.97  97.65  97.63  97.64
  KNN  98.41  98.82  98.61  98.59  98.58
  SVM  93.02  91.20  92.10  92.06  92.07
FF++
DFF-133
  RF   99.49  99.32  99.41  99.40  99.39
  ERT  99.65  99.53  99.59  99.60  99.59
  DT   92.83  92.89  92.86  92.76  92.75
  SGB  83.66  80.91  82.27  82.29  82.31
  MLP  98.83  98.08  98.45  98.44  98.44
  KNN  98.54  98.99  98.77  98.74  98.73
  SVM  98.41  97.5   97.95  97.93  97.94
DFF-228
  RF   99.79  99.53  99.66  99.67  99.66
  ERT  99.80  99.75  99.78  99.77  99.78
  DT   93.22  93.68  93.45  93.34  93.33
  SGB  87.34  87.67  87.51  87.30  87.29
  MLP  98.99  99.88  99.43  99.42  99.41
  KNN  99.63  99.68  99.66  99.65  99.64
  SVM  97.86  97.30  97.58  97.55  97.55
Celeb-DF
DFF-717
  RF   99.60  99.26  99.43  99.42  99.43
  ERT  99.52  99.32  99.42  99.41  99.41
  DT   89.29  88.73  89.01  88.90  88.88
  SGB  87.46  86.75  87.10  86.97  86.96
  MLP  99.56  99.22  99.39  99.38  99.38
  KNN  96.75  97.43  97.09  97.04  97.02
  SVM  99.42  99.52  99.47  99.46  99.47
DFF-2296
  RF   99.28  99.42  99.85  99.84  98.84
  ERT  99.15  98.92  99.03  99.02  99.04
  DT   83.32  84.26  84.04  83.76  83.75
  SGB  87.60  90.36  88.96  88.61  88.58
  MLP  99.23  98.60  98.90  98.91  98.90
  KNN  95.92  96.23  96.08  96.01  96.00
  SVM  93.25  88.59  90.86  91.46  91.34
DFF-133
  DT   93.35  92.85  93.10  93.04  93.05
  SGB  83.99  79.82  81.85  82.10  82.13
  MLP  91.75  99.17  95.32  95.07  95.02
  KNN  96.52  95.98  96.25  96.22  96.22
  SVM  96.27  95.56  95.91  95.88  95.89
DFF-228
  RF   99.69  99.18  99.43  99.43  99.43
  ERT  99.56  99.18  99.37  99.36  99.37
  DT   93.79  93.40  93.59  93.53  93.54
  SGB  86.44  83.00  84.68  84.81  84.84
  MLP  99.23  99.27  99.25  99.24  99.23
  KNN  97.85  97.47  97.66  97.64  97.64
  SVM  95.98  95.79  95.89  95.84  95.85
  MLP  99.72  99.57  99.66  99.65  99.64
  KNN  97.00  97.21  97.10  97.12  97.10
  SVM  99.71  99.62  99.67  99.66  99.67
DFF-2296
  RF   99.16  97.55  98.34  98.36  98.35
  ERT  98.90  98.68  98.79  98.79  98.79
  DT   86.44  84.82  85.62  85.79  85.78
  SGB  91.60  93.12  92.35  92.32  92.30
  MLP  99.46  99.63  99.55  99.55  99.54
  KNN  95.96  96.29  96.12  96.13  96.12
DFF-717
  RF   99.06  98.56  98.81  98.82  98.80
  ERT  98.88  98.07  98.48  98.46  98.47
  DT   92.74  92.10  92.42  92.36  92.35
  SGB  87.16  84.74  85.93  85.97  85.98
  MLP  99.34  98.64  98.99  98.98  98.98
  KNN  94.62  94.57  94.60  94.54  94.53
  SVM  98.15  97.81  97.98  97.96  97.96
DFF-2296
  RF   98.36  97.47  97.86  97.85  97.85
  ERT  97.91  97.29  97.60  97.58  97.58
  DT   89.18  89.14  89.16  89.07  89.04
  SGB  86.76  86.21  86.48  86.37  86.37
  MLP  98.85  98.69  98.77  98.78  98.76
  KNN  93.92  93.76  93.84  97.79  93.77
  SVM  93.63  94.13  93.88  93.87  93.87
VDFD
DFF-109
  RF   98.91  98.65  98.78  98.79  98.79
  ERT  99.22  99.23  99.23  99.23  99.22
  DT   91.62  91.14  91.38  91.40  91.41
  SGB  80.29  84.80  82.48  82.01  82.16
  MLP  95.88  96.71  96.30  96.29  96.27
  KNN  98.27  98.70  98.49  98.51  98.49
  SVM  93.28  93.54  93.40  93.42  93.41
DFF-117
  RF   99.05  98.82  98.94  98.94  98.93
  ERT  99.31  99.30  99.28  99.31  99.31
  DT   92.71  91.87  92.29  92.33  92.34
  SGB  81.07  85.46  83.21  82.78  82.79
  MLP  98.59  96.63  97.60  97.63  97.62
  KNN  98.42  98.83  98.62  98.63  98.62
  SVM  92.33  92.03  92.18  92.20  92.21
DFF-133
  RF   99.23  99.03  99.13  99.13  99.13
  ERT  99.40  99.38  99.37  99.38  99.39
  DT   92.53  91.72  92.12  92.17  92.16
  SGB  81.89  85.92  83.85  83.48  83.49
  MLP  99.54  95.44  97.45  97.52  97.50
  KNN  98.39  98.73  98.56  98.58  98.56
  SVM  97.40  96.94  97.17  97.18  97.17
DFF-228
  RF   99.36  99.33  99.35  99.36  99.35
  ERT  99.53  99.62  99.58  99.58  99.58
  DT   93.95  92.98  93.46  93.51  93.50
  SGB  84.23  88.41  86.27  85.95  85.95
  MLP  99.66  99.37  99.51  99.52  99.52
  KNN  99.18  99.21  99.19  99.20  99.20
  SVM  98.66  98.40  98.53  98.54  98.53
DFF-717
  RF   99.21  98.55  98.88  98.89  98.89
  ERT  99.32  99.27  99.29  99.31  99.30
  DT   91.21  90.43  90.82  90.89  90.87
  SGB  87.89  90.76  89.30  89.16  89.14

Our experiment used four different datasets, including FF++, DFDC, Celeb-DF, and VDFD. In this paper, we conducted a series of experiments for six feature sets, for example, DFF-109, DFF-117, DFF-133, DFF-228, DFF-717, and DFF-2296. Based on the experiments using FF++, we see that RF and ERT achieved the best performances for all the feature sets and obtained more than 99% accuracy, while the SVM only achieved the same performance for the DFF-2296 feature set. MLP and KNN achieved 98% accuracy, where SGB and DT achieved about 95% accuracy. For the DFDC dataset, using the DFF-2296 feature set, SVM, RF, and ERT achieved 99% accuracy, where MLP achieved the best performance for all feature sets. Also, it is noticeable that the SVM decreased its performance in detecting Deepfake while decreasing the size of the feature set; for example, its accuracy decreased to 89% for DFF-133 and 91% for DFF-117. For the same feature sets, the SVM again produced the same level of performance for the Celeb-DF dataset. In VDFD dataset-based experiments, it is seen that MLP obtained better performance than RF and ERT for any feature set and obtained more than 99% accuracy. Also, it was noticed that using VDFD, the KNN improved its performance for a certain feature set, for example, DFF-117 and DFF-228.

Fig. 1. Accuracy of ML classifiers using various feature sets: a) DFF-109, b) DFF-117, c) DFF-133, d) DFF-228, e) DFF-717, f) DFF-2296.
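The nested cross-validation protocol behind Table 3 can be sketched as follows. This is a minimal sketch under our own assumptions: the tiny k-NN learner, the candidate k values, and the synthetic two-cluster data are illustrative stand-ins, not the paper's actual setup, which tunes and tests full ML classifiers (RF, ERT, SVM, etc.) on the Deepfake feature sets.

```python
import numpy as np

def kfold_indices(n, k, rng):
    # Shuffle sample indices and split them into k roughly equal folds.
    return np.array_split(rng.permutation(n), k)

def knn_predict(Xtr, ytr, Xte, k):
    # Plain k-nearest-neighbour majority vote (Euclidean distance).
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) >= 0.5).astype(int)

def nested_cv(X, y, ks=(1, 3, 5), outer=5, inner=3, seed=0):
    rng = np.random.default_rng(seed)
    outer_folds = kfold_indices(len(X), outer, rng)
    accs = []
    for i, test_idx in enumerate(outer_folds):
        train_idx = np.concatenate(
            [f for j, f in enumerate(outer_folds) if j != i])
        # Inner loop: choose the hyperparameter on the training portion only.
        inner_folds = kfold_indices(len(train_idx), inner, rng)
        def inner_score(k):
            scores = []
            for m, v in enumerate(inner_folds):
                val = train_idx[v]
                tr = train_idx[np.concatenate(
                    [f for q, f in enumerate(inner_folds) if q != m])]
                scores.append(
                    (knn_predict(X[tr], y[tr], X[val], k) == y[val]).mean())
            return np.mean(scores)
        best_k = max(ks, key=inner_score)
        # Outer loop: unbiased performance estimate on the held-out fold.
        pred = knn_predict(X[train_idx], y[train_idx], X[test_idx], best_k)
        accs.append((pred == y[test_idx]).mean())
    return float(np.mean(accs))

# Synthetic demo: two well-separated clusters, so accuracy lands near 1.0.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(3, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
print(nested_cv(X, y))
```

The point of the nesting is that hyperparameter tuning never sees the outer test fold, so the reported accuracy is not inflated by the tuning step.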
Based on the overall experiments using the FF++, VDFD, Celeb-DF, and DFDC datasets, it is seen that RF, ERT, and MLP obtain state-of-the-art performances by achieving more than 99% accuracy. Besides, DFF-133 and DFF-228 provide the best accuracy. Based on the results presented above, the classical ML methods alone can obtain state-of-the-art performance in detecting Deepfakes.

In summary, each model generates a ROC curve that shows us how good the model is for classifying the original and the Deepfake classes, as seen in Figures 2-5. The curve fills the area between the colored line and the x-axis. A single model/classifier is specified as each color line. The bigger the area covered, the better the models are at classifying the given classes. In other words, the closer the AUC is to 1.0, the better. Based on the experiments, we can see that ERT achieves an AUC of 1.0 using any feature set of the FF++ dataset, but using other datasets, it achieves 0.99. MLP obtains an AUC of 0.99 for any dataset with any feature set. Overall, ERT and RF bring an AUC of 0.99 for any dataset with any feature set. The AUC value varies for KNN and SVM for various feature sets.

Fig. 2. AUC of ML classifiers using all feature sets on the FF++ dataset.

Fig. 3. AUC of ML classifiers using all feature sets on the DFDC dataset.

Fig. 4. AUC of ML classifiers using all feature sets on the Celeb-DF dataset.

Fig. 5. AUC of ML classifiers using all feature sets on the VDFD dataset.
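The AUC values discussed above can be computed directly from classifier scores without plotting the ROC curve, using the rank-based formulation; the scores in the example are made-up illustrations, not values from the paper's experiments.

```python
import numpy as np

def roc_auc(y_true, scores):
    # AUC equals the probability that a randomly chosen positive sample
    # receives a higher score than a randomly chosen negative one
    # (ties count as one half).
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Perfectly separated scores give the ideal AUC of 1.0, matching the best
# curves in Figures 2-5; uninformative (all-equal) scores give 0.5.
print(roc_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```

This pairwise-comparison view is why an AUC close to 1.0 means the colored ROC line covers nearly the whole area above the x-axis.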
V. CONCLUSIONS AND FUTURE WORKS

While modern deep learning (DL) based methods have achieved highly accurate results in detecting Deepfakes, they carry with them disadvantages that include understandability, interpretability, data complexity, and high computational cost. The DL model is highly complicated and cannot easily be interpreted [3]. Further, they are precise to their context and, being very deep, tend to extract the underlying semantics from the images without having a single fingerprint [28]. In contrast, traditional machine learning (ML) methods allow better interpretability; for example, tree-based ML methods illustrate the detection process in the form of a decision tree. For an objective evaluation, therefore, we propose in this paper a classical ML-based method to detect Deepfakes, using the standard method of feature development, extraction, and classifier training and testing. The experimental outcomes prove that the ML techniques alone can obtain state-of-the-art performance in detecting Deepfakes. We believe that the proposed method can lay a basis for developing an effective solution for detecting Deepfakes and other facial manipulations. Our ongoing work includes further research on feature development and selection and ensemble detection for enhanced performance.

REFERENCES

[1] FaceApp, [Link], last accessed: 2021/6/4.
[2] FakeApp, [Link], last accessed: 2021/6/4.
[3] N. Koleva, "When and When Not to Use Deep Learning," [Link], last accessed: 2021/6/4.
[4] D. Güera and E. J. Delp, "Deepfake Video Detection Using Recurrent Neural Networks," 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Auckland, New Zealand, 2018, pp. 1-6.
[5] Y. Li and S. Lyu, "Exposing DeepFake Videos by Detecting Face Warping Artifacts," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 46-52.
[6] Y. Li, M. Chang and S. Lyu, "In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking," 2018 IEEE International Workshop on Information Forensics and Security (WIFS), 2018.
[10] D. Cozzolino, J. Thies, A. Rössler, C. Riess, M. Nießner and L. Verdoliva, "ForensicTransfer: Weakly-supervised domain adaptation for forgery detection," arXiv:1812.02510, 2018.
[11] H. Nguyen, F. Fang, J. Yamagishi and I. Echizen, "Multi-task Learning for Detecting and Segmenting Manipulated Facial Images and Videos," arXiv:1906.06876, 2019.
[12] H. Nguyen, J. Yamagishi and I. Echizen, "Capsule-forensics: Using Capsule Networks to Detect Forged Images and Videos," 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, 2019, pp. 2307-2311.
[13] U. A. Ciftci, I. Demir and L. Yin, "FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals," IEEE Transactions on Pattern Analysis and Machine Intelligence.
[14] F. Matern, C. Riess and M. Stamminger, "Exploiting visual artifacts to expose deepfakes and face manipulations," 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW), 2019, pp. 83-92.
[15] M. S. Rana and A. H. Sung, "DeepfakeStack: A Deep Ensemble-based Learning Technique for Deepfake Detection," 2020 7th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud), New York, USA, 2020, pp. 70-75.
[16] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies and M. Nießner, "FaceForensics++: Learning to Detect Manipulated Facial Images," 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019, pp. 1-11.
[17] B. Dolhansky, J. Bitton, B. Pflaum, J. Lu, R. Howes, M. Wang, and C. C. Ferrer, "The DeepFake Detection Challenge (DFDC) Dataset," arXiv:2006.07397v4, 2020.
[18] Y. Li, X. Yang, P. Sun, H. Qi and S. Lyu, "Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 3204-3213, DOI: 10.1109/CVPR42600.2020.00327.
[19] M. N. Murti and V. S. Devi, "Feature Extraction and Feature Selection, Introduction to Pattern Recognition and Machine Learning," IISc Lecture Notes Series, June 2015, pp. 75-110.
[20] R. M. Haralick, "Statistical and structural approaches to texture," Proceedings of the IEEE, Vol. 67, No. 5, pp. 786-804, 1979.
[21] L. Weng, "Object Detection for Dummies Part 1: Gradient Vector, HOG, and SS," [Link], last accessed: 2021/6/4.
[22] R. Ahmed, "A Take on HOG Feature Descriptor," [Link], last accessed: 2021/6/4.
[23] A. Singh, "Feature Engineering for Images: A Valuable Introduction to the HOG Feature
)HDWXUH Descriptor,"
'HVFULSWRU´ htt
KWWSVELWO\/WM9W
ps://[Link]/2LtjVt9, last ODVW accessed:
DFFHVVHG
RQ,QIRUPDWLRQ)RUHQVLFVDQG6HFXULW\
on Information Forensics and Security (WIFS), :,)6 +RQJ.RQJ+RQJ.RQJ
Hong Kong, Hong Kong,
2021/6/4.
SS±
2018, pp. 1-7. >@ F.
) Alamdar
$ODPGDUand DQGM.0R.5Keyvanpour,
.H\YDQSRXU "A ³$New
1HZColor
&RORU Feature
)HDWXUH Extraction
([WUDFWLRQ
[24]
>@ '$IFKDU91R]LFN-<DPDJLVKLDQG,(FKL]HQ³0HVR1HWD&RPSDFW
[7] D. Afchar, V. Nozick, J. Yamagishi and I. Echizen, "MesoNet: a Compact 0HWKRG Based
Method %DVHG on RQ QuadHistogram,"
4XDG+LVWRJUDP´ Procedia
3URFHGLD Environmental
(QYLURQPHQWDO Sciences,
6FLHQFHV
)DFLDO Video
Facial 9LGHR Forgery
)RUJHU\ Detection
'HWHFWLRQ Network,"
1HWZRUN´ 201
,((( International
8 IEEE ,QWHUQDWLRQDO 9ROXPHSS±
Volume 1 0, 201 1 , pp. 777-783.
:RUNVKRSRQ,QIRUPDWLRQ)RUHQVLFVDQG6HFXULW\
Workshop on Information Forensics and Security (WIFS), :,)6 +RQJ.RQJ
Hong Kong, >@ J.
-äXQLü.+LURWDDQG3/5RVLQ³$+XPRPHQWLQYDULDQWDVDVKDSH
[25] Zuni6, K. Hirota and P. L. Rosin, "A Hu moment invariant as a shape
SS±
2018, pp. 1-7. FLUFXODULW\PHDVXUH3DWWHUQ5HFRJQLWLRQ´9ROXPH,VVXHSS
circularity measure, Pattern Recognition," Volume 43, Issue 1, 2010, pp.
>@ 3Zhou,
[8] P. =KRXX.;Han,
+DQ V.
9 I., Morariu
0RUDULX andDQGL.
/ S.
6Davis,
'DYLV "Two-Stream
³7ZR6WUHDP Neural
1HXUDO ±
47-57.
1HWZRUNV for
Networks IRU Tampered
7DPSHUHG Face )DFH Detection,"
'HWHFWLRQ´ 2017
IEEE
,((( Conference
&RQIHUHQFH on
RQ >@ Z.
=Huang
+XDQJand DQG J.
-Leng,
/HQJ "Analysis
³$QDO\VLV ofRI Hu's
+X¶Vmoment
PRPHQW invariants
LQYDULDQWV onRQimage
LPDJH
[26]
&RPSXWHU Vision
Computer 9LVLRQ and DQG Pattern
3DWWHUQ Recognition
5HFRJQLWLRQ Workshops
:RUNVKRSV (CVPRW),
&935: VFDOLQJand DQG rotation,"
URWDWLRQ´ 2010
2nd
QG International
,QWHUQDWLRQDO Conference
&RQIHUHQFH onRQ Computer
&RPSXWHU
scaling
+RQROXOX+,SS±
Honolulu, HI, 2017, pp. 1831-1839. (QJLQHHULQJDQG7HFKQRORJ\&KHQJGX
Engineering and Technology, Chengdu, 2010.
>@ E.
[9] ( Sabir,
6DELU J.
- Cheng,
&KHQJ A. $ Jaiswal,
-DLVZDO W. : AbdAhnageed,
$EG$OPDJHHG I., Masi
0DVL and
DQG P.
3 >@ S.
6 Raschka,
5DVFKND "Model
³0RGHO Evaluation,
(YDOXDWLRQ Model
0RGHO Selection,
6HOHFWLRQ and
DQG Algorithm
$OJRULWKP
[27]
1DWDUDMDQ "Recurrent
Natarajan, ³5HFXUUHQW Convolutional
&RQYROXWLRQDO Strategies
6WUDWHJLHV for
IRU Face
)DFH Manipulation
0DQLSXODWLRQ 6HOHFWLRQLQ0DFKLQH/HDUQLQJ´DU;LY
Selection in Machine Learning," arXiv: 1 8 1 1 . 12808, 2020.
Guarnera, 0. Giudice and S. Battiato, "Fighting Deepfake by Exposing
'HWHFWLRQLQ9LGHRV´:RUNVKRSRQ$SSOLFDWLRQVRI&RPSXWHU9LVLRQDQG
Detection in Videos," Workshop on Applications of Computer Vision and
3DWWHUQ5HFRJQLWLRQWR0HGLD)RUHQVLFVZLWK&935SS±
Pattern Recognition to Media Forensics with CVPR, pp. 80-87. >@ L.
[28] /*XDUQHUD2*LXGLFHDQG6%DWWLDWR³)LJKWLQJ'HHSIDNHE\([SRVLQJ
WKH&RQYROXWLRQDO7UDFHVRQ,PDJHV´DU;LYY
the Convolutional Traces on Images," arXiv: 2008.04095vl , 2020.