
Received September 28, 2020, accepted October 3, 2020, date of publication October 7, 2020, date of current version October 21, 2020.
Digital Object Identifier 10.1109/ACCESS.2020.3029280

Artificial Intelligence Security Threat, Crime, and Forensics: Taxonomy and Open Issues
DOOWON JEONG
College of Police and Criminal Justice, Dongguk University, Seoul 04620, South Korea
e-mail: [email protected]

ABSTRACT Advances in Artificial Intelligence (AI) have influenced almost every field, including computer science, robotics, social engineering, psychology, criminology, and so on. Although AI has solved various challenges, potential security threats to AI algorithms and training data have been stressed by AI researchers. As AI systems inherit the security threats of traditional computer systems, concern about novel cyberattacks enhanced by AI is also growing. In addition, AI is deeply connected to physical space (e.g. autonomous vehicles, intelligent virtual assistants), so AI-related crime can harm people physically, beyond cyberspace. In this context, we present a literature review of security threats and AI-related crime. Based on the literature review, this article defines the term AI crime and classifies AI crime into two categories, AI as tool crime and AI as target crime, inspired by a taxonomy of cybercrime: computer as tool crime and computer as target crime. Through the proposed taxonomy, foreseeable AI crimes are systematically studied and related forensic techniques are also addressed. We also analyze the characteristics of the AI crimes and present challenges that are difficult to solve with traditional forensic techniques. Finally, open issues are presented, with emphasis on the need to establish novel strategies for AI forensics.

INDEX TERMS Artificial intelligence, AI crime, AI forensics, security threats, malicious AI.

I. INTRODUCTION
Artificial Intelligence (AI) has become essential to almost all areas including computer science, security engineering, criminology, psychology, and robotics. Especially, Deep Learning [1], inspired by the structure and function of the brain, has been the major breakthrough in the AI field [2], and it has activated AI research in various fields. Deep learning has been studied to process huge amounts of data (e.g. pictures, medical information, social media, crime information, etc.) in order to perform medical image analysis, speech recognition, and natural language understanding [3]–[7].

Although the fast development of AI has brought the benefits of innovation, it has also carried significant risks [8]. This unprecedented growth reminds AI stakeholders of the early days of Information & Communication Technology (ICT). When ICT evolved at high speed in the past, unexpected problems occurred (e.g. terrorism, security threats, cybercrime, privacy infringement, etc.) and incurred considerable social costs. Similarly, there are growing concerns about the various problems that AI can cause [9]. As Brundage et al. [10] stressed the importance of the changing threat environment, research on preventing and mitigating the dark side of AI should also be discussed and expanded seriously.

In this context, we explore AI security threats, foreseeable crimes, and digital forensics for AI. A literature search for the subject covered various books, journals, and conference proceedings. Due to the heterogeneous nature of AI and digital forensics, we reviewed not only forensic research but also research in computer science, related law, criminology, etc. We used Google Scholar, IEEE Xplore, and ACM Digital Library to search for related papers using the keywords 'AI', 'security threat', 'AI crime', 'forensic framework', etc. We also looked for studies that the retrieved works cited and that were cited by the retrieved works. Among the papers, we tried to review articles published after 2015. However, we did not rule out papers published prior to 2015, as there would be works highly relevant to this survey. We note that this paper regards AI as a set of algorithms, including training and inference processes, to mimic human intelligence.

By reviewing previous studies, we define AI crime, and then propose a taxonomy for new types of crime, the AI as tool crime and AI as target crime, inspired by an existing taxonomy used in cybercrime: computer as tool crime and computer as target crime [11], [12].

The associate editor coordinating the review of this manuscript and approving it for publication was Charith Abhayaratne.

The AI as tool crime is defined as the expansion of existing crimes, including traditional crime and cybercrime (e.g. advanced phishing, automated hacking, manipulation, fraud, etc.). The AI as target crime is a new area of potential criminal activity against AI systems; the adversarial attack [13] is a typical example. Based on the proposed taxonomy of AI crime, this article discusses how to investigate the crime; we name this process AI forensics. Note that, in the digital forensic field, the word in front of 'forensics' implies the target to be analyzed (e.g. smartphone forensics, cloud forensics, memory forensics, IoT forensics). Therefore, AI forensics indicates not investigation using AI but investigation of AI. We also perform a comparative analysis between traditional digital forensics and AI forensics.

In exploring the facets of AI crime, we make the following contributions:
• We discuss foreseeable AI-related crimes systematically and practically.
• We propose a taxonomy of AI crime based on a comprehensive literature review.
• We introduce challenges that digital forensics can encounter when investigating AI crime, with experiments.
• We highlight open issues in the field of AI forensics and propose corresponding suggestions. To the best of our knowledge, this paper is the first systematic study about AI forensics.

The rest of this paper is organized as follows. Section II introduces related work and background on AI security threats, AI crime, and digital forensics. In Section III, we define AI as tool crime and then describe foreseeable AI-related crimes. Section IV explains AI as target crime, which attacks the training system and inference system. We discuss AI forensics aiming to investigate the AI crimes in Section V. Section VI highlights open issues of AI forensics by comparing it with traditional forensics. Concluding remarks are drawn in Section VII.

II. RELATED WORK AND BACKGROUND
As already mentioned in the introduction, AI has been studied in various academic fields. This section describes research about AI security threats and AI-related crime from various perspectives. In addition, we also explore cybercrime as defined by the cybersecurity and digital forensics community.

A. AI SECURITY THREATS AND CRIME
The term 'AI crime' was first provided by the humanities field [9], as the term 'crime' is involved with law and ethics. Although the term AI crime has not been covered in the computer science area, several studies have stressed security threats and malicious uses of AI that can cause various crimes.

A prime study of the malicious use of AI is about adopting online personas, called socialbots, that behave like humans [14]. Though the initial objective of the socialbot was to advocate awareness and cooperation among people [15], it has often been used maliciously for phishing, fraud, and political infiltration of a campaign on online social networks [16]. Seymour and Tully [17] presented that machine learning can be weaponized for social engineering; by using AI, mass-produced messages with phishing links could be posted on Twitter without any interruption. Because the malicious socialbot is based on a specific user's past behaviours and public profiles, detection of the socialbot has become a challenge of computer security [18], [19]. From the social science perspective, the technique may influence or inflame public opinion when malicious socialbots are designed to perform a political attack [20], [21].

Some researchers gave warning that hackers have already started to weaponize AI in order to advance their cracking skills and develop new types of cyberattack [22]. AI is utilized to sharpen techniques to commit traditional cybercrimes such as financial fraud, cyberterrorism, cyberextortion, etc. For example, when hackers try voice phishing, they can deceive victims by using realistically imitated voices of the victims' family or friends [23].

Whereas the above studies focused on the problems that specific techniques could cause, Brundage et al. [10] presented a comprehensive insight into the malicious use of AI. They addressed three changes in the landscape of threats: expansion of existing threats, the emergence of new threats, and change to the typical character of threats. By the scalable use of AI systems, the cost of tasks that require human labor may be lowered. Perpetrators then are able to attack more targets with the cost reduction techniques (e.g. mass spear phishing); this is the expansion of existing threats. New threats may also emerge to complete tasks that are infeasible for people (e.g. imitating individuals' voices, controlling multiple drones) [24]. When highly effective attacks by AI become more common, the typical character of threats will be altered. Brundage et al. also classified security domains into digital security, physical security, and political security. The digital security domain includes cyberattacks that exploit vulnerabilities of humans or AI systems. The physical security domain covers physical attacks such as causing autonomous vehicles to crash and controlling thousands of drones. The political security domain includes novel threats in profiling, repression, and targeted disinformation campaigns.

King et al. [9] provided a different view about AI security threats, using the term 'AI crime'. They approached the problem from a broader perspective. In the article, AI crime is categorized based on criminal behavior: commerce, financial markets and insolvency (e.g. market manipulation, price fixing, collusion), harmful or dangerous drugs (e.g. trafficking, selling, buying, possessing banned drugs), offences against the person (e.g. harassment, torture), sexual offences (e.g. sexual assault, promotion of sexual offence), theft and fraud, and forgery and personation (e.g. spear phishing, credit card fraud). They insisted that the categorized crimes contain one or more threats. When classifying AI security threats, they focused on human nature: emergence, liability, monitoring, and psychology. For example, the psychology threat means that AI can affect a human's mental state to the extent of facilitating or causing crime. This approach is quite different from that of the computer science field; this variety of perspectives is due to the inherent interdisciplinarity of AI.


Some studies focused on privacy issues of AI arising from the processing of personal data. Li and Zhang [25] presented that AI applications in healthcare, finance, and education may cause privacy problems. As the number and quality of training data greatly affect the performance of AI, developers wish to collect as much data as possible. Li et al. insisted that the collection of comprehensive data has inherent privacy threats. Mitrou [26] approached the privacy problem with the General Data Protection Regulation (GDPR). The author stressed that GDPR can be applicable to AI when AI handles personal data, though GDPR does not specifically address AI.

The previous studies give three implications to stakeholders in the AI field. First, due to the dual-use nature of AI, researchers and engineers should perceive that an AI technique may be used to commit criminal offences, even though the technique is designed for legitimate use. Since AI is a double-edged sword, stakeholders in the AI field need strict professional ethics. Second, totally different types of security threats, which have not been considered so far, will emerge. As AI can complete tasks that have been regarded as impossible for people or traditional programs, the threats will be outside the primary scope of known threats; AI researchers should collaborate closely with professionals in diverse fields to prevent the security threats of AI and respond to AI crime. Finally, the AI security area should learn from the trials and errors of the cybersecurity area. As described in previous studies, the foreseeable AI crimes are very closely involved in cybercrime. Cybercrime stemmed from the dual-use nature of ICT; the current situation of AI security resembles the initial phase of cybersecurity.

B. CYBERCRIME
Cybercrime is regarded as the dark side of cyberspace [12]. It is categorized into two types, computer as target crime and computer as tool crime [11], [27]. As information has been digitized and connected by networks, new types of crimes, such as cyberterrorism, cyberextortion, cyberwarfare, etc., have emerged; these crimes are called computer as target crime. The objective of computer as target crime is disrupting or destroying computer systems. Therefore, when perpetrators commit the computer as target crime, they use tools or techniques developed to intrude computer systems (e.g. viruses, worms, Trojan horses, and spyware). Meanwhile, all data in our daily life have been digitized, from the private area to business. This change makes offline crimes such as fraud, threats, child abuse, stalking, etc. enter the online environment; this is called computer as tool crime [12]. Cybercrime is intimately related to cybersecurity because most attack techniques in cybercrime are based on exploiting vulnerabilities of a potential target [28], [29].

The taxonomy of cybercrime has helped to develop strategies against the crime in practice. When forensic investigators examine computer as tool crime, they focus on proving the perpetrator's past behavior to determine whether illegal behavior occurred or not. In computer as tool crime, the criminals generally use known tools and manipulate familiar infrastructures such as mobile messages, websites, social media, etc. On the other hand, when investigators examine computer as target crime, they should focus on malicious programs, called malware. To quickly respond to the crime and ascertain the extent of the damage, they must find the malware and then perform reverse-engineering to understand the purpose of the malware and identify the source of the attack [30]–[32].

1) DIGITAL FORENSICS
Digital forensics is defined as ''the use of scientifically derived and proven methods toward the preservation, collection, validation, identification, analysis, interpretation, documentation, and presentation of digital evidence'' [33]. In the digital forensics area, many principles and guidelines have been suggested because each country and organization has various laws and policies. Nevertheless, they share an underlying foundation that a forensic process is considered forensically sound only when it meets five principles: Meaning, Errors, Transparency and trustworthiness, Reproducibility, and Experience [34]–[39].
• Meaning: The original meaning of evidence should be unchanged; when change is inevitable, there should be minimal change.
• Error: Any unavoidable error in the forensic process should be documented.
• Transparency and trustworthiness: The reliability and accuracy of the forensic process should be tested and verified.
• Reproducibility: The result of the forensic process should show a consistent level of quality, no matter how many times it is repeated under the same conditions.
• Experience: The investigators should have sufficient experience or knowledge.
If the forensic process does not follow any of the five principles, the evidence would be hard to accept in court. Thus, investigators should collect and analyze the evidence while adhering to the principles.

In addition, forensic researchers proposed a proactive process that is used to manage incidents before they can occur [40]; the process is called Digital Forensic Readiness (DFR). DFR aims to collect digital evidence quickly and accurately while minimizing the cost of conducting a forensic investigation during incident response [41]. In particular, DFR has been used to mitigate business risks of losing information assets due to security incidents. As the incidents stem from vulnerabilities of information systems, DFR also plays a role in preventing or detecting cybercrimes.

Similar to other fields, digital forensic researchers have also studied the application of AI to investigation. Karbab and Debbabi [42] used natural language processing and supervised machine learning to detect malware. They achieved over 94% f1-score on several datasets. Fidalgo et al. [43] also applied AI to digital forensics, to classify suspicious content posted on the Dark Web. By developing a monitoring system based on AI, they made the investigation efficient. In addition, several researchers have studied forensic investigation methods using AI ([39], [44], [45]), but a study on AI as the subject of forensic investigation has not been published yet.


III. AI AS TOOL CRIME
This section describes foreseeable AI as tool crime considering the dual-use nature of AI. Because AI systems are also developed on digital infrastructure, the risk of cybercrime, including computer as tool and target crime, is embedded within AI security threats. In addition, AI can be used for physical crime by controlling autonomous devices like smart cars, drones, Internet of Things (IoT) devices, etc.
In this section, we explore how AI can be used to sharpen cyberattacks. Then, we focus on physical crime, regarded as a novel attack.

A. ENHANCED CYBERCRIME
As described in Section II, there are two types of cybercrime: computer as tool and target crime. They are traditional crimes in cyberspace, but the crimes are still serious threats. By using AI techniques, perpetrators can commit novel cybercrime that was previously considered an infeasible attack. This subsection discusses how AI techniques can be used for cybercrime.

1) COMPUTER AS TOOL CRIME
Previous research noted that AI can be used for phishing, and its effectiveness has already been proved [9], [10]. One of the common phishing methods is scam email using profiling. Profiling using AI has been actively studied in the business field; targeted advertising is a typical example. However, the technique used in targeted advertising, which is based on the customers' previous buying history or interest, may be instrumental for the attacker. The previous research named the AI programme a chatbot. Kietzmann et al. [46] and Paschen et al. [47] predicted that AI will enhance strategies to scam customers by using the malicious chatbot.
The chatbot is able to communicate with customers without a break; it can collect mass data related to the customers' behavior and profile. The chatbot has already been developed and used in academia and industry. In the early days, the chatbot was mainly text-oriented [48]. However, the chatbot has been developed to verbally converse with people, as Natural Language Processing (NLP) technology has advanced [49], [50].
Whereas some studies suggested using the ability of AI speech conversation for the common good, such as social therapy [51], education [52], medical diagnosis [53], and health [54], there are also concerns that AI-supported voice would raise theft and fraud. As the voice is one of the biometrics, which are irreplaceable measures in security mechanisms, it can be a great weapon for attackers (e.g. voice phishing) [23].
Fake news is another example of the advanced crime. Although fake news has a long history in social engineering [55], it has begun to get noticed recently with the advent of social network services (e.g. Twitter, Facebook, YouTube) [56]. In particular, fake news has a huge effect on political issues such as policy decisions, propaganda, and elections [57]. With the deepfake technique, fake news gets more powerful. Citron and Chesney [58] presented that fake video mimicking prominent politicians can harm individuals by providing false information. News agencies have developed AI anchors to enhance efficiency and reduce costs [59]; it implies that it is possible to create fake news with virtual anchors that look like people.

2) COMPUTER AS TARGET CRIME
AI can complete tasks that have been previously unsolved, with even lower cost and labor. By making copies of the AI system, it can have a similar effect as hiring more human analysts. This characteristic gives attackers an opportunity to gain unauthorized access. For example, password authentication, the most fundamental technique for authenticating users, would be under threat. The dictionary attack, regarded as one of the most effective ways to obtain a password, uses well-known words or phrases expected to have been used in the password [60], [61]. When creating the dictionary, a social engineering technique that obtains the victim's information from online sources (e.g. birthday, phone number, address, etc.) is mostly used [62]. Collecting the information requires considerable time and effort, but AI systems designed to automate social engineering can carry out the task effortlessly.
Automated detection techniques to find vulnerabilities would be a useful instrument for criminals. Russell et al. [63] provided the potential of using AI to detect vulnerabilities. They demonstrated that the usage of the convolutional neural network (CNN) and the tree ensemble has some advantages over traditional static analysis. Grieco et al. [64] presented a method to discover large-scale vulnerabilities. By using the proposed method, programs that have vulnerabilities could be identified without analyzing source code. Besides those studies, various methods to detect vulnerabilities have been actively researched [65]–[67]. Though they were designed for the public good, perpetrators may use the techniques for finding vulnerable systems.

B. PHYSICAL CRIME
The AI security threat extends beyond cyberspace, particularly with the widespread use of IoT [68]. By manipulating an AI system, a perpetrator can physically attack a target (e.g. human, pet, vehicle, house).
With respect to physical crime, the ethics of AI have been discussed in the science ethics field. Lin et al. [69] represented robot ethics with an explanation that AI robots can kill people with or without intention. Scherer [70] also stressed that an AI system can cause harm physically and that there are arising challenges from difficulties in assigning moral and legal responsibility for the harm. These studies focused on harms that occurred by malfunction of autonomous devices.


FIGURE 1. The proposed Taxonomy of the AI crime.

On the other hand, AI inherently designed to attack physical targets has also been developed for military use; it is called military AI [71]. The military AI is developed for the public good; however, it can also be used as a technique to harm people outside of a military context [10]; the drone swarm is a typical example. To operate the drone swarm, the following requirements should be met, according to [72], [73].
• Autonomous (not under centralized control)
• Capable of sensing their local environment and other nearby swarm participants
• Able to communicate locally with others in the swarm
• Able to cooperate to perform a given task
Algorithms built with traditional programming had difficulty meeting the requirements, but the progress in AI enables swarming.
This swarming technology is applicable to robotic systems or vehicles; it may amplify the synergy with computer as target crime. Several studies have already shown that it may be possible to remotely manipulate vehicles by exploiting vulnerabilities. Jafarnejad et al. [74] proposed possible attack scenarios on the Renault Twizy 80, an electric car, by exploiting vulnerabilities of the Sevcon Gen4 controller, an Electronic Control Unit (ECU) installed in the Twizy. Though there was a limitation that the proposed method is only applicable when the car is turned on, they presented that the attacker can remotely control the vehicle system after hacking. Martinelli et al. [75] also presented the vulnerability of the Controller Area Network (CAN) protocol, regarded as the standard for the in-vehicle network. Based on the vulnerability, they were able to perform a message injection attack to cause malfunctions of the ECU. The use of these cyberattacks with the swarm technology is a serious threat, as it can cause a lot of damage to physical space.

IV. AI AS TARGET CRIME
This article defines AI as target crime as an offence causing damage or impairment in processing data or operating an AI system. This definition is inspired by a definition of computer as target crime from [76].
There are various AI systems, but the underlying concept of most AI systems can be expressed as in Fig. 2. The AI system consists of a training system and an inference system. The training system generates a trained model based on a training dataset. The trained model is used at the inference system to classify new data from endpoints. For instance, in Fig. 2, the training system creates an algorithm that distinguishes dogs from cats. The inference system loads the algorithm and then determines whether the object image obtained from sensors is a dog or a cat.


FIGURE 2. The structure of AI system.
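To make the training/inference split in Fig. 2 concrete, the following minimal sketch (a hypothetical illustration using the public tensorflow.keras API, not code from the original article; data and file names are ours) shows a training system that fits a small classifier, exports the trained model to a single HDF5 file, and an inference system that only loads that file to classify new inputs.

# Minimal sketch of the training/inference split in Fig. 2 (illustrative only).
import numpy as np
from tensorflow import keras

# --- Training system: learn a toy "dog vs. cat" classifier from feature vectors ---
x_train = np.random.rand(200, 32)            # placeholder training data (e.g. extracted image features)
y_train = np.random.randint(0, 2, size=200)  # 0 = cat, 1 = dog (labels)

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(32,)),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, verbose=0)

# The trained model (architecture + parameters) is what leaves the training system.
model.save("trained_model.h5")               # single HDF5 artifact

# --- Inference system: the end device only needs the trained model file ---
deployed = keras.models.load_model("trained_model.h5")
new_sample = np.random.rand(1, 32)           # data arriving from a sensor/endpoint
print("predicted class:", int(np.argmax(deployed.predict(new_sample, verbose=0))))

In this layout the HDF5 file is the only artifact shared between the two systems, which is why the same file format reappears in the forensic discussion of Section V-B.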

AI as target crime is committed primarily based on the security threats of the AI system. Several articles have dealt with taxonomies of the security threats; white-box and black-box attacks are the typical threat model. The attack with knowledge of the dataset, architecture, and parameters of the targeted AI system is called the white-box attack, whereas the black-box attack assumes little or no information about the structure of the targeted system [77]–[79].

Some studies proposed threat models on adversarial examples (AEs). The AEs are input data with invisible noise, crafted to misclassify the input and degrade the performance of AI [2]. They focused on impacts when malicious data is injected in the training phase [80]–[82] or in the inference phase [83]–[85]. The experiments demonstrated the performance reduction of AI systems attacked by AEs, such as malware detection, facial recognition, intrusion detection, etc.

Some research categorized the security threats against the training phase and the inference phase. Liu et al. [86] surveyed a variety of security threats and categorized them into the poisoning, evasion, impersonate, and inversion attacks. The poisoning attack is performed in the training phase, meaning an attacker injects AEs into the training dataset [80], [87]. The evasion, impersonate, and inversion attacks behave in the inference phase. The evasion attack means that an attacker deteriorates the security of target systems by using AEs that can evade detection [88]. The impersonate attack means that imitated data samples, able to wrongly classify the original samples, are input to the inference system [89]. The inversion attack is applied to the output of the AI system to infer certain features of the input [90]. Papernot et al. [91] also proposed a comprehensive insight into the threat model. They presented the attack surface of AI systems, the trust model, adversarial capabilities, and adversarial goals. Adversarial settings on the training and inference system were also addressed. The model targets the integrity, privacy, and confidentiality of the training system. They also represent white-box and black-box adversaries of the inference system.

Referring to previous efforts, this article presents AI as target crime from the perspective of the victims. We focus on the AI system, including the training system and inference system, which would be targets of the attacks mentioned above.

A. TRAINING SYSTEM AS TARGET CRIME
Since the training system in a practical AI system is protected with high confidentiality and not developed on a common computer system [86], direct access to the training system seems hard to achieve. However, it may be accomplished by an insider spy, an advanced persistent threat (APT), or malicious external storage (e.g. USB, external hard drive). If the security of the training system is compromised, there would be considerable damage to the AI system. Particularly, the training system includes a training dataset that significantly influences the performance of the learning model; this is the very reason that crimes against the training system would be catastrophic. The following section describes the AI crimes by assuming that attackers have already intruded on the training system, because investigation of the intrusion is the domain of traditional cyber forensics.


1) TRAINING SYSTEM ATTACK
The purpose of this crime is to reduce the confidence of AI. By injecting AEs or modifying the existing dataset, AI may misclassify new data from the inference system. If a perpetrator can manipulate the learning algorithm, named logic corruption [91], there would be relatively more critical damage to the training system. This article classifies the training system attack into three categories: data injection, data modification, and logic corruption.

The data injection crime disrupts the availability of the AI system by injecting AEs. Goodfellow et al. [92] presented that AI erroneously recognizes a panda's picture as a gibbon by adding a noise that people cannot perceive. The objective of AEs is to find the smallest perturbation deceiving AI:

x⃗* = x⃗ + arg min{ z⃗ : O(x⃗ + z⃗) ≠ O(x⃗) }    (1)

Here x⃗ is the original data and z⃗ is a perturbation, that is, the noise added to the original data to make it an AE x⃗*. The O is an oracle, which is a system that responds to every unique query, mainly used in the cryptography community [91]. Methods to generate and utilize AEs have been actively studied, especially for image recognition (see Fig. 3). Nguyen et al. [93] proposed a methodology to produce AEs totally unrecognizable to human eyes by using the multi-dimensional archive of phenotypic elites (MAP-Elites). In addition, Eykholt et al. [94] presented that AEs can be applied to physical space (e.g. self-driving cars) by proposing a possible attack to misclassify signs on the road. Several studies have also shown that AEs can be generated and utilized to disturb malware detection [95], [96] and intrusion detection [97], [98]; forensic investigators should be aware of the data injection crime.

FIGURE 3. Example of adversarial examples (AEs).
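Equation (1) asks for the smallest perturbation that flips the oracle's answer. In practice the perturbation is usually approximated rather than solved exactly; the fast gradient sign method (FGSM), from the same work by Goodfellow et al. [92] behind the panda example, is the classic approximation. The sketch below is our illustration, not code from the article; it assumes a TensorFlow/Keras classifier with softmax outputs and integer labels.

import numpy as np
import tensorflow as tf
from tensorflow import keras

def fgsm_example(model, x, y_true, epsilon=0.01):
    # Return x + epsilon * sign(grad_x loss): a crude approximation of Eq. (1).
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    y_true = tf.convert_to_tensor(y_true)
    loss_fn = keras.losses.SparseCategoricalCrossentropy()
    with tf.GradientTape() as tape:
        tape.watch(x)                              # treat the input itself as the variable
        prediction = model(x, training=False)
        loss = loss_fn(y_true, prediction)
    gradient = tape.gradient(loss, x)              # sensitivity of the loss to the input
    perturbation = epsilon * tf.sign(gradient)     # the small, barely perceptible noise z
    return (x + perturbation).numpy()

# e.g. adv = fgsm_example(deployed, new_sample, np.array([1])) would yield a candidate AE
# whose prediction can then be compared against the clean input.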

If perpetrators have permission to modify or delete some training data, they can perform fatal attacks on the AI system; this is the data modification crime. Zhao et al. [79] presented that the label contamination attack (LCA) can significantly reduce the performance of AI by changing the labels of some training data. Hayes and Ohrimenko [99] showed that the accuracy of the classifier is obviously compromised by providing contaminated attacks to the training system.

Logic corruption is the most serious crime in the training system. The criminals can manipulate the architecture and parameters of the trained model by tampering with the learning algorithm [91]. For example, when the CNN system is attacked and then corrupted, the attacker can handle the input layer, classification layer, and training options.

2) TRAINING SYSTEM THEFT
The training system includes the training dataset, learning model, and trained model. As they are directly related to the performance of AI, AI developers and manufacturers of AI-related products consider the training system as trade secrets.

The dataset is very important for AI stakeholders [100]. They create datasets by collecting data from various sources including open-source data (e.g. driving-related data [101], [102] and object data [103], [104]). Since making a dataset takes considerable time and labor, it has high economic value. For this reason, the dataset is a favourite target for perpetrators. Indeed, serious privacy infringement may occur if perpetrators steal private data such as medical images, face images, and voice [105], [106].

The learning model and trained model are also important assets for AI developers because they are designed with know-how, insight, and expertise. Through this crime, the algorithm, distribution of training data, and parameters of the fully trained model architecture could be leaked to adversaries or the public. In particular, the information may also be abused for a white-box attack or, partially, for a black-box attack [2].

B. INFERENCE SYSTEM AS TARGET CRIME
Perpetrators may also attack the inference system. Compared to the attack on the training system, perpetrators can access the inference system relatively easily, because the inference system is usually implemented at end devices. The crime targeting the inference system does not interfere with the learning model; however, it can cause the leakage of the trained model or malfunction of classification.

1) INFERENCE SYSTEM CRACKING
The parameters, which have been determined in the training phase, play an important role in the inference system. There are two types of operation methods depending on the location of the parameters: the centralized and distributed model (see Fig. 4).

In the centralized model, a central server developed by an AI provider takes the inference operation. For example, when using a face recognition system developed for the centralized model, the role of the end device (e.g. smartphone, IoT device, in-vehicle infotainment) is to send face images or extracted features to the central server, and then the server processes the image or feature. The end devices operate based on the results processed by the server.

The centralized model is theoretically appropriate for the maintenance and security of AI services, but it may be less useful in practice as it may cause a bottleneck.

FIGURE 4. Comparison between the centralized model (left side) and distributed model (right side) in AI system.

For this reason, the distributed model is being used more in practical fields. In the distributed model, the parameters determined in the training system are managed in the inference system so that end devices take the operation of processing the image, in the example of face recognition. With the emergence of the Internet of Things (IoT), the use of the distributed model for AI services is taken for granted [107], [108]; the relationship between the centralized and distributed model is similar to that of cloud and fog computing.

The trend relying on the distributed model is beneficial for perpetrators who target the parameters. By using traditional hacking techniques (e.g. reverse engineering, side channel attack), the trained model would be identified and then manipulated [109]. Indeed, this crime may enable perpetrators to perform the white-box attack.

2) INFERENCE SYSTEM ABUSE
The abuse of the inference system is a crime that causes misclassification using AEs. The perpetrators may identify the learning model and its parameters through cracking the end device or noticing that the target system uses common libraries of an open-source project [110]. According to the degree of knowledge about the target system, the abuse is classified into the white-box attack and the black-box attack.

The perpetrators attempting a white-box attack have knowledge of the AI model and its parameters. Based on their knowledge, they can simulate the targeted AI model by imitating the AI system and make a fake training dataset, as the perpetrators already know the distribution of the training data. The typical example of the white-box attack is AEs, already described in Section IV-A1.

On the other hand, the black-box attack is performed with restricted information or without knowledge of the AI model. The black-box attack can be categorized into the non-adaptive black-box attack, the adaptive black-box attack, and the strict black-box attack.

Having knowledge of the distribution of the training data, the non-adaptive black-box attackers can collect an alternative dataset with that distribution, although they cannot figure out the architecture or structure of the target AI model. They can then make AEs based on their local AI model trained on the alternative dataset. Generative adversarial networks (GAN) [111] are an example of the non-adaptive black-box attack.

The adaptive black-box attackers use input-output pairs obtained by querying the targeted AI model. This attack is often likened to the oracle attack explained in Section IV-A1. Through collecting amounts of query data, the attackers can identify labels of queried data and then may reconstruct the model with the queried data corpus [112].

The strict black-box attack is also based on collecting input-output pairs, but this attack is more restricted than the adaptive black-box because the attackers cannot issue queries to the inference system. Therefore, they should attack the AI system without the oracle. Nevertheless, it may be powerful if the attackers obtain many input-output pairs and find a pattern or distribution of them [113].
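To make the difference between the black-box variants concrete, the following toy sketch (ours, using scikit-learn; the "oracle" is just a locally trained stand-in and all names are assumptions) illustrates the adaptive case: the attacker never sees the target's parameters, only its answers to queries, yet those input-output pairs are enough to train a rough substitute model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the victim's deployed model: the attacker can query it but not inspect it.
X_private = rng.normal(size=(500, 10))
y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
oracle = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_private, y_private)

# Adaptive black-box: label attacker-chosen queries with the oracle's outputs...
queries = rng.normal(size=(300, 10))
oracle_labels = oracle.predict(queries)

# ...then fit a local substitute on the (query, label) pairs.
substitute = LogisticRegression(max_iter=1000).fit(queries, oracle_labels)

# Agreement on fresh inputs indicates how well the model was reconstructed.
test = rng.normal(size=(200, 10))
agreement = np.mean(substitute.predict(test) == oracle.predict(test))
print(f"substitute agrees with oracle on {agreement:.0%} of fresh inputs")

In the strict black-box setting the query step is unavailable, so the attacker must work only from whatever input-output pairs are already observable.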
V. AI FORENSICS
Forensic investigators should collect and analyze evidence to identify the 5W1H (when and where the crime is committed, who the criminal is, what is targeted, why the criminal committed it, and how the crime occurs). As the method of collecting evidence and the type of digital data are heterogeneous depending on the device or platform, forensic researchers have presented the challenges and solutions for forensic sub-fields such as smartphone forensics [114], [115], cloud forensics [116], [117], and IoT forensics [37], [38].


TABLE 1. The AI crimes and AI Forensics.

In this section, we suggest future research directions of AI forensics to investigate the AI crimes described in Sections III and IV. Considering the characteristics of AI systems and the techniques used for committing AI crimes, we describe four main parts of AI forensics that have not been covered in the forensic community: AI exploration, similarity analysis, adversarial attack detection, and damage assessment. AI forensics is currently in the beginning phase, so the research topics will inspire forensic researchers. Table 1 summarizes the AI forensics challenges against the AI crimes.

A. AI EXPLORATION
When investigating AI as tool crime, it is necessary to identify how AI is used in the crime. The investigators should collect and analyze the dataset, learning model, trained model, inference model, and application of the AI system used to commit a crime. Based on the examination, investigators should also grasp the purpose of the AI. In this context, identifying a difference between the intention of the developer and the result of the AI is important for investigators.

Unlike traditional programming, AI programs often result in unintended consequences. Fig. 5 shows the relationship between input, output, and program in traditional programming and AI. In traditional programming, data and a program are processed on the computer to produce the output; in contrast, data and output are used to create a program in AI. In particular, the parameters of AI are often determined with some randomness because many AI models use random weights in the learning phase. Therefore, even if the same dataset and learning model are given, training may create programs that have different parameters and produce different outputs. This means that it is hard to prove whether AI was actually used as a weapon, how AI was used, and how much damage AI caused, because investigators would fail to reproduce the case.
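The randomness issue can be demonstrated in a few lines. The sketch below is our illustration (assuming a recent TensorFlow release that provides keras.utils.set_random_seed; it is not the article's experiment code): it trains the same architecture on the same data twice, changing only the random seed, and measures how far apart the resulting parameters are.

import numpy as np
from tensorflow import keras

x = np.random.rand(500, 20)
y = (x.sum(axis=1) > 10).astype(int)

def train(seed):
    keras.utils.set_random_seed(seed)               # only the seed differs between runs
    model = keras.Sequential([
        keras.layers.Dense(8, activation="relu", input_shape=(20,)),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(x, y, epochs=5, verbose=0)
    return np.concatenate([w.flatten() for w in model.get_weights()])

w1, w2 = train(seed=1), train(seed=2)
print("mean absolute difference between parameters:", float(np.abs(w1 - w2).mean()))

Identical evidence (the same dataset and learning model) therefore does not guarantee an identical trained model, which is exactly the reproducibility problem quantified by the experiment below.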
FIGURE 5. Traditional programming and artificial intelligence learning, inspired by [118].

Another issue in practical AI forensics is complexity. Generally, an AI system uses various algorithms and libraries. Some AI systems rely on a mixture of the application program interfaces (APIs) provided by existing AI systems for their efficiency. This complexity means that the investigators should have knowledge of AI and skill in reverse engineering the AI system.

The fact that only some elements of the AI system can be collected whirls forensic stakeholders. It is becoming nearly impossible to collect all elements such as the training system, learning model, dataset, trained model, and inference system, due to technical and legal issues. Because perpetrators try to destroy traces of crime, a study is needed to identify the structure and activity history of AI using only limited information. The limited collection of evidence makes it difficult for investigators to reproduce the AI system that must be investigated. It is an important issue for the investigators because, as described in Section II-B1, reproducibility is one of the key principles of digital forensics. To show that it is difficult to reproduce the past state of an AI system with limited collected data, we present a simple experiment.

An experiment was conducted using an i7-8700 processor and an Nvidia GeForce 1070 Ti graphics card. We trained several models to categorize binary files into Malware or Benign.


Assuming that the investigator had collected only a portion of the dataset, we trained models according to the size of the dataset. 1,000 PE files for each category were used as the dataset. The 500 Malware files were collected from VirusShare,1 which is a publicly-available repository. The 500 Benign files were collected from Software Informer,2 one of the most trustworthy providers of benign files, and from system directories created when Windows 10 is newly installed. The model was based on a Convolutional Neural Network (CNN), and a voting-based ensemble technique was used to improve the performance of the model.

The upper side of Fig. 6 shows the accuracy of the models with variations of the size of the dataset. In order to identify performance changes with dataset selection, 60 percent of the dataset was randomly selected 10 times, and then we trained models with the selected data. The lower side of Fig. 6 shows the accuracy of these models. It is shown that there is a deviation in accuracy depending on the data selected for training.

The experimental results show that it is impractical to reproduce the AI system with limited evidence. Indeed, because many AI systems adopt transfer learning, where pre-trained models are used as the starting point, obtaining the origin data will become more challenging. Therefore, technical and policy approaches to overcome the challenge should be studied.

1 https://virusshare.com/ (last accessed 28 September 2020)
2 https://software.informer.com/ (last accessed 28 September 2020)

FIGURE 6. Comparison of the accuracy of the trained models.
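The article does not publish the experiment code; the sketch below is our reconstruction under stated assumptions (raw PE bytes truncated or padded to a fixed window and fed to a small 1-D CNN in tensorflow.keras) and only approximates the kind of malware/benign classifier and voting ensemble described above.

import numpy as np
from tensorflow import keras

MAX_LEN = 4096  # assumed fixed-length byte window per PE file

def bytes_to_vector(path):
    # Read a binary, truncate/pad to MAX_LEN bytes, scale to [0, 1]; used to build x from real files.
    data = np.frombuffer(open(path, "rb").read()[:MAX_LEN], dtype=np.uint8)
    return np.pad(data, (0, MAX_LEN - len(data))) / 255.0

def build_cnn():
    return keras.Sequential([
        keras.layers.Reshape((MAX_LEN, 1), input_shape=(MAX_LEN,)),
        keras.layers.Conv1D(16, kernel_size=8, strides=4, activation="relu"),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(2, activation="softmax"),
    ])

# x: byte vectors for labelled samples, y: 0 = benign, 1 = malware (random placeholders here)
x = np.random.rand(100, MAX_LEN)
y = np.random.randint(0, 2, size=100)

# Voting-based ensemble: train several CNNs on random 60% subsets and majority-vote.
ensemble = []
for seed in range(5):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x), size=int(0.6 * len(x)), replace=False)
    model = build_cnn()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x[idx], y[idx], epochs=3, verbose=0)
    ensemble.append(model)

votes = np.stack([np.argmax(m.predict(x, verbose=0), axis=1) for m in ensemble])
majority = (votes.mean(axis=0) > 0.5).astype(int)
print("ensemble training-set accuracy:", float((majority == y).mean()))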

B. SIMILARITY ANALYSIS
Investigating violations such as copyright infringement, leaking of confidential documents, and invasion of privacy is a traditional field of digital forensics. In these cases, similarity analysis is one of the most important methods to identify the criminal activities (e.g. code plagiarism detection [119], document similarity [120], and digital image similarity [45]).

As described in Sections III and IV, the training dataset, trained model, and learning model could be stolen by perpetrators. In this context, previous studies on similarity analysis can be used to compare an original dataset and a suspicious dataset, as the dataset consists of data types already researched (e.g. image, text, audio, video, etc. [44], [45], [121]).

Identifying the similarity between two models is more complex when the models are based on a specific dataset. If the investigators could not collect the specific dataset, verifying or testing the models is more exhausting. For example, AI developed to distinguish or mimic a specific person would not be able to be validated or tested if the investigators cannot obtain the training data for some reason (e.g. the person's rejection, death, or disappearance). To respond to the theft, similarity analysis for AI with or without a training dataset should be studied.

A study of the file formats that store trained models is also one of the important research areas in AI forensics. For example, Python packages used for AI development, such as Keras and PyTorch, store and manage the trained model and parameters as an HDF5 file, a binary data format unexplored in the forensics field [122]. In particular, the file is used to distribute the updated model to edge devices in the distributed model described in Section IV-B1. Therefore, similarity comparison for models is essential to resolve infringement cases, but the existing similarity algorithms cannot be applied to the trained models.

To describe this challenge, we conduct similarity detection for the 10 trained models created in Section V-A. Because some of the training data was shared and the same learning model was applied, we say that the models are similar. We extract the trained models as HDF5 files and then calculate the probability of similarity between the files by using ssdeep [123] and sdhash [124], which are widely used in the digital forensics field. As seen in Table 2, the results of ssdeep are all zero and the results of sdhash show scores smaller than three; the algorithms determine that the models are not similar, because the threshold of sdhash is generally 21 [125].


TABLE 2. Results of ssdeep and sdhash matches. In each cell, the left figure represents the ssdeep comparison result, and the right figure represents the sdhash comparison result. For example, '0-1' means that the ssdeep score is 0 and the sdhash score is 1.
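A comparison like the one in Table 2 can be scripted as sketched below. This is our illustration under assumptions: the ten trained models were exported as HDF5 files with the names shown, the ssdeep Python binding is installed, and sdhash is available as an external command-line tool (its -g "generate and compare" flag may vary by version).

import itertools, subprocess
import h5py
import ssdeep  # Python binding for the ssdeep fuzzy-hashing library

model_files = [f"model_{i:02d}.h5" for i in range(10)]  # the 10 trained models from Section V-A

# Peek inside one trained model: HDF5 exposes a tree of groups/datasets holding the weights.
with h5py.File(model_files[0], "r") as f:
    f.visit(print)

# ssdeep: compute a fuzzy hash per file, then compare every pair (score 0-100).
hashes = {path: ssdeep.hash_from_file(path) for path in model_files}
for a, b in itertools.combinations(model_files, 2):
    print(a, b, "ssdeep score:", ssdeep.compare(hashes[a], hashes[b]))

# sdhash: let the external tool generate similarity digests and compare all pairs.
result = subprocess.run(["sdhash", "-g", *model_files], capture_output=True, text=True, check=True)
print(result.stdout)

Because the weights differ in almost every byte even when the models behave alike, byte-level fuzzy hashes such as these score near zero, which is the limitation the following paragraph draws out.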

This experiment shows the limitation of existing algorithms and identifies the need for a new algorithm to calculate similarity between trained models.

C. ADVERSARIAL ATTACK DETECTION
With regard to the DFR introduced in Section II-B1, it is important to prevent or detect the adversarial attack proactively. Many researchers have proposed defence methods that attempt to classify AEs correctly, but the methods have been defeated by newly developed attacks [126]–[128].

As it is difficult to defend against adversarial attacks, recent works have attempted to detect AEs instead [129]. Some works approached the problem statistically, like two-sample hypothesis testing [130], principal component analysis (PCA) [131], and Bayesian uncertainty estimates [132]. Methods that use an additional neural network [133], [134] or an external model [135], [136] were also proposed.

These techniques detect the adversarial attacks known at the time, but state-of-the-art AE attacks that neutralize the detection techniques have also been developed. Because neural-network based classifiers have an inherent vulnerability that leads to misclassification, it is fundamentally impossible to prevent current and foreseeable attacks. Therefore, making it difficult and time consuming to create AEs is considered as an alternative [137]. Nevertheless, a threat of adversarial attacks still exists during incremental training. As the aim of incremental learning is to adapt to new data and to improve the model continuously, an attacker has an opportunity to insert AEs into the AI system by disguising AEs as Benign. In the current situation, as the technique of making AEs becomes more sophisticated, enhancing detection techniques remains an open issue for forensic researchers.
The several techniques detect the adversarial attacks sic investigation because it requires to calculate equation (2)
known at the time, but state-of-the-art AEs attacks that neu- and (3) n times. Because the AI algorithm including DNN has
tralize the detection techniques have been also developed. a large amount of training data, it is practically impossible to
Because neural-network based classifiers have inherent vul- calculate the impact of each sample. The investigators also
nerability that leads to misclassification, it is fundamentally may not be able to obtain information about all samples of
impossible to prevent current and foreseeable attacks. There- the dataset or AI model; this situation further complicates
fore, making it difficult and time consuming to create AEs the problem. Therefore, it is a significant challenge to find
is considered as an alternative [137]. Nevertheless, a threat an optimized method to identify AEs and calculate damage
of adversarial attacks still exists during incremental training. with limited knowledge of the AI model.
The aim of incremental learning is to adapts to new data
and to improve the model continuously, an attacker has an VI. DISCUSSION
opportunity to insert AEs into AI system by deceiving AEs as Based on our observation from surveying the security threat
Benign. In the current situation, as the technique of making of AI and exploring foreseeable AI crimes, this section high-
AEs will become more sophisticated, enhancing detection lights open issues in the context of AI forensics through
technique remains on open issue for forensic researchers. comparison with traditional forensics. Table 3 shows related
issues on the principles of traditional forensics.
D. DAMAGE ASSESSMENT
The extent of the damage caused by AI crime should be ascer- A. LARGE-SCALE
tained by the forensic investigators. With respect to attack Generating an AI model requires considerable resources
using AEs, the investigators need to identify which data are and data, which would have been unimaginable before.
AEs, how many AEs were actually injected, and how much it This large-scale nature makes it even difficult to find data
affected the confidence. for investigators using forensic tools programmed in tra-
Theoretically, the finding AEs is to identify data that raise ditional computing. Even in traditional digital forensics,
the prediction error. We explain the process with the deep the large-scale issue has been dealt with, but a much larger
neural network (DNN) as an example. DNN uses a hierar- number of data should be covered in AI forensics. Particu-
chical composition of n parametric functions fi . Each fi for larly, current AIs mainly focus on multimedia data like image


TABLE 3. Comparative Table of traditional Forensics and AI Forensics.

B. IRREPRODUCIBILITY
The fact that AI systems have inherent unpredictability would influence the forensic principles. Most AI algorithms use random values partially or completely; this nature often fails to satisfy the reproducibility of the forensic principles. If reproducibility is strictly applied to the evidence of AI crime, such as a copycat model and AEs, it can be rejected in court as the evidence may not reproduce the situation at the time of the incident. Nevertheless, applying the reproducibility principle must be considered carefully, because it may trigger new issues like arresting the wrong suspect. Therefore, a compromise between strict and tolerant application of the principles should be discussed among forensic examiners, policymakers, and AI professionals.

C. EXPERTISE
Finally, forensic stakeholders also need to develop their expertise in AI. To address the challenges of AI forensics, they must have a clear understanding of AI crime and AI forensic techniques. As forensic investigators should understand traditional programming (e.g. memory structure, compilers, assembly language) when analyzing malware [138], they need to have the background knowledge about AI systems, AI structure, and AI environments to suggest probable solutions for the AI forensic challenges. To achieve this, forensic researchers should be interested in AI and collaborate with AI stakeholders.

VII. CONCLUSION
AI is becoming widely used in various systems and applications. Due to its dual-use nature, there are also growing concerns that AI can be harmful to people. To perform illegal activities, perpetrators may use AI maliciously or attack an AI system by exploiting the inherent vulnerabilities of the victim AI system.

This paper has studied foreseeable AI-related crimes. Based on the literature review of security threats of AI and AI-related crime, we have identified that previous studies focused on the malicious use of AI to sharpen existing criminal techniques or on the vulnerabilities of AI algorithms and training datasets. We have also presented that existing crimes would become more powerful with AI and that new types of crimes, which have not been identified before, may appear. To cope with the AI crime, this paper has provided a systematic taxonomy for AI crime: AI as tool crime and AI as target crime.

Furthermore, we have presented novel strategies against the AI crime, named AI forensics. By providing a comparative analysis of AI forensics and traditional forensics, we have found that some principles of digital forensics are not suitable for AI forensics.

Future works and open issues of AI forensics, which inspire forensic researchers to better understand the challenges they face, have been suggested. We hope that this article can serve as a valuable reference for researchers in digital forensics, security engineering, computer science, and criminology.

REFERENCES
[1] Y. LeCun, Y. Bengio, and G. Hinton, ''Deep learning,'' Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[2] N. Akhtar and A. Mian, ''Threat of adversarial attacks on deep learning in computer vision: A survey,'' IEEE Access, vol. 6, pp. 14410–14430, 2018.
[3] G. Hinton, L. Deng, D. Yu, G. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury, ''Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,'' IEEE Signal Process. Mag., vol. 29, no. 6, pp. 82–97, Nov. 2012.
[4] I. Sutskever, O. Vinyals, and Q. V. Le, ''Sequence to sequence learning with neural networks,'' in Proc. Adv. Neural Inf. Process. Syst., 2014, pp. 3104–3112.
[5] G. Litjens, T. Kooi, B. E. Bejnordi, A. A. A. Setio, F. Ciompi, M. Ghafoorian, J. A. W. M. van der Laak, B. van Ginneken, and C. I. Sánchez, ''A survey on deep learning in medical image analysis,'' Med. Image Anal., vol. 42, pp. 60–88, Dec. 2017.
[6] A. Adadi and M. Berrada, ''Peeking inside the black-box: A survey on explainable artificial intelligence (XAI),'' IEEE Access, vol. 6, pp. 52138–52160, 2018.
[7] S. V. Albrecht and P. Stone, ''Autonomous agents modelling other agents: A comprehensive survey and open problems,'' Artif. Intell., vol. 258, pp. 66–95, May 2018.
[8] V. C. Müller and N. Bostrom, ''Future progress in artificial intelligence: A survey of expert opinion,'' in Fundamental Issues of Artificial Intelligence. Cham, Switzerland: Springer, 2016, pp. 555–572.
AI system. gence. Cham, Switzerland: Springer, 2016, pp. 555–572.
[9] T. King, N. Aggarwal, M. Taddeo, and L. Floridi, ‘‘Artificial intelligence [34] L. Pan and L. Batten, ‘‘Reproducibility of digital evidence in foren-
crime: An interdisciplinary analysis of foreseeable threats and solutions,’’ sic investigations,’’ in Proc. 5th Annu. Digit. Forensic Res. Workshop
SSRN Electron. J., vol. 26, no. 1, pp. 1–32, 2019. (DFRWS), 2005, pp. 1–8.
[10] M. Brundage et al., ‘‘The malicious use of artificial intelligence: Fore- [35] R. McKemmish, ‘‘When is digital evidence forensically sound?’’ in Proc.
casting, prevention, and mitigation,’’ 2018, arXiv:1802.07228. [Online]. IFIP Int. Conf. Digit. Forensics. Boston, MA, USA: Springer, 2008,
Available: http://arxiv.org/abs/1802.07228 pp. 3–15.
[11] S. Gordon and R. Ford, ‘‘Cyberterrorism?’’ Comput. Secur., vol. 21, no. 7, [36] M. Damshenas, A. Dehghantanha, and R. Mahmoud, ‘‘A survey on digital
pp. 636–647, 2002. forensics trends,’’ Int. J. Cyber-Secur. Digit. Forensics, vol. 3, no. 4,
[12] M. Yar and K. F. Steinmetz, Cybercrime and Society. Newbury Park, CA, pp. 209–235, 2014.
USA: Sage, 2019. [37] M. Conti, A. Dehghantanha, K. Franke, and S. Watson, ‘‘Internet of
[13] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, ‘‘Universal Things security and forensics: Challenges and opportunities,’’ Future
adversarial perturbations,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Gener. Comput. Syst., vol. 78, pp. 544–546, Jan. 2018.
Recognit., Jul. 2017, pp. 1765–1773. [38] J. Hou, Y. Li, J. Yu, and W. Shi, ‘‘A survey on digital forensics in Internet
[14] Y. Boshmaf, I. Muslukhov, K. Beznosov, and M. Ripeanu, ‘‘Key chal- of Things,’’ IEEE Internet Things J., vol. 7, no. 1, pp. 1–15, Jan. 2020.
lenges in defending against malicious socialbots,’’ presented at the 5th [39] F. Amato, A. Castiglione, G. Cozzolino, and F. Narducci, ‘‘A semantic-
USENIX Workshop Large-Scale Exploits Emergent Threats, 2012. based methodology for digital forensics analysis,’’ J. Parallel Distrib.
[15] R. W. Gehl and M. Bakardjieva, ‘‘Socialbots and their friends,’’ in Social- Comput., vol. 138, pp. 172–177, Apr. 2020.
bots and Their Friends. Evanston, IL, USA: Routledge, 2016, pp. 17–32. [40] V. R. Kebande and H. S. Venter, ‘‘Novel digital forensic readiness tech-
[16] S. Rathore, P. K. Sharma, V. Loia, Y.-S. Jeong, and J. H. Park, ‘‘Social nique in the cloud environment,’’ Austral. J. Forensic Sci., vol. 50, no. 5,
network security: Issues, challenges, threats, and solutions,’’ Inf. Sci., pp. 552–591, Sep. 2018.
vol. 421, pp. 43–69, Dec. 2017. [41] R. Rowlingson, ‘‘A ten step process for forensic readiness,’’ Int. J. Digit.
[17] J. Seymour and P. Tully, ‘‘Weaponizing data science for social engineer- Evidence, vol. 2, no. 3, pp. 1–28, 2004.
ing: Automated E2E spear phishing on Twitter,’’ Black Hat USA, vol. 37, [42] E. B. Karbab and M. Debbabi, ‘‘MalDy: Portable, data-driven mal-
pp. 1–39, Aug. 2016. ware detection using natural language processing and machine learn-
[18] C. A. Davis, O. Varol, E. Ferrara, A. Flammini, and F. Menczer, ing techniques on behavioral analysis reports,’’ Digit. Invest., vol. 28,
‘‘BotOrNot: A system to evaluate social bots,’’ in Proc. 25th Int. Conf. pp. S77–S87, Apr. 2019.
Companion World Wide Web, 2016, pp. 273–274. [43] E. Fidalgo, E. Alegre, L. Fernández-Robles, and V. González-Castro,
[19] E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, ‘‘The rise ‘‘Classifying suspicious content in Tor darknet through semantic attention
of social bots,’’ Commun. ACM, vol. 59, no. 7, pp. 96–104, Jun. 2016. keypoint filtering,’’ Digit. Invest., vol. 30, pp. 12–22, Sep. 2019.
[20] N. Abokhodair, D. Yoo, and D. W. McDonald, ‘‘Dissecting a social bot- [44] W. Anwar, I. S. Bajwa, M. A. Choudhary, and S. Ramzan, ‘‘An empirical
net: Growth, content and influence in Twitter,’’ in Proc. 18th ACM Conf. study on forensic analysis of urdu text using LDA-based authorship
Comput. Supported Cooperat. Work Social Comput., 2015, pp. 839–851. attribution,’’ IEEE Access, vol. 7, pp. 3224–3234, 2019.
[21] M.-A. Rizoiu, T. Graham, R. Zhang, Y. Zhang, R. Ackland, and L. Xie, [45] O. Mayer and M. C. Stamm, ‘‘Forensic similarity for digital images,’’
‘‘#DebateNight: The role and influence of socialbots on Twitter during IEEE Trans. Inf. Forensics Security, vol. 15, pp. 1331–1346, 2020.
the 1st 2016 us presidential debate,’’ in Proc. 12th Int. AAAI Conf. Web [46] J. Kietzmann, J. Paschen, and E. Treen, ‘‘Artificial intelligence in adver-
Social Media, 2018. tising: How marketers can leverage artificial intelligence along the con-
[22] G. Dvorsky. Hackers Have Already Started to Weaponize Artificial Intelli- sumer journey,’’ J. Advertising Res., vol. 58, no. 3, pp. 263–267, 2018.
gence. Gizmodo.com. [Online]. Available: https://gizmodo.com/hackers- [47] J. Paschen, M. Wilson, and J. J. Ferreira, ‘‘Collaborative intelligence:
have-already-started-to-weaponize-artificial-in-1797688425 How human and artificial intelligence create value along the B2B sales
[23] O. Bendel, ‘‘The synthetization of human voices,’’ AI & Soc., vol. 34, funnel,’’ Bus. Horizons, vol. 63, no. 3, pp. 403–414, May 2020.
no. 1, pp. 83–89, 2019. [48] L. Cui, S. Huang, F. Wei, C. Tan, C. Duan, and M. Zhou, ‘‘SuperAgent:
[24] G. Allen and T. Chan, Artificial Intelligence and National A customer service chatbot for E-commerce websites,’’ in Proc. ACL,
Security. Belfer Center for Science and International Affairs. Syst. Demonstrations, 2017, pp. 97–102.
Cambridge, MA, USA: Belfer Center for Science and International [49] S. A. Abdul-Kader and D. John, ‘‘Survey on chatbot design techniques
Affairs, 2017. [Online]. Available: https://www.belfercenter. in speech conversation systems,’’ Int. J. Adv. Comput. Sci. Appl., vol. 6,
org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf no. 7, 2015.
[25] X. Li and T. Zhang, ‘‘An exploration on artificial intelligence appli- [50] L. Ciechanowski, A. Przegalinska, M. Magnuski, and P. Gloor, ‘‘In
cation: From security, privacy and ethic perspective,’’ in Proc. IEEE the shades of the uncanny valley: An experimental study of human–
2nd Int. Conf. Cloud Comput. Big Data Anal. (ICCCBDA), Apr. 2017, chatbot interaction,’’ Future Gener. Comput. Syst., vol. 92, pp. 539–548,
pp. 416–420. Mar. 2019.
[26] L. Mitrou, Data Protection, Artificial Intelligence and Cognitive [51] S. D’Alfonso, O. Santesteban-Echarri, S. Rice, G. Wadley, R. Lederman,
Services: Is the General Data Protection Regulation (GDPR) C. Miles, J. Gleeson, and M. Alvarez-Jimenez, ‘‘Artificial intelligence-
‘Artificial Intelligence-Proof’?. SSRN, 2018. [Online]. Available: assisted online social therapy for youth mental health,’’ Frontiers Psy-
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3386914 chol., vol. 8, p. 796, Jun. 2017.
[27] K. Dashora, ‘‘Cyber crime in the society: Problems and preventions,’’ [52] F. Clarizia, F. Colace, M. Lombardi, F. Pascale, and D. Santaniello,
J. Alternative Perspect. Social Sci., vol. 3, no. 1, pp. 240–259, 2011. ‘‘Chatbot: An education support system for student,’’ in Proc. Int. Symp.
[28] S. Schjolberg and S. Ghernaouti-Helie, ‘‘A global treaty on cybersecurity Cyberspace Saf. Secur. Cham, Switzerland: Springer, 2018, pp. 291–302.
and cybercrime,’’ Cybercrime Law, vol. 97, 2011. [Online]. Available: [53] S. Divya, V. Indumathi, S. Ishwarya, M. Priyasankari, and S. K. Devi,
http://pircenter.org/kosdata/page_doc/p2732_1.pdf ‘‘A self-diagnosis medical chatbot using artificial intelligence,’’ J. Web
[29] T. Tropina and C. Callanan, Self- and Co-regulation in Cybercrime, Develop. Web Designing, vol. 3, no. 1, pp. 1–7, 2018.
Cybersecurity and National Security. Cham, Switzerland: Springer, 2015. [54] J. L. Z. Montenegro, C. A. da Costa, and R. da Rosa Righi, ‘‘Survey of
[30] M. Brand, C. Valli, and A. Woodward, ‘‘Malware forensics: Discovery of conversational agents in health,’’ Expert Syst. Appl., vol. 129, pp. 56–67,
the intent of deception,’’ J. Digit. Forensics, Secur. Law, vol. 5, no. 4, p. 2, Sep. 2019.
2010. [55] J. M. Burkhardt, ‘‘History of fake news,’’ Library Technol. Rep., vol. 53,
[31] C. H. Malin, E. Casey, and J. M. Aquilina, Malware Forensics Field no. 8, pp. 5–9, 2017.
Guide for Windows Systems: Digital Forensics Field Guides. Amsterdam, [56] D. M. Lazer, M. A. Baum, Y. Benkler, A. J. Berinsky, K. M. Greenhill,
The Netherlands: Elsevier, 2012. F. Menczer, M. J. Metzger, B. Nyhan, G. Pennycook, D. Rothschild,
[32] B. Ruttenberg, C. Miles, L. Kellogg, V. Notani, M. Howard, C. LeDoux, M. Schudson, S. A. Sloman, C. R. Sunstein, E. A. Thorson, D. J. Watts,
A. Lakhotia, and A. Pfeffer, ‘‘Identifying shared software components and J. L. Zittrain, ‘‘The science of fake news,’’ Science, vol. 359, no. 6380,
to support malware forensics,’’ in Proc. Int. Conf. Detection Intrusions pp. 1094–1096, 2018.
Malware, Vulnerability Assessment. Cham, Switzerland: Springer, 2014, [57] H. Allcott and M. Gentzkow, ‘‘Social media and fake news in the 2016
pp. 21–40. election,’’ J. Econ. Perspect., vol. 31, no. 2, pp. 211–236, 2017.
[33] B. Carrier and E. H. Spafford, ‘‘An event-based digital forensic investi- [58] D. K. Citron and R. Chesney, Deep Fakes: A Looming Crisis for
gation framework,’’ in Proc. Digit. Forensic Research Workshop, 2004, National Security, Democracy and Privacy? Washington, DC, USA:
pp. 11–13. Lawfare, 2018.
[59] X. Bo, ‘‘Xinhua presents AI anchors at news agencies world congress,’’ [80] B. Li, Y. Wang, A. Singh, and Y. Vorobeychik, ‘‘Data poisoning attacks
Xinhua, Beijing, China, Tech. Rep., Jun. 2019. [Online]. Available: on factorization-based collaborative filtering,’’ in Proc. Adv. Neural Inf.
http://www.xinhuanet.com/english/2019-06/15/c_138146148.htm Process. Syst., 2016, pp. 1885–1893.
[60] D. Wang and P. Wang, ‘‘Offline dictionary attack on password authen- [81] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li,
tication schemes using smart cards,’’ in Information Security. Cham, ‘‘Manipulating machine learning: Poisoning attacks and countermeasures
Switzerland: Springer, 2015, pp. 221–237. for regression learning,’’ in Proc. IEEE Symp. Secur. Privacy (SP),
[61] A. K. Kyaw, F. Sioquim, and J. Joseph, ‘‘Dictionary attack on wordpress: May 2018, pp. 19–35.
Security and forensic analysis,’’ in Proc. 2nd Int. Conf. Inf. Secur. Cyber [82] J. Zhang and C. Li, ‘‘Adversarial examples: Opportunities and chal-
Forensics (InfoSec), Nov. 2015, pp. 158–164. lenges,’’ IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 7,
[62] S. Gupta, A. Singhal, and A. Kapoor, ‘‘A literature survey on social engi- pp. 2578–2593, Jul. 2019.
neering attacks: Phishing attack,’’ in Proc. Int. Conf. Comput., Commun. [83] K. R. Mopuri, U. Garg, and R. V. Babu, ‘‘Fast feature fool: A data
Autom. (ICCCA), Apr. 2016, pp. 537–540. independent approach to universal adversarial perturbations,’’ 2017,
[63] R. Russell, L. Kim, L. Hamilton, T. Lazovich, J. Harer, O. Ozdemir, arXiv:1707.05572. [Online]. Available: http://arxiv.org/abs/1707.05572
P. Ellingwood, and M. McConley, ‘‘Automated vulnerability detection in [84] F. Tramèr, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, ‘‘The
source code using deep representation learning,’’ in Proc. 17th IEEE Int. space of transferable adversarial examples,’’ 2017, arXiv:1704.03453.
Conf. Mach. Learn. Appl. (ICMLA), Dec. 2018, pp. 757–762. [Online]. Available: http://arxiv.org/abs/1704.03453
[64] G. Grieco, G. L. Grinblat, L. Uzal, S. Rawat, J. Feist, and [85] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, ‘‘Accessorize
L. Mounier, ‘‘Toward large-scale vulnerability discovery using machine to a crime: Real and stealthy attacks on state-of-the-art face recogni-
learning,’’ in Proc. 6th ACM Conf. Data Appl. Secur. Privacy, 2016, tion,’’ in Proc. ACM SIGSAC Conf. Comput. Commun. Secur., 2016,
pp. 85–96. pp. 1528–1540.
[65] S. M. Ghaffarian and H. R. Shahriari, ‘‘Software vulnerability analysis [86] Q. Liu, P. Li, W. Zhao, W. Cai, S. Yu, and V. C. M. Leung, ‘‘A
and discovery using machine-learning and data-mining techniques: A sur- survey on security threats and defensive techniques of machine learn-
vey,’’ ACM Comput. Surv., vol. 50, no. 4, pp. 1–36, 2017. ing: A data driven view,’’ IEEE Access, vol. 6, pp. 12103–12117,
[66] Z. Li, D. Zou, S. Xu, X. Ou, H. Jin, S. Wang, Z. Deng, and 2018.
Y. Zhong, ‘‘VulDeePecker: A deep learning-based system for vul- [87] S. Alfeld, X. Zhu, and P. Barford, ‘‘Data poisoning attacks against autore-
nerability detection,’’ 2018, arXiv:1801.01681. [Online]. Available: gressive models,’’ in Proc. 13th AAAI Conf. Artif. Intell., 2016.
http://arxiv.org/abs/1801.01681 [88] L. Tong, B. Li, C. Hajaj, C. Xiao, N. Zhang, and Y. Vorobeychik,
[67] H. Xue, S. Sun, G. Venkataramani, and T. Lan, ‘‘Machine learning-based ‘‘Improving robustness of {ML} classifiers against realizable evasion
analysis of program binaries: A comprehensive study,’’ IEEE Access, attacks using conserved features,’’ in Proc. 28th USENIX Secur. Symp.
vol. 7, pp. 65889–65912, 2019. (USENIX Secur.), 2019, pp. 285–302.
[68] H. Ashrafian, ‘‘AIonAI: A humanitarian law of artificial intelligence and [89] H. Dang, Y. Huang, and E.-C. Chang, ‘‘Evading classifiers by morphing in
robotics,’’ Sci. Eng. Ethics, vol. 21, no. 1, pp. 29–40, Feb. 2015. the dark,’’ in Proc. ACM SIGSAC Conf. Comput. Commun. Secur., 2017,
[69] P. Lin, K. Abney, and R. Jenkins, Robot Ethics 2.0: From Autonomous pp. 119–133.
Cars to Artificial Intelligence. London, U.K.: Oxford Univ. Press, 2017. [90] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, ‘‘Membership infer-
[70] M. U. Scherer, ‘‘Regulating artificial intelligence systems: Risks, ence attacks against machine learning models,’’ in Proc. IEEE Symp.
challenges, competencies, and strategies,’’ Harv. JL Tech., vol. 29, Secur. Privacy (SP), May 2017, pp. 3–18.
no. 2, p. 353, 2015. [Online]. Available: https://heinonline.org/HOL/ [91] N. Papernot, P. McDaniel, A. Sinha, and M. P. Wellman, ‘‘SoK: Security
LandingPage?handle=hein.journals/hjlt29&div=15&id=&page= and privacy in machine learning,’’ in Proc. IEEE Eur. Symp. Secur.
[71] M. Cummings, ‘‘Artificial intelligence and the future of warfare,’’ Privacy (EuroS&P), Apr. 2018, pp. 399–414.
Chatham House Roy. Inst. Int. Affairs London, London, U.K., [92] I. J. Goodfellow, J. Shlens, and C. Szegedy, ‘‘Explaining and harness-
Tech. Rep., 2017. [Online]. Available: https://d1wqtxts1xzle7.cloudfront. ing adversarial examples,’’ 2014, arXiv:1412.6572. [Online]. Available:
net/59339185/2017-01-26-artificial-intelligence-future-warfare- http://arxiv.org/abs/1412.6572
cummings-final20190521-119589-196oqd3.pdf?1558436451=& [93] A. Nguyen, J. Yosinski, and J. Clune, ‘‘Deep neural networks are eas-
response-content-disposition=inline%3B+filename%3DArtificial_ ily fooled: High confidence predictions for unrecognizable images,’’
intelligence_future_warfare_c.pdf&Expires=1602336950&Signature= in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015,
HKEsP2BLOP-ejS4xEtWLmUDyK73alNdyoR6VKfwztl9UtE6JWYXe pp. 427–436.
drK5m70LyENBBBiz31Gh5us1fPYBUzisLn9oawuXphtS8n17G9m0p [94] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao,
I4kNyL01baxyqdYSKrB7G-gYcmHY2D49fqW7L6Y7voswcghWv1gc A. Prakash, T. Kohno, and D. Song, ‘‘Robust physical-world attacks on
tqVq5vS8KprSY8zTb6XJCZXdBNjWxbNqpw2nLplzXG7gghignIFYj9 deep learning visual classification,’’ in Proc. IEEE Conf. Comput. Vis.
ncYGrXcVH6IV7KKYjpX9DfvE01dROHibDLS1Ixhj~ry5faZ6BZvPE Pattern Recognit., Jun. 2018, pp. 1625–1634.
JS5UWOVk-T0H91fUIURVW3Y2DRG5tqWWO2V8vJJt8DRFnQCb [95] K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel,
of70RE9MlgWIgWNKcF04GVE3ul0ZoA__&Key-Pair-Id= ‘‘Adversarial examples for malware detection,’’ in Proc. Eur.
APKAJLOHF5GGSLRBV4ZA Symp. Res. Comput. Secur. Cham, Switzerland: Springer, 2017,
[72] A. Ilachinski, AI, Robots, and Swarms: Issues, Questions, and Recom- pp. 62–79.
mended Studies. Arlington County, VA, USA: CNA Corporation, 2017. [96] O. Suciu, S. E. Coull, and J. Johns, ‘‘Exploring adversarial examples
[73] D. S. Hoadley and N. J. Lucas, Artificial Intelligence and National in malware detection,’’ in Proc. IEEE Secur. Privacy Workshops (SPW),
Security. Washington, DC, USA: Congressional Research Service, 2018 May 2019, pp. 8–14.
[74] S. Jafarnejad, L. Codeca, W. Bronzi, R. Frank, and T. Engel, ‘‘A car [97] I. Corona, G. Giacinto, and F. Roli, ‘‘Adversarial attacks against intru-
hacking experiment: When connectivity meets vulnerability,’’ in Proc. sion detection systems: Taxonomy, solutions and open issues,’’ Inf. Sci.,
IEEE Globecom Workshops (GC Wkshps), Dec. 2015, pp. 1–6. vol. 239, pp. 201–225, Aug. 2013.
[75] F. Martinelli, F. Mercaldo, V. Nardone, and A. Santone, ‘‘Car hacking [98] N. Martins, J. M. Cruz, T. Cruz, and P. H. Abreu, ‘‘Adversarial machine
identification through fuzzy logic algorithms,’’ in Proc. IEEE Int. Conf. learning applied to intrusion and malware scenarios: A systematic
Fuzzy Syst. (FUZZ-IEEE), Jul. 2017, pp. 1–7. review,’’ IEEE Access, vol. 8, pp. 35403–35419, 2020.
[76] J. Clough, Principles of Cybercrime. Cambridge, U.K.: Cambridge Univ. [99] J. Hayes and O. Ohrimenko, ‘‘Contamination attacks and mitigation in
Press, 2015. multi-party machine learning,’’ in Proc. Adv. Neural Inf. Process. Syst.,
[77] M. Fredrikson, S. Jha, and T. Ristenpart, ‘‘Model inversion attacks 2018, pp. 6604–6615.
that exploit confidence information and basic countermeasures,’’ in [100] C. Catal and B. Diri, ‘‘Investigating the effect of dataset size, metrics sets,
Proc. 22nd ACM SIGSAC Conf. Comput. Commun. Secur., 2015, and feature selection techniques on software fault prediction problem,’’
pp. 1322–1333. Inf. Sci., vol. 179, no. 8, pp. 1040–1058, Mar. 2009.
[78] B. Biggio, ‘‘Machine learning under attack: Vulnerability exploitation [101] J. M. Alvarez, T. Gevers, Y. LeCun, and A. M. Lopez, ‘‘Road scene
and security measures,’’ in Proc. 4th ACM Workshop Inf. Hiding Mul- segmentation from a single image,’’ in Proc. Eur. Conf. Comput. Vis.
timedia Secur., 2016, pp. 1–2. Berlin, Germany: Springer, 2012, pp. 376–389.
[79] M. Zhao, B. An, W. Gao, and T. Zhang, ‘‘Efficient label contamina- [102] R. Zhang, S. A. Candra, K. Vetter, and A. Zakhor, ‘‘Sensor fusion for
tion attacks against black-box learning models,’’ in Proc. IJCAI, 2017, semantic segmentation of urban scenes,’’ in Proc. IEEE Int. Conf. Robot.
pp. 3945–3951. Autom. (ICRA), May 2015, pp. 1850–1857.
[103] S. D. Jain and K. Grauman, ‘‘Supervoxel-consistent foreground propa- [123] J. Kornblum, ‘‘Identifying almost identical files using context triggered
gation in video,’’ in Proc. Eur. Conf. Comput. Vis. Cham, Switzerland: piecewise hashing,’’ Digit. Invest., vol. 3, pp. 91–97, Sep. 2006.
Springer, 2014, pp. 656–671. [124] V. Roussev, ‘‘Data fingerprinting with similarity digests,’’ in Proc.
[104] L. Yi, V. G. Kim, D. Ceylan, I.-C. Shen, M. Yan, H. Su, C. Lu, Q. Huang, IFIP Int. Conf. Digit. Forensics. Berlin, Germany: Springer, 2010,
A. Sheffer, and L. Guibas, ‘‘A scalable active framework for region pp. 207–226.
annotation in 3D shape collections,’’ ACM Trans. Graph., vol. 35, no. 6, [125] V. Roussev, ‘‘An evaluation of forensic similarity hashes,’’ Digit. Invest.,
pp. 1–12, Nov. 2016. vol. 8, pp. S34–S41, Aug. 2011.
[105] P. Mohassel and Y. Zhang, ‘‘SecureML: A system for scalable privacy- [126] N. Carlini and D. Wagner, ‘‘Audio adversarial examples: Targeted attacks
preserving machine learning,’’ in Proc. IEEE Symp. Secur. Privacy (SP), on Speech-to-Text,’’ in Proc. IEEE Secur. Privacy Workshops (SPW),
May 2017, pp. 19–38. May 2018, pp. 1–7.
[106] Y. Mao, S. Yi, Q. Li, J. Feng, F. Xu, and S. Zhong, ‘‘A privacy-preserving [127] A. Athalye, N. Carlini, and D. Wagner, ‘‘Obfuscated gradients
deep learning approach for face recognition with edge computing,’’ in give a false sense of security: Circumventing defenses to adversar-
Proc. USENIX Workshop Hot Topics Edge Comput. (HotEdge), 2018, ial examples,’’ 2018, arXiv:1802.00420. [Online]. Available: http://
pp. 1–6. arxiv.org/abs/1802.00420
[107] M. Mohammadi, A. Al-Fuqaha, S. Sorour, and M. Guizani, ‘‘Deep learn- [128] T. Zheng, C. Chen, and K. Ren, ‘‘Distributionally adversarial attack,’’ in
ing for IoT big data and streaming analytics: A survey,’’ IEEE Commun. Proc. AAAI Conf. Artif. Intell., vol. 33, 2019, pp. 2253–2260.
Surveys Tuts., vol. 20, no. 4, pp. 2923–2960, 4th Quart., 2018. [129] N. Carlini and D. Wagner, ‘‘Adversarial examples are not easily detected:
[108] X. Fei, N. Shah, N. Verba, K.-M. Chao, V. Sanchez-Anguix, Bypassing ten detection methods,’’ in Proc. 10th ACM Workshop Artif.
J. Lewandowski, A. James, and Z. Usman, ‘‘CPS data streams analytics Intell. Secur., 2017, pp. 3–14.
based on machine learning for cloud and fog computing: A survey,’’ [130] K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. McDaniel,
Future Gener. Comput. Syst., vol. 90, pp. 435–450, Jan. 2019. ‘‘On the (Statistical) detection of adversarial examples,’’ 2017,
[109] N. Carlini and D. Wagner, ‘‘Towards evaluating the robustness of neu- arXiv:1702.06280. [Online]. Available: http://arxiv.org/abs/1702.06280
ral networks,’’ in Proc. IEEE Symp. Secur. Privacy (SP), May 2017, [131] X. Li and F. Li, ‘‘Adversarial examples detection in deep networks with
pp. 39–57. convolutional filter statistics,’’ in Proc. IEEE Int. Conf. Comput. Vis.,
[110] Q. Xiao, K. Li, D. Zhang, and W. Xu, ‘‘Security risks in deep learn- Oct. 2017, pp. 5764–5772.
ing implementations,’’ in Proc. IEEE Secur. Privacy Workshops (SPW), [132] R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner, ‘‘Detecting
May 2018, pp. 123–128. adversarial samples from artifacts,’’ 2017, arXiv:1703.00410. [Online].
[111] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, Available: http://arxiv.org/abs/1703.00410
S. Ozair, A. Courville, and Y. Bengio, ‘‘Generative adversarial nets,’’ in [133] J. Hendrik Metzen, T. Genewein, V. Fischer, and B. Bischoff, ‘‘On
Proc. Adv. Neural Inf. Process. Syst., vol. 2014, pp. 2672–2680. detecting adversarial perturbations,’’ 2017, arXiv:1702.04267. [Online].
[112] A. Salem, A. Bhattacharya, M. Backes, M. Fritz, and Y. Zhang, Available: http://arxiv.org/abs/1702.04267
‘‘Updates-leak: Data set inference and reconstruction attacks in [134] H. Hosseini, Y. Chen, S. Kannan, B. Zhang, and R. Poovendran,
online learning,’’ 2019, arXiv:1904.01067. [Online]. Available: ‘‘Blocking transferability of adversarial examples in black-box
http://arxiv.org/abs/1904.01067 learning systems,’’ 2017, arXiv:1703.04318. [Online]. Available:
[113] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and http://arxiv.org/abs/1703.04318
A. Swami, ‘‘Practical black-box attacks against machine learning,’’ in [135] D. Meng and H. Chen, ‘‘MagNet: A two-pronged defense against adver-
Proc. ACM Asia Conf. Comput. Commun. Secur., Apr. 2017, pp. 506–519. sarial examples,’’ in Proc. ACM SIGSAC Conf. Comput. Commun. Secur.,
[114] R. Ahmed and R. V. Dharaskar, ‘‘Study of mobile botnets: An analysis 2017, pp. 135–147.
from the perspective of efficient generalized forensics framework for [136] S. Tian, G. Yang, and Y. Cai, ‘‘Detecting adversarial examples through
mobile devices,’’ in Proc. Int. J. Control Automat. Proc. Nat. Conf. Innov. image transformation,’’ in Proc. 32nd AAAI Conf. Artif. Intell., 2018.
Paradigms Eng. Technol. (NCIPET), 2012, pp. 5–8. [137] N. Carlini and H. Farid, ‘‘Evading deepfake-image detectors with white-
[115] S. Mohtasebi and A. Dehghantanha, ‘‘Towards a unified forensic inves- and black-box attacks,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recog-
tigation framework of smartphones,’’ Proc. Int. J. Comput. Theory Eng., nit. Workshops, Jun. 2020, pp. 658–659.
vol. 5, no. 2, p. 351, 2013. [138] E. Eilam, Reversing: Secrets of Reverse Engineering. Hoboken, NJ, USA:
[116] S. Simou, C. Kalloniatis, S. Gritzalis, and H. Mouratidis, ‘‘A survey on Wiley, 2011.
cloud forensics challenges and solutions,’’ Secur. Commun. Netw., vol. 9,
no. 18, pp. 6285–6314, Dec. 2016.
[117] S. Simou, C. Kalloniatis, S. Gritzalis, and V. Katos, ‘‘A framework for
designing cloud forensic-enabled services (CFeS),’’ Requirements Eng.,
vol. 24, no. 3, pp. 403–430, Sep. 2019.
[118] J. Brownlee. (2020). Basic Concepts in Machine Learning. [Online].
Available: https://machinelearningmastery.com/basic-concepts-in-
machine-learning/
[119] L. Luo, J. Ming, D. Wu, P. Liu, and S. Zhu, ‘‘Semantics-based obfuscation-resilient binary code similarity comparison with applications to software and algorithm plagiarism detection,’’ IEEE Trans. Softw. Eng., vol. 43, no. 12, pp. 1157–1177, Dec. 2017.
[120] F. Benedetti, D. Beneventano, S. Bergamaschi, and G. Simonini, ‘‘Computing inter-document similarity with context semantic analysis,’’ Inf. Syst., vol. 80, pp. 136–147, Feb. 2019.
[121] O. Mayer and M. C. Stamm, ‘‘Learned forensic source similarity for unknown camera models,’’ in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Apr. 2018, pp. 2012–2016.
[122] M. Ferguson, S. Jeong, K. H. Law, S. Levitan, A. Narayanan, R. Burkhardt, T. Jena, and Y.-T. T. Lee, ‘‘A standardized representation of convolutional neural networks for reliable deployment of machine learning models in the manufacturing industry,’’ in Proc. Int. Design Eng. Tech. Conf. Comput. Inf. Eng. Conf., vol. 59179, 2019, Art. no. V001T02A005.

DOOWON JEONG received the B.S. degree from the Division of Industrial Management Engineering, Korea University, in 2011, and the Ph.D. degree from the Graduate School of Information Security, Korea University, in 2019. He is currently an Assistant Professor with the College of Police and Criminal Justice, Dongguk University. His research interests include digital forensics, information security, artificial intelligence, and digital profiling.