Deepfakes: Risks and Opportunities in Marketing
Deepfakes: Deceptions, Mitigations, and Opportunities
Mekhail Mustak 1,2
1 Department of Marketing and Sales Management, IESEG School of Management, Univ. Lille, CNRS, UMR 9221 - LEM - Lille Economie Management, France
2 Department of Marketing & International Business, Turku School of Economics, University of Turku, Finland
e-mail: [email protected]
Joni Salminen
University of Vaasa, Finland
Matti Mäntymäki
Turku School of Economics, Finland
Arafat Rahman
Hanken School of Economics, Finland
Yogesh K. Dwivedi
School of Management, Swansea University, Bay Campus, Fabian Way, Swansea,
UK
Acknowledgments: The authors thank Liikesivistysrahasto (Foundation for Economic Education, Finland) for financial support towards this research.
ABSTRACT
Deepfakes—artificial but hyper-realistic video, audio, and images created by algorithms—are
one of the latest technological developments in artificial intelligence. Amplified by the speed
and scope of social media, they can quickly reach millions of people and result in a wide range
of marketplace deceptions. However, extant understandings of deepfakes’ implications in the
marketplace are limited and fragmented. Against this background, we develop insights into the
significance of deepfakes for firms and consumers—the threats they pose, how to mitigate
those threats, and the opportunities they present. Our findings indicate that the main risks to
firms include damage to image, reputation, and trustworthiness and the rapid obsolescence of
existing technologies. However, consumers may also suffer blackmail, bullying, defamation,
harassment, identity theft, intimidation, and revenge porn. We then accumulate and present
knowledge on the strategies and mechanisms to safeguard against deepfake-based marketplace
deception. Furthermore, we uncover and report the various legitimate opportunities offered by
this new technology. Finally, we present an agenda for future research in this emergent and
highly critical area.
Keywords: Deepfake; fake photo; fake video; artificial intelligence; machine learning;
deception; opportunities; threats; challenges; protection; marketing
1 INTRODUCTION
The “successful” moon mission was a hoax! The “truth” is the Apollo 11 astronauts actually
never returned from the moon. In an incredibly realistic video, the then president of the United
States, Richard Nixon, delivered a televised speech to the nation in a gloomy voice: “Fate has
ordained that the men who went to the moon to explore in peace will stay on the moon to rest
in peace!” A sad day for humanity! Although the Apollo 11 mission was successful in reality,
this “deepfake” video¹ was created by the MIT Center for Advanced Virtuality to generate
public awareness of the dangers of this emerging artificial intelligence (AI)-based technology.
In the words of Francesca Panetta, the Project Co-Lead and XR Creative Director:
“We hope that our work will spark critical awareness among the public. We want them
to be alert to what is possible with today’s technology (...) and to be ready to question
what they see and hear as we enter a future fraught with challenges over the question
of truth.”
Deepfakes are digitally manipulated synthetic media content (e.g., videos, images, sound
clips) in which people are shown to do or say something that never existed or happened in the
real world (Boush et al., 2015; Chesney & Citron, 2019; Westerlund, 2019). Advances in AI—
particularly machine learning (ML) and deep neural networks (DNNs)—have contributed to
the development of deepfakes (Chesney & Citron, 2019; Dwivedi et al., 2021; Kietzmann et
al., 2020; Mirsky & Lee, 2021). These look highly credible and “true to life” to the extent that
distinguishing them from authentic media can be very challenging for a human (see Figure 1).
Thus, they can be used for the purpose of widespread marketplace deception, with varied
ramifications for both firms and consumers (Europol, 2022; Luca & Zervas, 2016). In fact, a
recent study by scientists from University College London ranks fake audio or video content
1 https://www.youtube.com/watch?v=2rkQn-43ixs
as the most worrisome use of AI in terms of its potential applications for crime or terrorism
(Caldwell et al., 2020). But, simultaneously, this emerging technology has the potential to bring
forth major business opportunities for content creation and engagement (Etienne, 2021; Farish, 2020).
Deepfakes are closely tied to deception, a long-standing topic of interest in consumer research and marketing (Boush et al., 2015; Darke & Ritchie, 2007; Ho et al., 2016).
In general, deception refers to a deliberate attempt or act to present others with false or omitted
information with the aim of creating a belief that the communicator considers false (Darke &
Ritchie, 2007; Ludwig et al., 2016; Xiao & Benbasat, 2011). Thus, it is an intentional
manipulation of information to create a false belief in others’ minds (i.e., deceiving parties), all
of which can be further increased through deepfakes and hurt consumers and firms alike (Xiao
& Benbasat, 2011). Deception permeates the marketplace and harms consumers’ health, welfare, and financial well-being.
For example, a fake video of a CEO admitting the company has been charged with a large
regulatory fine (or class-action lawsuit) could cause severe damage, with a crash in the stock
value of the company being one of the first negative consequences. These types of attacks have
already begun to occur. According to The Wall Street Journal (Stupp, 2019), in one high-profile
case, cybercriminals used “deepfake phishing” to deceive the CEO of a UK energy company
into transferring $243,000 into their account. Using AI-based voice spoofing software, the
criminals successfully impersonated the head of the firm’s parent company, deceiving the CEO
into believing he was speaking with his boss. The cybersecurity organization Symantec has
stated that it encountered at least three examples of deepfake-based fraud in 2019, resulting in
millions of dollars being lost (Zakrzewski, 2019). Moreover, consumers are susceptible to
blackmail, intimidation, sabotage, harassment, defamation, revenge porn, identity theft, and
bullying (Chesney & Citron, 2019; Cross, 2022; Europol, 2022; Fido et al., 2022; Karasavva & Noorbhai, 2021).
Yet at the same time, this emerging technology also carries positive potential through
different forms of commercialization (Johnson & Diakopoulos, 2021; Maksutov et al., 2020).
Deepfakes may even help change or innovate business models (Kietzmann et al., 2020). The
opportunities pertaining to deepfakes are becoming even more relevant as consumers start
spending more time in virtual worlds, which will foreseeably attract more attention and
investment from firms across the board. For example, Facebook has changed its name to Meta
and is pursuing a virtual reality world called the Metaverse, in which the company reportedly
invested 10 billion dollars in fiscal year 2021 alone.² This virtual world will largely be
composed of deepfake objects. Thus, this latest technology will usher in new opportunities, as
well as new dangers. This dualistic nature is why, in the present article, we investigate the risks
and opportunities of deepfakes, which are virtually unexplored in the present business
literature.
Another critical factor making deepfakes relevant is their dissemination via the internet
and social media—both of which have become integral to people’s personal and professional
lives, allowing consumers to access easy-to-use platforms for real-time discussions, ideological
expression, information dissemination, and the sharing of emotions and sentiments (Perse &
Lambe, 2016). Consequently, the scale, volume, and distribution speed of deepfakes, combined
with the increasing pervasiveness of digital technologies in all areas of society, will have
profound positive and negative implications in the marketplace (Kietzmann et al., 2020;
Westerlund, 2019).
2 https://www.cbsnews.com/news/facebook-earnings-report-2021-q3-metaverse/
However, as deepfakes are an emergent technology and complex in nature (Chesney &
Citron, 2019; Dwivedi et al., 2021; Kietzmann et al., 2020; Westerlund, 2019), the current
understanding of their implications is scattered, sparse, and nascent (Botha & Pieterse, 2020;
Chesney & Citron, 2019; Kietzmann et al., 2020). As extant literature only offers anecdotal
and disparate indications related to the possibilities of deepfakes for firms and consumers
(Chesney & Citron, 2019; Vimalkumar et al., 2021; Wagner & Blewer, 2019), there is a lack of coherent knowledge about the threats deepfakes pose and the opportunities they present for both companies and consumers (Chesney & Citron, 2019).
To date, marketplace deception has been primarily investigated from the consumer
perspective, with a heavy emphasis on how it affects consumers (Taylor, 2021; Xie et al.,
2020). The effects of deepfakes on businesses have received scant attention, despite the fact
that researchers have noted firms are not immune to their effects (Chadderton & Croft, 2006;
Xie et al., 2020). Moreover, deepfakes have a legitimate potential to create commercial
opportunities, distinguishing them further from other forms of deception such as fake reviews
or opinion spam that only produce adverse effects (Johnson & Diakopoulos, 2021; Kietzmann
et al., 2020; Malbon, 2013). Consequently, both consumers and firms must develop their
understanding and avoidance capabilities of deepfake deception, mitigate the harm deepfakes
can create, and enjoy the opportunities they may offer (Boush et al., 2015; Taylor, 2021).
Against this background, the purpose of this study is to generate a holistic understanding
of deepfakes vis-à-vis marketplace deception and the potential opportunities they offer. More specifically, we address three research questions: What marketplace deceptions do deepfakes enable for firms and consumers? How can firms and consumers protect themselves against such deceptions? And what legitimate opportunities are offered by deepfakes?
Through the application of an integrative literature review (ILR; Toronto & Remington, 2020), we synthesize literature from multiple research streams with footprints in deepfake research, generating a holistic understanding of deepfakes in terms of marketplace deception for firms and consumers (van Heerde et al., 2021). We also accumulate and present the mechanisms for protection against their
harmful effects, offering insights into the legitimate opportunities presented by this emerging
technology.
2 CONCEPTUAL UNDERPINNINGS
Marketplace deception encompasses deceptive interactions between business entities, marketers, consumers, and any other party seeking to gain benefit in an illegal or unethical manner (Boush et al., 2015). Such deceptions may include information overload, display of false emotions in sales and service delivery situations, brand mimicry, and lying about product features and usage outcomes (Boush et al., 2015; Mechner, 2010).
The early academic literature in this area focused mainly on deceptions through
advertising and marketing communications. As early as 1975, Gardner (p. 42) posited the
following: “If an advertisement (or advertising campaign) leaves the consumer with an
impression(s) and/or belief(s) different from what would normally be expected if the consumer
had reasonable knowledge, and that impression(s) and/or belief(s) is factually untrue or
potentially misleading, then deception is said to exist.” This argument emphasizes how a
marketer might take advantage of consumers by disseminating false information. Given this, it is reasonable to presume that the false information in question is created with the intent of profiting at the expense of consumers (Chadderton & Croft, 2006; Xie et al., 2020). Such deception can also damage consumers’ perceptions about advertising and marketing in general, as well as heighten their skepticism of future marketing communications.
In the context of e-commerce, Xiao and Benbasat (2011) argue that product-related deception can occur through the manipulation of information generation, information content, and information presentation. For example, an e-commerce platform can mislead buyers by presenting false information about a product or about its contents on its packaging (Román, 2010; Xiao & Benbasat, 2011). Moreover, artificial agents, such as product recommendation agents, can themselves be designed to deceive consumers (Xiao & Benbasat, 2011).
Similarly, because buyers rely on product reviews when making online purchases,
businesses can fabricate and distribute fake product reviews to sway buyers’ selections
(Malbon, 2013; Zhao et al., 2013). Such forms of marketplace deception (also known as opinion spam) may be either human-generated or computer-generated. Human-generated fake reviews may be sponsored by firms through false online consumer
identities (Malbon, 2013; Salminen, Kandpal, et al., 2022). Computer-generated fake reviews
use text-generation algorithms to automate fake review creation (Salminen, Mustak, et al.,
2022). Irrespective of the mechanisms by which the deceptions are created and distributed, their purpose is to mislead buyers for commercial gain.
The use of synthetic media in marketplace deception differs from traditional deception
in several ways (Giansiracusa, 2021; Karnouskos, 2020; Mechner, 2010; Mirsky & Lee, 2021;
Van Huynh et al., 2021). Synthetic media is an umbrella term for the artificial creation or modification of media content by automated means, particularly AI algorithms (CB Information Services, 2021; Synthesia, 2020; Taylor, 2021). Today, synthetic media include
music composed by AI, text generation, imagery and video generation, and voice synthesis
(CB Information Services, 2021; Karnouskos, 2020). Among these various forms, deepfakes
are by far the most prevalent (Chesney & Citron, 2019; Zotov et al., 2020). The term
“deepfake” was coined in late 2017 as a portmanteau of the terms “deep learning” and “fake.”
Traditional deception techniques typically involve omitting relevant information and/or presenting false information as true (Ott et al., 2013; Taylor, 2021). The
more recent technology-based forms, such as opinion spam and fake reviews, are generally
textual in nature or may include out-of-context but genuine photographs (Lappas, 2012;
Malbon, 2013; Ott et al., 2013). They are also context- and purpose-specific (Lappas, 2012).
However, the introduction of synthetic media takes marketplace deception to a whole new level
due to its versatile nature and higher appeal to human cognitive functions (Taylor, 2021;
Wagner & Blewer, 2019). These media are also much more appealing and lifelike, with broad
applications in a variety of contexts, all of which make protection from them significantly more difficult.
The presence of visible and nonverbal cues (e.g., facial expressions, eye contact), which people traditionally rely on to judge authenticity, can now be convincingly fabricated owing to recent technological breakthroughs (Maksutov et al., 2020; Ramadhani & Munir, 2020; Tong et al., 2020), thus heightening the degree of marketplace deception to unprecedented levels (Ho et al., 2016; Taylor, 2021). Moreover, research on computer-mediated deception has previously examined language-action cues, such as verbal and nonverbal immediacy, the superfluous use of words, structured messages, and argument development, which receivers use to evaluate the truthfulness of incoming information (Ho et al., 2016; Ludwig et al., 2016).
Thus, the recent introduction of deepfakes makes marketplace deception even more damaging,
as hyper-realistic videos and other multimedia deepfakes are extremely difficult to differentiate
from reality (Boush et al., 2015; Giansiracusa, 2021; Tahir et al., 2021; Zhao et al., 2020).
3 METHODOLOGY
The ILR approach that we have applied in this study is “a form of research that reviews,
critiques, and synthesizes representative literature on a topic in an integrated way such that
new frameworks and perspectives on the topic are generated” (Torraco, 2005, p. 356). It is
considered a particular form of systematic literature review (SLR; Toronto & Remington,
2020). However, the SLR approach tends to narrowly focus on a specific topic or type of study
(Booth et al., 2016). In contrast, the aim of ILR is to be phenomenologically inclusive, placing
less emphasis on the type of study, venue, and discipline (Toronto & Remington, 2020;
Torraco, 2016).
Our adoption of the ILR approach is influenced by the inadequacy of existing research
on deepfakes in the business and marketing domains. As relevant research in other fields, such
as computer science and political science, is more developed than in the business domain, it is
worth pursuing knowledge generated in those fields while analyzing any ramifications it may
have in a marketing context. Thus, the ILR approach allowed us to integrate primary
knowledge from various research streams, generating coherent and insightful answers to our
research questions (Toronto & Remington, 2020; Torraco, 2016). As described by Tranfield et
al. (2003), and following their adaptation by Sivarajah et al. (2017), we applied a three-phase review process:
Phase I – Planning the Review Process: Identifying the critical phenomenon of deepfakes and defining the research aim and scope.
Phase II – Conducting the Review: Identifying and selecting the relevant literature, developing the analytical framework, coding and synthesizing the relevant information, and developing the conceptual framework.
Phase III – Reporting and Dissemination of the Research Results: Descriptive reporting
of results according to the research questions, discussing the findings further, drawing
implications from the study, and identifying future research avenues (Sivarajah et al., 2017).
“Phase I”—identifying the phenomenon and defining the research aim and scope—has already been presented in the introduction section of this article. Next, we offer a description of “Phase II” in detail. “Phase III”—reporting and dissemination—is presented in the subsequent sections of this article.
To identify relevant literature, we used three academic databases: Web of Science (WoS),
ACM Digital Library, and IEEE Xplore. As a generic database, WoS is the most
comprehensive, containing over 12,000 high-impact journals and scientific articles from over
3,300 publishers. The ACM Digital Library and IEEE Xplore databases focus on technical
disciplines. When combined, these three databases offer extensive and balanced coverage of the relevant literature.
We conducted detailed searches in each of the three databases. Given the nascent stage
of deepfake research, we did not want to pre-limit the searches with highly specific keywords
that could result in the omission of important papers. Rather, to identify a wide range of
publications to illuminate deepfakes and their implications, we used only the keywords
“deepfake*” and “deep fake*” (where * is a wildcard capturing plural forms) and manually identified any associated
papers. We identified a total of 798 publications (WoS: 362; ACM Digital Library: 177; IEEE
Xplore: 259). For all publications, we recorded the title, author(s), publication outlet, and year of publication.
We then examined the publications individually to check whether they fit within the
scope of our study. In doing so, we read the title and abstract—and, if necessary, the
introduction and conclusion—of each publication to decide whether they should be included
in or excluded from our pool of reviewed literature (Mustak et al., 2016). First, we included peer-reviewed publications, as they tend to present the most up-to-date and established knowledge across scientific
disciplines (Mustak et al., 2016). We excluded other forms of publications, such as opinion
pieces. Second, for papers present in multiple databases, we kept only one record per paper and
excluded other ones. For example, the paper titled “Deepfake Portraits in Augmented Reality
for Museum Exhibits” by Nathan Wynn, Kyle Johnsen, and Nick Gonzalez (2021) was present
in both WoS and IEEE Xplore. We kept one record for it and removed the other. We also
excluded papers whose title/abstract/keywords were indexed in English in the databases but whose full text was written in another language.
Finally, from this pool of publications, we selected those contributing to the aims of this
study. Here, only publications useful in answering any of our three research questions were
retained. The rest were discarded. We preferred articles with literature reviews or clear
conceptual frameworks (Torraco, 2016), as these tend to summarize previous research rather
than focusing on a specific aspect of the phenomenon. We chose this “top-to-bottom” approach because an ILR benefits from such integrative summaries from multiple fields (Toronto & Remington, 2020; Torraco, 2016). In addition, we
included empirical studies that clearly articulated implications for either consumers (users) or
firms (organizations). Our final list included 74 publications (WoS: 42; ACM Digital Library:
14; IEEE Xplore: 18). The details of these papers—including source database, title, authors, research question(s) addressed, and key findings—are available in the appendix (Supplementary material, Table 3).
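The deduplication step described above—keeping one record per paper found in multiple databases—can be sketched programmatically. The following minimal Python sketch is purely illustrative: the record fields, normalization rule, and function names are our own assumptions and are not part of the authors' review protocol. It matches records by normalized title, as with the Wynn, Johnsen, and Gonzalez (2021) paper found in both WoS and IEEE Xplore.

```python
# Illustrative sketch of cross-database deduplication (not the authors' code).

def normalize_title(title: str) -> str:
    """Lowercase and collapse whitespace so trivially different titles match."""
    return " ".join(title.lower().split())

def deduplicate(records):
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records from two databases; the second is a duplicate of the first.
records = [
    {"title": "Deepfake Portraits in Augmented Reality for Museum Exhibits", "source": "WoS"},
    {"title": "deepfake portraits in augmented  reality for museum exhibits", "source": "IEEE Xplore"},
    {"title": "Some Other Deepfake Study", "source": "ACM DL"},
]

print(len(deduplicate(records)))  # 2
```

In practice, screening tools also match on DOI or fuzzy title similarity, since titles can differ slightly across databases; exact normalized-title matching is the simplest defensible baseline.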
The 74 publications reviewed in the current study were published in 57 different outlets,
indicating that the topic currently attracts the attention of diverse publication outlets and is
highly multidisciplinary. In our pool of reviewed literature, only the following outlets
published more than one paper on deepfakes: Convergence: The International Journal of
Research into New Media Technologies (4 papers); Cyberpsychology, Behavior, and Social Networking; and IEEE Transactions on Technology and Society (2 papers). As illustrated in Figure 3, the
first paper was published in 2017, and there were none in 2018. But the number of publications
has increased significantly since 2019, providing a clear indication of the topic’s mounting
research significance. Simultaneously, the dotted line in Figure 3 represents the Google
popularity index value (which ranges from 0 to 100, as determined by Google Trends),
indicating that both public and academic interest in deepfakes is growing rapidly.
We then developed an analytical framework with specific questions to address the goals of the current research in a coherent and holistic manner, as suggested in previous methodological literature (Toronto & Remington, 2020; Torraco, 2016). From the research questions, we derived specific analytical questions (AQs).
When analyzing the articles, we marked any text related to our analytical questions using
short and intuitive codes (Toronto & Remington, 2020). After coding, we categorized the codes
and associated texts based on their commonalities in relation to the analytical questions. We
then read and analyzed them thoroughly to elucidate appropriate answers. Once we generated
answers for each of the AQs, we grouped them according to our RQs. We then read and
analyzed the grouped answers again to check whether they coherently addressed the RQs
(Torraco, 2016). We then discussed the findings among the research team, critically examined
any disagreements in terms of interpretations, corrected any anomalies, and produced a set of
answers to the research questions on which all researchers agreed (Toronto & Remington, 2020).
4 FINDINGS
Based on our findings, we developed a conceptual framework to capture the deepfake phenomenon in the context of marketplace deception and the opportunities it offers (Figure 4). The framework provides an overview of the phenomenon as a whole. We conceptualize that this emergent and highly potent technology is dualistic in nature, thus posing
radical threats and opportunities for innovation to companies and consumers. The dotted arrows
representing threats in Figure 4 indicate that, according to our findings, the application of
existing protection strategies and mechanisms does not mitigate the harmful effects of
deepfakes in a comprehensive manner and offers only partial protection. Some harmful effects
may still reach companies and consumers. The dotted arrows on the right suggest that the
positive and negative effects of deepfakes do not necessarily remain only within the spheres of
companies or consumers. Rather, they often carry spillover effects, whereby effects on companies extend to consumers and vice versa.
In line with the conceptual framework and in response to our RQs, next, we first present
the various possible marketplace deceptions associated with deepfakes. Then, we analyze
existing knowledge regarding how firms and consumers can safeguard themselves against their
malicious effects. Following that, we identify and report the potential opportunities presented
by this emerging technology. Throughout our findings, we offer several examples illustrating
these aspects, thus establishing theory-practice links, i.e., what they mean for the firms and consumers concerned.
The existing literature on marketplace deception focuses primarily on consumers who are the
victims of deceptive actions and behaviors (Boush et al., 2015; Ludwig et al., 2016). However,
our study demonstrates that in comparison to traditional deceptions, the scope of threats posed
by deepfakes is significantly greater, as businesses can be harmed in many ways (Johnson &
Diakopoulos, 2021; Kietzmann et al., 2020; Zakrzewski, 2019). These include derogatory
activities such as defamation and sabotage, as well as damage to a firm’s image, reputation,
and trustworthiness (Botha & Pieterse, 2020; Schwartz, 2018; Westerlund, 2019). The most prominent of these are defamation and sabotage, which can threaten a company’s reputation and brand image through marketplace
deception, resulting in a loss of trust from customers and other stakeholder groups (Di
Domenico & Visentin, 2020; Rubin, 2019). Firms can be viciously harmed (e.g., through defamation and sabotage).
An example of harm to a company’s reputation and brand image is where a firm’s senior executives are depicted as saying or doing something damaging (Westerlund, 2019). The screenshot of the video we presented at the beginning of this paper
(Figure 1) is another example. In a film created by artists Bill Posters and Daniel Howe—and
in collaboration with the advertising business Canny—Zuckerberg can be seen sitting at a desk
and allegedly delivering a menacing speech on Facebook’s power (Eadicicco, 2019): “Imagine
this for a second: One man, with total control of billions of people’s stolen data, all their secrets,
their lives, their futures,” Zuckerberg’s likeness says. “I owe it all to Spectre. Spectre showed
me that whoever controls the data controls the future.” Considering the controversies
surrounding Facebook over the last few years—for example, the Cambridge Analytica scandal
(Confessore, 2018)—a deepfake video like this carries the potential to cause severe damage to the company’s reputation.
Another example is a fake news report in which the CEO of Pepsi (Indra Nooyi) was
deliberately misquoted as saying that Donald Trump supporters should “take their business
elsewhere.” This prompted boycott calls and a 3.75 percent decline in PepsiCo’s stock price.
Thus, misinformation can result in negative financial consequences and diminished brand
perceptions (Johnson & Diakopoulos, 2021; Wagner & Blewer, 2019; Zakrzewski, 2019).
Similarly, videos that purposefully inflate earnings estimates can depress stock prices or harm a brand, and extortionists may compel managers to pay a fee to avoid deepfakes being shared (Kietzmann et al., 2020).
Deepfake technology may damage firms of different capacities and profiles. For
example, competitors can use deepfakes to deceive a firm’s customers or stoke negative public
opinions or confusion about a rival’s products, brands, and services (Zannettou et al., 2019).
Additionally, deepfakes can be used to harm a business by creating fake reviews of its products
and services. For instance, in a virtual brand community (VBC), the emergence of false but
highly realistic deepfake-based reviews (particularly negative reviews) can affect the
interactions of individuals with other VBC members as they begin to lose trust in the group,
weakening their interest in interacting with other members (Feng et al., 2018). Additionally, if firms are perceived to be deceiving (or extracting information from) consumers, this may increase levels of consumer distrust (Malbon, 2013;
Wu et al., 2020).
Along with harming a firm’s image, reputation, and trustworthiness through various
forms of marketplace deception, deepfake technology has the potential to harm entire business sectors by outdating their existing technologies, effectively rendering them obsolete (Kietzmann et al., 2020). However, the opposite situation
also exists, where such technologies may be used to enhance these industries, as we discuss in
Section 4.3.1. For instance, the dubbing and re-voicing industry, which previously translated
films to ensure words in another language matched the actor’s original lip movements, is at
risk of becoming extinct due to the advancing technological ability to change languages and
lips accordingly (Giansiracusa, 2021; Johnson & Diakopoulos, 2021; Zakrzewski, 2019).
Deception via deepfakes can have major negative consequences for consumers that extend
beyond the boundaries of firm-customer interactions, as they can be used for a variety of
malicious purposes (Whittaker et al., 2020). According to the first report by Europol (the
European Union Agency for Law Enforcement Cooperation) on deepfakes, these threats
include but are not limited to harassing or humiliating individuals online, perpetrating extortion
and fraud, facilitating document fraud, falsifying online identities and fooling “know your customer” mechanisms, disrupting financial markets, distributing disinformation and manipulating public opinion, supporting the narratives of extremist or terrorist groups, and stoking social unrest and political polarization (Europol, 2022).
Consumers’ vulnerability, their chances of being exploited by deepfakes, and their lack
of protection are heightened due to humans’ limited cognitive abilities and ideological
prejudices (Sharma et al., 2019). For instance, a lack of media literacy or familiarity with deepfakes makes consumers more prone to accept deceptive information (Köbis et al., 2021; Rubin, 2019), stressing a new form of the digital
divide where consumers lacking the cognitive skills to detect deepfakes are at a structural
disadvantage to those possessing such skills. In other words, less sophisticated consumers can
more easily fall prey to deepfake deception. For instance, more than 70% of people in the UK are reportedly unaware of deepfakes and may therefore unknowingly be exposed to deepfake technologies and further propagate digital misinformation (Nygren &
Guath, 2019). For instance, the website “Random Face Generator (This Person Does Not
Exist)” uses AI to artificially generate fake portraits of people who do not exist in reality. Figure
5 shows a few examples of such portraits, but not everyone is able to guess that AI could
generate such realistic but non-existent faces in less than a couple of seconds. The AI face generator produces a new synthetic portrait each time the page is loaded. According to the website, “AI is so developed that 90% of fakes are not recognized by an
ordinary person and 50% are not recognized by an experienced photographer” (Random Face
Generator, 2022).
Furthermore, existing research indicates that certain demographic groups are more
susceptible to fake content. According to Guess et al. (2019), Facebook users over the age of
65 shared nearly seven times as many articles from fake news domains as the youngest age
cohort. Moreover, the literature suggests online misinformation is associated with the third-
person effect (Jang & Kim, 2018). The central tenet of the third-person effect is that people
tend to overestimate the influence of media (e.g., deepfakes) on other people’s attitudes and
behaviors while underestimating its effect on their own behaviors (Jang & Kim, 2018). Overall, deepfakes can create uncertainty in the marketplace and mislead consumers, resulting in their mistrust of businesses
and psychological discomfort (Botha & Pieterse, 2020; Giansiracusa, 2021; Zakrzewski, 2019).
This, in turn, can erode consumers’ purchasing intentions and impair the usefulness of genuinely helpful reviews. Given the emergence of deepfake technologies that can generate human-like narratives using natural language processing (NLP), such as GPT-3 (a text-generation model), it is reasonable to expect that the scale of such deception will only increase.
Kietzmann et al. (2020) argue that deepfakes make it more difficult for people to
respond to personalized advertisements. For instance, weighing the perceived value of highly personalized content against its privacy risks requires consumers to strike a balance between the personalization of incoming data from deepfakes and the extent to which they compromise privacy, which can be highly challenging.
Moreover, deepfake technologies may be used to launch inherently disruptive campaigns against
virtual communities, members of which would likely regard the message as truthful because of
the perceived parallels between the message and their embraced ideology.
Marketplace deceptions through deepfakes can take forms and shapes beyond those of
firm-customer transactions. For instance, such deceptions might have a detrimental effect on
anyone looking for employment (Chesney & Citron, 2019). According to a recent report from
Microsoft (Burt & Horvitz, 2020), more than 90% of employers use search results to make
decisions about applicants. However, these results have a negative impact in over 77% of cases, often owing to unfavorable material discovered during these searches. The reasons for these findings are rather evident: hiring
candidates who are not stigmatized by perceived negative online reputations is less risky. In
these instances, creating compromising photographs and videos of a person and making them
publicly available on the internet will significantly diminish that person’s employment
prospects. This simultaneously hurts employers, as they risk missing out on potential talent.
Beyond employment, various intelligence agencies have expressed concern that by propagating
political misinformation and meddling with election campaigns, deepfakes have implications
for national security (Europol, 2022; Westerlund, 2019), affecting consumers’ ability to stay informed.
The magnitude of the threat posed by deepfakes in terms of marketplace deception calls for effective safeguards.
Next, we offer our findings in this regard. It is important to note that even though we present the
protection mechanisms for firms and consumers separately for the ease of presentation and
reporting, they are not mutually exclusive (Chesney & Citron, 2019; Europol, 2022; Farish,
2020; Kirchengast, 2020). Thus, protecting firms from deepfakes often means malicious effects
do not spill over to their consumers and vice versa (Vizoso et al., 2021).
Extant studies primarily assume that the application of legal means is the primary—and often
only—way to counter deepfake deception (Chesney & Citron, 2019; Langa, 2021; Ray, 2021). However, our analysis clearly shows that it is extremely
difficult to protect firms and consumers from the malicious effects of deepfakes through legal
means alone. Rather, addressing the concerns posed by deepfakes requires three distinct but
interrelated types of protection mechanisms—market, circulation, and technical—alongside
legal responses (Chesney & Citron, 2019; Langa, 2021; Ray, 2021).
For firms, market responses to protect themselves include the mechanisms and methods
they can develop and implement to educate consumers about their products, brands, and
services, helping them identify firm-sponsored and credible sources of information (Rubin,
2019). Investments in corporate social responsibility initiatives for improving public media
literacy will benefit brands and the marketplace as a whole (Bulger & Davison, 2018). Such a
strategy aims to develop consumer information, media literacy, critical thinking, and evaluation
skills that can be applied to assess the credibility and facticity of incoming information or news
(Bulger & Davison, 2018; De Paor & Heravi, 2020). Notley and Dezuanni (2019) lamented
that designing information literacy interventions requires a broader disciplinary approach than
education alone and that contributions from economics, social psychology, and legal studies
are also required. Furthermore, in designing strategies for improving deception awareness,
firms should consider the types of information consumers use when evaluating content online (Lee & Shin, 2021). Opinion-reinforcing
information is that which confirms or validates existing beliefs or opinions, whereas opinion-challenging information contradicts or questions them.
In the market, firms can also take advantage of online brand communities to counter
marketplace deception through deepfakes (Wang et al., 2019). Such strategies include
interacting with online communities that may generate deepfake content, thereby avoiding
actions that could render firms vulnerable to deepfake attacks (Giansiracusa, 2021; Johnson &
Diakopoulos, 2021; Taylor, 2021; Wagner & Blewer, 2019). In addition, resources could be
gathered from user credibility networks, expert group domains, and user ratings to verify and
develop the credibility of information being circulated via online channels (Meel &
Vishwakarma, 2020). Similarly, firms could devise strategies for managing consumer
interactions and feedback to foster protective behaviors within the brand community in
response to the reputational dangers posed by deepfakes (Di Domenico & Visentin, 2020).
Thus, by collaborating with influential real-life figures and using deepfake technology, firms
can develop so-called online good nodes (approved artificial accounts of real people) that can
help counter misinformation and disseminate credible content.
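A minimal sketch of how such multi-source credibility signals might be aggregated is a weighted average. The signal names and weights below are hypothetical illustrations, not drawn from any real credibility network or rating system.

```python
def credibility_score(signals, weights=None):
    """Toy weighted aggregation of credibility signals, each in [0, 1].

    `signals` maps a source name (e.g., user ratings, expert review,
    network trust) to that source's credibility estimate. Unknown source
    names are ignored; the score is normalized over the sources present.
    """
    if weights is None:
        # Assumed weights for illustration only.
        weights = {"user_ratings": 0.3, "expert_review": 0.5, "network_trust": 0.2}
    total = sum(weights.get(name, 0.0) for name in signals)
    if total == 0:
        return 0.0  # no recognized signals: treat credibility as unknown/zero
    return sum(weights[name] * value
               for name, value in signals.items() if name in weights) / total

score = credibility_score(
    {"user_ratings": 0.8, "expert_review": 0.9, "network_trust": 0.4}
)
print(round(score, 2))
```

In practice, the interesting design question is where the weights come from; a platform might learn them from past verification outcomes rather than fixing them by hand as this sketch does.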
Limiting or strictly regulating the circulation of deepfakes can offer further protection
from their potential negative impacts. Some social media platforms are moving toward an
outright ban on posting deepfakes. For instance, TikTok is working on prohibiting “synthetic or
manipulated content that misleads users by distorting the truth of events in a way that could
cause harm” by updating its community guidelines (TikTok, 2019). Reddit has updated its
policy around impersonation and “does not allow content that impersonates individuals or
entities in a misleading or deceptive manner” (Reddit, 2020). YouTube has an existing ban on
manipulated media, which it defines as follows: “Video content that has been technically
manipulated (beyond clips taken out of context) to fabricate events where there’s a serious risk
of egregious harm” (YouTube, 2022). However, because many of these rules contain
subjectively interpretable terms such as “may cause harm,” “misleading or deceptive,” and
“serious risk,” they may have loopholes that can be exploited by unscrupulous actors.
developing and producing deepfakes. As an example, Google has banned the training of
deepfakes on Colaboratory, its hosted Jupyter notebook service that requires no configuration and provides free access to
computational resources, including GPUs (Anderson, 2022). Further research and development
(R&D) investments in deepfake detection technologies and their successful deployment are
also critical (Pu et al., 2021; Zotov et al., 2020). In making such investments, companies can
use algorithm-based, computational detection techniques such as support vector machines and
deep learning for detecting and countering the content-, context-, and domain-dependent
features of deepfakes (Maksutov et al., 2020; Zotov et al., 2020). For example, Microsoft has
introduced the Microsoft Video Authenticator, which can analyze a still image or video to
determine the likelihood that it has been intentionally altered. However, it must be noted that these
detection methods risk rapid obsolescence owing to the fast pace of improvement in generating synthetic media (Johnson & Diakopoulos,
2021; Ramadhani & Munir, 2020). For instance, if a method is reliant on the detection of an
abnormal reflection of light in the eyes of the synthetic person, the adversarial network-based
deep learning algorithms will quickly learn how to overcome such a shortcoming (Ludwig et
al., 2016; Zotov et al., 2020). In this machine-versus-machine scenario, the whole detection
method then becomes obsolete (Maksutov et al., 2020; Ramadhani & Munir, 2020). Therefore,
success is highly dependent on whether the detection technology can continuously stay one step ahead of generation methods.
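To make the classifier-based detection idea above concrete, the sketch below trains a toy linear support vector machine (using the Pegasos sub-gradient method) on synthetic frame features. The feature names (blink rate, high-frequency residual energy) and their distributions are hypothetical illustrations, not the actual features used in the detection studies cited here.

```python
import random

def pegasos_train(X, y, lam=0.01, epochs=200, seed=0):
    """Toy linear SVM trained with the Pegasos sub-gradient method.
    X: feature vectors (first component is a constant bias term);
    y: labels in {-1, +1}."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # random pass over the data
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1.0 - eta * lam) * wj for wj in w]  # regularization shrinkage
            if margin < 1:  # hinge loss is active: push toward this example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

def make_frames(n, fake, rng):
    # Hypothetical per-frame features: [bias, blink rate, high-frequency energy].
    # Assumption for illustration: synthetic faces blink less and leave more
    # high-frequency residue than real footage.
    frames = []
    for _ in range(n):
        if fake:
            frames.append([1.0, rng.gauss(0.10, 0.04), rng.gauss(0.80, 0.08)])
        else:
            frames.append([1.0, rng.gauss(0.30, 0.04), rng.gauss(0.50, 0.08)])
    return frames

rng = random.Random(42)
X = make_frames(100, fake=False, rng=rng) + make_frames(100, fake=True, rng=rng)
y = [1] * 100 + [-1] * 100  # +1 = authentic, -1 = synthetic
w = pegasos_train(X, y)
accuracy = sum(predict(w, x) == yi for x, yi in zip(X, y)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

Because the decision boundary depends entirely on the chosen features, a generator that learns to mimic the "real" feature distribution immediately defeats such a detector, which is precisely the machine-versus-machine obsolescence noted above.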
Firms can also employ fact-checking services to verify and detect fake news (or deepfakes) that might be propagated against their products,
services, and brands (Lee & Shin, 2021; Nieminen & Rapeli, 2019; Zannettou et al., 2019). For
example, Facebook works with third-party fact-checkers to address content that is reported as
false. In an effort to increase user responsibility, the company has also developed tools for users to flag
fake content and educates them on how to identify it. Similarly, Google has incorporated fact-
checking into its search engine and Google News to help minimize the spread of false
information, and it has experimented with crowdsourcing to verify the authenticity of news sources (Hern, 2017). Businesses can benefit
from adopting and adhering to similar developmental deepfake policies across online
platforms. One crucial aspect here is equality, as larger firms may be able to leverage legal
resources to battle deepfakes while smaller ones likely cannot. Social media platforms must
therefore provide built-in detection and reporting features that level the playing field for all operators
facing a risk of “deepfake hijacking” (e.g., the use of their brand or people as part of a deepfake campaign).
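As a sketch of how such platform-level flagging and review workflows might operate, the toy model below escalates content to a human review queue once user flags pass both an absolute and a relative (flags-per-view) threshold. The class names and threshold values are purely illustrative assumptions, not the actual mechanisms of Facebook, Google, or any other platform.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    content_id: str
    views: int = 0
    flags: int = 0

class ModerationQueue:
    """Toy escalation policy: send an item to human fact-checkers once it
    has received enough user flags, both in absolute terms and relative
    to its view count (so one angry user cannot bury popular content)."""

    def __init__(self, min_flags=5, flag_ratio=0.01):
        self.min_flags = min_flags    # assumed thresholds, not real platform values
        self.flag_ratio = flag_ratio
        self.review_queue = []        # content_ids awaiting human review

    def record_view(self, item):
        item.views += 1

    def record_flag(self, item):
        item.flags += 1
        if (item.flags >= self.min_flags
                and item.views > 0
                and item.flags / item.views >= self.flag_ratio
                and item.content_id not in self.review_queue):
            self.review_queue.append(item.content_id)

queue = ModerationQueue()
video = ContentItem("suspected-deepfake-001")
for _ in range(300):
    queue.record_view(video)
for _ in range(6):
    queue.record_flag(video)
print(queue.review_queue)  # → ['suspected-deepfake-001']
```

The two-threshold design mirrors the equality concern raised above: an automatic, built-in escalation rule treats a small brand's flagged deepfake the same way as a large one's, without requiring legal resources to trigger a review.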
In this study, when it comes to legal responses to deception via deepfakes, we found that
legal protection is rather limited in most countries (Karasavva & Noorbhai, 2021; Langa, 2021;
O’Donnell, 2021). In December 2019, the US passed its first federal legislation addressing
deepfakes (Graham et al., 2021). Moreover, some US states have enacted their own laws to
address the issue. Deepfake victims have a private right of action in New York and California,
and Virginia has amended its penal code to make sharing deepfakes with necessary intent and
without consent a crime (Graham et al., 2021). Additionally, state laws in the US, such as the
Illinois Biometric Information Protection Act (Illinois General Assembly, 2008), the California
Consumer Privacy Act (State of California Department of Justice, 2018), and the New York
SHIELD Act (The New York State Senate, 2019), are designed to safeguard residents’ personal
information and may offer protection against deepfakes to some extent. However, as Graham
et al. (2021) point out, because deepfake content is fabricated and artificially manufactured
rather than drawn from genuine personal data, such statutes may not fully cover deepfake-based deceptions.
In the European Union, under the proposed regulatory framework for AI, the European Commission will play a key role in law enforcement (European Parliament, 2021). The
framework approaches the regulation of AI and its use from a risk-based perspective.
Deepfakes are explicitly covered in terms of “AI systems used to generate or manipulate image,
audio, or video content” and must adhere to certain minimum requirements, such as labeling
content as deepfake to make it clear to users they are dealing with manipulated footage.
However, the framework is still at a proposal level and not yet operational. In other major
economies such as the UK, no legal means are available that offer direct protection from
deepfakes. However, companies and consumers may seek protection through adjacent legislation, such as
copyright and data protection laws (Graham et al., 2021). The newly established Civil Code
of China, Art. 1019, essentially prohibits the violation of image rights by means of information
technology or otherwise (Wei, 2020), which may also offer some degree of protection against deepfakes.
Businesses are typically unable to dictate rules, regulations, and laws. In this current
state of affairs, however, they may monitor and advocate for legislation that protects the rights
of firms and consumers alike. Balancing innovation while ensuring risks are identified and managed is essential to an organization’s
ability to survive and thrive in a digital world. Additionally, firms can collaborate with
regulators to develop, implement, and communicate laws or guidelines governing the creation and use of deepfakes.
Our analysis reveals that little research is available on how consumers may protect themselves
from deepfake-based deception, given the uncertainty that has characterized the deepfake realm. The diversity of sources involved in the distribution of
deepfakes, their potential for confidentiality, a lack of information quality requirements, the
ease with which material can be manipulated and modified, the lack of contextual clarification,
and the absence of credibility assessment objectives (i.e., subject matter, medium, and source)
substantially complicate the issue of protecting oneself against deepfakes (Hwang et al., 2021).
For consumers in their everyday lives, a rather generalized but crucial protection
mechanism involves developing the capabilities necessary for analyzing and interpreting the
legitimacy of online content (Bulger & Davison, 2018; De Paor & Heravi, 2020; Viviani &
Pasi, 2017). A consideration of the reputation of the information source, the involvement of
trustworthy intermediaries such as experts and/or opinion leaders, and personal confidence
based on first-hand experiences will further enhance their protection (Hwang et al., 2021;
Viviani & Pasi, 2017; Westerlund, 2019; Whittaker et al., 2020). Additionally, consumers who
develop or gather knowledge about products, brands, and services enhance their
potential to identify and avoid misinformation (Lee & Shin, 2021). Here, enhancing analytical
thinking capabilities is of the utmost importance for consumers when examining the credibility of online content.
Furthermore, consumers must be aware of the risks at the core of deepfake technologies.
For this to happen, consumers should be vigilant in the virtual environments in which they
constantly interact and develop a basic understanding (or literacy) of the technology and
existing deepfakes. To this end, online tools are becoming available. For instance, Jevin West
and Carl Bergstrom at the University of Washington have created a website called “Which
Face Is Real” (https://www.whichfaceisreal.com). All of the images on the site are either
actual photographs from the FFHQ dataset of Creative Commons and public-domain images or synthetic faces produced by a generative adversarial network.
By putting the real and fake photos side by side, the site helps people become more analytical
when assessing facial imagery online. Relatedly, network diversity—the variety of people
and contexts represented in one’s online social networks (Torres et al., 2018)—can help
increase awareness of fake content. Additionally, the study indicates that increasing consumer
awareness of fake content, such as deepfakes, has a beneficial effect on verification behavior
and network trust (Torres et al., 2018). Thus, combating social media’s echo chamber effect
through an active exposure to diverse perspectives and networks also represents a viable
individual-level strategy for addressing the deepfake problem (Cinelli et al., 2021; Gillani et
al., 2018). Consumers may even adopt an offensive coping strategy by refuting the claims
portrayed in fake content, searching for and presenting contrary evidence to protect other consumers.
The risks of marketplace deceptions through deepfakes are undeniable for both firms and
consumers. However, in comparison to other forms of deception used solely for unethical and
malicious purposes, the emergence of deepfake technology is unique in that it also brings forth
various positive opportunities. Here, we analyze and present the benefits of deepfakes for
businesses and consumers. As shown in our conceptual framework (Figure 4), similar to
threats, the opportunities afforded by deepfake technologies may also carry spillover effects.
Therefore, the benefits of these technologies for firms are also likely to be advantageous for consumers.
For businesses, opportunities include new forms of marketing campaigns, including virtual
content, designing and deploying AI-based solutions to detect and counter deepfakes, and,
ultimately, developing new offerings and business models supported by deepfakes (Farish,
2020). Firms can use deepfakes to design and execute appealing marketing campaigns at a low cost.
Such campaigns need not incorporate real humans; rather, they can feature artificial human-like models to attract and
engage many fans and followers (Dwivedi et al., 2021). Furthermore, deepfakes may assist in
the removal of language barriers, allowing for the creation of multilingual marketing
campaigns by dubbing videos in different languages and artificially matching lip movements
and facial expressions accordingly (Johnson & Diakopoulos, 2021; Kietzmann et al., 2020).
This enables company executives and celebrities to speak directly to individuals using tailored
messages, even addressing customers by name. Another prominent use of
deepfakes in marketing is to create virtual brand ambassadors. For example, the Instagram
account @lilmiquela (shown in Figure 6) depicts Lil Miquela, a fictitious idol created using computer-generated imagery
and AI (Hsu, 2019; Koh & Wells, 2018). Lil Miquela is an artificial social media marketer and
a virtual influencer embodying the appearance and personality traits of a human (Hsu, 2019).
Despite not being real, Lil Miquela has become one of the top influencers on the platform,
with more than three million followers as of November 2021 (Blanton & Carbajal, 2019).
Figure 6: Lil Miquela, an artificial social media marketer with more than three million followers (Source: Instagram,
@lilmiquela)
Lil Miquela exemplifies how brands can develop virtual ambassadors for sponsorship,
disseminating their desired message through a digital avatar. For the new generations of
consumers who enjoy an immersion in social media and virtual reality, artificially created
content may not be categorically less valuable than “real” content, especially if it satisfies their
entertainment needs or other experiential purposes. This is suggested by the success of virtual
influencers such as Lu do Magalu, who boasts more than 14 million followers on Facebook, close to six million followers
on Instagram, more than 2.5 million YouTube subscribers, and more than one million followers
on TikTok and Twitter. We did not find any academic studies on the effectiveness of virtual
influencers for brands—a topic ripe for future research. However, the fact that several virtual
influencers have millions of followers suggests that deepfake technology can create artificial
personas with genuine marketing reach. The technology also provides various opportunities to firms that create educational content, including the ability to
provide learners with knowledge in more convincing ways than traditional approaches
(Westerlund, 2019; Whittaker et al., 2020). This technology enables relatively inexpensive and
easily accessible video production that creates new films or shows or adapts old ones to convey
various pedagogical perspectives (Chesney & Citron, 2019). Also, celebrity voices can be used
to narrate books, memoirs can be read by the author, and historical figures can recount their
stories in their own voices using AI voice cloning software (Martin, 2020). Furthermore, while
information literacy has been considered a means to mitigate the negative consequences of
misinformation, the technology itself could be used for education and interventions specifically
designed to address the challenges posed by deepfakes (Hollis, 2019; Notley & Dezuanni,
2019).
Deepfake detection itself also represents a new field of business for developing AI-based solutions and services that distinguish synthetic
content from human-generated content and provide consumers with warnings when confronted
with marketplace deception or suspicious content (Maksutov et al., 2020; Torres et al., 2018;
Zotov et al., 2020). Consequently, this opens the possibility of creating and selling services
designed to protect companies and consumers from deepfake deception (Chesney & Citron,
2019). Such technologies could expand on a number of services that have emerged in recent
years as a result of consumer concerns about identity theft (Liere-Netheler et al., 2019).
The literature also highlights the possibility that the application of deepfakes may enable
firms to develop new offerings or even entirely new business models (Dwivedi et al., 2021).
The technology can act as a valuable personalization tool for products, brands, and services
(Dwivedi et al., 2021; Farish, 2020; Wagner & Blewer, 2019). For example, news organizations
are currently examining ways to improve their efficiency and engagement through the use of
video synthesis and other synthetic media technologies. As an example, the South Korean
television channel MBN presented viewers with a deepfake of its own news anchor Kim Joo-
Ha, a snapshot3 of which is seen in Figure 7. The broadcaster told viewers ahead of time that
the newsreader would be fake and that Kim Joo-Ha was still employed. The firm behind the
deepfake, DeepBrain AI, has stated that it is searching for media customers in China and the
United States, and MBN has stated that it will continue to use the deepfake for breaking news
reports (Foley, 2022). Extending this concept, certain aspects of visual illustration, such as
animated cartoons, comic books, and political cartoons, can be streamlined or even completely
automated using image synthesis tools. Further, as the automation process eliminates the need
for teams of designers, artists, and others involved in the entertainment production process,
production costs are reduced, enabling individuals to produce content that is indistinguishable
from that of the highest-budget productions for little more than the cost of operating their computers.
Figure 7: Deepfake of news anchor Kim Joo-Ha of the Korean television channel MBN
3 https://www.youtube.com/watch?v=IZg4YL2yaM0
Innovative applications open new doors in the fields of augmented and virtual reality,
enabling value creation in cyber-physical systems. The technology can be used to create
“digital humans”—artificial, lifelike personas that are both interactive and communicative.
This possibility has been utilized in the concept of digital resurrection and has already been
demonstrated in the tourism sector at sites such as the Salvador Dalí Museum in St. Petersburg,
Florida, which has adopted advanced technology to bring the late Spanish surrealist (who
passed away in 1989) back to life.4 After visitors click a button adjacent to a life-sized screen,
the deepfake-based avatar leaves his easel and approaches them, offering information about his
artwork and the museum. Dalí reintroduces himself to tourists as they exit the museum,
inquiring whether they would like a selfie with him (Mihailova, 2021; Whittaker et al., 2020).
As another example, “Digital Einstein”5 embodies the personality of the actual scientist and
can answer quizzes about his life and work, as well as scientific questions. Similar technology can support consumer
experiences with artificial human personages—for instance, in the form of a digital customer
assistant, sales concierge, financial advisor, or healthcare coach (Digital Humans, 2021).
As with firms, deepfake technology has been indicated to offer various opportunities for
consumers. In this study, we identified two specific opportunities: 1) the enhancement of the
digital customer experience and 2) applications for social good. First, deepfakes
carry the potential to enhance the digital customer experience (Whittaker et al., 2020). Merging
deepfakes with synthetic AI models brings forward a high degree of personalization for online
consumer interactions, such as online clothes shopping (Kietzmann et al., 2020; Zakrzewski,
2019). For instance, customers will be able to input their primary physical characteristics into
an online clothing store, which will then be able to generate lifelike avatars to aid in purchasing decisions.
Thus, deepfakes may be used to create highly tailored material that transforms people
into models, allowing them to virtually try on an outfit before purchasing it. Furthermore,
targeted fashion advertising could be created that differs according to time, weather, and
audience (Westerlund, 2019). The Japanese AI firm “Datagrid” has developed an AI engine
that helps achieve these purposes and automatically generates virtual models for advertising
and fashion. This technology, called systematic model generation, can be used by fashion brands and advertisers. A key appeal of
this type of application is that consumers may perceive artificial content as catchy, entertaining,
or even emotionally engaging, thus allowing them to derive experiential value from deepfakes.
Second, deepfake technologies can be deployed for social good. For instance, consumers will benefit from their use in removing the
language barriers that frequently impede the delivery of cross-cultural content and require
subtitle reinforcement. The technology will also provide a voice to people who have lost their
own because of medical conditions such as motor neuron disorders. For example, one such
project creates audio deepfakes with customized synthetic voices based on voice samples provided by vocally impaired individuals.
In another example, Amazon has released an experimental Alexa capability that allows
the AI assistant to impersonate the voices of users’ deceased relatives. This capability was
shown at the company’s annual MARS conference in a video depicting a child asking Alexa to
read a bedtime story in the voice of his deceased grandmother (Vincent, 2022). Rohit Prasad,
Amazon’s lead scientist for Alexa AI, introduced the video by stating that adding “human
attributes” to AI systems was becoming increasingly vital “in these times of the ongoing
pandemic, when so many of us have lost someone we love.” He added: “While AI can’t
eliminate that pain of loss, it can definitely make their memories last” (Vincent, 2022).
5 CONCLUSION
Deepfakes are highly realistic synthetic media generated by algorithms (Chesney &
Citron, 2019; Maksutov et al., 2020) and typically distributed as social media content. They
carry the potential to create marketplace deceptions for both firms and consumers. Deepfakes
also offer various opportunities (Chesney & Citron, 2019; Dwivedi et al., 2021; Kietzmann et
al., 2020; Westerlund, 2019). The current knowledge on deepfakes is scant and diffuse
(Maksutov et al., 2020; Zotov et al., 2020). In this study, we reviewed and analyzed 74 papers
from disciplines including information science, journalism, and the social sciences to generate insights into their implications
for firms and customers. We provide an objective assessment of the risks that deepfake-induced
marketplace deceptions pose to firms and consumers, the protection strategies and mechanisms
against harmful effects, as well as the opportunities that deepfake technology presents.
Today’s consumer increasingly uses social media as a source of information. In contrast to the “offline” world,
where individuals have historically minimized credibility uncertainty based on either the
reputation of the knowledge source (e.g., experts and/or opinion leaders) or personal first-hand
experiences, making an evaluation in the digital domain is frequently more complex (Viviani
& Pasi, 2017). The multiplicity of sources involved in the distribution of deceptive content, the
absence of information quality requirements and evaluation, the ease of manipulating and
altering information, the lack of contextual clarification, and the existence of several potential
credibility evaluation objectives (i.e., content, source, and medium) make deepfakes very real
and potent threats (Viviani & Pasi, 2017). As artificial content blends seamlessly with authentic
content in digital environments, the terms reality and truth may become less relevant in
comparison to how we humans understand these concepts. Similar to the arguments presented
by Xiao and Benbasat (2011), deepfakes can be used to deceive the marketplace by manipulating consumers’ beliefs and decision-making.
The problem is not only that deepfake technology is improving at a very fast pace
(Johnson & Diakopoulos, 2021; Schwartz, 2018); it is that the social processes through which
people establish what is true are under threat and that the very definition of reality is a critical concern (Hwang et al., 2021;
Schwartz, 2018). This is a phenomenon where frequent exposure to false information causes
people to lose faith in what they see and hear. In other words, the danger is not necessarily that
people will be deceived just in the marketplace but that they will also come to regard everything
as deception and lose faith in the marketplace (Kirchengast, 2020; Schwartz, 2018; Tong et al.,
2020). While consumers may accept content that supports their worldviews (even if the content
is fabricated), they may lose interest in facts and develop a postmodernist cynicism in which
“what is pleasurable is genuine.” These effects of the erosion of trust and the muddying of the
borders between real and artificial have left marketers wary. According to recent polls, trust in
major institutions and the media is eroding (Ognyanova et al., 2020), and this trend is likely to
be exacerbated by the proliferation of deepfakes. If appropriate controls are not put in place, deepfakes
may lead to further erosion of consumer trust in business in general and marketing in particular
(Di Domenico & Visentin, 2020; Kietzmann et al., 2020). Deception protection and
preparedness are crucial for consumers, firms, and the overall marketplace, to the extent that such protection
has been labeled a “critical life skill” (Boush et al., 2015, p. 1). However, most marketing
textbooks and articles on marketplace deception treat it as a topic of purely legal interest,
primarily addressed to corporate attorneys, judges, juries, and government regulators (Boush
et al., 2015; Farish, 2020; Langa, 2021; O’Donnell, 2021; Ray, 2021). Furthermore, research
from technical disciplines such as computer or data science focuses on technology as the
primary path to deception protection (Ramadhani & Munir, 2020; Schwartz, 2018; Zhao et al.,
2020; Zotov et al., 2020). However, our research shows that protecting against deepfake-based
marketplace deception cannot be accomplished through solely legal or technical means and
that it necessitates combining market, circulation, technical, and legal responses, as well as ongoing adaptation in the
face of the emergence of new and potent technologies (Boush et al., 2015; Schwartz, 2018; Xie
et al., 2020). Through this study, we contribute to the marketplace deception literature by
extending the overall understanding concerning deepfakes (Boush et al., 2015; Darke &
Ritchie, 2007). Indeed, the findings of this work may have broader implications for
comprehending deception beyond the marketplace. For instance, the general public is
continuously being exposed to news about politicians, celebrities, and influencers engaging in
misleading behavior, which is an issue that will only become more pronounced through the use
of deepfakes (Chadderton & Croft, 2006; Xie et al., 2020). In this regard, our study deepens
the existing understanding of various forms of deception, their effects, and the available protection
mechanisms. While this approach has yielded valuable insights into several key domains of
deepfakes, there is a clear need for research examining the implications of deepfakes from a
broader perspective (Dwivedi et al., 2021; Vimalkumar et al., 2021). Moreover, most studies
have perceived deepfakes as a grave danger (e.g., Giansiracusa, 2021; Graham et al., 2021;
Maksutov et al., 2020). This is understandable, as deepfakes can undeniably present a serious
threat to firms and consumers. Furthermore, the technology may appear mystical and opaque, evoking
intimidation and fear (Giansiracusa, 2021; Graham et al., 2021; Wagner & Blewer, 2019).
Nonetheless, we have aimed to highlight the dualistic nature of deepfakes (see Figure 4), as we
investigate the potential opportunities presented by this emerging and critical technology. Our
research is among the first to generate and present a balanced understanding of the phenomenon
that takes into account the perspectives of both firms and consumers and combines knowledge from multiple research streams.
Concerning the novelty of deepfakes in relation to other forms of market deception, the
technological advancements regarding their ease of creation and diffusion make synthetic
content more commonplace than previous market deception manifestations. As a result, firms
and consumers are transitioning into a mixed reality where components of real and fake merge
and fuse. This change has been characterized as the post-truth society and forms a more
pervasive transformation than the previous environment of deception in that, despite presenting
complex schemes and forms of deception, the previous environment was still technologically
limited and not omnipresent in people’s lives in the same way that deepfakes will be. Notably,
deepfakes seem to be part of the transition to a higher degree of digitality in people’s lives,
which involves an increasing amount of time spent in virtual and augmented realities. This
mélange of realities stresses the need for new skills from firms and consumers to cope with
fake-content detection and veracity judgments—cognitive skills that were not previously required.
Paradoxically, part of the deepfake appeal is also its entertainment value to the point that people
might, to some extent, enjoy the deception in that it has a certain sense of magic that amuses
and surprises.
Our study carries several implications for firms and managers. Deepfake technologies make it
easier for criminals to perpetrate marketplace deceptions while remaining undetected. This
study offers a comprehensive picture for firms regarding the severity of such threats. Deepfake-
based deceptions could result in direct financial damage, and negative and predatory deepfake
campaigns could destroy a company’s reputation, brand image, and stakeholder trust.
Firms must therefore take active steps to protect themselves from marketplace deceptions carried out through deepfakes. This includes
investing in technology that enhances a firm’s deepfake detection and avoidance competencies.
At the same time, they should invest in human resources to enhance their capabilities of
detecting and responding to deepfakes. Furthermore, managers must pay attention to any potential harm that their consumers may suffer and take appropriate protective measures.
However, we also suggest managers pay close attention to the various commercial
opportunities that deepfakes offer. Synthetic media can be highly beneficial, and deepfake technologies could provide advantages in advertising,
brand personification, and customer services. Moreover, we suggest that in addition to videos,
managers should be aware of and benefit from other formats of synthetic media in their
businesses. To this end, based on insights offered by CB Information Services (2021), we show
Table 2: Applications of synthetic media for brands and retailers (Source: CB Information
Services, 2021)
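One concrete, low-cost form of such detection-and-avoidance investment is provenance checking: comparing media files circulating under a brand's name against cryptographic fingerprints of the known-authentic originals. The sketch below is our own illustration rather than a technique drawn from the reviewed literature; the names (`ProvenanceRegistry`, `fingerprint`) are hypothetical, and only Python's standard library is used.

```python
import hashlib


def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a media file's bytes."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceRegistry:
    """Registry of hashes of media a firm knows to be authentic."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def register(self, data: bytes) -> str:
        """Record an authentic original and return its digest."""
        digest = fingerprint(data)
        self._known.add(digest)
        return digest

    def is_authentic(self, data: bytes) -> bool:
        # A match proves the file is byte-identical to a registered
        # original; a mismatch flags it for closer (human or ML) review.
        return fingerprint(data) in self._known


registry = ProvenanceRegistry()
registry.register(b"official-campaign-video-bytes")
print(registry.is_authentic(b"official-campaign-video-bytes"))  # True
print(registry.is_authentic(b"tampered-video-bytes"))           # False
```

Note the obvious limitation: any re-encoding or resizing changes the hash, so exact matching proves only byte-identity. In practice, firms would combine such checks with perceptual hashing or signed-metadata provenance standards.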
In light of the findings of this study, and considering the rapid evolution of the underlying
technologies, deepfakes have the potential to disrupt entire business models, and many firms
may suddenly find themselves taken by surprise if they do not take these aspects into account.
While the application of deepfakes currently centers on entertainment and humor, the historical
trajectory of technology development shows that a given technology's uses tend to shift from
humor to serious action. Such a trajectory may also unfold for deepfakes. Therefore, firms
should monitor these developments closely and prepare accordingly.
As with any research, our study has certain limitations. First, we only investigated
scientific papers indexed in three specific databases (Web of Science, ACM Digital Library,
and IEEE Xplore). Despite the depth and breadth they offer in terms of literature coverage, we
have inevitably missed some valuable knowledge available in other databases. Second, our
coverage was limited to certain types of literature; accordingly, any future research that widens
this coverage will enhance our knowledge base. Third, we chose the conceptual lens of
marketplace deception to approach the deepfake phenomenon. However, there could be
alternative conceptual and theoretical lenses, such as market orientation
and innovation (Atuahene-Gima, 1996) and ethical marketing (Chonko & Hunt, 1985). As
these alternative perspectives fall outside the scope of the current paper, we leave these for
future research.
Considering we are in the early stages of deepfake research, particularly in the business
domain, a lot remains to be investigated. Here, we make some recommendations for further
research in critical areas. Overall, academics, firms, and consumers may benefit from studies
examining the origins and antecedents of deepfakes. Academic and managerial relevance will
also accrue from research aimed at determining the factors that contribute to the visibility of
deepfake content.
As our review suggests, consumer skills and aptitudes differ in terms of the ability to
detect fake content, as do their attitudes toward artificial content in general. Future research
should delve further into these distinctions to gain a better understanding of consumers’
nuanced actions and attitudes toward deepfakes and to make more precise recommendations for
consumer education. Similarly, while it is self-evident that some ethical rules for deepfake-
based marketing are necessary, they are currently missing from the marketing literature. By
offering a primer on this topic, we propose that ethical deepfake use should be non-deceptive
(i.e., making it clear that the content is artificial and not real), transparent (i.e., identifying the
source authority and the data from which the content originates), fair (i.e., not violating the
rights of third parties, whether a firm, a consumer, or a group of consumers), and accountable
(i.e., consumers should be able to opt out of fake content if desired).
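These four criteria can also be made operational as a simple compliance checklist. The sketch below is our own illustrative encoding of them; the field and function names are hypothetical, not an established standard.

```python
from dataclasses import dataclass


@dataclass
class DeepfakeDisclosure:
    """Self-declared properties of a piece of synthetic marketing content."""

    labeled_as_artificial: bool       # non-deceptive: marked as not real
    source_identified: bool           # transparent: origin and data disclosed
    third_party_rights_cleared: bool  # fair: no third-party rights violated
    opt_out_available: bool           # accountable: consumers can opt out


def violated_criteria(d: DeepfakeDisclosure) -> list[str]:
    """Return the names of the ethical criteria the content fails."""
    checks = {
        "non-deceptive": d.labeled_as_artificial,
        "transparent": d.source_identified,
        "fair": d.third_party_rights_cleared,
        "accountable": d.opt_out_available,
    }
    return [name for name, ok in checks.items() if not ok]


ad = DeepfakeDisclosure(True, True, False, True)
print(violated_criteria(ad))  # ['fair']
```

Such a checklist could, for instance, gate the publication of synthetic campaign assets in a content-management workflow.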
Moreover, identifying and assessing the moral standing of the users of deepfake technologies
remains a vexing challenge, as these technologies can be used for multiple purposes. The legal
implications also deserve additional scrutiny. Presently, legal scholars have urged legislators
and officials to act (Langa, 2021; Ray, 2021; Westerlund, 2019). Here, the critical issue to
address is whether and how regulations or enforcement can be made normatively appealing
and effective. In addition, research on how to harness the technology for constructive purposes
is necessary. These investigations would
benefit from exploring different content modalities. Currently, the focus of deepfakes is on
video content, but there are other content modalities, such as voice, that have potential business
value. For example, synthetic voice creation is already offered as a service by some deep-
learning companies (e.g., Overdub). Thus, one can type the text one wants spoken and let an
ML model trained on one's own voice deliver it as speech from the written script. This leads
to interesting implications for hybrid forms of communication, where the author uses a replica
(or a deepfake persona) of themselves to communicate. These and other effects of deepfakes
on business processes in areas like sales and customer service open fruitful avenues for
experimental research.
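The scripted-voice workflow described above can be sketched as a small pipeline. Everything below is hypothetical: `VoiceModel`, `MockVoice`, and `speak_script` are stand-ins for a commercial voice-cloning service's interface, not any vendor's actual API.

```python
from typing import Protocol


class VoiceModel(Protocol):
    """Interface a voice-cloning service might expose (hypothetical)."""

    def synthesize(self, text: str) -> bytes: ...


class MockVoice:
    """Toy stand-in that 'speaks' by tagging each sentence with a speaker."""

    def __init__(self, speaker: str) -> None:
        self.speaker = speaker

    def synthesize(self, text: str) -> bytes:
        return f"[{self.speaker}] {text}".encode()


def speak_script(script: str, voice: VoiceModel) -> list[bytes]:
    """Split a written script into sentences and synthesize each one."""
    sentences = [s.strip() for s in script.split(".") if s.strip()]
    return [voice.synthesize(s) for s in sentences]


clips = speak_script("Welcome to our store. Ask me anything",
                     MockVoice("replica-of-author"))
print(clips[0])  # b'[replica-of-author] Welcome to our store'
```

A real deployment would swap `MockVoice` for a model trained on the author's voice, which is precisely the hybrid authorship scenario, and its experimental study, that we propose above.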
REFERENCES
Anderson, J. (2020). Are you ready for the next big wave? - KPMG Global. KPMG.
https://home.kpmg/xx/en/home/insights/2018/01/are-you-ready-for-the-next-big-wave.html
Anderson, M. (2022, May 28). Google has banned the training of deepfakes in Colab.
Unite.AI. https://www.unite.ai/google-has-banned-the-training-of-deepfakes-in-colab/
Blanton, R., & Carbajal, D. (2019). Not a girl, not yet a woman: A critical case study on
social media, deception, and Lil Miquela. In Handbook of research on deception, fake news,
Booth, A., Sutton, A., & Papaioannou, D. (2016). Systematic approaches to a successful
Botha, J., & Pieterse, H. (2020). Fake news and deepfakes: A dangerous threat for 21st
century information security. ICCWS 2020 15th International Conference on Cyber Warfare
Boush, D. M., Friestad, M., & Wright, P. (2015). Deception in the marketplace: The
Bulger, M., & Davison, P. (2018). The promises, challenges, and futures of media
Burt, T., & Horvitz, E. (2020, September 1). New steps to combat disinformation.
issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/
Caldwell, M., Andrews, J. T. A., Tanay, T., & Griffin, L. D. (2020). AI-enabled future
CB Information Services. (2021, June 30). Should brands and retailers adopt synthetic
https://www.cbinsights.com/research/what-is-synthetic-media/
Chadderton, C., & Croft, R. (2006). Who is kidding whom? A study of complicity,
seduction and deception in the marketplace. Social Responsibility Journal, 2(2), 207–215.
Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy,
Chonko, L. B., & Hunt, S. D. (1985). Ethics and marketing management: An empirical
Cinelli, M., Morales, G. D. F., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021).
The echo chamber effect on social media. Proceedings of the National Academy of Sciences,
118(9), 1–8.
Confessore, N. (2018, April 4). Cambridge Analytica and Facebook: The scandal and the
https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
Cross, C. (2022). Using artificial intelligence (AI) and deepfakes to deceive victims: The
need to rethink current romance fraud prevention messaging. Crime Prevention and
Darke, P. R., & Ritchie, R. J. (2007). The defensive consumer: Advertising deception,
De Paor, S., & Heravi, B. (2020). Information literacy and fake news: How the field of
librarianship can help combat the epidemic of fake news. The Journal of Academic
Di Domenico, G., & Visentin, M. (2020). Fake news or true lies? Reflections about
Drenten, J., & Brooks, G. (2020). Celebrity 2.0: Lil Miquela and the rise of a virtual star
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y.,
Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar,
A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., … Williams, M. D. (2021). Artificial
agenda for research, practice and policy. International Journal of Information Management,
57, 101994.
Eadicicco, L. (2019). There’s a fake video showing Mark Zuckerberg saying he’s in
control of “billions of people’s stolen data,” as Facebook grapples with doctored videos that
mark-zuckerberg-instagram-2019-6
Etienne, H. (2021). The future of online trust (and why Deepfake is advancing it). AI and
https://www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2021)690039
Europol. (2022). Facing reality? Law enforcement and the challenge of
deepfakes, an observatory report from the Europol Innovation Lab, Publications Office of the
events/publications/facing-reality-law-enforcement-and-challenge-of-deepfakes
law should adopt California’s publicity right in the age of the deepfake. Journal of Intellectual
Feng, N., Su, Z., Li, D., Zheng, C., & Li, M. (2018). Effects of review spam in a firm-
initiated virtual brand community: Evidence from smartphone customers. Information &
Fido, D., Rao, J., & Harper, C. A. (2022). Celebrity status, sex, and variation in
Foley, J. (2022). 14 deepfake examples that terrified and amused the internet. Creative
Bloq. https://www.creativebloq.com/features/deepfake-examples
Giansiracusa, N. (2021). Deepfake deception. In How algorithms create and prevent fake
Gillani, N., Yuan, A., Saveski, M., Vosoughi, S., & Roy, D. (2018). Me, my echo
chamber, and I: Introspection on social media polarization. Proceedings of the 2018 World
Graham, N., Hedges, R., Chiu, C., de La Chapelle, F., & van de Graaf, A. (2021, October
12). How can businesses protect themselves from deepfake attacks? Business Going Digital.
https://www.businessgoing.digital/how-can-businesses-protect-themselves-from-deepfake-
attacks/
Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors
Hern, A. (2017, April 24). Wikipedia founder to fight fake news with new WikiTribune
jimmy-wales-to-fight-fake-news-with-new-wikitribune-site
Ho, S. M., Hancock, J. T., Booth, C., & Liu, X. (2016). Computer-mediated deception:
Hollis, H. (2019). Information literacy and critical thinking: Different concepts, shared
Hsu, T. (2019, June 17). These influencers aren’t flesh and blood, yet millions follow
virtual-influencer.html
Hwang, Y., Ryu, J. Y., & Jeong, S.-H. (2021). Effects of disinformation using deepfake:
The protective effect of media literacy education. Cyberpsychology, Behavior, and Social
Illinois General Assembly. (2008). CIVIL LIABILITIES - (740 ILCS 14/) Biometric
Information Privacy Act.
https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=3004&ChapterID=57
Jang, S. M., & Kim, J. K. (2018). Third person effects of fake news: Fake news regulation
Karasavva, V., & Noorbhai, A. (2021). The real threat of deepfake pornography: A
review of Canadian policy. Cyberpsychology, Behavior, and Social Networking, 24(3), 203–
209.
Kietzmann, J., Lee, L. W., McCarthy, I. P., & Kietzmann, T. C. (2020). Deepfakes: Trick
Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect
Koh, Y., & Wells, G. (2018). The Making of a Computer-Generated Influencer. The Wall
11544702401
Lee, E.-J., & Shin, S. Y. (2021). Mediated misinformation: Questions answered, more
Liere-Netheler, K., Gilhaus, L., Vogelsang, K., & Hoppe, U. (2019). A literature review
Luca, M., & Zervas, G. (2016). Fake it till you make it: Reputation, competition, and
Ludwig, S., Van Laer, T., De Ruyter, K., & Friedman, M. (2016). Untangling a web of
Maksutov, A. A., Morozov, V. O., Lavrenov, A. A., & Smirnov, A. S. (2020). Methods
of deepfake detection based on machine learning. 2020 IEEE Conference of Russian Young
Malbon, J. (2013). Taking fake online consumer reviews seriously. Journal of Consumer
cloning/
Meel, P., & Vishwakarma, D. K. (2020). Fake news, rumor, information pollution in
Mihailova, M. (2021). To dally with Dalí: Deepfake (Inter) faces in the art museum.
Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM
Mustak, M., Jaakkola, E., Halinen, A., & Kaartemo, V. (2016). Customer participation
Nieminen, S., & Rapeli, L. (2019). Fighting misperceptions and doubting journalists’
Notley, T., & Dezuanni, M. (2019). Advancing children’s news media literacy: Learning
from the practices and experiences of young Australians. Media, Culture & Society, 41(5),
689–707.
Nygren, T., & Guath, M. (2019). Swedish teenagers’ difficulties and abilities to
O’Donnell, N. (2021). Have we no decency? Section 230 and the liability of social media
Ognyanova, K., Lazer, D., Robertson, R. E., & Wilson, C. (2020). Misinformation in
action: Fake news exposure is linked to lower trust in media, higher trust in government when
https://misinforeview.hks.harvard.edu/article/misinformation-in-action-fake-news-exposure-
is-linked-to-lower-trust-in-media-higher-trust-in-government-when-your-side-is-in-power/
Ott, M., Cardie, C., & Hancock, J. T. (2013). Negative deceptive opinion spam.
Proceedings of the 2013 Conference of the North American Chapter of the Association for
Perse, E. M., & Lambe, J. (2016). Media effects and society. Routledge.
Pu, J., Mangaokar, N., Kelly, L., Bhattacharya, P., Sundaram, K., Javed, M., Wang, B.,
& Viswanath, B. (2021). Deepfake videos in the wild: Analysis and detection. Proceedings of
Ramadhani, K. N., & Munir, R. (2020). A comparative study of deepfake video detection
(ICOIACT), 394–399.
Ray, A. (2021). Disinformation, deepfakes and democracies: The need for legislative
the moderating roles of type of product, consumer’s attitude toward the internet and consumer’s
Roozenbeek, J., Maertens, R., McClanahan, W., & van der Linden, S. (2021).
for “fake news” epidemic, causal factors and interventions. Journal of Documentation, 75(5),
1013–1034.
Salminen, J., Kandpal, C., Kamel, A. M., Jung, S., & Jansen, B. J. (2022). Creating and
detecting fake reviews of online products. Journal of Retailing and Consumer Services, 64,
102771.
Salminen, J., Mustak, M., Corporan, J., Jung, S., & Jansen, B. J. (2022). Detecting pain
points from user-generated social media posts using machine learning. Journal of Interactive
Marketing, 10949968221095556.
Schwartz, O. (2018, November 12). You thought fake news was bad? Deep fakes are
https://www.theguardian.com/technology/2018/nov/12/deep-fakes-fake-news-truth
Schweisberger, V., Billinson, J., & Chock, T. M. (2014). Facebook, the third-person
19(3), 403–413.
Sharma, K., Qian, F., Jiang, H., Ruchansky, N., Zhang, M., & Liu, Y. (2019). Combating
Sivarajah, U., Kamal, M. M., Irani, Z., & Weerakkody, V. (2017). Critical analysis of
Big Data challenges and analytical methods. Journal of Business Research, 70, 263–286.
Privacy Act (CCPA). State of California - Department of Justice - Office of the Attorney
General. https://oag.ca.gov/privacy/ccpa
Stupp, C. (2019, August 30). Fraudsters Used AI to Mimic CEO’s Voice in Unusual
mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
future-of-synthetic-media
Tahir, R., Batool, B., Jamshed, H., Jameel, M., Anwar, M., Ahmed, F., Zaffar, M. A., &
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–16.
Taylor, B. C. (2021). Defending the state from digital Deceit: The reflexive securitization
The New York State Senate. (2019, June 14). NY State Senate Bill S5575B. NY State
Senate. https://www.nysenate.gov/legislation/bills/2019/s5575/amendment/b
Tong, X., Wang, L., Pan, X., & Wang, J. (2020). An overview of deepfake: The
Sword of Damocles in AI. 2020 International Conference on Computer Vision, Image and Deep
Torraco, R. J. (2016). Writing integrative literature reviews: Using the past and present
Torres, R., Gerhart, N., & Negahban, A. (2018). Epistemology in the era of fake news:
An exploration of information verification behaviors among social networking site users. ACM
SIGMIS Database: The DATABASE for Advances in Information Systems, 49(3), 78–97.
Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing
van Heerde, H. J., Moorman, C., Moreau, C. P., & Palmatier, R. W. (2021). Reality
check: Infusing ecological value into academic marketing research. Journal of Marketing,
85(2), 1–13.
Van Huynh, N., Hoang, D. T., Nguyen, D. N., & Dutkiewicz, E. (2021). DeepFake: Deep
Communications.
Vimalkumar, M., Sharma, S. K., Singh, J. B., & Dwivedi, Y. K. (2021). ‘Okay google,
what about my privacy?’: User’s privacy perceptions and acceptance of voice based digital
Vincent, J. (2022). Amazon shows off Alexa feature that mimics the voices of your dead
mimic-voice-dead-relative-ai
Viviani, M., & Pasi, G. (2017). Credibility in social media: Opinions, news, and health
Vizoso, Á., Vaz-Álvarez, M., & López-García, X. (2021). Fighting deepfakes: Media
and internet giants’ converging and diverging strategies against Hi-Tech misinformation.
Wagner, T. L., & Blewer, A. (2019). “The word real is no longer real”: Deepfakes,
gender, and the challenges of ai-altered video. Open Information Science, 3(1), 32–46.
Wang, Y., McKee, M., Torbica, A., & Stuckler, D. (2019). Systematic literature review
on the spread of health-related misinformation on social media. Social Science & Medicine,
240, 112552.
Wei, C. (2020, May 21). 2020 NPC Session: A Guide to China’s Civil Code (Updated).
civil-code/
Whittaker, L., Kietzmann, T. C., Kietzmann, J., & Dabirian, A. (2020). “All around me
are synthetic faces”: The mad world of AI-generated media. IT Professional, 22(5), 90–99.
Wu, Y., Ngai, E. W., Wu, P., & Wu, C. (2020). Fake online reviews: Literature review,
synthesis, and directions for future research. Decision Support Systems, 113280.
Xie, G.-X., Chang, H., & Rank-Christman, T. (2020). Contesting dishonesty: When and
https://www.washingtonpost.com/news/powerpost/paloma/the-technology-
202/2019/12/13/the-technology-202-businesses-should-be-watching-out-for-deepfakes-too-
experts-warn/5df279f1602ff125ce5b2fe7/
Zannettou, S., Sirivianos, M., Blackburn, J., & Kourtellis, N. (2019). The web of false
information: Rumors, fake news, hoaxes, clickbait, and various other shenanigans. Journal of
Zhao, Y., Yang, S., Narayan, V., & Zhao, Y. (2013). Modeling consumer learning from
Zhao, Z., Wang, P., & Lu, W. (2020). Detecting deepfake video by learning two-level
features with two-stream convolutional neural network. Proceedings of the 2020 6th
Zotov, S., Dremliuga, R., Borshevnikov, A., & Krivosheeva, K. (2020). DeepFake
43–48.
FIGURES
Figure 2: On the left (a), screenshot of a deepfake video of Facebook CEO Mark
Zuckerberg purportedly showing him bragging about his power and crediting a hidden
organization—Spectre—for the success of Facebook. On the right (b), a deepfake video of
David Beckham speaking in nine different languages to raise awareness of malaria. These
examples illustrate how deepfakes can be used for both societally good and harmful purposes.
Figure 3: Overview of the study process: defining the research aim and scope; database search
(WoS: 362, ACM: 177, IEEE: 259); selection of the final sample (WoS: 42, ACM: 14, IEEE: 18);
coding and synthesizing marketing-relevant texts to answer the analytical questions; and
reporting of descriptive findings and further discussion.
Figure 4 (x-axis: Year, 2017–2021)
Figure 5
Figure 6
Table 2: Applications of synthetic media for brands and retailers (Source: CB Information
Services, 2021)