
Consumer, Marketing, AI: Dark Sides and Ethics

Editors:
Kürşad Özkaynar
Şevin Abbasoğlu
Published by
Özgür Yayın-Dağıtım Co. Ltd.
Certificate Number: 45503
15 Temmuz Mah. 148136. Sk. No: 9 Şehitkamil/Gaziantep
+90.850 260 09 97
+90.532 289 82 15
[Link]
info@[Link]

Consumer, Marketing, AI: Dark Sides and Ethics


Editors: Kürşad Özkaynar • Şevin Abbasoğlu

Language: English
Publication Date: 2025
Cover design by Mehmet Çakır
Cover design and image licensed under CC BY-NC 4.0
Print and digital versions typeset by Çizgi Medya Co. Ltd.

ISBN (PDF): 978-625-5958-72-3

DOI: [Link]

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0). To view a copy of this license, visit [Link]
This license allows copying any part of the work for personal use, but not for commercial use,
provided author attribution is clearly stated.

Suggested citation:
Özkaynar, K. (ed), Abbasoğlu, Ş. (ed) (2025). Consumer, Marketing, AI: Dark Sides and Ethics.
Özgür Publications. DOI: [Link] License: CC-BY-NC 4.0

The full text of this book has been peer-reviewed to ensure high academic standards. For full review policies, see
[Link]
Preface

In the 21st-century digital age, artificial intelligence (AI) technologies are
both a technical revolution and a paradigm shift, driving transformations in
a wide range of areas, from consumption practices to ethical norms, and from
individual preferences to social structure. This book examines the effects of
artificial intelligence in the context of marketing and consumer behaviour
in a multidimensional manner and evaluates both opportunities and threats
from an academic perspective.
Each chapter focuses on different aspects of AI and provides in-depth
analyses with an interdisciplinary approach. From approaches that discuss
how consumer autonomy is being eroded to ethical dilemmas encountered
in AI-supported advertising, from the effects of unrealistic beauty ideals on
individuals’ self-perception to algorithmic manipulations and fake evaluation
systems, many topics are comprehensively covered.
Understanding how AI affects individuals’ decision-making processes
is critical for marketing strategies and consumer welfare, ethical design
principles, and digital rights. Therefore, the book’s main aim is to go beyond
the possibilities offered by AI-enabled systems to assess the cognitive,
emotional and behavioural effects caused by these technologies and propose
constructive solutions for decision-makers, practitioners and academics.
The studies presented in our book deal in depth with current and
controversial topics such as marketing ethics, neuro-marketing, algorithmic
decision-making, the impact of artificial intelligence on creative processes,
dark patterns and fake user reviews. Each chapter combines theoretical
frameworks with empirical findings to give the reader an intellectual
grounding and practical implications.
This book aims to be an important reference source for all academics,
researchers, and professionals who are aware of the new dynamics driving
consumer behaviour in a digitalised world, think critically, and are ethically
sensitive. In this period, which is shaping the future of artificial intelligence,
the book is a shared call to build a human-centred and sustainable digital
consumption culture that prioritises ethical responsibility.

Contents

Preface iii

Chapter 1
Overcoming the Dark Sides of Artificial Intelligence 1
Şevin Abbasoğlu

Chapter 2
Algorithmic Manipulation: Influencing Consumer Behavior 7
Zeynep Erdoğan
Esen Gürbüz

Chapter 3
Privacy and Artificial Intelligence 29
Engin Yavuz

Chapter 4
Consumer Distrust: Non-Transparent AI Decision-Making Processes 41
Murat Başal

Chapter 5
Algorithmic Biases and Injustice: Ethical and Practical Dimensions of Artificial Intelligence in Digital Marketing 55
Dr. Bahadır Avşar

Chapter 6
The Misleading Power of AI-Powered Automation 75
Ali Sen

Chapter 7
AI, Addiction, and Consumer Well-Being 99
Dicle Yurdakul

Chapter 8
Ethical Dilemmas in AI-Driven Advertising 117
Serim Paker

Chapter 9
The Erosion of Consumer Autonomy 149
Canan Yılmaz Uz
Seda Arslan

Chapter 10
Artificial Intelligence and The Unfairness of Pricing Strategies 171
Aylin Atasoy

Chapter 11
Fake Reviews and Ratings Undermining Consumer Trust 187
Haydar Özaydın

Chapter 12
Consumer Manipulation With Artificial Intelligence: Dark Patterns and Hidden Techniques 201
Kadir Deligöz

Chapter 13
Social Responsibility and Ethical Approaches in the Management of Artificial Intelligence 223
Nesibe Kantar

Chapter 14
Unrealistic Beauty Ideals: Artificial Intelligence and Consumers’ Self-Image Perceptions 239
Feyza Nur Özkan

Chapter 15
Artificial Intelligence Marketing (AIM): Digital Transformation and Consumer Behaviour 263
Ahmet Songur

Chapter 16
Negative Effects of Artificial Intelligence On Human Creativity Ability 295
Sibel Aydoğan
Chapter 1

Overcoming the Dark Sides of Artificial Intelligence

Şevin Abbasoğlu1

Abstract
This chapter explores the development, classification, and application of
artificial intelligence (AI), with a particular focus on its ethical implications
and the so-called “dark sides.” It outlines different types of AI—such as
weak, general, and super AI—and their levels of autonomy, emphasizing the
extent to which human intelligence can be mimicked. The chapter examines
how AI technologies, especially in the field of marketing, enhance customer
experiences through personalized recommendations, while simultaneously
raising concerns regarding data privacy, manipulation, and unethical practices
like fake reviews. It argues that sustainable marketing requires building
customer trust by adhering to ethical principles, including transparency,
informed consent, and legal compliance. The discussion concludes that
overcoming the dark sides of AI will enable businesses to establish long-term
customer relationships, foster brand loyalty, and create greater brand value
through ethical, value-driven AI applications.

The Dark Side of Artificial Intelligence


Artificial intelligence is defined as the imitation of human intelligence, not
necessarily limited to biologically observable methods (McCarthy, 2007:2).
Artificial intelligence, which concerns the simulation, expansion and
dissemination of human intelligence, is considered a sub-field of computer
systems (Shi and Zheng, 2006:810). The world of the 21st century, guided
by technological developments, has created the concept of artificial
intelligence by imitating human intelligence, and artificial intelligence has
become an important element of daily life thanks to its advanced memory
and data-processing ability.

1 Asst. Prof. Dr., Sivas Cumhuriyet University, Zara Veysel Dursun School of Applied
Sciences, e-mail: sevinabbasoglu@[Link], ORCID ID: 0000-0001-9269-8298


According to Morandín-Ahuerma (2022:1948), there are different types of
artificial intelligence. Based on their cognitive aspects, they are classified into
three types: weak or limited artificial intelligence, general or strong artificial
intelligence (AGI), and super artificial intelligence (ASI); according to their
autonomy, they are classified into four types: reactive, thinking, cognitive,
and autonomous artificial intelligence. Weak or limited artificial intelligence
is a system that can perform specific actions quite well, while strong artificial
intelligence is a system that can think like a human and imitate common
sense and empathy based on general knowledge (Sterne, 2017:10; Binbir,
2021:317). Super artificial intelligence is also called high-performance
artificial intelligence. It not only fulfills all tasks that require human
intelligence but also surpasses humans in cognitive and learning abilities
(Tzimas, 2021; Morandín-Ahuerma, 2022:1949). Reactive artificial
intelligence produces an output depending on the input it receives; as long
as the input remains the same, the action remains constant. Examples include
spam filters and the Netflix recommendation engine ([Link]
what-are-the-four-types-of-ai/). Cognitive artificial intelligence imitates
human intelligence by thinking and trying to learn ([Link]).
Autonomous artificial intelligence can interact with its environment
spontaneously, without any intervention, making decisions and setting goals
and strategies based on new inputs (Matthews et al., 2021:4). The increase
and development of these types of artificial intelligence reveals that human
intelligence can be imitated in every aspect. This reality shows that technology
can replace humans in many areas, that many algorithms can think and work
like humans, and that it can be used for good purposes as well as bad ones.
Artificial intelligence, which develops with the opportunities offered
by developing technology, is present in many segments in the field of
marketing as in many fields. Artificial intelligence technology is actively used
in different business areas and sectoral activities. While artificial intelligence
technologies, which can perform transactions through databases, provide
great advantages to businesses, they are thought to have negative effects
on customers in terms of data privacy and ethics. The dark side of artificial
intelligence has emerged with the manipulation of customers by profit-
making applications, especially for businesses operating in the field of
marketing. In the 21st century technology era, where online shopping is
more preferred, it has been observed that customers remain active on online
shopping sites for long periods. During this time, customers exposed to
advertisements for various products and services come to regard the AI
applications that businesses describe as “the art of influencing customers”
as manipulation. Practices such as fake reviews in product evaluation tabs,
reviewers recruited by businesses who write as if they had used the product,
inflated star ratings, and the like are ethically inappropriate and reveal the
dark side of artificial intelligence. The basic ethical values of artificial
intelligence include ensuring that customers can use this technology and its
extensions without concern. The main purpose of studies on the protection
of personal data is to protect customer privacy. The fact that personal data
can be processed and shared with third parties damages the ethical dimension
of artificial intelligence-oriented marketing. The values that businesses are
expected to offer customers at this point are openness and clarity, attention
to ethical elements, processing data at a minimum level, and obtaining
customer approval throughout the process.
Based on the understanding of sustainable marketing, which is becoming
increasingly important in the current century, the basic condition for using
artificial intelligence as a long-term, reliable marketing tool is establishing
the necessary customer trust. The basis of marketing strategies is persuading
customers to buy. The formation of purchase intention in customers, even if
purchase behavior does not occur, is an important factor for businesses that
brings the potential customer one step closer. With the increasing use and
impact of artificial intelligence, traditional methods have taken a back seat.
Although this is advantageous, the growth of data processing and the
manipulations it enables mean that customers are quickly steered toward
purchases through faster and fuzzier decision-making. The rapid progression
of this neurological influence process appears to neutralize customers’
independence in decision-making. Customers who want to make informed
decisions and do not want to be manipulated inevitably need more knowledge
of this field, in line with the principles of ethics and transparency. While
customers are expected to improve their digital literacy and protect
themselves, businesses must meet the precondition of creating customer
value while taking legal regulations into account. In addition, regulating
these dark sides through legal measures, establishing ethical standards, and
implementing sanctions constitute the basis of sustainable marketing.
For businesses, overcoming the dark sides will create long-term customer
relationships, increase brand value, and build brand loyalty. Considering
that the positive energy radiated by satisfied customers influences other
customers more effectively than many promotional activities, the importance
of applying artificial intelligence technologies under ethical standards is
quite clear. In this sense, suggestions for businesses include: identifying and
correcting the deficiencies raised in negative reviews on online shopping
platforms instead of planting positive comments through fake accounts;
increasing the quality of the products and services offered instead of
constantly bombarding customers with messages and advertisements
throughout the day; engaging in activities that support customers’ healthy
decision-making processes instead of exploiting personal data in
inappropriate ways; and developing marketing strategies that create customer
value within a framework of openness, transparency, and ethical values,
together with systems integrated with artificial intelligence.

References
Binbir, S. (2021). Pazarlama çalışmalarında yapay zeka kullanımı üzerine
betimleyici bir çalışma. Yeni Medya Elektronik Dergisi, 5(3), 314-328.
Matthews, G., Hancock, P. A., Lin, J., Panganiban, A. R., Reinerman-Jones,
L. E., Szalma, J. L., & Wohleber, R. W. (2021). Evolution and revolution:
Personality research for the coming world of robots, artificial intelligence,
and autonomous systems. Personality and Individual Differences, 169, 109969.
McCarthy, J. (2007). What is artificial intelligence?
Morandín-Ahuerma, F. (2022). What is artificial intelligence? International
Journal of Research Publication and Reviews, 3(2), 1947-1951.
Shi, Z. Z., & Zheng, N. N. (2006). Progress and challenge of artificial
intelligence. Journal of Computer Science and Technology, 21(5), 810-822.
Sterne, J. (2017). Artificial intelligence for marketing: Practical applications.
John Wiley & Sons.
Tzimas, T. (2021). Legal and ethical challenges of artificial intelligence from
an international law perspective. Springer. https://doi.org/10.1007/978-3-030-78585-7

INTERNET SOURCES
[Link]
[Link]
Chapter 2

Algorithmic Manipulation: Influencing Consumer Behavior

“If you’re not paying for the product, then you are the product.”
Andrew Lewis
Zeynep Erdoğan1
Esen Gürbüz2

Abstract
Andrew Lewis’s quote, “If you are not paying for the product, you are
the product,” summarizes the functioning of digital platforms. YouTube
(excluding premium membership) and similar social media platforms provide
free services to users while collecting user data to feed their algorithms and
enhance engagement through personalized content and targeted advertising
strategies (Ienca, 2023:838-839). In this context, although users do not make
direct payments, the revenue model is fundamentally based on extending the
time spent on the platform.
Algorithms, combined with technologies such as artificial intelligence,
machine learning, and deep learning, offer businesses the opportunity to
analyze consumer behavior and personalize marketing strategies. However,
these technologies are not only innovative tools but also have an aspect that
includes ethical issues and manipulative effects. The algorithms behind digital
technology, for instance, analyze users’ interests to keep them on the platform
longer while also giving them the feeling of “missing out,” thereby influencing
purchasing behavior. Furthermore, presenting content based on users’
emotional states, violations of data privacy, and elements of psychological
pressure bring the ethical dimension of algorithmic manipulation into
question. This study explains the theoretical foundations of algorithmic
manipulation, its potential negative effects on consumer autonomy, and its
capacity to influence consumer behavior.

1 Research Assistant, Niğde Ömer Halisdemir University, Faculty of Economics and
Administrative Sciences, Department of Business Administration, Department of Production
Management and Marketing, zeyneperdogan@[Link], 0000-0003-1712-3114
2 Prof. Dr., Niğde Ömer Halisdemir University, Faculty of Economics and Administrative
Sciences, Department of Business Administration, Department of Production Management
and Marketing, esen@[Link], 0000-0001-5156-1439

1. Introduction
Algorithms, rooted in the field of mathematics, have been developed to
solve specific problems by utilizing mathematical logic and procedural steps
(Sayılı & Dosay, 1991:102; Finn, 2017:17; Miyazaki, 2012; wikipedia.
org/03.01.2025). An algorithm is a procedure consisting of systematically
defined and ordered instructions designed to solve a particular problem
or perform a specific task. Algorithms take a given input, process it
systematically, and produce the desired output (Önder, 2024; Chaudhuri,
2020:2). According to Google’s definition, algorithms are mechanisms
that analyze users’ queries through computational processes and formulas,
transforming them into meaningful answers (Finn, 2017:18). More
generally, an algorithm is a systematic method that processes input values
according to a specific logical framework and transforms them into output
values. While the problem definition identifies the intended output and the
corresponding input-output relationship, the algorithm clearly and explicitly
describes the steps to achieve this goal. In other words, an algorithm is a
structured set of instructions designed to solve a specific problem (Cormen
et al., 2009:5; Miyazaki, 2012; Chaudhuri, 2020:2; Altun, 2018:38).
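
To make this input-process-output definition concrete, the following short Python sketch (our illustration, not taken from the cited sources) expresses Euclid's greatest-common-divisor procedure as a finite, ordered set of unambiguous instructions:

def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, ordered procedure that transforms
    an input pair (a, b) into a defined output, their greatest common divisor."""
    while b != 0:          # the step repeats under a precisely stated condition
        a, b = b, a % b    # each instruction is unambiguous and executable
    return a               # termination is guaranteed: b strictly decreases

print(gcd(48, 36))  # -> 12

Every element of the definition is visible here: a specified input, systematically ordered steps, and a well-defined output.
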
The term “algorithm” originates from the medieval Western European
term “algorismi” or “algoritmi,” which referred to the calculation method
performed using Hindu-Arabic numerals. Various forms of this term, derived
from the name of the mathematician Al-Khwarizmi, laid the foundation for
the modern concept of the “algorithm” as a specific computational method.
It is suggested that mathematical calculation methods were introduced
to Western Europe through Al-Khwarizmi’s works, ultimately leading
to the emergence of the term “algorithm” (Sayılı & Dosay, 1991:102;
Finn, 2017:17; Miyazaki, 2012; [Link]/03.01.2025; Üngör et al.,
2020:99). Based on the historical relationship between Al-Khwarizmi and
algorithms, the Turkish Ministry of National Education has implemented the
Khwarizmi Education Model to equip students with algorithmic thinking
and problem-solving skills ([Link]/03.01.2025; Coşkun Keskin
et al., 2023:259).
Algorithms are clearly rooted in the field of mathematics, making it
important to note that algorithms existed even before the emergence of
computer science (Finn, 2017:17). However, the advancement of computer
science and technology has facilitated the development of a greater number
of algorithms (Cormen et al., 2009:14). Mathematical thinking processes
and systematic problem-solving methods have been employed across various
disciplines throughout history. With the development of computer science,
algorithms have become more distinct and structured. Algorithms are at
the core of computer science. Many of the technologies used in modern
computers are built upon algorithms (Cormen et al., 2009:14). Algorithms
serve as the mechanisms underlying numerous technologies, enabling
systems to operate faster, more efficiently, and more intelligently (Shekhar et
al., 2018:674). Additionally, algorithms are defined as systems that develop
effective strategies by providing solution-oriented steps when encountering
any issue or problem (Özkök, 2019:14).
Technologies such as artificial intelligence, machine learning, and big data
analytics enhance the accuracy and functionality of algorithms, facilitating
the lives of both businesses and individuals (Arıcan Kaygusuz, 2023:534).
In this context, machine learning, a subfield of artificial intelligence, has
gained significant popularity in recent years, with many technology
companies achieving remarkable advancements through algorithms in this
domain. For instance, digital platforms like Netflix utilize machine learning
algorithms to analyze users’ past viewing habits and provide personalized
content recommendations (Shekhar et al., 2018:674). Particularly, “deep
learning” stands out as a branch of artificial intelligence that offers significant
innovations in areas such as speech recognition, personal assistants, image
recognition, and security camera analyses. These algorithms, which possess
the ability to learn at a speed comparable to human perception, are referred
to as “deep learning” (Arıcan Kaygusuz, 2023:534).
In addition to their robust infrastructure and numerous advantages,
algorithms also have ethically and socially controversial aspects (Kayıhan
et al., 2021:296). It is argued that algorithms are not only tools used to
solve problems and facilitate human life but are also employed to manipulate
users’ data (Finn, 2017:17). With technological advancements, ethical issues
related to algorithms have been observed to increase. For instance, algorithms
can sometimes lead to ethical concerns such as misinformation, violations of
data privacy, or the creation of discrimination. This situation indicates that
algorithms can be viewed not only as tools for beneficial purposes but also
as instruments carrying the risk of misuse. Therefore, adhering to ethical and
transparency principles is crucial during the development and application of
algorithms (Kayıhan et al., 2021:296).
Etymologically, the term “manipulation,” meaning “to operate” and
“to use” (Özkök, 2019:14), refers to the process of altering information
through selection, addition, or removal (Fırat, 2008:22-23). The concept of
manipulation has been addressed from various perspectives across different
disciplines and is generally defined as the process of misleading and directing
an event, situation, or individuals (Eryılmaz, 1999:21; Altuncu, 2013:116).
An important aspect of manipulation involves influencing an individual
without fully engaging their rational faculties (Sunstein, 2016, cited
in Christiano, 2022:110). According to Ergül Güvendi (2023:57),
manipulation is related to the psychological and social effects consciously
applied by one individual to influence, direct, and alter the behavior of
another individual against their will. According to Atan et al. (2013:2),
manipulation is the reorganization of data in accordance with a specific
intention, involving the use of misleading methods in the process.
Algorithms operate as a series of step-by-step procedures or commands
executed to achieve a specific goal. In this process, they determine how to
proceed to reach the defined objective and ensure that these actions are
carried out in a particular sequence (Goffey, 2008, cited in Witzenberger,
2017:17). Algorithms perform functions such as analyzing user behavior,
providing content and recommendations based on personal preferences,
guiding individuals or groups toward specific ideas, shaping public opinion,
and delivering content based on emotional states. However, it should be
considered that during the stages of data collection and processing, these
processes may lead to conscious or unconscious manipulations of users.
Manipulation refers to the deliberate intervention in the message
structure and content between the sender and the receiver. This intervention
is typically carried out by the source of the message or through specific tools,
and it affects the cognitive processes of the receiver, influencing them to
generate desired thoughts and ideas, with the aim of changing or directing
individuals’ thoughts for various purposes (Elitaş, 2022:115). Particularly,
the functionalization of data-driven marketing activities indicates that an
approach centered on the consumer, with the primary goal of influencing
them, has been adopted. In this context, algorithms play a critical role
in marketing processes (Karaman, 2021:1341-1344). While consumer
manipulation was previously carried out through traditional communication
channels, with the widespread use of digital environments today, this
process is also conducted through digital channels. The spread of deceptive
content in digital and internet environments, the exploitation of individuals’
vulnerabilities, social media addiction, social isolation, digital harassment,
data privacy violations, and unethical practices such as guiding individuals
through manipulative methods are becoming increasingly significant issues
(Şen, 2024:18). In this context, it should not be overlooked that algorithms
are not only tools that improve user experience but also mechanisms
that direct and even manipulate individual behaviors. In this regard, the
widespread use of algorithmic manipulation and the magnitude of its effects
are critical issues that must be considered. This section explains the potential
of algorithms to influence consumer behaviors through data manipulation.

2. Algorithmic Manipulation and Its Theoretical Foundations


Algorithms are fundamental elements that determine the operation
of digital systems and infrastructures. They not only guide the operation
of a single software or device but also form the foundation of a wide
technological system, including the internet, mobile devices, digital services,
and network infrastructures (Kitchin, 2014:11; Karaman, 2021:1341).
Algorithms are powerful tools that are rapidly evolving and integrated
into many different areas of our lives today (Finn, 2017:15; Witzenberger,
2017:25). Algorithms not only regulate the operation of technical systems
but also form the foundation of technological processes that have deep
impacts on daily life (Kitchin, 2014:11; Karaman, 2021:1341). On
digital platforms, manipulation strategies of algorithms are referred to as
“algorithmic manipulation.” This manipulation strategy processes user data
to direct individuals toward specific behaviors (Vangeli, 2023:15). The
effect of manipulative messages prevents individuals from making rational
evaluations amid information pollution (Elitaş, 2022:116).
Manipulation is a type of psychological influence that shapes people’s
behaviors and thoughts, targeting social consciousness. Directing individuals
to perform certain actions unconsciously and creating a socio-psychological
control mechanism that is difficult to notice by the target audience are
among the core functions of manipulation (Rohach & Rohach, 2021:47).
Algorithmic manipulation becomes more complex and problematic as online
algorithms are trained with more personalized user data. Algorithms are fed
with users’ health status, age, past experiences, and other personal data; they
use this information to better predict and guide user behaviors (Vangeli,
2023:15). The prominent aspect of algorithmic manipulation is the amount
of data that algorithms can process, the accuracy of targeting individuals,
and the ability to calculate and continuously update all these processes at
high speed (Christiano, 2022:115).
Algorithms have powerful functions such as controlling the flow of
information, shaping user behaviors, and influencing social processes
(Witzenberger, 2017:18). Hypernudging is a strategy that attempts to
influence individuals’ decisions by presenting relationships and connections
determined by algorithms to users (Rickert, 2024:424). Hypernudging
enables algorithms to dynamically restructure the guidance process based
on the data they receive. In this process, algorithms are updated according
to individuals’ current behaviors and previous interactions, providing
personalized guidance (Christiano, 2022:115).
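
How such dynamically updated guidance might operate can be sketched with a toy epsilon-greedy bandit in Python; the nudge texts, click probabilities, and parameters below are illustrative assumptions, not an implementation from the cited literature:

import random

class HyperNudger:
    """Toy epsilon-greedy bandit: the 'nudge' shown to a user is re-selected
    after every interaction, so the guidance is continuously restructured
    around the user's observed responses."""

    def __init__(self, nudges, epsilon=0.1):
        self.epsilon = epsilon
        self.clicks = {n: 0 for n in nudges}  # observed positive responses
        self.shows = {n: 0 for n in nudges}   # times each nudge was shown

    def choose(self):
        if random.random() < self.epsilon:    # occasionally explore a new nudge
            return random.choice(list(self.shows))
        # otherwise exploit: pick the nudge with the best observed click rate
        return max(self.shows, key=lambda n: self.clicks[n] / (self.shows[n] or 1))

    def update(self, nudge, clicked):
        self.shows[nudge] += 1
        self.clicks[nudge] += int(clicked)    # personalization tightens here

nudger = HyperNudger(["only 2 left!", "others also bought", "sale ends soon"])
for _ in range(100):                          # simulated interaction loop
    n = nudger.choose()
    nudger.update(n, clicked=random.random() < 0.3)

After every response the observed rates change, so the next nudge shown is re-selected around the user's own behavior, which is precisely the dynamic restructuring described above.
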
Recommendation algorithms, as part of recommendation systems, work
by analyzing data such as users’ past behaviors, preferences, and profiles to
predict whether they will like a specific product or content (Isinkaye et al.,
2015:262). Persuasion algorithms, on the other hand, are systems designed
to encourage individuals to adopt a specific behavior change (Albers et al.,
2022:2). These types of algorithms are strategically used to influence users’
decisions and guide them toward a specific goal (Karaman, 2021:1341).

3. Areas of Application and Behavioral Effects of Algorithmic Manipulation

Algorithms use data to perform analysis, make accurate predictions,
and contribute to the efficient operation of processes in order to achieve a
specific goal (Witzenberger, 2017: 24-25). With advancements in computer
science, algorithms are no longer limited to academic or technical fields; they
are also finding widespread application in the business world (Karaman,
2021:1341). The fact that algorithms are indifferent to ethical and moral
values in social interactions and fail to adhere to ethical rules while managing
human social relationships and interactions is noted as a significant issue.
This indifference can lead to algorithms influencing people in a manipulative
manner (Vangeli, 2023:13).
The power of algorithms has become more visible, particularly in fields
such as digital media, artificial intelligence, data analytics, and social media
(Witzenberger, 2017:18; Saurwein & Spencer-Smit, 2021:223). Individuals
are continuously interacting with structures shaped by algorithms at every
stage of their daily lives in the digital world, from online dating to route
navigation, information searching to shopping (Striphas, 2015, cited in
Witzenberger, 2017:17).
Every online action of users is added to the data set, allowing algorithms
to use this data to develop strategies for more accurately predicting and
influencing individuals’ behaviors (Vangeli, 2023:15). For example, when a
user wants to purchase a book online, the system analyzes their behavior in
detail. All interactions, such as the products the user has viewed, purchased, or
added and removed from the cart, are recorded, and these data are compared
with the behaviors of other users with similar interests. As a result, the
algorithms created from this process offer personalized recommendations,
helping to understand user behavior (Özkök, 2019:9-10) and shaping the
user experience.
Algorithms are used in various fields, ranging from stock market
transactions (such as investment decisions and trading strategies) to music
composition (such as creating lyrics and melodies), from autonomous
vehicles to writing news articles (Finn, 2017:15). With these developments,
technological advancements and digitalization also significantly expand
areas susceptible to manipulation (Atan et al., 2013:2). This situation brings
with it the foreseeable risk of increased manipulation through widely used
algorithms.
Algorithmic manipulation, unlike traditional manipulation techniques,
offers more systematic and targeted interventions by utilizing big data
and artificial intelligence systems (Vangeli, 2023:13). In this regard,
manipulations can be carried out by companies and other organizations
in various environments and contexts, for different purposes (Ljubičić &
Vukasović, 2023:11).
Algorithms encompass a wide range of disciplines (Witzenberger,
2017:25). Manipulation, on the other hand, is a phenomenon that is
commonly encountered across different disciplines and various application
areas (Begtimur, 2022:10). A review of the literature reveals not only the
concept of algorithmic/algorithm manipulation (Fletcher, 2021; Galli,
2022; Vangeli, 2023; Fu & Sun, 2024), but also the manipulation concept
being addressed in different contexts: digital manipulation (Reaves et al.,
2004; Singh et al., 2024; Mucundorfeanu et al., 2024; Elitaş, 2022), market
manipulation (Putniņš, 2012; Li et al., 2024), digital market manipulation
(Calo, 2013; Greiss, 2021), marketing manipulation (Ljubičić & Vukasović,
2023; Jiaying & Lasi, 2023), consumer manipulation (Witte, 2023; Li &
Li, 2023; Reuille-Dupont, 2023; Quinelato, 2024), online manipulation
(Susser et al., 2019; Susser et al., 2019a; Boldyreva et al., 2018; Botes,
2023), social media manipulation (Bastos, 2024; Maathuis & Kerkhof, 2023;
Maathuis & Godschalk, 2023), FoMO (fear of missing out) manipulation
(Tan et al., 2024; McKee et al., 2023), manipulation of needs (Lodziak,
2003; Yılmaz & Tatoğlu, 2024; Senemoğlu, 2017; Rohach & Rohach,
2021). These manipulation concepts can be applied in different areas (e.g.,
politics, finance), and are particularly common in the field of marketing.
While politics and marketing are the areas where manipulation is most
prominently used, its effects have also been observed in many disciplines such
as media, psychology, finance, and public relations (Begtimur, 2022:10).
The primary reason for this is that manipulation is a powerful method aimed
at influencing and directing human behavior (Vangeli, 2023:2; Michalak
& Stypiński, 2023:196). In this context, politicians, managers, mass
communication actors, and marketers are among the groups that have most
effectively utilized manipulation throughout history (Begtimur, 2022:10).
In marketing, manipulation techniques can be applied in promotional and
business activities to facilitate the sale of products and services (Vukasović &
Ljubičić, 2022:104). This leads to the possibility of consumers encountering
manipulation techniques in their daily lives (Ljubičić & Vukasović, 2023:11).
Advertising strategies, pricing policies, shrinkflation3, consumer purchasing
processes, product features, product placement, labeling, packaging designs,
fake word-of-mouth (WOM), fake user reviews, campaigns, and consumer
experience, when combined with the use of personal data on online
platforms, result in the widespread use of manipulation techniques in the
marketing field. This can lead to consumers being guided consciously or
unconsciously, directly affecting their decision-making mechanisms.
The impact of manipulation in fields such as journalism, photography,
and social media is becoming increasingly evident (Atan et al., 2013:2).
Social media stands out as an effective tool for manipulating masses, and
it is noted that manipulative content can spread rapidly through these
platforms (Atan et al., 2013:2). Platforms like social media, e-commerce,
and search engines present content based on users’ interests, and techniques
such as hypernudging and micro-targeting4 are used in this process (Çaycı,
2021:909). For example, Facebook and other advertising platforms use
user data for marketing purposes by allowing advertisers to select specific
users and target them with well-crafted messages (Chouaki et al., 2022:1).
Applications like filter bubbles ensure that the social media algorithm only
allows the individual to consume information that aligns with their interests
and ideology (Çaycı, 2021:909). Filter bubbles5 are cognitive barriers that
emerge as a result of excessive personalization, limiting digital consumers’
ability to notice alternative offers, products, or service options (Karaman,
2021:1347-1348). In this regard, journalist and writer Serdar Kuzuloğlu
states that “the addictive nature of social media platforms for users and their
continuous use throughout the day does not indicate that the content is consumed
consciously or is of high quality. The main reason for this is the influence of the
algorithms operating behind the content.” According to Kuzuloğlu, algorithms
are developed as a result of the collective efforts of psychiatrists, psychologists,
behavioral scientists, algorithm experts, and other scientists from various
disciplines (gencenderun/[Link]/19.02.2025).

3 Shrinkflation: a strategy in which the size, quantity, or weight of a product is reduced while
keeping the price constant or limiting the price increase to a minimum (Erdoğan & Gürbüz,
2023:1). It is considered a manipulative marketing method because it may lead consumers to
unknowingly purchase less product for the same price. Especially when the reduction in
quantity or size is not explicitly stated, consumers may buy without noticing the change,
creating the impression that companies are manipulating consumer behavior.
4 Micro-targeting aims to deliver engaging and relevant messages to individual users,
encouraging them to pay attention to the advertisement or take a desired action (such as
making a purchase or sharing the message on their social networks).
5 Filter bubbles (the personalized flow of information tailored to an internet user’s preferences
and past interactions) can limit how a person views the world and what information they can
access. When the content on the internet is solely customized for the individual, it may become
difficult for them to encounter different perspectives and new information. In other words,
a filter bubble refers to the intellectual isolation created when websites selectively present
information through algorithms that analyze data such as users’ clicking habits, browsing and
search history, and location. In this case, users are only exposed to content that aligns with
their interests and previous preferences, significantly reducing the likelihood of encountering
differing opinions and alternative information (Pariser, 2011, cited in Boyacı Yıldırım &
Özgen, 2024:511).
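
The narrowing dynamic of the filter bubble described above can be simulated in a few lines of Python; the topics, the click assumption, and the reinforcement factor are invented for illustration:

import random

topics = ["politics", "sports", "science", "fashion", "travel"]
topic_weight = {t: 1.0 for t in topics}  # the feed starts out balanced

def serve_feed(k=3):
    """Personalized feed: topics are sampled in proportion to past clicks."""
    return random.choices(topics, weights=[topic_weight[t] for t in topics], k=k)

for step in range(50):
    feed = serve_feed()
    clicked = feed[0]                 # assume the user clicks the first item shown
    topic_weight[clicked] *= 1.3      # personalization reinforces the clicked topic

# One or two topics now dominate the feed: alternative content has
# become statistically hard to encounter.
print(sorted(topic_weight.items(), key=lambda kv: -kv[1]))
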
The role of manipulation in the communication process is also quite
prominent, and it is well known that mass media plays a central role in
manipulation strategies. Mass media not only targets individuals but also
communities, functioning as a tool for mass guidance (Elitaş, 2022:115-
116). In the context of mass media, manipulation is manifested through
the misguiding and directing of the masses with a one-way flow of news.
Information from the news source is restructured during the process from
production to consumption and presented in different contexts, gaining a
manipulative function (Fırat, 2008:22-23). Especially in digital environments
where individuals are constantly online, manipulation strategies through
visual and auditory messages are applied systematically (Elitaş, 2022:115-
116).
Algorithms play a critical role in various sectors such as healthcare,
finance, transportation, education, and agriculture, aiming to increase
efficiency, optimize processes, and make more accurate predictions. In
this regard, it can be said that algorithms have a significant impact across
numerous sectors and have become an indispensable element of daily life
(Arıcan Kaygusuz, 2023:534). However, effective management of this
process requires the use of data (Witzenberger, 2017:17). Algorithmic
analyses based on personal data and user behaviors have the potential to
influence individuals’ decisions. These strategies are implemented through
the use of personal data (Karaman, 2021:1341), and it is known that they
are more likely to produce manipulative outcomes. In practice, when data is


processed, attention is drawn to the Personal Data Protection Law.

4. The Effects of Algorithmic Manipulation on Consumer Behavior

Businesses aim to influence consumers’ decision-making processes in favor
of their products or services by utilizing various stimuli and communication
techniques (Yurtsever & Akın, 2022:257). To achieve this, they intentionally
implement various strategies to capture consumers’ attention and enhance
their loyalty. However, at a certain point, these strategies go beyond merely
persuading the consumer and start to subtly and covertly direct their behavior,
essentially manipulating them (Reuille-Dupont, 2023:17). For instance, the
smell of bread in a supermarket evokes positive associations and encourages
consumers to purchase, or the use of the color green creates the perception
that a product is environmentally friendly, both serving as concrete examples
of this phenomenon (Akgün, 2021:271).
The field of marketing has always been an unexplored aspect of the
economic system, and each year, marketing techniques and marketing itself
evolve in parallel with new technologies (Vukasović & Ljubičić, 2022:103).
With the expansion of marketing, the area of manipulation, which now
has many subheadings and subcategories, is also growing. Consumers’
right to make free choices is a fundamental source of motivation that
shapes their behavior. However, even when consumers are independent of
external influences, they may not have full control over the outcomes of
their decisions (Wertenbroch et al., 2020:430-431). According to a study,
an algorithm created by analyzing data on a consumer’s shopping receipt
reveals that consumers who buy chips often purchase cola as well. Based on
this information, store management may aim to increase sales by placing the
chips and cola shelves next to each other to optimize sales strategies (Arıcan
Kaygusuz, 2023:534).
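
The receipt analysis in this example corresponds to elementary association-rule statistics; the Python sketch below, with invented baskets, shows how the support and confidence of the rule chips → cola would be computed (an illustration, not the method of the cited study):

# Hypothetical shopping baskets (invented for illustration)
baskets = [
    {"chips", "cola", "bread"},
    {"chips", "cola"},
    {"bread", "milk"},
    {"chips", "water"},
    {"chips", "cola", "milk"},
]

n = len(baskets)
with_chips = [b for b in baskets if "chips" in b]
with_both = [b for b in with_chips if "cola" in b]

support = len(with_both) / n                   # P(chips and cola together)
confidence = len(with_both) / len(with_chips)  # P(cola | chips)

print(f"support(chips -> cola)    = {support:.2f}")     # 3/5 = 0.60
print(f"confidence(chips -> cola) = {confidence:.2f}")  # 3/4 = 0.75

A high confidence value is exactly the signal that would prompt a store to place the two shelves side by side.
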
Businesses continually focus on consumers’ needs and, in order to
attract them, may resort to exploiting their thoughts and desires or creating
marketing strategies with deceptive guidance (Yurtsever, 2023:52).
Marketing strategies aim to shape consumers’ perceptions and guide their
purchasing decisions, sometimes incorporating elements of conscious
manipulation. Specifically, the unconscious direction of consumers toward
certain preferences makes the role of manipulation in marketing processes a
subject of debate.
It is stated that the information obtained about consumer behavior can be
used not only to understand the consumer and shape production processes
based on their needs, but also to manipulate the consumer. This information
can be used to consciously or unconsciously influence consumers’ purchasing
decisions in favor of the business (Strang, 2014:248-249). A business can
influence the consumer’s decision-making process with covert and targeted
strategies, often in its own interest. In this case, although the consumer may
think they are making decisions freely, the majority of these decisions are
actually shaped within the framework pre-determined by the organization
(Witte, 2024:3-4). The consumer, unaware of the manipulation, may
believe they are making an independent choice, but in reality, these decisions
have been directed through manipulative methods (Vukasović & Ljubičić,
2022:103).
The primary goal of manipulations is to prevent consumers from making
conscious and rational decisions, encouraging them to purchase a particular
product or service through automatic responses or emotional influences. This
allows businesses to gain control over consumer behavior. The strategies used
in these types of manipulations generally involve emotional, psychological,
and behavioral tactics aimed at influencing consumers’ decision-making
processes without their awareness (Susser et al., 2019:1). Baron (2003)
categorizes manipulation into three main categories: deception, coercion,
and strategies based on emotions, emotional needs, or character weaknesses
(Baron, 2003, cited in Witte, 2024:3-4). Similarly, Michalak and Stypiński
(2023:203) emphasize in their research that manipulation, particularly based
on influencing emotions, has a significant impact on consumer decisions.
One of the new dimensions that manipulation has gained with
digitalization is algorithmic manipulation, especially conducted through
algorithms (Vangeli, 2023:13). Digital marketers can influence consumer
decisions through algorithms and direct these decisions in a way that creates
the illusion that consumer autonomy is preserved and they are making
their own choices. In this case, while consumers are made to feel that they
have more control and freedom, in reality, their behaviors and choices
are predetermined and directed by algorithms (Wertenbroch et al., 2020:
430-431). These algorithms used on digital platforms strategically filter
and organize the content individuals are exposed to, thereby shaping their
preferences and behaviors in a specific direction (Vangeli, 2023:13).
Algorithms provide a significant advantage for businesses in analyzing
consumer behavior. By processing big data analytics quickly and efficiently,
they determine consumer preferences, habits, and needs, thereby helping
businesses develop strategies that better meet customer expectations.
Additionally, algorithms optimize business processes, increase efficiency, and
accelerate decision-making processes. In this regard, algorithms become a
core component of consumer-focused applications and innovative business
models (Karaman, 2021:1341). Furthermore, on e-commerce platforms,
algorithms offer personalized product recommendations based on users’ past
purchasing behavior and habits, while on social media platforms, they suggest
content based on viewing habits (Arıcan Kaygusuz, 2023:534). Moreover,
businesses aim to increase consumer engagement and gain an economic
advantage over competitors by pre-designing consumer interactions. These
strategies increasingly blur the line between persuasion and manipulation. In
the process of directing consumers toward specific decisions and behaviors,
algorithmic manipulations are used by analyzing their preferences and habits
(Özuz Dağdelen, 2024:35). To reduce the excessive burden of options that
consumers face when making choices, businesses use recommendation
algorithms and targeting methods. These algorithms can enhance perceived
autonomy by making it easier for consumers to find the products and
information they prefer. However, at the same time, these systems can
expose consumers to more external influences during their decision-making
process, which may weaken their real autonomy. This creates a paradox
between perceived autonomy and real autonomy (Wertenbroch et al., 2020:
432). Although digitalization is said to make consumer behavior more
independent and faster, granting greater autonomy in decision-making
processes (Şen, 2024:18), the ability to influence consumers through
digital manipulation techniques can violate this autonomy and hinder their
decision-making processes with their free will (Susser et al., 2019:8). Thus,
while the algorithms underlying digitalization provide consumers with
more information, they also have the power to shape their decision-making
processes through hidden influences.
The intervention of algorithms can weaken consumers’ ability to make
independent choices and may turn them into a part of a strategic manipulation
aimed at keeping them on the platform for longer periods. This leads to
an unconscious impact on the consumer’s autonomy6 (Wertenbroch et al.,
2020: 432). For example, false reviews and ratings can create a misleading
impression about the quality of a product or service. Comments such as
“Amazing! The product exceeded my expectations, you must buy it!” may be used
to portray a product as being of higher quality than it actually is, even though
the reviews are fake. All of these strategies can disrupt consumers’ more
conscious decision-making processes and manipulate their behaviors. These
types of manipulations can have negative effects, especially on consumer
autonomy (Susser et al., 2019:1).

6 Consumer autonomy refers to an individual’s ability to remain independent from external
pressures, particularly from excessive influence or manipulation by marketers, during the
purchasing or decision-making process. In this context, it means that consumers can make
their decisions solely based on their own information and will, without external imposition or
control (Drumwright, 2016, cited in Bjørlo, 2021:2).
Consumers may share small amounts of personal data in order to gain
autonomy. For example, when users conduct a Google search to obtain
useful information, they share their data in exchange for information.
However, these small-scale data-sharing actions can lead to a larger flow of
data and manipulation over time. As a result, consumers may unknowingly
lose their autonomy. This situation is often likened to the famous “frog in
boiling water” story (Wertenbroch et al., 2020: 432). For instance, when a
consumer buys a shirt from an e-commerce site, the website’s algorithm may
suggest similar clothing or complementary products. While this may appear
to be a recommendation system based on the consumer’s preferences, over
time, the system may lead the user towards specific products, potentially
causing manipulations that lead to impulsive shopping decisions (Çalapkulu
& Buran, 2023:142).
In today’s society, many systems that are part of consumers’ social lives
(such as online shopping, search engines, and navigation apps) operate
through algorithms (Striphas, 2015, cited in Witzenberger, 2017: 17).
Through social media platforms, individuals can benefit from consumer-
oriented positive contributions such as information sharing, participation in
public discussions (Vangeli, 2023: 2), access to entertainment, socializing,
and freedom of expression. These platforms, while having functions such as
raising social awareness, organizing awareness campaigns, and strengthening
interpersonal bonds, can also provide a space for the spread of disinformation
campaigns and manipulative content (Yılmaz, 2024: 3; Tekke & Lale,
2021: 56). Deepfake technology can be used to manipulate faces with high
realism. Nowadays, there are numerous deepfake videos created, particularly
targeting celebrities and politicians, circulating on the internet. These videos
are often used to damage the reputations of celebrities or to manipulate
public opinion, posing a serious threat to social stability (Yu et al., 2021:
607).
Social media algorithms provide businesses with valuable data about
consumers, enabling them to gain insights and improve user experience
(Saurwein & Spencer-Smith, 2021: 225). While they offer users an
environment where they can move freely and select and view the content
they desire, they can also have negative effects that raise societal concerns.
One of the primary negative impacts is the potential for algorithms to
create an infrastructure that encourages harmful behaviors (Saurwein &
Spencer-Smith, 2021: 225). Social media applications use various strategies
through algorithms to keep users on the platform for longer periods. This
can weaken individuals’ perception of making free choices and lead them
to display behaviors that are unconsciously guided, or in other words,
manipulated (Wertenbroch et al., 2020: 432). For example, after viewing a
brand’s page on Instagram, the consumer may be shown advertisements for
similar brands, businesses, and products, which are facilitated by algorithms
within the platform (Çalapkulu & Buran, 2023: 142). Facebook uses data
and algorithms to determine whether users belong to ethnic minority groups
and serves them targeted advertisements specific to those groups (Saurwein
& Spencer-Smith, 2021: 227). During the 2016 U.S. Presidential election,
the Trump campaign used Facebook ads to specifically target African
American voters (Green & Issenberg, 2016: 1). In this context, algorithms,
particularly through online advertising, can also lay the groundwork for
discriminatory practices. Algorithms are used as an infrastructure to target
or exclude certain user groups, which can lead to various harmful effects
(Saurwein & Spencer-Smith, 2021: 227).
The widespread adoption of new digital and online sociotechnical
systems, such as artificial intelligence-based social media, micro-targeted
advertising7, and personalized search algorithms, has led to significant
changes in the ways user interactions, data collection, and behavior influence
are conducted. However, because these technologies and techniques have
the capacity to target and influence individuals on an unprecedented scale, in
a more sophisticated, automated, and pervasive manner, they raise concerns
about their manipulation potential and spark various debates (Ienca, 2023:
833). As a person continues to use a social media platform like Instagram,
the platform collects more data about the user’s online habits. This data is
recorded, classified, and analyzed, creating a personalized mental model of
the user (Jago, 2022: 159). This model allows for the delivery of personalized
content and advertisements based on the user’s interests, interaction patterns,
and behavior.

7 Micro-Targeted Advertising: A technique used by advertisers to deliver personalized and highly
targeted messages to specific individuals or groups based on demographic, behavioral, or
psychographic characteristics. This technique involves collecting and analyzing large amounts
of data from various sources, such as social media platforms, search engines, and third-party
data providers, and using this data to create highly customized advertising campaigns (Ienca,
2023: 839).
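
What such a “personalized mental model” amounts to in practice can be sketched as a running interest profile; the event types, weights, and data below are illustrative assumptions rather than any platform's actual implementation:

from collections import Counter

class UserProfile:
    """Toy 'mental model': every logged event updates interest scores,
    and ad selection simply reads the current top interests back."""

    def __init__(self):
        self.interests = Counter()

    def log_event(self, topic, kind):
        # Stronger signals move the model more (the weights are invented)
        weight = {"view": 1, "like": 3, "share": 5, "purchase": 8}[kind]
        self.interests[topic] += weight

    def target_ads(self, inventory, k=2):
        """Return ads whose topic matches the user's strongest interests."""
        top = [t for t, _ in self.interests.most_common(k)]
        return [ad for ad in inventory if ad["topic"] in top]

profile = UserProfile()
for topic, kind in [("fitness", "view"), ("fitness", "like"),
                    ("travel", "view"), ("fitness", "purchase")]:
    profile.log_event(topic, kind)

ads = [{"topic": "fitness", "text": "protein powder"},
       {"topic": "crypto", "text": "trading app"}]
print(profile.target_ads(ads))  # only the fitness ad gets through
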

Through algorithms and personalization techniques, platforms such as


YouTube, Netflix, and Instagram recommend similar content as users watch
videos they like (Susser et al., 2019:1). Although features like Instagram’s
Reels, TikTok’s Explore, and YouTube’s Shorts claim to offer users the
opportunity to choose the videos they want, the majority of this content is
directed by the platforms’ algorithms. While users may think they are making
free choices, the algorithms present content based on their interests and
previous interactions, thereby guiding their attention to certain videos. This
situation creates an illusion of freedom, while users are actually exposed to a
content flow determined by the platforms (Wertenbroch et al., 2020:432).
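
This illusion of choice can be made concrete with a short sketch (titles, tags, and the engagement model are invented): the user picks freely, but only from a candidate set that a ranking step has already ordered by predicted engagement:

def predicted_engagement(video, history):
    """Naive engagement model: overlap between a video's tags and the
    tags of videos the user has already watched."""
    watched_tags = {t for v in history for t in v["tags"]}
    return len(watched_tags & set(video["tags"]))

history = [{"tags": ["gaming", "tech"]}, {"tags": ["tech", "gadgets"]}]
candidates = [
    {"title": "new GPU review", "tags": ["tech", "gadgets"]},
    {"title": "cooking pasta",  "tags": ["food"]},
    {"title": "speedrun clips", "tags": ["gaming"]},
    {"title": "travel vlog",    "tags": ["travel"]},
]

# The platform shows only the top-k by predicted engagement; whatever
# the user 'freely' picks, the menu itself was chosen for them.
feed = sorted(candidates,
              key=lambda v: predicted_engagement(v, history),
              reverse=True)[:2]
print([v["title"] for v in feed])  # ['new GPU review', 'speedrun clips']
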
Bjørlo (2021:15) argues that the weakening of consumer autonomy
hinders individuals’ ability to make decisions freely, and that this has negative
effects on consumer welfare and social sustainability. In this context, it is also
significant that manipulation conducted through artificial intelligence and
related digital technologies is qualitatively no different from manipulation
through human-to-human interactions in the physical world, and can violate
certain fundamental freedoms or rights concerning the individual’s mind
and thoughts (Ienca, 2023:833).

Conclusion and Recommendations


Businesses view being strong and effective as a key strategy to increase
consumption rates and direct consumers to their own brands. Digital channels
play a critical role in providing access to and interaction with consumers,
while the data collected from these platforms enable the development of
personalized marketing strategies. Technological advancements offer the
potential to enhance consumer experiences in conjunction with marketing
activities. However, marketing strategies implemented on digital platforms
do not always remain within ethical boundaries. Manipulative techniques,
which risk directing or misleading consumer behavior, lead to ethical debates.
In digital environments, algorithms are among the key elements that
guide consumers and increase consumption behavior. The data collection and
processing capabilities of digital platforms make algorithmic manipulation
an important tool in marketing strategies. While algorithms shape consumer
behavior through targeted product or service presentation, they can also
pave the way for unethical practices. The impact of AI-powered algorithms
on marketing is increasing, but this impact does not always have a positive
outcome and carries the risk of undermining consumer autonomy. Particularly,
AI-based manipulation techniques promote unconscious consumption
decisions, raising serious ethical concerns.

In the future, as marketing practices based on algorithms become more widespread, this will require stricter and normative regulations regarding
ethical use. In this context, regulations play a crucial role in protecting
consumer rights and keeping businesses within ethical boundaries. At the
same time, it is important for consumers to recognize the algorithmic
manipulations they encounter in digital environments and make conscious
decisions. Consumer awareness and education should accelerate in parallel
with the rise of digital manipulation. This study, which could guide future
research, offers several suggestions to the literature:
• Studies should be conducted to examine the effectiveness of educational
programs aimed at increasing consumer awareness of algorithmic
manipulation. The role of digital literacy in combating manipulation should
be addressed in detail.
• Research should focus on examining the impact of algorithmic
manipulation on consumer autonomy. Particularly, studies that define
ethical boundaries and evaluate whether these boundaries are violated are of
significant importance.
• The use of AI-based algorithms in marketing processes, how they shape
ethical boundaries, and their long-term effects on consumer autonomy
should be investigated.
• Research on the effects of global and local regulations aimed at setting
ethical standards in digital marketing practices and limiting algorithmic
manipulation should be increased.
• Studies should be conducted to examine the effects of algorithmic
manipulation used on social media platforms on consumer behavior.

References
Albers, N., Neerincx, M. A., & Brinkman, W. P. (2022). Addressing People's Current And Future States In A Reinforcement Learning Algorithm For Persuading To Quit Smoking And To Be Physically Active. PLoS One, 17(12), 1-31.
Altuncu, D., Çelebi Şeker, N. N., & Karaoğlu, M. (2013). Mekan Algısında Duyuların Etkisi/Manipülatif Mekanlar. In Sanat Tasarım ve Manipülasyon Sempozyum Bildiri Kitabı, Sakarya Üniversitesi, Sakarya, 115-119.
Akgün, A. A. (2021). Tutundurma Dışındaki Pazarlama Karması Unsurları Bağlamında Pazarlama İletişimi ve Manipülasyon. (Ed. Osman Çalışkan). Çarpıtılmış Gerçekliğin İnşası Cilt 1: Medya ve İletişim Mesleklerinde Manipülasyon. Nobel Yayınları. Ankara.
Altun, C. (2018). Okul Öncesi Öğretim Programına Algoritma ve Kodlama Eğitimi Entegrasyonunun Öğrencilerin Problem Çözme Becerisine Etkisi. Yayınlanmamış Yüksek Lisans Tezi. Eğitim Bilimleri Enstitüsü, Ankara Üniversitesi.
Arıcan Kaygusuz, N. (2023). Nöropazarlama ve Yapay Zekâ İlişkisinin Tüketici Davranışları Üzerindeki Etkisine Yönelik Kavramsal Bir Model Önerisi. Journal of Academic Social Science Studies, 16(95), 527-547.
Atan, A., Uçan, B., & Renkçi, T. (2013). Çağdaş Sanat ve Tasarımda Manipülasyon Etkileri. Sakarya Üniversitesi Güzel Sanatlar Fakültesi 1. Uluslararası Sanat Sempozyumu, Sanat, Tasarım ve Manipülasyon Sempozyumu Bildiri Kitabı.
Bastos, M. (2024). Social Media Manipulation. In Brexit, Tweeted. Bristol University Press. 104-114.
Begtimur, M. E. (2022). Oyunun Adı: Manipülasyon. (Ed. Cihad İslam Yılmaz). Psikolojik Harp, Post-Truth ve Stratejik İletişim. NEU [Link].
Bjørlo, L., Moen, Ø., & Pasquine, M. (2021). The Role Of Consumer Autonomy In Developing Sustainable AI: A Conceptual Framework. Sustainability, 13(4).
Boldyreva, E. L., Grishina, N. Y., & Duisembina, Y. (2018). Cambridge Analytica: Ethics And Online Manipulation With Decision-Making Process. European Proceedings of Social and Behavioural Sciences, 51.
Botes, M. (2023). Autonomy And The Social Dilemma Of Online Manipulative Behavior. AI and Ethics, 3(1), 315-323.
Boyacı Yıldırım, M., & Özgen, E. (2024). Dijitalleşme Ekseninde İnfodemi ve Bilgi Düzensizlikleri. Akademik Hassasiyetler, 12(25), 500-529.
Calo, R. (2013). Digital Market Manipulation. Geo. Wash. L. Rev., 82, 995.

Chaudhuri, A. B. (2020). Flowchart And Algorithm Basics: The Art Of Programming. Mercury Learning and Information. New Delhi.
Chouaki, S., Bouzenia, I., Goga, O., & Roussillon, B. (2022). Exploring the online micro-targeting practices of small, medium, and large businesses. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1-23.
Christiano, T. (2022). Algorithms, Manipulation and Democracy. Canadian Journal of Philosophy, 52(1), 109-124.
Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms, Third Edition. The MIT Press.
Coşkun Keskin, S., Karaloğlu, İ., & Erdoğan, H. (2023). Harezmi Eğitim Modeli'nin Uygulayıcıları Olan Öğretmenlerin Görüşleri Doğrultusunda İncelenmesi. Ondokuz Mayis University Journal of Education Faculty, 43(1), 121-140.
Çalapkulu, Ç., & Buran, N. (2023). Dijital Pazarlama Bileşenlerinde Duygusal Zeka ve Big Datanın Önemi. İstanbul Ticaret Üniversitesi Girişimcilik Dergisi, 7(13), 138-154.
Çaycı, A. E. (2021). Sosyal Medya Platformlarının Kamusal Tartışmalardaki Rolü: Filtre Balonu ve Yankı Odası. ASEAD 7. Uluslararası Sosyal Bilimler Sempozyumu / EJSER 7th International Symposium on Social Sciences, 10-12 Nisan/April 2021, Kemer – Antalya. [Link]le/merve-gezen-3/publication/365520187_asead_7_uluslararası_sosyal_bılımler_sempozyumu/links/6377d78854eb5f547ce30206/asead-7-uluslararası-sosyal-bılı[Link]#page=922
Elitaş, T. (2022). Dijital Manipülasyon: 'Deepfake' Teknolojisi ve Olmayanın İnandırıcılığı. Hatay Mustafa Kemal Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 19(49), 113-128.
Erdoğan, Z., & Gürbüz, E. (2023). Enflasyon Dönemlerinde Pazarlama Karması Uygulamalarını Etkileyen Küçültme Enflasyonu (Shrinkflation) ve Nitelik Kaybı Enflasyonu (Skimpflation) Stratejileri. Journal of Politics Economy and Management, 6(2), 1-20.
Ergül Güvendi, N. (2023). Manipülasyon Panoraması. (Ed. Baha Biçer, Orhan Şanlı). Sosyal, İnsan ve İdari Bilimlerde Güncel Yaklaşımlar. Duvar Yayınları. İzmir.
Eryılmaz, H. (1999). Bir Kitle İletişim Aracı Olarak Haber Fotoğrafı ve Manipülasyon. Yayınlanmamış Doktora Tezi, Eskişehir, Anadolu Üniversitesi Sosyal Bilimler Enstitüsü Basım ve Yayımcılık Anabilim Dalı.
Fırat, N. S. (2008). Savaş Fotoğraflarının Kullanımı Bağlamında Propaganda ve Manipülasyon. Marmara Üniversitesi Güzel Sanatlar Enstitüsü, Yüksek Lisans Tezi, İstanbul.

Finn, E. (2017). What Algorithms Want: Imagination In The Age Of Computing. Massachusetts Institute of Technology. Cambridge.
Fletcher, G. G. S. (2021). Deterring algorithmic manipulation. Vand. L. Rev., 74, 259.
Fu, H., & Sun, Y. (2024). Unravelling the algorithm manipulation behavior of social media users: A configurational perspective. In Li, E. Y. et al. (Eds.), Proceedings of The International Conference on Electronic Business, 24, 536-550.
Green, J., & Issenberg, S. (2016, October 26). Why the Trump machine is built to last beyond the election. Bloomberg. [Link] news/articles/2016-10-27/inside-the-trump-bunker-with-12-days-to-go
Greiss, D. (2021). Addressing Digital Market Manipulation In Australian Law. ANU Journal of Law and Technology, 2(2), 23-55.
Ienca, M. (2023). On Artificial Intelligence And Manipulation. Topoi, 42(3), 833-842.
Isinkaye, F. O., Folajimi, Y. O., & Ojokoh, B. A. (2015). Recommendation Systems: Principles, Methods And Evaluation. Egyptian Informatics Journal, 16(3), 261-273.
Jago, E. (2022). Algorithmic Manipulation: How Social Media is Shaping our Theology. Eleutheria: John W. Rawlings School of Divinity Academic Journal, 6(1), 9.
Jiaying, L., & Lasi, M. A. (2023). Marketing Manipulation: A Literature Review Of Its Antecedents, Mechanisms, Outcomes, And Moderators. International Journal of Advances Research in Islamic Studies and Education (ARISE), 3(4), 69-81.
Karaman, Ö. (2021). Yapay Zekâ Destekli Kişiselleştirme Algoritmalarının Tüketici Zihninde Filtre Balonu Yaratma Etkisinin İncelenmesi. Süleyman Demirel Üniversitesi Vizyoner Dergisi, 12(32), 1339-1351.
Karaöz Akın, B., & Gürsoy Şimşek, U. T. (2018). Adaptif öğrenme sözlüğü temelli duygu analiz algoritması önerisi. Bilişim Teknolojileri Dergisi, 11(3), 245-253.
Kayıhan, B., Narin, B., Fırat, D., & Fırat, F. (2021). Algoritmalar, Yapay Zekâ ve Makine Öğrenimi Ekseninde Gazetecilik Etiği: Uluslararası Akademik Dergilere Yönelik Bir İnceleme. TRT Akademi, 6(12), 296-327.
Kitchin, R. (2019). Thinking Critically About And Researching Algorithms. In The Social Power of Algorithms. Routledge. 14-29.
Li, X., & Li, K. J. (2023). Beating The Algorithm: Consumer Manipulation, Personalized Pricing, And Big Data Management. Manufacturing & Service Operations Management, 25(1), 36-49.

Li, W., Bao, L., Chen, J., Grundy, J., Xia, X., & Yang, X. (2024). Market Manipulation Of Cryptocurrencies: Evidence From Social Media And Transaction Data. ACM Transactions on Internet Technology, 24(2), 1-26.
Ljubičić, K., & Vukasović, T. (2023). Manipulation In The World Of Marketing. Mednarodno Inovativno Poslovanje / Journal of Innovative Business and Management, 15(1), 1-11.
Lodziak, C. (2003). Kapitalizm ve Kültür, İhtiyaçların Manipülasyonu. (Translator: Berna Kurt), Çitlembik Yayınevi, İstanbul.
Maathuis, C., & Kerkhof, I. (2023). Social Media Manipulation Awareness Through Deep Learning Based Disinformation Generation. In International Conference on Cyber Warfare and Security, 18(1), 227-236.
Maathuis, C., & Godschalk, R. (2023). Social Media Manipulation Deep Learning Based Disinformation Detection. In International Conference on Cyber Warfare and Security, 18(1), 237-245.
McKee, P. C., Senthilnathan, I., Budnick, C. J., Bind, M. A., Antonios, I., & Sinnott-Armstrong, W. (2024). Fear of Missing Out's (FoMO) Relationship With Moral Judgment And Behavior. PLoS One, 19(11).
Michalak, J., & Stypiński, M. (2023). Methods of Manipulation Used in Advertising. Olsztyn Economic Journal, 18(2), 195-206.
Miyazaki, S. (2012). Algorhythmics: Understanding Micro-Temporality In Computational Cultures. Computational Culture. [Link] [Link]/algorhythmics-understanding-micro-temporality-in-computational-cultures/
Mucundorfeanu, M., Balaban, D. C., & Mauer, M. (2024). Exploring The Effectiveness Of Digital Manipulation Disclosures For Instagram Posts On Source Credibility And Authenticity Of Social Media Influencers. International Journal of Advertising, 1-31.
Özuz Dağdelen, E. (2024). Sosyolojik Bakış Açısıyla Veri Bilimcilerin Gözünden Veri İhlali ve Veri Manipülasyonu Ayrımı. Sosyolojinin Geleceği ve Geleceğin Sosyolojisi II. Ulusal Kongre Genişletilmiş Özet Bildiri Kitapçığı Taslağı, 31-36.
Önder, D. (February 8, 2024). Algoritmalar ve Akış Diyagramı. [Link]. https://[Link]/@onderrdogukan/algori%CC%87tmalar-ve-aki%C5%9F-di%CC%87yagrami-79229f88183d Access Date: 20.11.2024
Özkök, Ö. (2019). Sosyal Medyada Sanal Kimlikler: Sosyal Medya Fenomenlerinin Benlik Sunumları Üzerine Bir Araştırma. Yüksek Lisans Tezi. İstanbul Kültür Üniversitesi, Eğitim Enstitüsü, İstanbul.
Putniņš, T. J. (2012). Market Manipulation: A Survey. Journal of Economic Surveys, 26(5), 952-967.

Reaves, S., Bush Hitchon, J., Park, S. Y., & Woong Yun, G. (2004). If Looks Could Kill: Digital Manipulation Of Fashion Models. Journal of Mass Media Ethics, 19(1), 56-71.
Reuille-Dupont, J. C. (2023). The Power of Algorithms and Big Data: A Marketing Perspective on Consumer Manipulation in Business. Portland State University. [Link]
Rickert, T. J. (2024). Ambient Engineering: Hyper-Nudging, Hyper-Relevance, and Rhetorics of Nearness and Farness in a Post-AI Algorithmic World. Rhetoric Society Quarterly, 54(5), 413-430.
Rohach, O., & Rohach, I. (2021). Manipulation and Persuasion in Business Advertising. Research Trends in Modern Linguistics and Literature, 4, 47-61.
Saurwein, F., & Spencer-Smith, C. (2021). Automated Trouble: The Role Of Algorithmic Selection In Harms On Social Media Platforms. Media and Communication, 9(4), 222-233.
Sayılı, A., & Dosay, M. (1991). Hârezmi ile Abdülhamid İbn Türk ve Orta Asya'nın Bilim ve Kültür Tarihindeki Yeri. Erdem Dergisi, 7(19), 101-214.
Senemoğlu, O. (2017). Tüketim, Tüketim Toplumu ve Tüketim Kültürü: Karşılaştırmalı Bir Analiz. İnsan ve İnsan, 4(12), 66-86.
Shekhar, H., Seal, S., Kedia, S., & Guha, A. (2018). Survey on Applications of Machine Learning in the Field of Computer Vision. (Ed. Jyotsna Kumar Mandar & Debika Bhattacharya). Emerging Technology in Modelling and Graphics. Springer. 667-678.
Singh, V., Vishvakarma, N. K., & Kumar, V. (2024). Unveiling Digital Manipulation And Persuasion In E-Commerce: A Systematic Literature Review Of Dark Patterns And Digital Nudging. Journal of Internet Commerce, 23(2), 144-171.
Strang, W. A., Lusch, R. F., & Laczniak, G. R. (2014). Consumer Manipulation: Are Marketers Building A Monster? (Ed. Venkatakrishna V. Bellur). In Proceedings of the 1980 Academy of Marketing Science (AMS) Annual Conference. USA: Springer. 248-253.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, Autonomy, And Manipulation. Internet Policy Review, 8(2), 1-22.
Susser, D., Roessler, B., & Nissenbaum, H. (2019a). Online Manipulation: Hidden Influences In A Digital World. Geo. L. Tech. Rev., 4(1).
Şen, B. (2024). Pandemi Döneminde Dijitalleşmenin Tüketici Davranışlarına ve E-Ticaret Stratejilerine Etkisi. Tokat Gaziosmanpaşa Üniversitesi Turhal Uygulamalı Bilimler Fakültesi Dergisi, 2(2), 11-20.
Tan, P. L., Tjiptono, F., & Tan, S. Z. (2024). Fear More Or Fear No More: Examining The Emotional And Behavioral Consequences Of FOMO And JOMO. Asia Pacific Journal of Marketing and Logistics, 1-22.

Tekke, A., & Lale, A. (2021). Sosyal Medyada Etik, Bilgi Manipülasyonu ve Siber Güvenlik. Akademik İncelemeler Dergisi, 16(2), 44-62.
Yılmaz, Y., & Tatoğlu, M. F. (2024). Televizyon Programlarında İzleyici İhtiyaçlarının Manipülasyonu: MasterChef Türkiye Analizi. Kocaeli Üniversitesi İletişim Fakültesi Araştırma Dergisi, (24), 61-79.
Yu, P., Xia, Z., Fei, J., & Lu, Y. (2021). A Survey On Deepfake Video Detection. IET Biometrics, 10(6), 607-624.
Yurtsever, A. E. (2023). Manipülatif Pazarlamanın Z Kuşağının Davranışsal Niyetleri ve Tüketim Alışkanlıklarına Etkileri. (Ed. Murat Akın). İSAD Publishing House. [Link]
Yurtsever, A. E., & Murat, A. (2022). Cep Telefonu Şirketlerinin Kullandıkları Manipülatif Satış Tekniklerinin Z Kuşağındaki Tüketicilerin Davranışsal Niyetleri ve Tüketim Alışkanlıkları Üzerindeki Etkisi. Social Science Development Journal (SSD Journal), 7(33), 257-283.
Quinelato, P. D. (2024). Consumer Manipulation Through Behavioral Advertising: Regulatory Proposal By The Data Services Act. Brazilian Journal of Law, Technology and Innovation, 2(1), 1-24.
Wertenbroch, K., Schrift, R. Y., Alba, J. W., Barasch, A., Bhattacharjee, A., Giesler, M., & Zwebner, Y. (2020). Autonomy In Consumer Choice. Marketing Letters, 31, 429-439.
[Link]. Algoritma. [Link] Access Date: 03.01.2025
Witte, J. (2024). Consumer Manipulation – A Definition, Classification And Future Research Agenda. Journal of Information, Communication and Ethics in Society. doi:10.1108/JICES-09-2023-0119
Witzenberger, K. (2017). What Users Do to Algorithms. Media and Communication Studies, Lund University. [Link] Web_version_RR_2017_1_2_.pdf#page=19
Vangeli, M. (2023). The Philosophy of Algorithmic Manipulation: Unveiling the Influence of Social Media Algorithms. Uppsala University, Disciplinary Domain of Humanities and Social Sciences, Faculty of Arts, Department of Philosophy. [Link] [Link] Access Date: 04.01.2024
Vukasović, T., & Ljubičić, K. (2022). Marketing Manipulation in the 21st Century. In 5th International Scientific Conference ITEMA 2021 – Conference Proceedings.
Chapter 3

Privacy and Artificial Intelligence

Engin Yavuz1

Abstract
The areas in which artificial intelligence is used are constantly expanding. It has started to play a major role especially in helping brands increase sales by influencing consumers. In an increasingly competitive environment, it is vital to reach more consumers and increase sales rates. Because products and services are easy to access and options are abundant, consumers prefer the brand option they find closest to themselves. Therefore, the demand for personalized products and services is increasing day by day, and brands need more information to offer personalized products to consumers. Accessing consumers' personal data and drawing meaningful inferences from it is very difficult, especially with large amounts of data. Artificial intelligence makes brands' work much easier at this point. Consumer data, most of which is collected on the internet, can be analyzed with artificial intelligence to obtain meaningful information. Recently, however, concerns have arisen about the collection and use of consumer data. Personal data may be improperly shared with third parties and may lead to legal problems. Artificial intelligence developers should ensure that personal data is collected and used in line with principles such as transparency, fairness, accountability, privacy and security. An effective control system should be established, and the sanctions to be applied against violations should be clearly stated. Support from governments and global cooperation can contribute to reducing personal data breaches. In this section, within the scope of privacy and artificial intelligence, the definition of artificial intelligence, the privacy of consumers and artificial intelligence, the views of the European Data Protection Board on the use of personal data, the views of the OECD on the use of personal data, and the Council of Europe's ethical principles on artificial intelligence are examined under separate headings.

1 Dr. Amasya Üniversitesi, engin84yavuz@[Link], [Link]

[Link]


1. Introduction
In recent years, consumer concerns about ethical issues in online shopping have continued to grow. Privacy and security are considered by consumers and researchers to be the most important ethical issues (Román and Cuestas, 2008). The possibility of artificial intelligence violating privacy increases day by day. Although privacy is considered a fundamental right, it should be taken into account that artificial intelligence may lead to violations of privacy protection (Akyol and Özkan, 2023: 122).
Digital technology environments are used more and more in daily life. However, the data collected, archived, analyzed and interpreted by artificial intelligence built from platforms, software, code and algorithms has put the privacy of private life into question (Willson, 2016). With these developments, it can be argued that artificial intelligence prepares the ground for the erosion of personal freedoms and privacy (Gül, 2018: 20).
Technological developments bring about changes in daily life. Artificial intelligence, one of these developments, carries risks for human rights alongside the areas where it is beneficial. Malicious use is possible, especially in areas such as trade, communication and cybercrime. This situation prompts politicians, states and artificial intelligence developers to develop new strategies (Dost, 2023: 1275).
Having data, which is the basis of artificial intelligence, also brings
power. In today’s increasingly competitive environment, it is critical for both
private sector organizations and governments to gain power. Therefore,
governments and private organizations may want to access personal data as
soon as possible and use it for their own interests. By capturing the personal
data of the target audience, they can use it to market products and services in
accordance with the wishes of consumers and to increase their sales. When
this data is not used properly, it may mean a violation of consumers’ privacy
(Varkonyi, 2018: 3).
Artificial intelligence technologies can easily obtain sensitive personal data
by processing data that is not considered sensitive. In other words, artificial
intelligence technologies can cause violation of private life by converting
non-sensitive data into sensitive data. Personal data can be extracted through
social media platforms such as Facebook, Instagram, etc. and consumers’
virtual identities can be captured. With this data, consumers’ interests,

tendencies and expectations can be learned and used in commercial activities (Abudureyimu and Oğurlu, 2021: 771).
In this section, within the scope of privacy and artificial intelligence, the
definition of artificial intelligence, the privacy of consumers and artificial
intelligence, the views of the European Data Protection Board on the use
of personal data, the views of the OECD on the use of personal data, and the Council of Europe's ethical principles on artificial intelligence are examined under separate headings.

2. Artificial Intelligence
There is not yet an agreed definition of artificial intelligence. However,
according to the definition by the European Commission of Human Rights,
artificial intelligence is “used as an umbrella term to refer to a set of sciences,
theories and techniques dedicated to improving the ability of machines to do
things that require intelligence. An artificial intelligence system is a machine-based system that produces recommendations, predictions or decisions for a given set of goals" (European Commission of Human Rights, 2019).
The concept of artificial intelligence should be characterized as an
algorithm-supported artificial machine that can learn in a complex and
variable field, make decisions, influence those around it, and transfer the
information and decisions it obtains to users, that is, an entity with the
ability to think (Gezici, 2023: 112).
Defined briefly, artificial intelligence is a technology-based system that analyzes real-time product and service data obtained from both digital and physical channels and makes personalized recommendations to solve consumers' complex problems and answer their questions, enabling them to decide between options (Xu et al., 2020).

3. Consumer Privacy and Artificial Intelligence


Artificial intelligence uses algorithms to calculate the probabilities of likely outcomes from the data it obtains and tries to extract useful information as a result. For example, artificial intelligence algorithms are used extensively in areas such as tourism, logistics, retail and e-commerce to monitor competitors' prices and set pricing policies accordingly, to determine consumer preferences, and to analyze the data obtained (Öz, 2020: 40).
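As a concrete illustration of such price-monitoring algorithms, the sketch below applies a simple repricing rule to observed competitor prices. The margin floor, undercut rate, and prices are invented for the example and are not drawn from any real retailer's policy.

```python
# Undercut the cheapest observed competitor slightly, but never drop
# below a minimum-margin floor over our own cost.
def reprice(our_cost, competitor_prices, min_margin=0.10, undercut=0.02):
    floor = our_cost * (1 + min_margin)
    target = min(competitor_prices) * (1 - undercut)
    return round(max(floor, target), 2)

print(reprice(our_cost=80.0, competitor_prices=[99.9, 104.5, 97.0]))
# 95.06 -> just below the cheapest rival while keeping a 10% margin
```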
Artificial intelligence also makes extensive use of consumers' personalized data in shopping and entertainment: Amazon's purchase recommendations and Netflix's efforts to direct the audience prioritize consumers in terms of content and make consumer-specific recommendations. At this point, consumers evaluate what to buy or not to buy, and which product or service will benefit them, through artificial intelligence rather than through individual reflection (Eltimur, 2022: 578).
Artificial intelligence in marketing is frequently used in areas such as analyzing consumer behavior and consumer experience, providing personalized products and services, and managing consumer relationships quickly and effectively before, during and after sales. In consumer research, capabilities such as understanding, speech and cognition are performed by artificial intelligence algorithms (Huang & Rust, 2022: 210).
With the increasing use of social media, people share their daily lives
and experiences on these platforms. These shares are increasing day by day.
Large amounts of data can reach many people simultaneously with the
internet. Consumers’ personal information, what they like and dislike, and
their thoughts are very important for marketing practitioners. It becomes
difficult to process large amounts of data into meaningful information, and artificial intelligence is one of the most effective tools for making this difficult task easy (Binbir, 2021: 315).
However, in addition to these benefits, artificial intelligence may use
personal data improperly to increase consumer satisfaction. The information
obtained through digital traces while browsing the internet and the
advertisements developed through this information are just one of the many
violations in the field of artificial intelligence. The large amount of data
collected from consumers also raises issues related to their private lives. These
problems are not only related to data management. In addition, directing consumer preferences and encouraging consumers to buy the products and services that businesses want to sell is one of the steps that restrict consumers' freedom. For
example, with the consumer data obtained, artificial intelligence algorithms
can identify certain consumers and manage their perceptions (Gonçalves et
al., 2023: 315).
Aleksandr Kogan, a researcher at the University of Cambridge, collected
users’ personal data on Facebook without their consent and transmitted it
to Cambridge Analytica. The personal data obtained targeted consumers
through advertising. After the personal data breach was revealed, the
company was shut down. In order to prevent this vulnerability, Facebook
blocked Cambridge Analytica’s access and launched an investigation into
applications that similarly had access to personal data. It also restricted third-
party developers from accessing personal profiles (The Guardian, 2018).

4. Various Organizations' Views on Privacy and Artificial Intelligence
Although obtaining personal data provides great advantages to brands,
the use and sharing of data without taking the necessary privacy and security
measures can harm consumers as well as brands (Danışman, 2023: 161). In
this section, the views of the European Data Protection Board, the OECD
and the Council of Europe on the use of personal data are given.

4.1. Opinions of the European Data Protection Board on the Use of Personal Data
On December 18, 2024, the European Data Protection Board (EDPB) issued an opinion on the use of personal data in artificial intelligence models. According to this opinion (European Data Protection Board, 2025):
1) When and how AI models will be considered anonymous:
Whether an AI model is anonymous depends on the decision of each country's data protection authority and may need to be assessed on a case-by-case basis. For an AI model to be considered anonymous:
a) it must not be possible to directly or indirectly identify the persons whose data were used in the creation of the model, and
b) it must not be possible to obtain such personal data from the model through querying (a toy probe in this spirit appears after this list).
2) Procedures required to develop or use artificial intelligence models:
AI developers should provide a representative whom users can contact when necessary, and the measures needed to increase cybersecurity should be taken. Such measures can benefit users and provide legal protection, but only if the processing of personal data is genuinely necessary and personal rights are respected.
3) Unlawful development of artificial intelligence:
Unless artificial intelligence models are duly anonymized, the use of
personal data may be unlawful.
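As flagged under point 1 above, the toy probe below illustrates the spirit of condition (b): it queries a text-completion model with prefixes of known training records and reports any record the model regurgitates. The model stub and records are hypothetical; real anonymity assessments are far more involved.

```python
# Toy extraction probe: prompt the model with the start of each training
# record and check whether it completes the rest verbatim.
def leaks_training_data(model_complete, training_records, prompt_len=10):
    leaked = []
    for record in training_records:
        prompt, rest = record[:prompt_len], record[prompt_len:]
        if rest and model_complete(prompt).startswith(rest):
            leaked.append(record)
    return leaked

# Hypothetical "model" that has memorized one personal record.
MEMORY = ["Jane Doe, 1 Main St"]
def toy_model(prompt):
    for rec in MEMORY:
        if rec.startswith(prompt):
            return rec[len(prompt):]
    return ""

print(leaks_training_data(toy_model, ["Jane Doe, 1 Main St", "aggregate stats row"]))
# ['Jane Doe, 1 Main St'] -> such a model could not be considered anonymous
```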

4.2. OECD's Views on the Use of Personal Data

According to the report published by the OECD, there is a need for global coordination to solve the problems related to the data used by artificial intelligence. The report highlights six issues for harmonizing developments in the field of artificial intelligence with privacy principles (Maxwell et al., 2024):

1) Privacy in the use of artificial intelligence
Complaints about privacy violations in the use of artificial intelligence are
increasing and the measures to be taken to address this issue are important.
Particular attention needs to be paid to privacy in the conceptualization,
development and deployment phases of artificial intelligence actions. It is
important to comply with privacy rules from the early stages of AI development
and design, and to make proactive efforts to close gaps in implementation.
Building bridges between societies, making privacy a permanent policy in
AI development and supporting privacy-oriented innovation are among the
critical issues (Shrestha and Joshi, 2025; Mooradian et al., 2025; Maxwell et al., 2024).
2) Cooperation between communities
Terminological and conceptual misunderstandings in AI privacy and
policies can lead to ambiguities. Therefore, it is important to build sustainable
interactions between AI communities. In this way, terminological and
conceptual harmonization between AI communities on privacy-related
issues can be achieved and contribute to the development of AI (Welsh et al., 2024; Maxwell et al., 2024).
3) Justice
It is very important for privacy that AI models process personal data fairly when reaching conclusions. Principles such as limiting data collection, purpose specification, openness, and the quality of the data collected are critical to ensuring fairness (Verma et al., 2024; Maxwell et al., 2024).
4) Transparency and explainability
Obtaining consent when processing individuals' data and informing them about how it is used is one of the most important issues in artificial intelligence development. As transparency increases, so does trust, making it easier for users to make informed decisions. Practical solutions such as model cards can be produced to ensure that the information provided about an AI system is understandable and meaningful (Cheong, 2024; Maxwell et al., 2024); a minimal sketch of such a card follows this list.
5) Accountability
It is also important to integrate privacy and risk-management principles into AI applications at the design stage and, as a result, to be accountable and comply with national laws. Programs designed to detect privacy risks can also help prevent breaches (Moch, 2024; Maxwell et al., 2024).

6) Global cooperation
Global synchronization, guidance and collaboration are needed to mitigate the privacy concerns raised by AI. While global collaboration has improved, increased efforts can help prevent privacy violations. In addition, Privacy Enhancing Technologies (PETs) can bridge this gap to a large extent by supporting data management and privacy safeguards (Al-Billeh et al., 2024; Maxwell et al., 2024).
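As a companion to the model-card idea mentioned under point 4, the following is a minimal, hypothetical model card expressed as plain data. The field names and values are illustrative assumptions in the general spirit of model-card proposals, not a schema required by the OECD or any regulator.

```python
# A sketch of the disclosures a model card might carry for an AI system
# that processes personal data; every value here is a placeholder.
model_card = {
    "model_name": "product-recommender-v1",
    "intended_use": "Ranking product suggestions for logged-in shoppers",
    "out_of_scope_uses": ["credit decisions", "health advice"],
    "training_data": "Anonymized purchase histories, 2022-2024 (assumed)",
    "personal_data_processed": ["purchase history", "browsing events"],
    "legal_basis": "consent",
    "known_limitations": "Underperforms for users with fewer than 5 interactions",
    "fairness_checks": "Recommendation rates compared across age groups",
    "contact": "privacy@example.com",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```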

4.3. Council of Europe Ethical Principles on Artificial Intelligence


According to the guidelines published by the Council of Europe in 2019,
a trustworthy AI must meet the following 7 basic requirements (European Commission, 2019):
1) Oversight of human activities:
AI should empower individuals and support them in decision-making. In
addition, it is important that artificial intelligence systems are controllable.
2) Technical infrastructure and security:
Artificial intelligence systems need to be strong and stable in terms of
infrastructure. It is also critical to prevent security breaches.
3) Privacy and data management:
Artificial intelligence systems should provide legitimate access to data
and pay due attention to its quality and confidentiality.
4) Transparency:
Artificial intelligence applications should follow a transparent management policy. Users should be informed about whether they are interacting with an AI system or not. In addition, users should be informed about the capabilities and limitations of artificial intelligence applications.
5) Fairness:
AI models should treat users equally and fairly. It should be ensured that
all users can access AI models without any discrimination.
6) Social contribution:
Artificial intelligence models should benefit all of humanity, with future
generations in mind. They should also be respectful of the environment and
take care to provide social benefits.

7) Responsibility:
Mechanisms are needed for artificial intelligence applications to fulfill their responsibilities and to be accountable within the framework of these responsibilities. It is also important that the data and design processes can be audited.

Conclusion
Artificial intelligence has been used extensively in marketing in recent
years. Due to increased competition, businesses want to increase their
market share or at least maintain their status. Consumers have many options
when making purchasing decisions. Considering that consumers want to
choose the brands that provide the most suitable personalized products/
services with the increase in their alternatives, businesses may have to
benefit from artificial intelligence in order to fulfill the wishes of consumers.
Artificial intelligence can collect consumer data, analyze it, and then make it
available to marketing practitioners. Collecting, analyzing and transforming
consumer data into useful information is a complex and difficult process
that can only be accomplished through AI-enabled applications with current
technology (Neves and Pereira, 2025).
Artificial intelligence applications should pay attention to legal
regulations when obtaining consumer data and take the necessary measures
for the use of personal data. When collecting personal data, it should be
transparently explained to consumers what data is collected and for what
purpose. Businesses should not ignore the principle of transparency in order
to gain the trust of consumers. In addition to affecting consumer trust,
a breach of personal data may put businesses in a difficult legal position. Therefore, businesses that follow an artificial intelligence-supported
marketing strategy should pay close attention to the security of personal data
and the privacy of private life (Kır, 2024: 71).
Artificial intelligence applications used by businesses must fulfill ethical
and legal responsibilities when collecting personal data from consumers.
Inappropriate collection of personal data and sharing it with third parties
without permission may lead to personal data breach and may result in legal
sanctions. Ensuring global cooperation, together with appropriate state sanctions against personal data breaches, can prevent the improper collection of personal data. In addition, artificial intelligence
developers should have the ability to communicate with consumers when
necessary and build infrastructure systems to prevent the improper collection
of personal data (Muvva, 2025).

References
Abudureyimu, Y., & Oğurlu, Y. (2021). Yapay zekâ uygulamalarının kişisel verilerin korumasına dair doğurabileceği sorunlar ve çözüm önerileri. İstanbul Ticaret Üniversitesi Sosyal Bilimler Dergisi, 20(41), 765-782. doi:10.46928/iticusbe.863505
Al-Billeh, T., Hmaidan, R., Al-Hammouri, A., & Makhmari, M. (2024). The risks of using artificial intelligence on privacy and human rights: unifying global standards. Jurnal Media Hukum, 31(2), 333–350. https://[Link]/10.18196/jmh.v31i2.23480
Akyol, İ. T., & Özkan, N. A. Ş. (2023). Yapay zeka uygulamalarının yerel hizmet sunumuna etkisi. Tokat Gaziosmanpaşa Üniversitesi Sosyal Bilimler Araştırmaları Dergisi, 120-134.
Avrupa İnsan Hakları Komisyonu. (2019). Unboxing artificial intelligence: 10 steps to protect human rights. [Link]intelligence-10-steps-to-protect-human-rights-reco/1680946e64 Access Date: 16.02.2025
Binbir, S. (2021). Pazarlama çalışmalarında yapay zeka kullanımı üzerine betimleyici bir çalışma. Yeni Medya Elektronik Dergisi, 5(3), 314-328.
Cheong, B. C. (2024). Transparency and accountability in AI systems: safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6. [Link]
Danışman, G. T. (2023). Artificial Intelligence in Terms of Digital Marketing: A Conceptual Examination. In: Fettahlıoğlu, H. S. & Bilginer Ozsaatcı, F. G. (eds.), Digital Transformation of Marketing: Marketing 5.0. Özgür Publications. DOI: [Link]
Dost, S. (2023). Yapay zekâ ve uluslararası hukukun geleceği. Süleyman Demirel Üniversitesi Hukuk Fakültesi Dergisi, 13(2), 1271-1313.
Eltimur, D. E. (2022). İnsan haklarının korunması bağlamında yapay zekâ uygulamaları. Akdeniz Üniversitesi Hukuk Fakültesi Dergisi, 12(2), 559-594.
European Commission. (2019). Ethics guidelines for trustworthy AI. [Link]lines-trustworthy-ai Access Date: 18.02.2025
European Data Protection Board. (2025). [Link] [Link]/news/news/2024/edpb-opinion-ai-models-gdpr-principles-support-responsible-ai_en Access Date: 20.02.2025
Gezici, H. S. (2023). Kamu yönetiminde yapay zekâ: Avrupa Birliği. Uluslararası Akademik Birikim Dergisi, 6(2), 111-128.

Gonçalves, A. R., Pinto, D. C., Rita, P., & Pires, T. (2023). Artificial Intelligence and its ethical implications for marketing. Emerging Science Journal, 7(2), 313-327. doi:10.28991/ESJ-2023-07-02-01
Gül, H. (2018). Dijitalleşmenin kamu yönetimi ve politikaları ile bu alanlardaki araştırmalara etkileri. Yasama Dergisi, 36, 5-26.
Huang, M. H., & Rust, R. T. (2022). A framework for collaborative artificial intelligence in marketing. Journal of Retailing, 98(2), 209–223.
Kır, B. (2024). AI Pazarlama: Yeni Bir Paradigma. (Ed. M. Nur Erdem, Cansu Maya). In Marka İletişimi Tüm Boyutlarıyla Marka ve Tüketici Etkileşimi. Palet Yayınları: Konya.
Öz, G. A. (2020). Corona (Covid 19) gölgesinde geleceğin hukuku ve dijital ekonomi çağında rekabet hukuku. Ankara Üniversitesi Hukuk Fakültesi Dergisi, 69(1), 33-56.
Shrestha, A. K., & Joshi, S. (2025). Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives. [Link]
Maxwell, W., Girot, C., & Perset, K. (2024). Six crucial policy considerations for AI, data governance, and privacy: Insights from the OECD. [Link]nance-and-privacy Access Date: 19.02.2025
Moch, E. (2024). Liability Issues in the Context of Artificial Intelligence: Legal Challenges and Solutions for AI-Supported Decisions. East African Journal of Law and Ethics, 7(1), 214–234. [Link]eajle.7.1.2518
Mooradian, N., Franks, P. C., & Srivastav, A. (2025). The impact of artificial intelligence on data privacy: a risk management perspective. Records Management Journal. [Link]
Muvva, S. (2025). Ethical AI and responsible data engineering: A framework for bias mitigation and privacy preservation in large-scale data pipelines. Indian Scientific Journal of Research In Engineering and Management, 09(01), 1–8. [Link]
Neves, J., & Pereira, M. C. (2025). A marketing perspective on the roles of AI and ML in shaping contemporary programmatic advertising. International Journal of Digital Marketing, Management, and Innovation (IJDMMI), 1(1), 1-14. [Link]
Román, S., & Cuestas, P. (2008). The perceptions of consumers regarding online retailers' ethics and their relationship with consumers' general Internet expertise and word of mouth: a preliminary analysis. Journal of Business Ethics, 83, 641–656.
The Guardian. (2018). Cambridge Analytica closing after Facebook data harvesting scandal. [Link] uk-news/2018/may/02/cambridge-analytica-closing-down-after-facebook-row-reports-say Access Date: 15.02.2025.
Xu, Y., Shieh, C., Van Esch, P., & Ling, I. (2020). AI customer service: Task complexity, problem-solving ability, and usage intention. Australasian Marketing Journal, 28(4). [Link].1016/[Link].2020.03.005
Varkonyi, G. G. (2018). Robots with AI: Privacy considerations in the era of robotics. Law 4.0, 1-12. Györ: Széchenyi István University. [Link]derations_in_the_Era_of_Robotics Access Date: 17.02.2025.
Verma, S., Paliwal, N., Yadav, K., & Vashist, P. C. (2024). Ethical Considerations of Bias and Fairness in AI Models. 2nd International Conference on Disruptive Technologies (ICDT), 818–823. [Link]icdt61202.2024.10489577
Welsh, C., Román García, S., Barnett, G. C., & Jena, R. (2024). Democratising artificial intelligence in healthcare: community-driven approaches for ethical solutions. Future Healthcare Journal, 11(3), 100165. https://[Link]/10.1016/[Link].2024.100165
Willson, M. (2016). Algorithms (and the) Everyday. Information, Communication & Society.
Chapter 4

Consumer Distrust: Non-Transparent AI Decision-Making Processes

Murat Başal1

Abstract
Today, artificial intelligence (AI) has emerged as a pivotal technology that
influences decision-making processes across various domains, including
digital marketing and customer relations. However, the lack of transparency
in AI systems, coupled with their failure to provide consumers with adequate
clarity regarding their decision-making processes, significantly contributes
to consumer distrust. This uncertainty, often referred to as the “black box
consumers to adopt a skeptical and cautious attitude toward brands and the
services they offer.
Non-transparent AI decision-making processes can lead to consumer
distrust. Factors such as perceived injustice, data privacy concerns, and ethical
uncertainties are believed to undermine consumer confidence, ultimately
affecting purchasing decisions. Consumers increasingly demand transparency,
auditability, and adherence to ethical principles to trust automated decision-
making systems. Therefore, it is essential for brands to embrace the principle
of transparency, enhance the comprehensibility of AI-based systems, and
elevate their ethical responsibilities to restore consumer trust.
The growing prevalence of AI-based decision-making processes presents
numerous opportunities for consumers; however, it also raises significant
trust issues. Non-transparent AI systems impede consumers’ ability to
comprehend how decisions are made, resulting in diminished levels of trust.
Consumers often struggle to understand how artificial intelligence (AI)
operates and the criteria it employs to make decisions. AI systems can yield
unfair or discriminatory outcomes due to biases present in the datasets
utilized. Furthermore, uncertainties surrounding the use of personal data
erode consumer trust in AI-based systems.

1 Assist. Prof., Istanbul Gelisim University, mbasal@[Link], ORCID: 0009-0004-5666-9560

[Link]

AI can sometimes mislead consumers by relying on inaccurate or incomplete information. To address these challenges, it is essential to enhance transparency policies, develop explainable AI models, and implement regulations that safeguard consumer rights. By taking these measures, we can bolster consumer trust in AI-based systems.

1. Introduction
Today, individuals' consumption habits have begun to change through algorithms and analytical solutions developed with artificial intelligence technologies (Acar & Tanyıdızı, 2022). Thanks to these algorithms, it is possible to direct people to products that they do not actually need as if they were essential, and thus to change their consumption habits. In addition, it is clear that artificial intelligence will play a very important role in winning customers who have a positive experience, are more satisfied, and whose loyalty and satisfaction are secured as a result of functions such as personalization, real-time sentiment analysis, decision-making and harmonization (Bayuk & Demir, 2019; Biçkin et al., 2021).
In recent years, consumers have been doing most of their shopping online. Artificial intelligence is used on these shopping sites and social media platforms, which arouses interest and curiosity among consumers. In response to this interest, virtual assistants and chatbots supported by artificial intelligence technologies are changing how people search for information, evaluate options and make purchases through personalized recommendations. As a result, the way people consume goods and services is increasingly guided by artificial intelligence (Akbaba & Gündoğdu, 2021; Aylak et al., 2021).

2. Conceptual Framework

2.1. Consumer Distrust


Consumer confidence appears as an important element in the success
of the relational marketing approach. Enterprises need to continually obtain information about consumer expectations, which are constantly changing
and differentiating. Improving relations with consumers also brings positive
economic results in the long term. Trust is subjective as it is based on the
beliefs and behaviors of consumers. Consumers feel loyalty to brands that
give them confidence. In order to build trust, both the supplier and the
buyer need to fulfill their promises. A stable brand personality and the nature
of the products or services will increase the brand’s credibility by reducing
the emotional risk that buyers experience when making purchases (Binbir,
2021; Borgesius, 2017).

The main purpose of marketing is to create a deep bond between the consumer and the brand, and the most important element of this bond is
trust. Trust is considered the most critical feature a brand can have, while
at the same time it forms the cornerstone of the relationship and is seen as
one of the most sought-after qualities in a relationship (Sucu, 2019; Şahinci,
2021). The existence of trust increases consumers’ loyalty and commitment
to the brand, which supports the sustainable success of the brand in the
long term. Consumer skepticism is often used to refer to consumers’ distrust
of marketers’ intentions, specific advertising claims, and public relations
efforts. This concept reflects consumers’ developing a negative attitude
towards marketing communications and their skepticism of marketers’
actions (Şalvarlı & Kayışkan, 2022).
This process leads to the development of trust and satisfaction. Especially
when the belief that the brand supports consumers increases and expectations
are met, a solid bond forms between the buyer and the supplier (Zengin, 2020). Trust is a value that businesses should protect. In the event
of any crisis that businesses will experience, trust in the brand will be shaken,
as well as other brand components will be affected by this situation (Erdem,
2022; Zengin, 2021).
Trust in the business is the basis of the relationship between the supplier and the buyer. Consumers tend to buy products or services from a trusted business, and research shows that buying products from brands that inspire confidence increases consumers' motivation. Thus, in the process of purchasing products or services, trust acts as a risk-reducing factor (Binbir, 2021; Borgesius, 2017).
Consumer trust refers to the belief that consumers have in a business,
brand or seller and the feelings of security and trust in their interactions
with these parties. This concept affects the trust that consumers have in
the other party during product or service purchases and how this trust is
formed. Consumer trust plays a critical role in businesses achieving positive
results such as customer loyalty, repeat purchase behaviors and positive
word-of-mouth marketing. Consumer trust refers to the level of trust and
belief that consumers have in a brand. Brand trust is positively correlated
with brand loyalty and is defined as “the average consumer’s willingness to
trust the ability of the brand to fulfill its stated function”. With the trust that
a brand has gained from consumers, consumers are generally less inclined to
question and tend to interact with the brand without any doubt (Şalvarlı &
Kayışkan, 2022).

Therefore, consumers who trust a brand are more likely to be exposed to greenwashing practices because consumers feel less risk and have less doubt.
Abuse of consumer trust can occur by presenting misleading information or
hiding information about environmental impacts. When such deception is detected, consumer trust and the consumer's positive perception may be damaged (Sucu, 2019; Şahinci, 2021).
While gains are made as long as trust continues, losses may occur when trust ends. The end of trust in a relationship damages reputation, which is a threatening situation. Therefore, in a trust relationship based on calculation, the fear of loss is a greater determinant than the benefit that a gain would bring (Binbir, 2021; Borgesius, 2017).
In a trust relationship based on information, the cooperation of the other party is not necessary for trust to form; predictability is sufficient. Information-based trust develops over time as communication and the relationship develop.
In trust based on identification, the parties understand and appreciate each other's desires well. In line with this common understanding, people work for each other's benefit. The parties can stand in for each other and are confident that their own interests are protected without the need for control (Uma et al. 2020; Topoyan, 2020).
Artificial intelligence (AI)-based systems guide decision-making processes in many areas of daily life. However, the lack of transparency of these systems creates a significant sense of distrust among consumers. Uncertainty about how artificial intelligence reaches its decisions, what data it draws on, and how fairly that data is processed makes it difficult for consumers to trust these systems (Erdem, 2022; Zengin, 2021).
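To show in miniature what a more transparent alternative to such a black box can look like, the hedged sketch below pairs every scoring factor with a human-readable reason, so a consumer can see why an automated decision was reached. The rules, weights, and approval threshold are invented for the example and do not represent any real system.

```python
# Each rule carries a weight and a plain-language justification; the
# decision output therefore explains itself. All values are illustrative.
RULES = [
    ("long_customer_history", 0.30, "Account older than 2 years"),
    ("on_time_payments", 0.40, "No late payments in 12 months"),
    ("high_return_rate", -0.35, "Return rate above 40%"),
]

def decide(features, threshold=0.5):
    score, reasons = 0.0, []
    for name, weight, description in RULES:
        if features.get(name, False):
            score += weight
            reasons.append(f"{'+' if weight > 0 else '-'} {description}")
    return {"approved": score >= threshold, "score": round(score, 2), "reasons": reasons}

print(decide({"long_customer_history": True, "on_time_payments": True}))
# {'approved': True, 'score': 0.7, 'reasons': ['+ Account older than 2 years',
#  '+ No late payments in 12 months']}
```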
Developing a new product or improving an existing product offers a
competitive advantage to businesses and brands in terms of identifying
potential customers and satisfying existing customers. On the other hand,
the products developed or improved deliver performance increases through the use of technology (Zengin, 2020). By predicting individuals' physical and emotional behaviors with artificial intelligence algorithms, the new product development process requires less time and effort and achieves higher success.
In addition, artificial intelligence applications enable higher quality, more
relevant and more personalized product and service delivery to the user by
providing customization and personalization and hyper-personalization on
products and services (Zengin, 2021).

Thus, it provides an advantage in terms of ensuring customer satisfaction. The value that the consumer is willing to pay to own or use any good, service or idea is defined as the price. The price, determined according to the direct relationship between cost and profit, directly affects the stability of enterprises and the value and quality of the services, products and ideas offered.

2.1.1. Factors Affecting Consumer Confidence


Trust has always been an important element in influencing consumer
behavior. In an uncertain environment such as internet-based e-commerce
transactions, the issue of trust becomes even more important. Consumers
may differ in terms of their tendency to trust and their willingness to trust
(Ton & Su, 2018). The tendency or willingness to trust is influenced by
consumers’ awareness of internet fraud and their past experiences with both
the internet and other risky situations. From a marketing point of view, trust
is important in terms of customer relationship management. For this reason,
over the past decade, the issue of trust has become an important topic in
consumer behavior research (Toufailiy et al. 2013; Tsiotsou, 2016).
While trust depends on face-to-face personal relationships in traditional commerce, transaction processes are the most important factor in building trust in e-commerce. The key to success for e-commerce businesses is to create an environment in which consumers can be confident in all transactions and to establish reliable transaction processes (Ton & Su, 2018).
convince potential consumers that the information obtained through
commercial transactions will remain confidential. E-commerce businesses
use various security mechanisms to increase their perceived credibility. These
include privacy policy notices, third-party certification programs, the quality
of website design, consumer testimonials or reviews, recommendations from
reference groups, and money-back guarantees. The trust that a consumer
attributes to a product or brand image is based on their experience with that
brand (Briley & Aaker, 2006).
Therefore, trust as an experience characteristic will be influenced by the
consumer’s assessment of their direct and indirect contact with the product/
brand (advertising, word-of-mouth, brand reputation) (Bianchi & Mathews,
2016). Consumer trust in e-commerce is highly dependent on feedback
mechanisms such as consumer perceptions and consumer evaluations. Online
consumer ratings are typically offered by previous consumers of the product or service, who rate their experience on a scale (ranging from "one star - bad experience" to "five stars - excellent experience") (Bravo et al. 2007).
Gaining consumers’ trust has long been considered one of the important
issues by marketers. This situation requires businesses to take various risks
(such as developing new products, providing support services and entering
new markets) in order to increase their market and financial performance
(Chen, 2018). However, recent studies have clearly shown that consumer
trust is based in part on ethical considerations related to the business’s
marketing activities. This is because, compared to other business functions,
marketing is more exposed to external environmental forces and therefore
faces some of the greatest ethical challenges (Darley et al. 2010; Davis,
2017).
In shopping, trust refers to one party's confidence in the reliability and honesty of the other party. Trust reduces the uncertainty that exists in the present. Therefore, trust is important in alleviating consumers' fears of deception and uncertainty in trading (Ginosar & Ariel, 2017; Gregory et al. 2017).
In addition to a person’s propensity towards trust and other characteristics,
general economic, demographic and geographical factors also influence
the tendency to make online payments (Hallikainen & Laukkanen, 2018).
Today, while millions of consumers already shop online, how much the number of online shoppers would increase if full trust in the security system were ensured remains an important topic of discussion (Ha & Stoel, 2012; Hagberg et al. 2016).
Trust is sensitive and subjective because it is based on consumers’ beliefs
rather than facts. To build trust, suppliers need to keep their promises (Hanna
et al. 2019). A consistent brand personality will reduce the emotional risk that buyers experience when they buy a brand and increase the credibility of the goods' or services' features (Falk & Hagsten, 2015; Fang et al. 2014).
Building relationships of trust is a challenge that may require e-commerce businesses to go beyond purely profit-oriented thinking to set themselves apart from their competitors (Hasan, 2010; Henseler et al. 2015).
Ability describes the consumer's belief that the relationship partner, the e-commerce site, has the necessary capabilities to perform the job efficiently and effectively (Chen, 2018). It can also be broadly defined as a set of skills and traits in a particular field. Ability is also called competence and involves the belief that the other party can perform what is expected. This belief reduces uncertainty in e-commerce. Ability is the feature that expresses that consumer needs can be met by the e-commerce business (Darley et al. 2010; Davis, 2017).
Benevolence means the goodwill of the business to meet the needs of the consumer while preventing harm to the consumer. It is the willingness of a business to put consumer interests above its own, demonstrating a genuine concern for the well-being of consumers (Ginosar & Ariel, 2017; Gregory et al., 2017).
Benevolence also includes the intention to act in the consumer’s favour when a new circumstance arises to which the business has made no prior commitment. In this sense, consumers should be convinced that the business works for their good beyond considerations of profitability (Falk & Hagsten, 2015; Fang et al., 2014).
Honesty, on the other hand, refers to the degree to which businesses fulfill the promises made to consumers. Dishonest e-commerce operators and a lack of privacy and/or security in the online environment undermine efforts to win consumers over (Bianchi & Mathews, 2016). Integrity in e-commerce reflects the consumer belief that the business will deliver products and services without problems, keep private and financial information confidential, and keep its promises on similarly sensitive issues (Bravo et al., 2007; Briley & Aaker, 2006).
The rapid growth of e-commerce depends on many factors, chief among them consumers’ trust in e-commerce sites, products and services (Gregory et al., 2017). Many studies emphasize that trust is essential for consumers to engage enthusiastically with e-commerce. For example, trust affects whether consumers are willing to connect with a website and share information, and a high level of trust has been shown to be linked to a high degree of purchase intention (Fang et al., 2014; Ginosar & Ariel, 2017).
Although internet retailers incur high protection costs to secure their systems, they remain exposed to attacks by cyber fraudsters. Such incidents not only cause revenue losses for the retailer but also foster negative consumer perceptions of transaction security. It is therefore very important to understand, and to measure the return on, the large investments made to increase trust and security so that better action can be taken (Falk & Hagsten, 2015).
2.2. Elements of Insecurity Caused by Non-Transparent AI Decision-Making Processes
Although social change and technology have shaped consumer behavior in every era, developments of the last five years, above all the pandemic, have radically changed consumers’ daily lifestyles and purchasing behaviors (Zengin, 2021). As of 2019, individuals began to exhibit tendencies such as being ageless and natural, spending time alone, consuming more consciously, seeking digital togetherness, feeling like experts, deliberately not chasing whatever is most popular (the Joy of Missing Out, JoMO), being self-sufficient, contributing to the world, and wanting speed (Erdem, 2022). Especially in the years following the pandemic, the use of robots and artificial intelligence has increased in many areas as consumers have become far more discerning in every respect (Zengin, 2020).
In this period, consumers have adopted smart watches, smart homes and smart vacuum cleaners, and their willingness to pay more for such products has grown (Topoyan, 2020). Considering all this, the market share of products equipped with artificial intelligence technologies can be expected to increase. As AI-supported technologies become ever more widespread in human life, significant changes have therefore begun to occur in people’s decision-making processes and behaviors as consumers (Uma Devi & Paul, 2020).
In recent years, consumers have been doing most of their shopping online. Artificial intelligence is used on these shopping sites and social media platforms, and this arouses interest and curiosity among consumers (Topoyan, 2020). In response to this interest, AI-supported virtual assistants and chatbots are changing how people search for information, evaluate alternatives and make purchases through personalized recommendations. In this way, the consumption of goods and services is increasingly guided by artificial intelligence (Uma Devi & Paul, 2020).
Among the AI technologies used in marketing are Natural Language Processing (NLP), which enables machines to understand and analyze human language when communicating with customers; machine learning, which supports informed decision-making through data analysis and automatic adaptation; chatbots and virtual assistants, which deliver fast and efficient customer support and service; and predictive analytics and sentiment analysis, through which customers’ feelings and attitudes can be understood far better (Zengin, 2021). Artificial intelligence, which supports consumers in finding the right product, is also a resource for influencing them. With the spread of this resource, which is transforming different areas of industry, people will need to rely far less on their own reasoning; in effect, intelligence will carry almost no cost (Erdem, 2022; Zengin, 2020).
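To make the sentiment-analysis idea above concrete, the following minimal Python sketch scores customer comments against a tiny hand-built lexicon; the word weights, sample comments and the score_comment helper are hypothetical illustrations rather than any production system.

import re

# A tiny hand-built sentiment lexicon (hypothetical words and weights).
LEXICON = {
    "great": 1.0, "love": 1.0, "fast": 0.5, "easy": 0.5,
    "slow": -0.5, "broken": -1.0, "refund": -0.5, "never": -0.5,
}

def score_comment(text):
    # Sum the weights of every lexicon word found in the comment.
    words = re.findall(r"[a-z']+", text.lower())
    return sum(LEXICON.get(w, 0.0) for w in words)

comments = ["Great product, fast delivery!", "Arrived broken, never again."]
for c in comments:
    s = score_comment(c)
    print(f"{'positive' if s > 0 else 'negative'} ({s:+.1f}): {c}")

Real marketing stacks would use trained language models rather than a fixed lexicon, but the pipeline is the same in spirit: tokenise, score, aggregate.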
The aim of contemporary marketing activities is to satisfy the wishes and
needs of the consumer. It is especially important to know the wants and
needs of the consumer and how they can be satisfied. For this, it is necessary
to examine and know the factors that affect satisfaction or dissatisfaction
(Dimitrieska et al., 2018). A consistent understanding of consumer behavior is vital to the long-term success of marketing strategies. Businesses capture consumers’ attention with algorithms built from people’s previous preferences, tastes and expressed opinions. By showing individuals who browse the internet or social networks products they are likely to enjoy, these systems make them feel as if they genuinely need the products. This works because artificial intelligence allows businesses to match information about the products and services they offer effectively with what they know about potential consumers (Doko, 2021; Elgun & Karabıyık, 2022).
Although artificial intelligence applications, which are mostly used for
support purposes, play a key role in coping with the uncertainty of the
decision-making process, it is not currently possible to use them as decision-
makers on their own, since people’s personal experience and thought patterns
guide decision-making with superior intuition (Choi & Lim, 2020). Past
experience, insight and holistic vision are human capital and are human-
specific qualities. These qualities are important for strategic problems that
can be solved with a holistic approach. It is difficult for artificial intelligence
to imitate and replicate human-specific qualities that guide intuitive
decision-making. Whether artificial intelligence will one day replace humans in the decision-making process, rather than merely collaborating with them, remains a matter of debate (Coşkun & Gülleroğlu, 2021).

3. Conclusion
The decision-making process is of great importance for managers, who must choose among alternatives in every issue that arises. Artificial intelligence applications, whose use has increased markedly in recent years, allow limited resources to be used most efficiently and effectively by saving time and cost in the processes where they are deployed. As these technologies mature and consumers adopt them more widely, AI applications can be expected to become ever more involved in the decision-making processes of companies and their managers.
Today, artificial intelligence technology is developing and spreading rapidly in an innovation-driven world. Having brought significant change to many industries and sectors, it has become highly influential in consumers’ behavior, purchasing habits and decisions. AI applications enable consumers who face a vast number and variety of products to reduce information-search costs, save time and settle easily on the most suitable option. AI technologies can offer consumers many benefits in online product selection: virtual and augmented reality, customer service experiences with smart assistants and robots, fast and fitting solutions for identifying and meeting needs, and personalized products, experiences, pricing, marketing messages, campaigns and coupons. The purchasing decisions of consumers who gain convenience, comfort and low-cost advantages from these experiences are thus positively affected.
The effective use of tools such as unmanned devices, the Internet of Things (IoT), customer relationship management and smart robots, together with AI technologies that are reshaping customer profiles, is very important for businesses. Indeed, thanks to artificial intelligence, customers’ next steps can be predicted. AI algorithms greatly ease the task of understanding and meeting consumers’ demands and expectations, and the changes in those expectations. With personalized automation and relevant content, consumers’ expectations are met and their loyalty to the business is strengthened. In addition, businesses that build their marketing strategies on existing customer data reduce their workload by delegating even the simplest tasks to AI, freeing capacity to develop new ideas and new content. In this way, businesses can identify consumer needs accurately and easily and shape their marketing efforts accordingly. Businesses should therefore weigh the benefits of artificial intelligence carefully and treat it as an important opportunity for consumer satisfaction and loyalty.
References
Acar, H. M. & İmik Tanyıldızı, N. (2022). Reklamda yapay zekâ kullanımı: Ziraat Bankası #senhepgülümse reklam filminde deepfake uygulamasının görsel anlatıya etkisi. Kastamonu İletişim Araştırmaları Dergisi (KİAD), (8), 78-99.
Akbaba, A. İ. & Gündoğdu, Ç. (2021). Bankacılık hizmetlerinde yapay zekâ kullanımı. Journal of Academic Value Studies, 7(3), 298-315.
Aylak, B. L., Oral, O. & Yazıcı, K. (2021). Yapay zekâ ve makineleştirme tekniklerinin lojistik sektöründe kullanımı. El-Cezeri, 8(1), 74-93.
Bayuk, M. N. & Demir, B. N. (2019). Endüstri 4.0 kapsamında yapay zekâ ve pazarlamanın geleceği. Sciences, 5(19), 781-799.
Bianchi, C., & Mathews, S. (2016). Internet marketing and export market growth in Chile. Journal of Business Research, 69(2), 426-434.
Biçkin, P. G., Çiçek, M. & Uncular, M. H. (2021). Teknolojinin pazarlamadaki yeri ve yeni eğilimler: Pegasus Hava Yolları örneği. Gümüşhane Üniversitesi İletişim Fakültesi Elektronik Dergisi, 9(1), 225-254.
Binbir, S. (2021). Pazarlama çalışmalarında yapay zeka kullanımı üzerine betimleyici bir çalışma. Yeni Medya Elektronik Dergisi, 5(3), 314-328.
Borgesius, F. Z. & Poort, J. (2017). Online price discrimination and EU data privacy law. Journal of Consumer Policy, 40(3), 347-366.
Bravo Gil, R., Fraj Andrés, E., & Martinez Salinas, E. (2007). Family as a source of consumer-based brand equity. Journal of Product & Brand Management, 16(3), 188-199.
Briley, D. A., & Aaker, J. L. (2006). Bridging the culture chasm: Ensuring that consumers are healthy, wealthy, and wise. Journal of Public Policy & Marketing, 25(1), 53-66.
Chen, M. (2018). Improving website structure through reducing information overload. Decision Support Systems, 110, 84-94.
Choi, J. A. & Lim, K. (2020). Identifying machine learning techniques for classification of target advertising. ICT Express, 6(3), 175-180.
Coşkun, F. & Gülleroğlu, H. D. (2021). Yapay zekânın tarih içindeki gelişimi ve eğitimde kullanılması. Ankara University Journal of Faculty of Educational Sciences (JFES), 1-20.
Darley, W. K., Blankson, C., & Luethge, D. J. (2010). Toward an integrated framework for online consumer behavior and decision making process: A review. Psychology & Marketing, 27(2), 94-116.
Davis, R. (2017). A comparison of online and offline gender and goal directed shopping online. Journal of Retailing and Consumer Services, 38, 118-125.
Dimitrieska, S., Stankovska, A. & Efremova, T. (2018). Artificial intelligence and marketing. Entrepreneurship, 6(2), 298-304.
Doko, E. (2021). Makineler aşık olabilir mi? In M. K. Yılmaz & N. Ö. İyigün (Eds.), Yapay Zeka: Güncel Yaklaşımlar ve Uygulamalar (pp. 345-368). İstanbul: Beta Kitap.
Elgun, M. N. & Karabıyık, H. Ç. (2022). Tutundurmanın bileşenlerinin sınıflandırılması üzerine bir teorik tartışma ve bir tanım önerisi. Econder International Academic Journal, 6(1), 74-85.
Erdem, B. (2022). Yapay zekanın pazarlamaya etkisi. In D. Terzioğlu & S. S. Korkmaz (Eds.), Sosyal Bilimlerde Disiplinlerarası Akademik Çalışmalar (pp. 87-99). İstanbul: Eğitim Yayınevi.
Falk, M., & Hagsten, E. (2015). E-commerce trends and impacts across Europe. International Journal of Production Economics, 170, 357-369.
Fang, Y., Qureshi, I., Sun, H., McCole, P., Ramsey, E., & Lim, K. H. (2014). Trust, satisfaction, and online repurchase intention: The moderating role of perceived effectiveness of e-commerce institutional mechanisms. MIS Quarterly, 38(2), 407-428.
Ginosar, A., & Ariel, Y. (2017). An analytical framework for online privacy research: What is missing? Information & Management, 54(7), 948-957.
Gregory, G. D., Ngo, L. V., & Karavdic, M. (2017). Developing e-commerce marketing capabilities and efficiencies for enhanced performance in business-to-business export ventures. Industrial Marketing Management, 78, 146-157.
Ha, S., & Stoel, L. (2012). Online apparel retailing: Roles of e-shopping quality and experiential e-shopping motives. Journal of Service Management, 23(2), 197-215.
Hagberg, J., Sundstrom, M., & Egels-Zandén, N. (2016). The digitalization of retailing: An exploratory framework. International Journal of Retail & Distribution Management, 44(7), 694-712.
Hallikainen, H., & Laukkanen, T. (2018). National culture and consumer trust in ecommerce. International Journal of Information Management, 38(1), 97-106.
Hanna, R. C., Lemon, K. N., & Smith, G. E. (2019). Is transparency a good thing? How online price transparency and variability can benefit firms and influence consumer decision making. Business Horizons, 62(2), 227-236.
Hasan, B. (2010). Exploring gender differences in online shopping attitude. Computers in Human Behavior, 26(4), 597-601.
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115-135.
Sucu, İ. (2019). Dijital evrenin yeni dünyası olarak yapay zeka ve Her filmi üzerine bir çalışma. Yeni Medya Elektronik Dergisi, 4(1), 40-52.
Şahin, E. & Kaya, F. (2019). Pazarlamada Yeni Dönem Endüstri 4.0, Yapay Zeka ve Akıllı Asistanlar. İstanbul: Çizgi Kitapevi.
Şahinci, D. (2021). Yapay zeka ve reklamcılığın geleceği (Unpublished doctoral dissertation). İstanbul Üniversitesi Sosyal Bilimler Enstitüsü, Radyo Televizyon ve Sinema Anabilim Dalı, İstanbul.
Şalvarlı, M. S. & Kayışkan, D. (2022). Pazarlama alanında yapay zekanın gelişen rolüne genel bir bakış. İzmir Yönetim Dergisi, 2(2), 106-115.
Tong, X., & Su, J. (2018). Exploring young consumers’ trust and purchase intention of organic cotton apparel. Journal of Consumer Marketing, 35(5), 522-532.
Topoyan, M. (2022). Endüstri 4.0 ve tedarik zinciri yönetimi. In M. Marangoz & H. H. Özkoç (Eds.), Endüstri 4.0 ve İşletme Yönetimi (pp. 249-276). İstanbul: Beta Basım Yayım.
Toufaily, E., Ricard, L., & Perrien, J. (2013). Customer loyalty to a commercial website: Descriptive meta-analysis of the empirical literature and proposal of an integrative model. Journal of Business Research, 66(9), 1436-1447.
Tsiotsou, R. H. (2016). The social aspects of consumption as predictors of consumer loyalty: Online vs offline services. Journal of Service Management, 27(2), 91-116.
Uma Devi, N., & Paul, V. M. T. (2020). Artificial intelligence: Pertinence in supply chain and logistics management. Xi’an Jianzhu Keji Daxue Xuebao/Journal of Xi’an University of Architecture & Technology, 12, 701-709.
Zengin, F. (2020). Akıllı makine çağı sinemasına giriş: Sinema sanatında yapay zekâ teknolojilerinin kullanımı. İletişim Çalışmaları Dergisi, 6(2), 151-177.
Zengin, F. (2021). Yapay zekâ ve kişiselleştirilmiş seyir kültürü: Netflix örneği üzerinden sanat eserinin hiper kişiselleştirilmesi. TRT Akademi, 6(13), 700-727.
Chapter 5

Algorithmic Biases and Injustice: Ethical and Practical Dimensions of Artificial Intelligence in Digital Marketing

Dr. Bahadır Avşar1

1 TRT, Researcher, ORCID: 0000-0002-6805-537X

Abstract
Digital marketing is undergoing a profound transformation with the rise of
artificial intelligence (AI) algorithms. These technologies process large datasets
to enable personalized campaigns and automation, while simultaneously
introducing ethical and practical challenges. This chapter acknowledges AI’s potential
in marketing while examining the adverse effects of algorithmic biases, such
as discrimination, loss of consumer trust, and risks to corporate reputation. A
literature-based analysis reveals that biases stem from distortions in training
data, shortcomings in design choices, and socio-cultural contexts. This leads
to the exclusion or mis-targeting of specific groups in segmentation and
targeting processes, creating unfairness in marketing strategies and acting as
a catalyst for deepening societal inequalities. The study proposes solutions,
including technical approaches (e.g., fair data processing techniques),
ethical frameworks (e.g., transparency and accountability), and regulatory
measures (e.g., international standards), offering a holistic framework for the
responsible use of AI.

Introduction
Marketing, which is essentially the art and science of understanding
consumer demands and developing strategic responses to these demands, has
been redefined in the modern era with the impact of digital transformation.
The collection and analysis of online data streams have radically changed
the discipline. At the same time, AI algorithms have taken the capacity to
individualise and mechanise marketing practices to an extraordinary level by
processing large data pools - such as demographics, social media trails and
purchase histories (Gupta, 2024). However, this technological leap has been marred by algorithmic biases based on sensitive attributes such as ethnicity,
gender and age, which threaten not only marketing effectiveness but also
principles of social justice (Pappadà & Pauli, 2022). This chapter aims
to deconstruct the dual nature of AI in digital marketing - its productive
potential and ethical vulnerabilities.
In segmentation and targeting processes, AI-induced biases lead to the
systematic exclusion or disproportionate targeting of specific social clusters,
which fuels consumer discontent and erodes institutional legitimacy (Bigman
et al., 2023). Racial biases documented in some social media platforms’ ad-
targeting algorithms have triggered legal sanctions and public outcry as a
concrete manifestation of this problem (McIlwain, 2023). In this context,
the study focuses on three main questions: (1) How do algorithmic biases
affect the functioning of digital marketing strategies? (2) In what ways do
these biases put consumer trust and corporate reputation at risk? (3) What
conceptual and practical solutions can be put forward for AI’s ethical and
responsible implementation? Based on a systematic review of literature
published between 2015 and 2024 in Web of Science and Scopus databases,
this research rigorously investigates the origins, effects, and ways to mitigate
biases.
The proliferation of AI in digital marketing has increased operational
efficiency and made systemic flaws and societal consequences sharply visible.
Biases in training data, inadequacies in design decisions, and algorithms
shaped by socio-cultural contexts risk perpetuating discriminatory practices;
for example, personalised pricing models can reinforce inequalities by
disadvantaging low-income consumers (Rathnow, Zeller, & Lederer,
2024). Such practices call into question basic marketing principles such as
fair competition and consumer welfare; at the same time, they jeopardise
the long-term sustainability of businesses by eroding consumer trust - the
cornerstone of brand loyalty (Akter et al., 2022). By scrutinising the tension
between the technical capabilities of AI and its ethical limits, this study aims
to reveal how this technology operates as both a source of innovation and a
tool of injustice.

1. Transformation of Digital Marketing with AI


In the early stages of marketing, mass communication tools such as
print media, radio, and television aimed to appeal to large audiences with
standardised messages. This was the inevitable result of an approach that
ignored individual differences. However, the proliferation of the Internet in
the 1990s heralded the birth of digital marketing; measurable and interactive
tools such as email campaigns and search engine optimisation (SEO/SEM)
reshaped the basic paradigms of this discipline (Babadoğan, 2024). The rise
of data analytics in the 2000s dramatically increased the capacity of businesses
to monitor and interpret online consumer behaviour. With the explosion of
social media and the growth of e-commerce, the volume of data has reached
a threshold described as ‘big data’ (Pasupuleti, 2024). This transformation
has transformed marketing from a pure communication activity into a data-
driven strategic discipline.
Advanced technologies such as artificial intelligence (AI), machine
learning, natural language processing and predictive analytics have
redrawn the boundaries of digital marketing. Amazon’s recommendation
engines have significantly increased conversion rates by providing precise
recommendations based on individual consumer preferences. At the same
time, Netflix is a concrete example of this transformation by strengthening
audience loyalty through dynamic content distribution (Barat & Gulati,
2024). Predictive analytics can predict market trends with an accuracy of
95% (Liu, 2024), taking the capacity of businesses to forecast demand
and optimise resources to an extraordinary level (Wang, 2024). However,
this technological leap has also brought ethical issues such as data privacy
violations, algorithmic biases and lack of transparency in decision-making
processes (Elkhatibi & Benabdelouhed, 2024). While extolling the
transformative potential of AI, the literature emphasises the urgency of
addressing these risks systematically (Dwivedi, 2024).
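As a rough illustration of how such recommendation engines operate, the sketch below implements user-based collaborative filtering over a toy ratings matrix: it finds the most similar user by cosine similarity and suggests that neighbour’s well-rated, unseen items. All names and ratings are invented; production systems such as Amazon’s are of course far larger and more elaborate.

import math

# Toy user -> {item: rating} matrix; all data invented for illustration.
ratings = {
    "ann": {"book": 5, "lamp": 3, "mug": 4},
    "bob": {"book": 4, "mug": 5, "desk": 5},
    "eve": {"lamp": 5, "desk": 4},
}

def cosine(u, v):
    # Cosine similarity between two sparse rating vectors.
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(r * r for r in u.values()))
    norm_v = math.sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user):
    # Pick the most similar other user and suggest their well-rated
    # items that `user` has not rated yet.
    nearest = max((x for x in ratings if x != user),
                  key=lambda x: cosine(ratings[user], ratings[x]))
    return [i for i, r in ratings[nearest].items()
            if r >= 4 and i not in ratings[user]]

print(recommend("ann"))  # -> ['desk']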
The integration of AI into digital marketing has not only increased
operational efficiency but also radically changed the capacity to individualise
consumer experiences. Machine learning algorithms have formed the basis
of hyper-personalised strategies by analysing a broad and sophisticated
spectrum ranging from social media interactions to previous purchase data
(Elkhatibi & Benabdelouhed, 2024). For example, generative AI tools enable
marketers to deepen their strategic focus by automating content creation,
SEO optimisation, and social media management, making companies more
agile and responsive to market demands (Hera, 2024; Mandić, Marković, &
Mulović Trgovac, 2024). However, in this process, the phenomenon known
as ‘filter bubbles’ - the exposure of consumers to a limited range of content
or products - risks overshadowing the dynamic nature of marketing by
suppressing originality and innovation (Babadoğan, 2024). This dilemma
makes it clear that the strategic advantages of AI need to be balanced with
ethical costs.
Moreover, the role of AI in marketing strategies is not limited to the
individual consumer but affects a broader market ecosystem. Predictive
analytics and dynamic pricing models have the potential to increase
customer satisfaction while maximising ROI through real-time adjustments
(Dwivedi, 2024). However, these practices may alienate consumers if
personalised pricing is perceived as unfair (Rathnow et al., 2024); for
example, algorithms that offer higher prices for low-income groups are
controversial from an ethical and competitive perspective. The literature
suggests that such practices may distort market competition and reduce the
visibility of small-scale businesses, which calls into question the potential of
AI to encourage monopolistic tendencies (Csurgai-Horváth, 2024). In this
context, the transformative impact of AI in digital marketing is not only a
technological issue but also a process of economic and social restructuring.
This technology’s lack of ethical and regulatory framework overshadows
the innovations that AI offers to digital marketing. Challenges such as data
privacy, algorithmic bias, and transparency present the necessity to preserve
consumer trust and maintain responsible marketing practices, necessitating
businesses to commit more robustly to ethical practices (Tang, 2024). On
the other hand, innovative technologies integrated with AI, such as hyper-
personalisation, augmented reality (AR) and the Internet of Things (IoT),
have the potential to shape the future of marketing (Pasupuleti, 2024).

2. Algorithmic Biases: Sources and Effects


While the AI transformation of digital marketing has brought
unprecedented precision and scale to consumer-centric strategies, it has also
introduced serious ethical and operational risks, such as algorithmic bias.
Algorithmic bias occurs when AI models systematically produce erroneous
outputs that favour or disadvantage certain groups, often due to the reflection
of inequalities in data (e.g., biases related to race, gender, or socioeconomic
status) in algorithms, subjective choices in design processes, or the embedded
effects of social norms (Moussawi, Deng, and Joshi 2024; Bigman et al.,
2023). In digital marketing, these biases shape processes ranging from
targeted advertising to personalised content recommendations, increasing
the risk of discrimination, undermining consumer trust, and jeopardising the
long-term brand value of businesses (Chen, 2024). Therefore, understanding
the origins and dynamics of algorithmic biases is not only a technical issue
but also a strategic imperative at the intersection of marketing science and
ethical responsibility.
The effects of algorithmic biases on digital marketing are felt across a broad
spectrum, from individual consumer experiences to societal structures. When
consumers perceive unfair or discriminatory outputs from biased algorithms
(e.g., ad targeting that systematically excludes certain demographic groups),
their trust in and willingness to engage with digital platforms may decrease,
leading to erosion of brand loyalty and reputational damage to businesses
(Chen, 2024; Shin, 2024). From a broader perspective, these biases can
reproduce social inequalities, limiting access to services and information
for marginalised communities, thus creating a cycle that deepens the digital
divide. In economic terms, biased algorithms have the potential to affect
market dynamics profoundly. Algorithmic bias can distort competition by
favouring certain groups, lead to inefficient market outcomes and suppress
innovation by providing unfair advantages, thus undermining overall market
efficiency (Huang et al., 2024; Basshuysen, 2022). Armed with big data
and artificial intelligence, companies can use these biases to their advantage
to disadvantage competitors, which may increase market consolidation.
At the same time, algorithms that exploit consumers’ cognitive biases and
information asymmetries may trigger suboptimal purchasing decisions,
leading to unnecessary or overpriced products, eroding consumer welfare
and deepening economic inequalities (Bar-Gill et al., 2023).
In this context, addressing algorithmic biases is a prerequisite for AI’s
ethical and practical use in digital marketing. The potential for efficiency and
innovation offered by AI systems can only be realised through adherence
to the principles of fairness, transparency and accountability at all stages,
from these systems’ design to implementation (Samala & Rawas, 2024).
Otherwise, the risk of prejudices reinforcing existing social structures
overshadows the promised benefits of technology.

2.1. Sources of Bias


Algorithmic biases in digital marketing have a sophisticated multi-layered
web of origins that are not limited to the bias of data sets; they derive from
the design paradigms of algorithms (Akter et al., 2022), the socio-cultural
contexts in which they are implemented (Singh, 2023), and the strategic
prioritisation or revenue-driven models of businesses (Csurgai-Horváth,
2024). For example, an algorithm designed to optimise cost-effectiveness
may inadvertently produce discriminatory outputs by inadvertently targeting
demographic groups that are less cost-effective to target; such systems may
reinforce gender inequalities by systematically making women less visible,
as Lambrecht and Tucker (2016) show in their gendered ad targeting in
STEM fields. Similarly, social media platforms can marginalise minority
perspectives by highlighting content that aligns with dominant cultural
norms, suggesting that algorithmic processes are not merely technical but function as a
mirror reflecting socio-economic power dynamics (Singh, 2023). Moreover,
the business models of digital platforms to maximise user engagement or
profit can lead to the stratification of biases by creating an unfair distribution
across content types and demographics (Csurgai-Horváth, 2024).
Design Bias: Design bias emerges as a structural flaw arising from the
construction processes of algorithms and can systematically favour specific
results over others through the basic assumptions of the model, data selection
and method preferences (Akter et al., 2022). Such biases are embedded in
the technical architecture of algorithms and often derive from the conscious
or unconscious decisions of the developers. For example, an algorithm
prioritising cost-effectiveness may favour demographic characteristics
requiring fewer resources to target. This bias is not only limited to individual
outputs; it can also shape the long-term orientation of marketing strategies,
systematically restricting the visibility and reach of certain groups. Design
bias is thus a crossroads that illustrates the tension between the technical
optimisation goals of algorithmic systems and ethical implications.
Contextual Bias: Contextual bias emerges as a reflection of the socio-
cultural environment in which algorithms are implemented and is shaped
by the infiltration of cultural norms, social dynamics and historical biases
into algorithmic decision-making processes (Akter et al., 2022). Such biases
show that rather than being neutral tools, algorithms have a symbiotic
relationship with the social structures in which they operate. For example,
social media platforms may favour content that aligns with dominant cultural
tendencies, overshadowing minority voices or alternative perspectives.
Singh’s (2023) analysis strikingly illustrates how these dynamics accelerate
the marginalisation of minority communities in the digital space. This
suggests that algorithms internalise the data and the context in which the
data is collected and interpreted, proving that bias is not just a technical
problem but an extension of social power relations. Thus, Contextual bias
necessitates reassessing marketing strategies regarding cultural sensitivity
and inclusiveness.
Implementation Bias: Implementation bias is shaped by strategic
preferences arising from the way algorithms are used in practice and
the business models of digital platforms; these biases are often driven
by commercial goals such as profit maximisation or user engagement
(Csurgai-Horváth, 2024). In this process, the prioritisation mechanisms
of algorithms may favour users with specific demographics or behavioural
patterns, creating an unfair distribution of content access and visibility. This
type of bias illustrates the conflict between the economic logic of digital
marketing and its ethical responsibilities, as profit-driven optimisation can
often have consequences that ignore social diversity and equality (Csurgai-
Horváth, 2024). Implementation bias, therefore, raises questions about
how algorithmic systems are designed, how they are deployed, and what
purposes they serve.

2.2. Effects on Marketing Strategies


Algorithmic bias is emerging as a factor that profoundly affects key
marketing strategies, transforming how businesses interact with consumers
while potentially opening the door to unfair or discriminatory practices.
These biases arise from distortions in data sets, structural flaws in the design
of algorithmic models and the way they are applied in different contexts,
with serious consequences for customer equity and marketing effectiveness.
Firstly, customer segmentation is one of the areas where the most
apparent effects of algorithmic bias are observed. Biased data or models
can lead to the overrepresentation of certain demographic groups or the
systematic omission of others.
Secondly, personalisation and targeting processes can be significantly
distorted by the influence of biased algorithms. Chen (2024) shows that bias
disrupts personalisation efforts by producing recommendations that do not
align with customer preferences or needs. For example, a machine learning
model prioritising certain features over others may inadvertently exclude
some customer segments or serve irrelevant content.
Third, pricing and promotion strategies are the areas where algorithmic
bias’s ethical and practical implications are most strikingly evident. Biased
algorithms can lead to discriminatory pricing practices towards specific
customer segments; for example, biased datasets may offer some groups
unfair pricing advantages while disadvantaging others. Similarly, biases in
the distribution of promotions can undermine overall marketing effectiveness
by preventing promotional efforts from reaching the entire customer base
equally. This not only undermines consumer confidence but can also expose
businesses to legal and ethical scrutiny.
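A simple audit can surface the kind of pricing disparity described above by comparing the average price an algorithm quotes to each consumer group. In the sketch below the quote log, the group names and the five-percent tolerance are all invented for illustration.

from statistics import mean

# Hypothetical log of (consumer group, quoted price) pairs.
quotes = [
    ("group_a", 19.9), ("group_a", 21.5), ("group_a", 20.6),
    ("group_b", 24.9), ("group_b", 26.0), ("group_b", 25.1),
]

overall = mean(price for _, price in quotes)
TOLERANCE = 0.05  # flag group means straying more than 5% from overall

for group in sorted({g for g, _ in quotes}):
    avg = mean(p for g2, p in quotes if g2 == group)
    gap = (avg - overall) / overall
    status = "FLAG" if abs(gap) > TOLERANCE else "ok"
    print(f"{group}: mean={avg:.2f} gap={gap:+.1%} -> {status}")

Such descriptive checks do not prove discrimination on their own, but they show where a pricing model deserves closer scrutiny.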

2.3. Segmentation and Biases


In marketing segmentation, algorithms have emerged as indispensable
tools that provide businesses with targeted strategies, personalised
experiences and optimised resource allocation by segmenting customer bases
based on shared characteristics, preferences and behaviours. However, in
this process, algorithmic biases emerge as an important factor that threatens
segmentation models’ accuracy, effectiveness and fairness. These biases are fed by multiple sources, ranging from data collection methods and assumptions about consumer behaviour to the design of the algorithms themselves.
Algorithms play a fundamental role in marketing segmentation; methods
such as K-means, DBSCAN and agglomerative hierarchical clustering
provide valuable outputs to businesses by revealing hidden patterns in
large datasets. K-means stands out for its simplicity and efficiency; for
example, when integrated with RFM analysis, it has been shown to segment
consumers based on behavioural patterns with 95% accuracy (Sarkar et al.,
2024). DBSCAN performs better in irregular data distributions and noisy
environments (Boyko & Protsik, 2024), while agglomerative hierarchical
clustering offers global and local perspectives on complex data types (Panda
et al., 2024). These algorithms increase customer satisfaction by enabling
personalised marketing strategies (Potluri et al., 2024), maximise return
on investment by optimising resource allocation (Reddy et al., 2024), and
strengthen strategic decision-making processes by revealing hidden trends
(Potluri et al., 2024). However, these benefits are overshadowed by the
bias-prone nature of the algorithms.
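To illustrate the kind of K-means/RFM integration cited above, the following sketch clusters a handful of invented customers on standardised recency, frequency and monetary features with scikit-learn; the feature values and the choice of three clusters are assumptions made purely for the example.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# RFM rows: recency (days since last order), frequency, monetary spend.
rfm = np.array([
    [5, 40, 900.0], [7, 35, 850.0],    # recent, frequent, high spend
    [60, 6, 120.0], [75, 4, 90.0],     # lapsing, occasional
    [300, 1, 15.0], [280, 2, 30.0],    # long inactive
])

X = StandardScaler().fit_transform(rfm)  # put features on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for (r, f, m), seg in zip(rfm, labels):
    print(f"segment {seg}: recency={r:.0f}d frequency={f:.0f} spend={m:.0f}")

If the input features themselves encode demographic inequalities, the resulting clusters will inherit them, which is precisely where the biases discussed next enter.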
Biases in segmentation processes originate from multiple sources and
call into question the reliability of the models. Inaccuracies in self-reported
data, such as reporting inconsistencies in postcode-based geodemographic
segmentation, can be influenced by demographic factors and produce skewed
results (Gladden et al., 2015). Behavioural biases can bias segmentation
models by deriving from irrational consumer preferences and decision-
making processes (Guhl et al., 2020). Methodological and theoretical
shortcomings exacerbate biases due to segmentation frameworks failing to
address consumer behaviour holistically (Ji, 2003). Furthermore, choosing
loss functions - for example, Cross Entropy or Dice losses - can lead to
biased segmentation outputs (B. Liu et al., 2024). Social influence and
position biases complicate segmentation, especially in freemium markets,
by basing consumer preferences on social dynamics rather than product
attributes (Berbeglia, Berbeglia, & Hentenryck, 2021), while economic
factors shape segmentation strategies through conditions such as demand
and supply elasticity (Martin & Zwart, 1987).
The impact of biases on segmentation is not only a technical issue but also an ethical responsibility. Algorithms can perpetuate discrimination by inheriting social biases found in training data; for example, AI-driven targeting
can reinforce biases associated with protected characteristics such as race or
socioeconomic status, and this has been demonstrated by fairness measures
such as Disparate Impact (DI) (Soni, 2024). In fields such as healthcare,
biased data can lead to inaccurate predictions (Goankar, Cook, & Macyszyn,
2020), while personalised ads can violate ethical standards by providing
discriminatory recommendations to low-income groups (Parasrampuria &
Williams, 2023). This brings with it the risk that segmentation models may
produce misleading and unfair results, jeopardising customer equity and
brand reputation.
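Since the passage invokes Disparate Impact (DI) as a fairness measure, a minimal computation may help. DI is commonly taken as the ratio of favourable-outcome rates between an unprivileged and a privileged group, with values below roughly 0.8 (the "four-fifths rule") read as a warning sign; the outcome vectors below are invented.

# Disparate Impact: favourable-outcome rate of the unprivileged group
# divided by that of the privileged group. Outcomes are hypothetical
# (1 = shown the ad / offered the discount, 0 = not).

def favourable_rate(outcomes):
    return sum(outcomes) / len(outcomes)

privileged = [1, 1, 1, 0, 1, 1, 0, 1]    # rate = 0.75
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]  # rate = 0.375

di = favourable_rate(unprivileged) / favourable_rate(privileged)
print(f"Disparate Impact = {di:.2f}")    # 0.50
if di < 0.8:  # the common four-fifths warning threshold
    print("Potential adverse impact on the unprivileged group")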

3. Effects on Consumer Trust and Corporate Reputation


The widespread use of algorithms in marketing strategies has profound
and multifaceted impacts on consumer trust and corporate reputation. These
effects are mainly due to algorithms’ biased outputs, lack of transparency
and potential to lead to unethical practices. Academic literature reveals that
algorithmic decision-making processes directly shape consumers’ perceptions
of brands and that these perceptions play a decisive role in the construction
or destruction of trust (Susarla, Purnell, & Scott, 2024). In particular, cases
where biased algorithms create perceptions of unfairness erode consumer
trust in businesses while simultaneously exposing corporate reputation to
long-term risks. This dynamic affects not only individual consumer-brand
relationships but also the broader structure of market competition and the
social fabric.
The impact of algorithms on consumer trust is not only a technical issue
but is also noteworthy for its social and psychological dimensions. Non-
transparent algorithmic processes reinforce consumers’ sense of loss of
control over these systems and accelerate the erosion of trust (Dezao, 2024).
In terms of corporate reputation, the effects of algorithms should be
examined across a spectrum that encompasses both short-term operational
outcomes and long-term strategic positioning. Scandals caused by biased
or manipulative algorithms can lead to reputational damage by identifying
brands with unethical practices. Moreover, the potential for algorithms to
reinforce systemic inequalities exposes businesses to individual consumer
backlash and societal criticism (Koene, 2017). In this context, the impact of
algorithms on consumer trust and corporate reputation emerges as an area
that tests not only the technological competencies of businesses but also
their ethical stance and social responsibilities, which necessitates algorithmic
governance to become a strategic priority.
3.1. Loss of Trust


The destructive impact of algorithmic biases on consumer trust is a powerful
dynamic resulting from perceived unfairness and lack of transparency. The
inaccurate price predictions of Zillow’s iBuying algorithm, for example, not
only led to financial losses but also raised serious doubts about the reliability
of artificial intelligence, clearly demonstrating the fragility of algorithms
and their psychological impact on consumers (Susarla, Purnell, & Scott,
2024). Similarly, the systematic exclusion of communities of colour by race-
based ad targeting has caused an intense consumer backlash against brands
and shaken the foundation of trust (McIlwain, 2023). Lack of transparency
further complicates this process; consumers feel manipulated or neglected
when they do not understand decision-making processes (Dezao, 2024).
Such incidents show that businesses need to design algorithmic systems
in a way that is not only compatible with technical accuracy but also with
consumer perceptions and ethical norms; otherwise, loss of trust can lead to
irreversible erosion of customer loyalty and market share.

3.2. Reputation Risks


The impact of algorithms on corporate reputation is dramatically
manifested by the blows to brand perception caused by unethical practices
and manipulative campaigns. While such incidents lead to sales losses in the
short term, they trigger reputational erosion in the long term, permanently
weakening the perception of the credibility of brands (Akter et al., 2022).
Moreover, the distortions created by algorithms in market competition create
an environment where large players gain unethical advantages, especially by
disadvantaging small businesses (Csurgai-Horváth, 2024). This dynamic
exposes businesses to individual consumer backlash and an industry-wide
wave of ethical questioning, suggesting that algorithmic strategies should
be evaluated not only from a profitability perspective but also from a
reputational capital perspective. Reputational risks are thus becoming a
central element in the strategic planning of businesses.

3.3. Social Impacts


The societal impacts of algorithms encompass a domain where bias and
personalised content transcend individual consumer experiences to become
a force shaping the social fabric. The amplification of disinformation by biased algorithms exacerbates social polarisation by creating echo chambers that
reinforce users’ existing beliefs; this systematically undermines the capacity
for dialogue and compromise (Shin, 2024). More importantly, the potential
for algorithms to reinforce systemic inequalities leads to the reproduction
of historical injustices in the digital age, suggesting that the societal
consequences of AI are not merely a side effect but a fundamental design
issue (Koene, 2017). These impacts shift the responsibility of businesses
from being limited to the consumer to a broader social context; algorithms
are thus positioned as both a technological tool and a social actor. When
businesses ignore these impacts, they risk their reputation and social
legitimacy, a caveat that necessitates an ethical and inclusive framing of
algorithmic design.

4. Proposed Solutions
The biases of algorithms in marketing strategies and their adverse
effects on consumer trust and corporate reputation require businesses
and academics to develop comprehensive and multidimensional solutions.
These solutions range from technical innovations to ethical principles and
regulatory frameworks, as the risks of algorithmic systems are not limited
to data processing or model design but extend to broader areas such as
social dynamics and organisational legitimacy. The literature suggests that
such approaches can go beyond reducing bias and increase algorithms’
transparency, improve consumer perceptions and strengthen brand
credibility in the long run (Chen, 2024; Rebitschek, 2024). The proposed
solutions offer a strategic framework combining operational efficiency and
ethical responsibility in this context.
The resolution of algorithmic biases is not merely a technical issue but an endeavour
for businesses to rebuild trust in their relationship with consumers and
maintain their social acceptability. While technical solutions aim to reduce
bias through, for example, fair data processing methods, ethical frameworks
enshrine fairness and privacy as fundamental principles in the design of
algorithms (Soni, 2024). However, the success of these efforts depends on
being supported by international regulatory standards that go beyond the
capacity of individual businesses, with examples such as the GDPR and the
EU Artificial Intelligence Act proving to deliver tangible advances in data
privacy and transparency (Al-Haj Eid et al., 2024; Hulicki, 2023). This
tripartite structure - technical, ethical and regulatory - has the potential
to systematically minimise the risks of algorithms while preserving their
potential advantages.
From a broader perspective, the proposed solutions developed for
algorithms have the power to shape the future role of AI beyond addressing
current problems. By implementing these solutions, businesses can reverse
the loss of trust and reputational erosion caused by biased systems; in the
process, they can seize the opportunity to establish a more transparent and
fair relationship with consumers (Bar-Gill, Sunstein, & Talgam-Cohen,
2023).

4.1. Technical Solutions


Addressing algorithmic bias at a technical level requires innovative
strategies ranging from data handling processes to model design; this is a key
element in improving the fairness and effectiveness of marketing practices.
Fair data processing methods, such as resampling and using metrics such
as Disparate Impact (DI), can prevent discrimination by correcting
imbalanced data sets; these techniques produce more inclusive results by
intervening in the source of bias (Soni, 2024). In addition, explainable AI
(XAI) presents algorithmic processes transparently to consumers through
tools such as decision trees; this not only strengthens trust but also increases
the accountability of businesses (Chen, 2024). These technical solutions
emphasise that algorithms should be developed not only with a focus on
accuracy and efficiency but also with an ethical and consumer-oriented
approach so businesses can maximise their technological advantages while
minimising the risks of bias.
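As one concrete reading of the resampling idea above, the sketch below randomly oversamples the under-represented group in a training set until group sizes match; naive duplication like this is only one of many rebalancing techniques, and the records are invented.

import random

random.seed(0)

# Invented training records: group "b" is under-represented.
train = ([{"group": "a", "label": 1}] * 80 +
         [{"group": "b", "label": 1}] * 20)

by_group = {}
for rec in train:
    by_group.setdefault(rec["group"], []).append(rec)

target = max(len(recs) for recs in by_group.values())
balanced = []
for recs in by_group.values():
    # Duplicate random records until this group reaches the target size.
    extra = [random.choice(recs) for _ in range(target - len(recs))]
    balanced.extend(recs + extra)

print({g: sum(r["group"] == g for r in balanced) for g in by_group})
# -> {'a': 80, 'b': 80}

Whether rebalancing is appropriate, and on which attribute, is itself an ethical design decision, which is where the frameworks discussed next come in.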

4.2. Ethical Frameworks


Ethical frameworks aim to reverse the adverse effects of bias on
consumer trust by establishing fairness, privacy and transparency as
fundamental principles in the design and implementation of algorithms.
Transparency restores trust by clearly explaining the workings of algorithms
to consumers, which is especially critical in situations where perceptions of
privacy violations are widespread (Rebitschek, 2024). On the other hand,
ethical design prioritises privacy and fairness principles from the outset of
the development process, ensuring that algorithms function in line with
technical performance and societal values (Sharma & Sharma, 2023).
These frameworks require businesses to meet legal requirements, consumer
expectations, and ethical standards so that algorithms can move from being
a risk factor to an indicator of corporate responsibility.

4.3. Regulatory Approaches


Regulatory approaches aim to ensure consumer protection and
organisational accountability by providing international standards and
cooperation mechanisms to address the systemic effects of algorithmic biases.
Regulations such as the GDPR put data privacy in a strong framework
(Al-Haj Eid et al., 2024), while the EU Artificial Intelligence Act reduces
the societal risks of algorithmic systems by mandating transparency and
ethical practices (Hulicki, 2023). However, the success of these regulations
relies on collaboration between AI developers, ethicists, and regulators;
this multidisciplinary approach generates holistic solutions by considering
not only the technical but also the societal dimensions of biases (Bar-Gill,
Sunstein, & Talgam-Cohen, 2023). This regulatory vision allows businesses
to move towards fairer and more transparent algorithmic practices while
maintaining a competitive advantage in global markets, thus balancing
algorithms as both a source of innovation and an area of social responsibility.

Conclusion
The rise of artificial intelligence (AI) algorithms in digital marketing
has opened up a unique competitive space for businesses by enhancing
individualised campaigns and data-driven decision-making capabilities
(Gupta, 2024). However, this study reveals that algorithmic biases create
ethical and practical cracks in marketing strategies. Through a systematic
literature review, it has been confirmed that AI systems have the potential
to discriminate based on sensitive factors such as race, gender, and
socioeconomic status; these biases have been observed to either exclude
or falsely target specific social clusters in segmentation processes (Pappadà
& Pauli, 2023; Bigman et al., 2023; Soni, 2024). This undermines
the effectiveness of marketing campaigns, erodes consumer trust, and
can jeopardise organisational legitimacy. This chapter argues that the
transformative power of AI can only be fully realised when it is free from
these shadows.
The origins of algorithmic biases are characterised by systematic distortions
in training data, deficiencies in design decisions and dynamics shaped by
socio-cultural contexts (Singh, 2023). Findings reveal that these biases are
not mere technical failures; on the contrary, they function as a powerful
catalyst that deepens social inequalities (McIlwain, 2023). For example,
discriminatory practices of personalised advertising algorithms towards low-
income communities create long-term threats to brand loyalty while fuelling
consumer dissatisfaction; this undermines the principle of equal access, the
fundamental promise of marketing (Parasrampuria & Williams, 2023).
Moreover, the lack of transparency in algorithmic decision-making paralyses
accountability mechanisms and calls into question the ethical obligations
of businesses (Nazeer, 2024). This clearly shows that AI is a tool and a
reflection of societal values.
In this context, placing AI in digital marketing on an ethical footing
requires an urgent and multi-layered intervention. While technical strategies
to eliminate biases - such as justice-oriented data processing techniques
and explainable AI models - form the cornerstones of the solution (Soni,
2024; Chen, 2024), international standards and regulatory frameworks
support these efforts with an institutional discipline (Hulicki, 2023). While
regulations such as GDPR provide a solid foundation for data privacy
protection, the need for more inclusive guidelines based on algorithmic
fairness and transparency is evident (Sharma & Sharma, 2023; Adams-
Prassl et al., 2024). Businesses should prioritise strategic investments in
ethical AI practices to restore consumer trust and maintain competitive
advantage (Yadav, 2024). This is not only an operational imperative but
also a moral imperative for marketing to evolve into a future aligned with
social responsibility.
In conclusion, algorithmic biases present both a threat and an opportunity
as a dilemma shaping the future of digital marketing. This study argues that
in order to harness the transformative potential of AI fully, it is essential
that ethical and technical dimensions are addressed together; this requires
a delicate balance between innovation and fairness. Future research should
strengthen this balance by examining the practical applications of bias
reduction techniques and their long-term effects on consumer perception
(Vasileva, 2020). Thus, marketing strategies can become a field that appeals
to all segments by harmonising technological progress with social welfare.
It should not be forgotten that digital marketing is not only a commercial
discipline but also a mirror of social structures. If not framed ethically,
the advantages of personalisation and automation offered by AI risk
institutionalising discrimination and undermining trust. For example, the
systematic exclusion of minority groups in audience identification processes
not only narrows the consumer base but also jeopardises the long-term
reputation of companies (Bigman et al., 2023).
References
Akter, Shahriar, Yogesh K. Dwivedi, Shahriar Sajib, Kumar Biswas, Ruwan J.
Bandara, ve Katina Michael. 2022. “Algorithmic Bias in Machine Lear-
ning-Based Marketing Models”. Journal of Business Research 144: 201-
16. doi:10.1016/[Link].2022.01.083.
Al Haj Eid, Mohammad, Mohammad Abu Hashesh, Abdel-Aziz Ahmad Sha-
rabati, Ahmad Khraiwish, Shafig Al-Haddad, ve Hesham Abusaimeh.
2024. “Conceptualizing Ethical AI-Enabled Marketing: Current State
and Agenda for Future Research”. doi:10.20944/preprints202404.0786.
v1.
Babadoğan, Borga. 2024. “Unveiling the Power of AI-Driven Personalizati-
on: Transforming Consumer Behavior in the Age of Digital Marketing”.
Next Frontier For Life Sciences and AI 8(1): 61. doi:10.62802/fj43xy18.
Bar-Gill, Oren, Cass R. Sunstein, ve Inbal Talgam-Cohen. 2023. “Algorithmic
Harm in Consumer Markets”. SSRN Electronic Journal. doi:10.2139/
ssrn.4321763.
Barat, Ayan, ve Krity Gulati. 2024. “Emergence of AI in Marketing and its
Implications”. Lloyd Business Review: 1-24. doi:10.56595/lbr.v3i1.22.
Berbeglia, Franco, Gerardo Berbeglia, ve Pascal Van Hentenryck. 2021. “Market
Segmentation in Online Platforms”. doi:10.48550/arXiv.1511.00750.
Bigman, Yochanan E., Desman Wilson, Mads N. Arnestad, Adam Waytz, ve
Kurt Gray. 2023. “Algorithmic Discrimination Causes Less Moral Out-
rage than Human Discrimination.” Journal of Experimental Psychology:
General 152(1): 4-27. doi:10.1037/xge0001250.
Boyko, Bogdan, ve Irina Protsik. 2024. “USE OF CLUSTERING ALGORIT-
HMS TO SEGMENT THE COMPANY’S PERSONNEL”. Herald of
Khmelnytskyi National University. Technical sciences 333(2): 92-98.
doi:10.31891/2307-5732-2024-333-2-14.
Chen, Changdong. 2024. “How Consumers Respond to Service Failures
Caused by Algorithmic Mistakes: The Role of Algorithmic Interpre-
tability”. Journal of Business Research 176: 114610. doi:10.1016/j.
jbusres.2024.114610.
Csurgai-Horváth, Gergely. 2024. “Regulating Algorithmic Bias as a Key Ele-
ment of Digital Market Regulation”. World Competition 47(Issue 2):
193-212. doi:10.54648/WOCO2024015.
Dezao, Tara. 2024. “Enhancing Transparency in AI-Powered Customer Enga-
gement”. Journal of AI, Robotics & Workplace Automation 3(2): 134.
doi:10.69554/PPJE1646.
Dwivedi, Y. 2024. “AI Virtual Assistants in Human Services: Empowering
Customers and Caseworkers”. International Jurnal Of Scientific Re-
70 | Algorithmic Biases and Injustice: Ethical and Practical Dimensions of Artificial Intelligence...

search in Engineering and Management 08(11): 1-7. doi:10.55041/


IJSREM37870
Elkhatibi, Yassine, and Redouane Benabdelouhed. 2024. "Digital Revolution: How AI is Transforming Content Marketing". International Journal of Advanced Multidisciplinary Research and Studies 4(5): 775-77. doi:10.62225/2583049X.2024.4.5.3324.
Gladden, James M., George R. Milne, and Mark A. McDonald. 2015. "Biases in Self-Reports of Zip Codes and Zip +4 in Geodemographic Segmentation". In Proceedings of the 1997 World Marketing Congress, Developments in Marketing Science: Proceedings of the Academy of Marketing Science, ed. Samsinar Md Sidin and Ajay K. Manrai. Cham: Springer International Publishing, 85-94. doi:10.1007/978-3-319-17320-7_23.
Goankar, Bilwaj, Kirstin Cook, and Luke Macyszyn. 2020. "Ethical Issues Arising Due to Bias in Training A.I. Algorithms in Healthcare and Data Sharing as a Potential Solution". AI Ethics Journal 1(2). doi:10.47289/AIEJ20200916.
Guhl, Daniel, Katharina Dowling, Daniel Klapper, Martin Spann, Lucas Stich, and Narine Yegoryan. 2020. "Behavioral Biases in Marketing". Journal of the Academy of Marketing Science 48(3): 449-77. doi:10.1007/s11747-019-00699-x.
Gupta, Ruchi. 2024. "The Impact of Artificial Intelligence on Marketing Strategies: A Comprehensive Analysis". International Journal for Research in Applied Science and Engineering Technology 12(11): 270-71. doi:10.22214/ijraset.2024.65002.
Hera, Octavian Dumitru. 2024. "Exploring Marketing Transformation in the Age of Artificial Intelligence". Journal of Financial Studies 9(Special): 96-108. doi:10.55654/[Link].07.
Huang, Yijun, Qiteng Chen, Lie Luo, and Zhongyan Lin. 2024. "Algorithmic Discrimination and Market Competition: Exploring the Ethical and Legal Issues of Algorithm Management by Internet Companies". Philosophy and Social Science 1(5): 22-27. doi:10.62381/P243504.
Hulicki, Maciej. 2023. "The Principles of Algorithmic Justice in the Digital Market". In Research Handbook on Digital Trade, ed. David Collins and Michael Geist. Edward Elgar Publishing, 345-68. doi:10.4337/9781800884953.00032.
Ji, Luo. 2003. "Market Segmentation Research: Critical Review and Perspectives". Journal of Shandong University.
Koene, Ansgar. 2017. "Algorithmic Bias: Addressing Growing Concerns [Leading Edge]". IEEE Technology and Society Magazine 36(2): 31-32. doi:10.1109/MTS.2017.2697080.
Lambrecht, Anja, and Catherine E. Tucker. 2016. "Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads". SSRN Electronic Journal. doi:10.2139/ssrn.2852260.
Liu, Bingyuan, Jose Dolz, Adrian Galdran, Riadh Kobbi, and Ismail Ben Ayed. 2024. "Do We Really Need Dice? The Hidden Region-Size Biases of Segmentation Losses". Medical Image Analysis 91: 103015. doi:10.1016/j.media.2023.103015.
Liu, Jinwen. 2024. "Marketing Strategy Matching Algorithm Under Artificial Intelligence". In 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE), Ballari, India: IEEE, 1-5. doi:10.1109/ICDCECE60827.2024.10548817.
Mandić, Antonija, Biljana Marković, and Ana Mulović Trgovac. 2024. "Tools of Artificial Intelligence Technology as a Framework for Transformation Digital Marketing Communication". Tehnički glasnik 18(4): 660-65. doi:10.31803/tg-20240708161118.
Martin, S.K., and A.C. Zwart. 1987. "Marketing Agencies and the Economics of Market Segmentation". Australian Journal of Agricultural Economics 31(3): 242-55. doi:10.1111/j.1467-8489.1987.tb00467.x.
McIlwain, Charlton D. 2023. "Algorithmic Discrimination: A Framework and Approach to Auditing & Measuring the Impact of Race-Targeted Digital Advertising". SSRN Electronic Journal. doi:10.2139/ssrn.4646843.
Moussawi, Sara, Xuefei (Nancy) Deng, and K. D. Joshi. 2024. "AI and Discrimination: Sources of Algorithmic Biases". ACM SIGMIS Database: the DATABASE for Advances in Information Systems 55(4): 6-11. doi:10.1145/3701613.3701615.
Backsanskiy, Oleg Ye. 2025. "Balancing Ethics and Innovation in Artificial Intelligence". Society: Philosophy, History, Culture (1): 23-33. doi:10.24158/fik.2025.1.2.
Panda, Amiya Ranjan, Priyanka Rout, and Aman Gautam. 2024. "Optimizing Marketing Strategy by Performing Customer Segmentation". In 2024 International Conference on Advances in Computing Research on Science Engineering and Technology (ACROSET), Indore, India: IEEE, 1-6. doi:10.1109/ACROSET62108.2024.10743643.
Pappadà, Roberta, and Francesco Pauli. 2022. "Discrimination in Machine Learning Algorithms". doi:10.48550/ARXIV.2207.00108.
Parasrampuria, Anirudh, and Katherine Williams. 2023. "Ethical Considerations and Societal Impact of Personalized Advertising Algorithms". Journal of Student Research 12(4). doi:10.47611/jsrhs.v12i4.5910.
Pasupuleti, M.K. 2024. "Transforming Digital Marketing with AI: Strategies for Personalized Content and Ethical Advertising". In AI in Digital Marketing: Personalizing Content and Advertising Strategies, National Education Services, 1-18. doi:10.62311/nesx/66296.
Potluri, Chandra Srinivas, G. Srinivasa Rao, L.V.R Manoj Kumar, Kedir Geletu Allo, Yaregal Awoke, and Abdilkerim Asrar Seman. 2024. "Machine Learning-Based Customer Segmentation and Personalised Marketing in Financial Services". In 2024 International Conference on Communication, Computer Sciences and Engineering (IC3SE), Gautam Buddha Nagar, India: IEEE, 1570-74. doi:10.1109/IC3SE62002.2024.10593143.
Rathnow, Peter, Benjamin Zeller, and Matthias Lederer. 2024. "From Code to Coin: Unravelling the Practical Business Impact of Algorithmic Pricing". Journal of AI, Robotics & Workplace Automation 3(3): 234. doi:10.69554/YMZP4012.
Rebitschek, Felix G. 2024. "Boosting Consumers: Algorithm-Supported Decision-Making under Uncertainty to (Learn to) Navigate Algorithm-Based Decision Environments". In Knowledge and Digital Technology, Knowledge and Space, ed. Johannes Glückler and Robert Panitz. Cham: Springer Nature Switzerland, 63-77. doi:10.1007/978-3-031-39101-9_4.
Reddy, Biyyapu Sri Vardhan, C. A. Rishikeshan, VishnuVardhan Dagumati, Ashwani Prasad, and Bhavya Singh. 2024. "Customer Segmentation Analysis Using Clustering Algorithms". In Intelligent Systems, Lecture Notes in Networks and Systems, ed. Siba K. Udgata, Srinivas Sethi, and Xiao-Zhi Gao. Singapore: Springer Nature Singapore, 353-68. doi:10.1007/978-981-99-3932-9_31.
Samala, Agariadne Dwinggo, and Soha Rawas. 2025. "Bias in Artificial Intelligence: Smart Solutions for Detection, Mitigation, and Ethical Strategies in Real-World Applications". IAES International Journal of Artificial Intelligence (IJ-AI) 14(1): 32. doi:10.11591/ijai.v14.i1.pp32-43.
Sarkar, Malay, Aisharyja Roy Puja, and Faiaz Rahat Chowdhury. 2024. "Optimizing Marketing Strategies with RFM Method and K-Means Clustering-Based AI Customer Segmentation Analysis". Journal of Business and Management Studies 6(2): 54-60. doi:10.32996/jbms.2024.6.2.5.
Sharma, Animesh Kumar, and Rahul Sharma. 2023. "Considerations in Artificial Intelligence-Based Marketing: An Ethical Perspective". Applied Marketing Analytics: The Peer-Reviewed Journal 9(2): 162. doi:10.69554/RAPQ3226.
Shin, Donghee. 2024. "Misinformation and Algorithmic Bias". In Artificial Misinformation, Cham: Springer Nature Switzerland, 15-47. doi:10.1007/978-3-031-52569-8_2.
Singh, Daman Preet. 2023. "Algorithmic Bias of Social Media". The Motley Undergraduate Journal 1(2). doi:10.55016/ojs/muj.v1i2.77457.
Soni, Vishvesh. 2024. "Bias Detection and Mitigation in AI-Driven Target Marketing: Exploring Fairness in Automated Consumer Profiling". International Journal of Innovative Science and Research Technology (IJISRT): 2574-84. doi:10.38124/ijisrt/IJISRT24MAY2203.
Susarla, Padma, Dexter Purnell, and Ken Scott. 2024. "Zillow's Artificial Intelligence Failure and Its Impact on Perceived Trust in Information Systems". Journal of Information Technology Teaching Cases: 20438869241279865. doi:10.1177/20438869241279865.
Tang, Zicheng. 2024. "The Role of AI and ML in Transforming Marketing Strategies: Insights from Recent Studies". Advances in Economics, Management and Political Sciences 108(1): 132-39. doi:10.54254/2754-1169/108/20242009.
Van Basshuysen, Philippe. 2023. "Markets, Market Algorithms, and Algorithmic Bias". Journal of Economic Methodology 30(4): 310-21. doi:10.1080/1350178X.2022.2100919.
Vasileva, Mariya I. 2020. "The Dark Side of Machine Learning Algorithms: How and Why They Can Leverage Bias, and What Can Be Done to Pursue Algorithmic Fairness". In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Virtual Event CA USA: ACM, 3586-87. doi:10.1145/3394486.3411068.
Wang, Weihan. 2024b. "Artificial Intelligence in Strategic Business Decisions: Enhancing Market Competitiveness". Advances in Economics, Management and Political Sciences 117(1): 87-93. doi:10.54254/2754-1169/117/20241987.
Yadav, Suraj Jaywant. 2024. "AI Bias and Fairness: Ethical Considerations in Service Marketing Strategies". IGI Global, 49-64. doi:10.4018/979-8-3693-7122-0.ch003.
Chapter 6

The Misleading Power of AI-Powered Automation

Ali Sen1

1 PhD Lecturer, University of Istanbul, [Link]@[Link], ORCID ID [Link]

Abstract
Automation refers to the use of technology that attempts to perform a
procedure or process without human intervention. Automation technologies
aim to minimise human intervention and increase factors such as efficiency,
productivity, quality, and accuracy. While Artificial Intelligence (AI)-supported
automation solutions offer many advantages for users such as customisation,
recommendation systems and content creation, they also pose risks such as
biased algorithms or data privacy concerns. Despite the growing use of AI-supported automation systems in marketing, few studies address the risks these systems pose.
The purpose of this study is to investigate how automation systems are used in marketing by examining existing research and cases. The study shows how such systems can improve customer experiences and highlights the risks that can lead to consumer dissatisfaction when they are misconfigured. Using these technologies can unintentionally introduce certain biases. Two issues stand out in the use of these technologies: automation bias and algorithmic bias. The first, automation bias, is associated with users' overconfidence in automation systems, while the second, algorithmic bias, refers to misleading effects based on data sets. The study provides insight into the risks posed by automation efforts, as well as some suggestions for building consumer trust in marketing.

Automation means the use of technology that attempts to perform a procedure or a process without human intervention. A typical automated system includes three basic elements: a power source to operate the system, a program of instructions, and a control system (Groover, 2016). The overall aim of automation technologies is to minimise human intervention and to increase factors such as efficiency, productivity, quality and reliability (Goldberg, 2011; Singh & Namekar, 2020). Looking at the history of automation, it has been shaped by the replacement of manual labour with machinery, evolving from simple devices into today's computer-based technology (Hitomi, 1994; Janssen et al., 2019). This historical journey stretches back some four million years, to when humans first began using simple tools and developing methods of production (Hitomi, 1994). Many of today's automation systems are integrated with Artificial Intelligence (AI)-supported systems (Maedche et al., 2019; Van Esch et al., 2021). The transformation to AI-assisted automation has offered great opportunities to improve user experiences and increase efficiency in business processes in many areas (e.g. Huseynov, 2023; Mirwan, 2023).
While AI-supported automation solutions offer many advantages for
users such as customisation, recommendation systems and content creation,
they also pose risks such as biased algorithms or data privacy concerns
(Farbod, 2024; Palanque et al., 2019; Wertenbroch, 2019). Given
that algorithms are developed based on historical data or specific data sets,
they may further reinforce pre-existing prejudices in society (Karami et al.,
2024). This may lead to discriminatory or marginalising marketing strategies targeting groups disadvantaged by gender, race, social status, or economic status (Madanchian, 2024).
Biased algorithms can lead to inaccurate assessments not only in
marketing but also in other areas such as customer-specific recommendation
programs, credit scores and health (Mehrota et al., 2023). These types of
biased systems can cause consumers to perceive automation-based marketing
practices as unfair or biased, which can negatively impact brand trust and
reputation. The lack of transparency in these technologies can lead to injustice
by damaging consumer trust (Madanchian, 2024; Lepri et al., 2018).
Therefore, it is important to understand the biases introduced by automation systems in order to apply personalisation effectively in marketing and to fully understand customer experiences (Akter et al., 2023).
The purpose of this study is to investigate how automation systems are
used in marketing to improve the customer experience and to highlight the
risks that can lead to consumer dissatisfaction if misconfigured. Thanks to
technological advances, customers can evaluate the benefits or risks offered
by technology during the purchasing process. For example, while robotic
process automation can improve customer satisfaction by increasing a
company’s efficiency, the customer experience can suffer if automation
is implemented incorrectly (Gavrila et al., 2023). In this study, firstly,
some automation systems that improve the consumer experience will be
introduced, and then the misleading effects of automation technologies
and the problems that arise in the interaction with the consumer will be
examined.

1. Consumer Experiences and Automation Solutions


In this section, some automation technologies that improve the customer experience are introduced for a better understanding of the subject. Rather than covering every technology, the emphasis is on the customer experience offered by selected systems in the field of marketing.

1.1. Chatbots
A chatbot is designed to mimic human speech. These systems utilise
predetermined rules and machine learning algorithms to correctly interpret
and respond to user input. They use Natural Language Processing (NLP)
technology in this process (Huseynov, 2023). The idea of the first
conversational robot emerged in 1950 when computer scientist Alan Turing
wondered whether computers could speak like humans (Adamopoulou &
Moussiades, 2020). Over time, different chatbots have been developed.
For example, "Eliza" (1966), "Parry" (1972), "Jabberwacky" (1988), "TinyMUD" (1991) and "ALICE" (1995). In 2001, chatbots were further developed with the introduction of the SmarterChild chatbot, which could interact with users and contribute to non-formal learning (Molnár & Zoltán, 2018).
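To illustrate the rule-based side of this design, the sketch below shows a minimal keyword-matching chatbot of the kind the early systems relied on; the intents, patterns and responses are invented for illustration and are not drawn from any of the systems cited above.

import re

# Illustrative intent rules: a regex pattern mapped to a canned response.
# Modern chatbots layer NLP and machine learning on top of (or in place
# of) such hand-written rules.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How can I help you today?"),
    (re.compile(r"\b(order|delivery)\b", re.I), "Could you share your order number?"),
    (re.compile(r"\b(refund|return)\b", re.I), "I can start a return. Which item is it?"),
]
FALLBACK = "Sorry, I did not understand that. Let me connect you to a human agent."

def reply(user_message: str) -> str:
    """Return the first matching canned response, or escalate via the fallback."""
    for pattern, response in RULES:
        if pattern.search(user_message):
            return response
    return FALLBACK

print(reply("Hi, where is my order?"))      # the greeting rule matches first
print(reply("My blender arrived broken"))   # no rule matches -> fallback

The fallback branch matters in practice: as discussed later in this chapter, a chatbot that fails silently instead of escalating is one of the main sources of negative customer experiences.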
Today's chatbots are further enhanced by generative pre-trained models (Generative Pre-trained Transformers, GPT). ChatGPT, developed by OpenAI (Huseynov, 2023), Microsoft's Bing chatbot, Alphabet's Bard, and Baidu's Ernie model (Yıldıran & Erdem, 2024) are among today's advanced chatbots. These artificial intelligence-supported systems
will continue to develop as new versions are released. Artificial Intelligence
(AI)-based chatbots improve the user experience with features such as instant
response, personalised answers and 24/7 service. These robots provide great
simplicity in reservation processes such as car rentals, accommodation and
flights, and are widely used in customer service and sales sectors. Users
recognise the ease with which chatbots can instantly respond to inquiries and
provide quick information. They also reduce the staffing needs of businesses
by managing multiple customer requests (Huseynov, 2023).
In addition to improving the customer experience, AI-powered chatbots
can offer fully customised responses by leveraging machine learning
algorithms. This situation increases consumer loyalty by keeping user
satisfaction at a high level (Huseynov, 2023). Through chatbots, customers
can get personalised support by contacting chatbots directly instead of
browsing e-commerce sites, thus saving time and effort. In addition, data
from the interaction of chatbots with the customer provides insights into
the needs of customers and improves the customer experience (Huseynov,
2023).
Furthermore, a study on the usability of chatbots indicated that these technologies have a significant positive impact on the extrinsic value of the customer experience (e.g. convenience, time) (Kokkinou and Cranage, 2013). Users find companies that use chatbots innovative (Chen et al., 2021). On the other hand, problems such as lack of expertise and lack of context awareness hinder the development of chatbots (Pricilla et al., 2018). As a result, chatbots that offer a poor user experience are perceived negatively and evaluated as a waste of time (Chen et al., 2021).
In summary, chatbots can be defined as an AI-powered or rule-based
technology that provides 24/7 customer support and attempts to solve
basic questions or simple problems without human intervention (Castillo
et al., 2021). They can offer a personalised interaction by leveraging past
interactions and data from user profiles. This reduces the workload and costs
of customer service agents by automating routine tasks.

1.2. Recommender Systems


Another method used in marketing to improve the user experience is recommending content or products to customers. AI-powered recommender systems are designed to provide alternatives, make suggestions and evaluate real scenarios by collecting information from data to address users' problems or questions (Xu et al., 2020).

Recommender systems developed to improve the user experience can be divided into two main categories: traditional recommender systems and automated recommender systems (Yang et al., 2021). Traditional recommender systems collect user preferences in the form of implicit feedback, such as purchase behaviour (Su and Khoshgoftaar, 2000) or click-through rates (Zhou et al., 2018), and build latent spaces for user preferences with techniques such as collaborative filtering (Shi et al., 2014) or neural networks (He et al., 2017). In contrast, automated recommendation systems base their recommendations on users' preferences derived directly from live dialogue
history (Yang et al., 2021). These systems aim to interact with users to
provide them with the desired product (e.g. consumer goods, films, music)
or service.
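The idea of building preference representations from implicit feedback can be illustrated with a small sketch of item-to-item collaborative filtering; the interaction matrix below is invented for illustration and is far simpler than the methods cited above.

import numpy as np

# Toy implicit-feedback matrix: rows = users, columns = items,
# 1 = the user purchased/clicked the item, 0 = no interaction.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
], dtype=float)

# Item-item cosine similarity: the core of neighbourhood-style
# collaborative filtering (richer variants are surveyed by Shi et al., 2014).
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
similarity = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)

def recommend(user_idx: int, top_k: int = 2):
    """Score unseen items by their similarity to the user's past interactions."""
    seen = interactions[user_idx]
    scores = similarity @ seen
    scores[seen > 0] = -np.inf  # do not re-recommend items already seen
    return np.argsort(scores)[::-1][:top_k]

print(recommend(0))  # items ranked for user 0, e.g. [2 3]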
Sectors such as e-commerce (Maedche et al., 2019; Yang et al., 2021), entertainment (Palanque et al., 2019) and marketing (Rae et al., 2016) commonly utilise AI-enabled automated recommender systems. Companies such as Amazon, Netflix, Starbucks, Spotify, and Alibaba offer personalised products tailored to consumers by examining their past purchasing behaviour, searches and browsing history (Mirwan, 2023). Spotify's recommendation system analyses millions of songs and users' listening habits to provide a weekly playlist personalised for each user (Florez Ramos & Blind, 2020; Mirwan, 2023). Thanks to recommendation systems, users can easily access the content they want without excessive time and effort, which improves the customer experience.
Digital assistants with recommender systems reduce users’ cognitive load.
When users are exposed to excessive information, recommender systems
help them sort, filter and process relevant information. E-commerce sites
allow the consumer to easily find the product or service they are looking for
without the need for extensive research (Maedche et al., 2019). Similarly, companies such as Netflix, Amazon, Outbrain and Taboola use content or product recommendation systems to present users with choices that may interest them. Thus, these systems save users from searching extensively (André et al., 2018).

1.3. Automated Email and Messaging Systems


Email is one of the most widely used tools for communication, both
professionally and personally. For an individual or an organisation,
communicating via email and receiving a quick response can significantly
improve the customer experience. Using automated email responses is a
good way for an organisation to respond to an email recipient within 24
hours, especially when an email cannot be responded to within 24 hours due
to holidays, workload or leave of absence (Mane & Rayappa, 2022).
Besides individual communication, e-mail also plays an extensive role in
the workplace. It is used in the workplace not only as a communication
tool but also as a work hub. Studies have shown that e-mail serves as the
main interface in the workplace, providing facilities for planning activities,
organising meetings, transferring files and more (Ducheneaut & Bellotti,
2001). Automated email, also known as behaviour-driven email, refers to
the sending of personalised messages in a predetermined and automated
manner based on an action that a customer or user takes (or does not
take) (Vaughan, 2012). Email automation saves time by creating targeted,
contextualised and personalised emails to send to the relevant recipient. For
example, after a customer purchases a product, a personalised e-mail can be
sent to the customer saying ‘Check out other products similar to the ones
you bought’ (Vaughan, 2012). Similarly, a consumer who buys a product
from an e-commerce platform can be sent an automated e-mail about the
progress of the order (Kushmerick et al., 2015).
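A behaviour-driven email flow of this kind can be expressed as simple trigger rules. The following sketch is an assumed, minimal design (the event names and message templates are invented) rather than any vendor's actual system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    customer: str
    kind: str        # e.g. "purchase", "order_shipped", "cart_abandoned"
    detail: str

# Behaviour-driven templates: the customer's action selects the message.
TEMPLATES = {
    "purchase": "Thanks for buying {detail}! Check out similar products.",
    "order_shipped": "Good news: your order {detail} is on its way.",
    "cart_abandoned": "You left {detail} in your cart - still interested?",
}

def automated_email(event: Event) -> Optional[str]:
    """Render a personalised message if a rule exists for this behaviour."""
    template = TEMPLATES.get(event.kind)
    if template is None:
        return None  # no automation rule: leave it to a human
    return f"To: {event.customer}\n{template.format(detail=event.detail)}"

print(automated_email(Event("ada@example.com", "purchase", "a coffee grinder")))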
Today, business life is closely related to e-mail and a significant part of the
employees’ day is spent using e-mail (Grevet et al., 2014). Therefore, there
has long been a desire to automate various aspects of this process because of
the workload generated by e-mail. These efforts date back to Procmail, an
email filtering program released in 1990 that allowed users to automatically
send certain mail to certain folders (Park et al., 2019). Similarly, the
Boomerang application saves users time by eliminating difficulties caused
by back-and-forth email traffic, time zone conflicts, and errors due to double
bookings. In addition, it has features such as creating automatic reservations,
sharing availability, and offering time suggestions for meetings (www.
[Link]; Park et al., 2019). Such automation tools optimize
business processes and improve user experience.
For businesses, automating repetitive emails allows employees to focus on more strategic and analysis-oriented tasks and to respond to customers faster (Kushmerick, 2005). Furthermore, automated e-mail systems integrated with AI can improve the user experience by providing customised solutions to specific customers. As a result, while increasing customer satisfaction, they can also positively affect sales in e-commerce (Abrokwah-Larbi et al., 2024; Ghalme et al., 2023).

1.4. Robotic Process Automation (RPA)


With the advent of the fourth industrial revolution, the use of data from
smart devices has enabled the automation of ordinary rule-based business
processes with Robotic Process Automation (RPA) tools (Ribeiro et al.,
2021). Robotic process automation is a technology that mimics human
interactions through graphical user interfaces and automates business
processes based on user and system interactions (König et al. 2020).
In corporations, this technology is trained to perform repetitive tasks,
automating existing business processes and making them more efficient
(Karn et al. 2019). For example, the telecommunications company O2 has
automated a large part of its customer service. Processes such as SIM card
replacement, mobile number porting, phone unlocking and switching to
contract lines have been automated through robotic processes. Being able to
fulfil customer service requests shows how robotic processes can be trained
and make business processes more efficient by reducing human intervention
(Madakam et al., 2019). This reduces costs and errors in the workplace and
provides continuous accessibility for customers (Daase et al., 2020).
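Conceptually, an RPA bot is a scripted sequence of the same steps a human clerk would perform through the interface. The sketch below mimics that structure for a SIM-replacement request; the steps and function names are illustrative assumptions, not O2's actual workflow.

# A rule-based RPA-style workflow: each step mirrors an action a human
# clerk would take in the GUI; the bot simply executes them in order.
def read_request(ticket: dict) -> dict:
    return {"customer_id": ticket["customer_id"], "new_sim": ticket["new_sim"]}

def verify_identity(data: dict) -> bool:
    # Stand-in for a real identity check against a back-office system.
    return data["customer_id"].startswith("CUST-")

def update_crm(data: dict) -> None:
    print(f"CRM updated: {data['customer_id']} -> SIM {data['new_sim']}")

def process_sim_replacement(ticket: dict) -> str:
    data = read_request(ticket)
    if not verify_identity(data):
        return "escalated to human agent"  # the bot handles only the happy path
    update_crm(data)
    return "completed automatically"

print(process_sim_replacement({"customer_id": "CUST-042", "new_sim": "8944-123"}))

The design choice worth noting is the explicit escalation branch: RPA is effective precisely because it confines itself to repetitive, rule-based cases and hands exceptions back to humans.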
Through technological growth, users can identify the benefits and risks of technology in their purchasing process. In particular, RPA can increase consumer satisfaction and engagement by improving efficiency and agility in an organisation (Gavrila et al., 2023). Recent studies report that the implementation of RPA is effective in terms of error reduction, cost reduction and efficiency (Aguirre and Rodriguez, 2017). RPA technology facilitates many business processes, which allows employees to work more effectively and make fewer mistakes (Madakam et al., 2019). By automating repetitive processes, for example, banks can devote more resources to personalised customer service. This increases the chances of responding more quickly to customer enquiries and requests (Lakshmi et al., 2024).

2. Biases in Automation Technologies


Automation systems have a significantly positive impact on marketing professionals and customers. However, using these technologies can unintentionally introduce certain biases. Two issues stand out in the use of these technologies: automation bias and algorithmic bias. The first, automation bias, is associated with users' overconfidence in automation systems, while the second, algorithmic bias, refers to misleading effects based on data sets. Therefore, customers and experts need to be careful when making decisions based on automation systems in order to use these technologies correctly.

2.1. Recognizing Automation Bias


Automation bias is defined as the tendency to over-rely on automated systems even though this may lead to incorrect decisions (Goddard et al., 2012). Automation bias is similar to the biases of individuals in decision-making processes. According to social psychologists, individuals mostly make intuitive decisions in their daily lives. Intuitive decision-making means making inferences quickly and simply. That is, when faced with information overload, the individual aims to reach a conclusion quickly and make a reasonably correct decision (Kupfer et al., 2023; Parasuraman & Riley, 1997). Relying
on automated support tools offers a more accessible and acceptable way
for individuals. Faced with information overload, individuals often tend to
avoid complex processes (Mosier & Skitka, 2018). Automation bias and its
negative consequences have been studied in many contexts such as health,
military processes, personnel selection and process control (Kupfer et al.,
2023). This can occur in any area where there is human-system interaction
(Goddard et al., 2012). For example, in critical areas such as aircraft cockpits
and nuclear power plants, the use of automated decision support tools is
common. This situation enables people to make decisions quickly and easily
because they want to make less cognitive effort (Mosier & Skitka, 2018).
When the literature is analysed, two types of errors usually occur in
decision-making technologies that rely on automation. The first is omission
errors. This means that when automation systems work incorrectly or
fail to recognise a problem, people overlook it. The second one is the so-
called commission errors. In this case, people follow incorrect advice and
instructions from automated systems, even though they contradict other
information or without checking alternative information (Skitka et al.,
2000). For example, some organisations use AI-enabled tools to screen CVs
during the recruitment process. AI tools screen candidates by looking at
their CVs and other key criteria. However, under time pressure, HR staff may accept these recommendations without detailed scrutiny. This is an example of automation bias, because HR staff accept the AI algorithm as reliable and ignore the system's errors.
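The consequences of such rubber-stamping can be made concrete with a small simulation: under the invented error and checking rates below, a reviewer who never verifies the screener's output inherits all of its errors (both omission and commission), while even partial independent checking removes a share of them.

import random

random.seed(1)

N = 10_000
TOOL_ERROR_RATE = 0.10   # assumed: the automated screener is wrong 10% of the time
CHECK_RATE = 0.30        # assumed: fraction of recommendations the human verifies

def final_error_rate(check_rate: float) -> float:
    errors = 0
    for _ in range(N):
        tool_wrong = random.random() < TOOL_ERROR_RATE
        checked = random.random() < check_rate
        # Unchecked tool errors slip through: missed faults (omission errors)
        # and wrong advice followed (commission errors) both land here.
        if tool_wrong and not checked:
            errors += 1
    return errors / N

print(f"rubber-stamping:  {final_error_rate(0.0):.3f}")        # ~= the tool's error rate
print(f"partial checking: {final_error_rate(CHECK_RATE):.3f}")  # proportionally lower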
Automation bias is thought to be related to individuals’ intuitive
behaviour instead of examining information carefully. There are various
reasons for this situation in the literature. According to Skitka (1990), the
first one is cognitive laziness. People tend to avoid cognitive effort as much
as possible. They prefer intuitive approaches that require the least cognitive
effort rather than thinking about every detail when making decisions. This
tendency is associated with individuals relying on automation without
validation, especially when their tasks are extremely difficult and complex
(Danelid, 2024). The second is the social loafing factor. Individuals exert less effort in group work because they share responsibility with others (Karau & Williams, 1993). This situation is also seen in human-computer interaction: individuals may perceive automation systems as a teammate and consequently take less responsibility and make less effort. The third factor is the
human tendency to obey authority. Automated systems are perceived as an
authority because they reduce user errors. From this point of view, when
individuals are faced with the information proposed by automation systems,
they tend to accept it without questioning.
Automation bias has also been analysed from different perspectives in the literature. One of these approaches holds that individuals tend to rely on the first information they receive when making decisions. When working with automated decision systems, individuals see the computer's decision first and tend to believe it before forming their own judgement (Danelid, 2024). This bias is also related to the concept of complacency in automated systems. Complacency refers to the situation where individuals working with computerised systems trust automation excessively, do not perform the necessary checks, and assume that everything is in order (Parasuraman & Manzey, 2010). Thus, users tend to find automation systems more reliable and accurate in a biased way.
Automation bias has been seen in many areas, including healthcare, education, the public sector and government (Goddard et al., 2012). For example, previous studies of cockpit crews have shown that automation bias manifests itself in the form of omission and commission errors (Vered et al., 2023). Another study examined the impact of automated diagnostic systems in healthcare on cardiologists' ECG interpretations. The study found that these systems reduced the rate of correct diagnoses by experts and reduced their confidence in their decisions (Bond et al., 2018). The most important features of AI-supported applications in marketing are market segmentation and personalisation (Mirwan, 2023). While AI can help marketers and consumers make decisions to design effective campaigns, these automated systems, especially decision-making systems, can cause automation bias because consumers and marketers tend to accept the decisions or suggestions made by automated systems directly (e.g. Goddard et al., 2012; Kupfer et al., 2023).
In summary, automation bias is users' overconfidence in automated systems, a trust that persists even when the systems' decisions are wrong. Psychologists have explained this phenomenon through concepts such as social loafing, cognitive laziness, and the tendency to obey authority. Two types of errors occur in this context. The first is that experts or decision makers (e.g. consumers, developers) overlook the incorrect operation of automated systems. The second is that decision makers recognise the errors of automated systems but still trust their decisions, even when contradicted by other information.
2.2. Algorithmic Bias


Artificial Intelligence has become increasingly popular as a tool for
increasing efficiency by automating business processes. However, many
researchers and practitioners have also raised concerns about the fairness and
bias of AI (Wang, Harper, & Zhu, 2020; Panch et al., 2019). Specifically, algorithmic bias causes AI to systematically advantage or disadvantage one group (Sen et al., 2020). This has been a major concern in the decision-making processes and marketing activities of machine learning-based algorithms (Akter et al., 2022). In some cases, these biases have led to inequality, unfair results and discrimination, calling trust in AI into question (Shin & Shin, 2023).
Various studies have been conducted on the causes of algorithmic biases
affecting consumers and users. These can be caused by unrepresentative
datasets, poor models, faulty algorithm designs, and human biases when
designing marketing models (Akter et al., 2022). These biases have manifested themselves in various cases, with negative consequences relating to gender, race and socio-economic status (Akter et al., 2021a). For example,
Amazon’s AI-powered facial recognition system ‘Rekognition’ performed
worse in identifying the gender of dark-skinned individuals and women
(Singer, 2018; Wen & Holweg, 2024). Similarly, in 2016, Google established
an AI-supported tracking system to monitor and prevent hate speech on
websites and social media platforms. However, this system incorrectly labelled
tweets by African Americans as hate speech (Martin, 2019). Another case
involves the algorithms used in Google ads, which have resulted in women
being less likely to be shown high-paying jobs (Patel, 2019). A notable case
of the effects of algorithmic biases in the business world was experienced at
Amazon. In 2014, Amazon implemented an AI-enabled system for recruitment decisions and CV screening. After a year of use, it was found that the system had been trained on biased historical data, which advantaged male candidates and discriminated against women (Dastin, 2018).
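The mechanism behind the Amazon case, a model faithfully reproducing the bias embedded in its training labels, can be reproduced on synthetic data. Everything below (the data-generating rule, the features, the thresholds) is an invented illustration, not a reconstruction of Amazon's system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" hiring data: skill is what should matter, but past
# human decisions also penalised group == 1, so the labels are biased.
group = rng.integers(0, 2, n)                 # e.g. a protected attribute
skill = rng.normal(0, 1, n)
past_hire = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# In practice the bias often enters through proxy features even when the
# protected attribute itself is excluded; it is included here for clarity.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, past_hire)

# The trained model reproduces the bias: selection rates differ by group.
pred = model.predict(X)
rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"selection rate, group 0: {rate0:.2f}")
print(f"selection rate, group 1: {rate1:.2f}")
print(f"disparate impact ratio:  {rate1 / rate0:.2f}")  # < 0.8 flags bias ('four-fifths rule')

Note that nothing in the training step is malicious: the model simply learns the regularities in the labels, which is exactly why biased historical data is enough to produce a biased system.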
Although algorithmic bias is a common term, some researchers have
argued that the cause of algorithmic biases is that the data used to train the AI
systems are themselves biased (Gupta & Krishnan, 2020). Similarly, another
study has shown that the source of algorithmic bias is methodological and
social bias in the data sets. In particular, algorithms may lead to bias when
these data sets are not representative of the target population, when the
size of the data sets is small, or when factors such as selection bias and
outgroup homogeneity come into play (Akter et al., 2021b). For example,
according to Weissman (2018), the AI-based recruitment system used by Amazon was discontinued because it was biased against women; the source of the bias was that the training data consisted mostly of men's CVs. Akter et al. (2021b) also mention further sources of algorithmic
biases. These include the small size of data sets, the popularity of some items
over others, and the blind spots that recommendation algorithms create for
users. In addition, these algorithms cause significant limitations in the user
experience by making it difficult to discover certain products.
Another algorithmic bias is methodological bias. In particular, correlation
bias, overgeneralisation of findings, and confirmation bias, where individuals
prefer information that conforms to their beliefs, can methodologically
cause machine learning to produce incorrect results (Thiem et al., 2020). In
addition, another source of algorithmic bias is socio-cultural factors. Socio-demographic patterns that already exist in society may amplify algorithmic judgements and cause discrimination against disadvantaged groups based on factors such as religion, gender and ethnicity (Akter et al., 2021b). For example, it was noted that some of Facebook's adverts,
such as for credit, employment and housing, could not be viewed by certain
groups of African origin (Angwin et al., 2017). Similarly, people of black
ethnicity were more likely to encounter biased results related to crime in
Google searches (Kasperkevic, 2015). Some cases of algorithmic errors
include Facebook’s ads showing gender bias (Lambrecht and Tucker, 2018),
Orbitz offering more expensive travel services to Mac users than Windows
users (BBC, 2012) and Uber and Lyft showing higher prices in areas where
African Americans live (Akter et al., 2022).
In this context, with the development of machine learning and AI,
marketers have made strategic decisions in their respective markets by
creating data sets related to users’ behaviour and personality traits. However,
despite this development, biases in the datasets have caused unequal, unfair,
and unjust effects among users. While theoretical studies explain this issue, studies in industrial and other applied fields remain insufficient (Akter et al., 2023).

3. Algorithmic Errors and Strategies to Enhance Consumer Satisfaction

3.1. Algorithmic Errors and Consumer Satisfaction


Although AI services are transforming business and society, failures have
been seen in some scenarios due to algorithmic errors (Griffith, 2017).
For example, Tesla’s autopilot accidents, bad news suggested in Facebook’s
year-end recommendation photos and videos, Microsoft's racist Tay chatbot, and Amazon sending e-mails in error are some of these failures. In addition, consumers are further frustrated by not knowing why algorithmic errors occur or how to interpret them (Puntoni et al., 2021).
When consumer reactions are analysed in AI-supported services, one
of the main problems is the loss of consumer trust. The literature shows that if individuals have no information about the performance of either the algorithm or the human service provider, they lose trust in both in a similar way. However, when consumers compare algorithmic recommendations with human-assisted services, they are more likely to distrust algorithms once they observe an error or a bad recommendation (Dietvorst, Simmons, and Massey 2015; Longoni et al., 2023). In other words, people
are less tolerant of errors in algorithms, although they recognise that both
algorithms and humans can make mistakes (Dietvorst et al., 2015).
When consumers’ expectations are not met or when they receive a failed
service, AI failures can often evoke negative emotions in consumers rather
than positive reactions. When a chatbot does not understand the customer’s
problem, gives irrelevant answers or demands excessive information,
frustration, anger, feeling cheated and passive defeat are the most common
reactions among customers (Castillo et al., 2021; Zhang et al., 2024). One study found that when consumers interact with a chatbot in anger, the interaction negatively affects consumer satisfaction, firm evaluation and purchase behaviour (Crolic et al., 2022). In general, when a service experience is perceived as positive, consumers interact positively with service providers; when AI tools such as chatbots fail to meet consumers' expectations, however, service failures occur (Gelbrich, 2010). This leaves users feeling angry, frustrated and helpless, and can lead to negative word-of-mouth, complaints and customer revenge (Zhang et al., 2024).
There are various studies examining the impact of chatbots on customer
satisfaction (Castillo et al., 2021; Eren, 2021; Kvale et al., 2020). A
qualitative study with twenty-seven customers revealed five different reasons
for unsuccessful interactions between consumers and chatbots: difficulties relating to authenticity, cognition, emotion, functionality and integration (Castillo et al., 2021). In detail, the study found that customers pay attention to cues such as language structure,
repetitive responses and speed of response to understand whether they are
talking to a chatbot or a human (authenticity). In addition, disruption of
the chat flow and misinterpretations by the chatbot are among the cognitive
challenges. Lack of empathy, lack of personalisation, insufficient effort and
forced interaction were considered emotional problems. Other major challenges are narrow responses and limited options (functionality), and integration problems such as lack of human support and disconnected coordination processes. In summary, chatbots face a range of difficulties related to customer experience, and these problems intensify negative experiences.
Unlike human errors, AI errors tend to be generalised by consumers, because consumers may attribute the errors of one AI system to AI systems as a whole. Users generalise AI errors more widely than human errors; in the literature, this effect is described as algorithmic transference (Longoni et al., 2023). In general, people perceive AI systems as a homogeneous group separate from themselves, whereas they perceive themselves as more heterogeneous and distinct (Longoni et al., 2023). Therefore, consumers may generalise the algorithmic errors of one AI system to all other intelligent systems. These generalisations negatively affect consumer satisfaction (Chen, 2024; Longoni et al., 2023) and reduce consumers' willingness to use AI services (Castillo et al., 2021). Generalising AI errors without compensating for them further worsens the customer's negative experience and reduces the willingness to use AI systems (Mahmood et al., 2022).
In addition, the algorithms used by brands do not always perform as
expected, and in some cases even damage the brand (Srinivasan & Sarial,
2021). In marketing, algorithm errors can negatively impact the consumer
experience or damage consumer expectations of brands. In a survey
conducted by the CMO Council and Dow Jones Inc, 78 per cent of chief
marketing officers expressed concern about algorithm errors damaging their
brands (Vizard, 2017). Thus, although AI systems offer innovations and
conveniences that improve the consumer experience, AI failures also cause
significant mistrust. Users tend to generalise the failure of an algorithm to
all AI technologies. Therefore, it is important for brands offering AI-based
services to be transparent about algorithms and to offer solutions without
completely excluding human support. Otherwise, the anger, disappointment
and loss of trust that arise when consumers’ expectations are not met can
damage brand reputation and endanger future business opportunities.

3.2. Suggestions for Improving Customer Experience in Automation Technologies
Nowadays, organisations are using AI-enabled systems to improve
consumer satisfaction and achieve organisational agility. However,
technological systems sometimes fail to fully meet human expectations.
Especially in emotional and complex tasks, the performance of AI tools
is highly questioned and reactions to algorithmic errors can have serious
consequences. This section discusses how consumers perceive AI tools and under which conditions errors can be minimised.
Various suggestions have been made for consumers to compensate for
AI errors and prevent their negative effects on brands. After AI performs
a task, consumers might react less negatively to the errors caused by the
algorithms if the logic or process behind the algorithm is comprehensible.
When consumers can interpret the algorithm, their reactions become even
less negative. This strategy seems more effective during subjective task phases
(Chen, 2024). Another strategy is to acknowledge mistakes and responsibility,
followed by a sincere apology to make amends for AI mistakes. In a study by
Mahmood et al. (2022), an AI agent that admits responsibility and sincerely
apologizes is perceived to be more intelligent and sympathetic and effective
in recovering from mistakes. Therefore, a well-designed apology method
can be a part of an effective strategy for managing the mistakes of AI agents.
However, it should be noted that a poorly designed apology can sometimes
have a more negative impact than no apology at all. In a study with voice
assistants, participants were less willing to use the AI tool if the AI tool
blamed someone else when apologizing, compared to not apologizing at all
(Mahmood et al., 2022).
For emotional and sensitive interactions or more complex tasks, a hybrid
approach is proposed, where both AI tools and human intelligence can
be utilized. This approach will help achieve a balance between consumer
satisfaction and effectiveness (Mikalef et al., 2021). While AI tools can
effectively be used for more routine tasks, completely replacing a human being remains difficult in some circumstances. Therefore, cooperation between AI tools and humans in inter-organisational marketing processes is recommended (Mikalef et al., 2021). Users and developers sometimes rely excessively on AI-supported automation systems even though these systems can make incorrect judgements, and AI tools may still have problems understanding humans and identifying their needs accurately. Therefore, it is recommended that both chatbots and humans be utilised in online retail transactions to ensure effective human-machine interaction (Chen et al., 2021).
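One simple way to operationalise such a hybrid approach is a confidence threshold: the bot answers only the routine queries it is confident about and hands everything else to a human agent. The sketch below assumes a classifier that exposes a confidence score; the threshold, intent labels and canned answers are illustrative.

CANNED_ANSWERS = {
    "opening_hours": "We are open 9:00-18:00, Monday to Saturday.",
    "order_status": "You can track your order from the link in your e-mail.",
}

def route(query: str, intent: str, confidence: float,
          threshold: float = 0.85) -> str:
    """Hybrid routing: the bot answers only when confident, else escalates."""
    if confidence >= threshold and intent in CANNED_ANSWERS:
        return f"[bot] {CANNED_ANSWERS[intent]}"
    # Low confidence or an unknown intent: keep the human in the loop.
    return f"[human agent] A colleague will take over: '{query}'"

print(route("When are you open?", "opening_hours", confidence=0.95))
print(route("I feel cheated by your warranty process", "complaint", confidence=0.40))

Where to set the threshold is itself a design decision: a lower value raises automation rates but, as the studies above suggest, also raises the risk of the frustrating failures that damage trust.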
As a different approach, it is suggested that service providers should
clearly state the limitations of AI to customers. This approach can help
manage customers’ negative reactions and expectations when AI cannot cope
with complex tasks. Kaplan & Haenlein (2019) suggest clearly explaining AI applications and making them understandable to improve the customer experience and build trust. The Singapore Personal Data Protection Commission (2018) likewise recommends that AI applications be transparent and fair, with their mechanisms clearly disclosed.
Making AI decision-making processes more transparent and
understandable can increase customer trust and acceptance (Akter et al.,
2021; Volkmar et al., 2022). In addition, informing or educating customers
and managers about AI capabilities and limitations can create a more realistic
expectation of AI performance. Given that managers and customers are less
tolerant of AI errors than human errors (Dietvorst et al., 2015; Volkmar et
al., 2022), some competencies can be provided to organisations to increase
the AI literacy of managers and users (Long & Magerko, 2020).
Considering the above studies, no matter how advanced the AI used, user trust and satisfaction will largely depend on users' understanding of the AI systems. To establish healthier and more trust-based relationships with users, being transparent and taking responsibility for mistakes is a better approach than hiding them. The basis of the success of sustainable AI technologies lies in their human-centred design rather than technical perfection.

4. Conclusion
The use of automation technologies to enhance the customer experience and improve business processes is now widespread. However, advanced as these technologies are, they carry risks and biases that users tend to ignore, and this can damage consumer trust. Although AI-supported automation solutions such as chatbots, recommendation systems and automated e-mail services can increase customer satisfaction, algorithmic errors and automation bias can reduce it instead.
In this study, the deceptive effects of automation bias and algorithmic bias on consumers are analysed. While automation bias denotes individuals' overconfidence in automated systems even when these are wrong, algorithmic bias denotes the unfair outcomes experienced by certain groups due to biases in data sets or other causes. This creates serious ethical problems, especially in areas such as recruitment, credit rating, and advertising. For example, in a recruitment application, AI-supported systems may discriminate against certain groups by relying only on
historical data. Similarly, consumer experience, companies' reputation and
brand credibility are greatly affected by errors caused by algorithms.
Different strategies have been proposed to increase the efficiency of automation technologies and positively influence the consumer experience. These include making AI-supported automation systems more transparent, taking responsibility for errors, and ensuring that human-assisted services are not removed entirely. Furthermore, increasing the AI knowledge and skills of consumers and managers will enable more informed and ethical use. For developers' algorithms to provide more impartial and fair solutions, diversifying data sets and conducting audits will increase reliability. In addition, keeping the human factor in AI-supported services plays a critical role in increasing consumer satisfaction.
In this context, although automation technologies create great possibilities for the customer experience in marketing, the ethical and correct use of these technologies must be considered. A human-centred, fair AI approach can create a more sustainable and reliable digital ecosystem. To make the most of AI and automation technologies, both developers and users should not ignore the risks and limitations of these systems. Optimising the advantages of technological systems through ethical, fair practices is essential for a sustainable digital future.

5. Declaration of Interest
“No conflicts of interest exist.”

6. Declaration of generative AI and AI-assisted technologies in the


writing process
“During the preparation of this work the author(s) used ChatGPT 4o to
improve language and readability with caution. After using this tool/service,
the author(s) reviewed and edited the content as needed and take(s) full
responsibility for the content of the publication.”
References
Abrokwah-Larbi, K., & Awuku-Larbi, Y. (2024). The impact of artificial intel-
ligence in marketing on the performance of business organizations: Evi-
dence from SMEs in an emerging economy. Journal of Entrepreneurship in
Emerging Economies, 16(4), 1090–1117.
Adamopoulou, E., & Moussiades, L. (2020). Chatbots: History, technology,
and applications. Machine Learning with Applications, 2, Article
Aguirre, S., & Rodriguez, A. (2017). Automation of a business process using
robotic process automation (RPA): A case study. In Applied Computer
Sciences in Engineering: 4th Workshop on Engineering Applications, WEA
2017, Cartagena, Colombia, September 27-29, 2017, Proceedings 4 (pp. 65–
71). Springer International Publishing.
Akter, S., Dwivedi, Y. K., Biswas, K., Michael, K., Bandara, R. J., & Sajib,
S. (2021a). Addressing algorithmic bias in AI-driven customer manage-
ment. Journal of Global Information Management, 29(6), 1–27.
Akter, S., Dwivedi, Y. K., Sajib, S., Biswas, K., Bandara, R. J., & Michael, K.
(2022). Algorithmic bias in machine learning-based marketing models.
Journal of Business Research, 144, 201–216.
Akter, S., McCarthy, G., Sajib, S., Michael, K., Dwivedi, Y. K., D’Ambra, J.,
& Shen, K. N. (2021b). Algorithmic bias in data-driven innovation in
the age of AI. International Journal of Information Management, 60,
102387.
Akter, S., Sultana, S., Mariani, M., Wamba, S. F., Spanaki, K., & Dwivedi, Y. K.
(2023). Advancing algorithmic bias management capabilities in AI-dri-
ven marketing analytics research. Industrial Marketing Management,
114, 243–261.
André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D., Goldstein, W.,
... & Yang, H. (2018). Consumer choice and autonomy in the age of arti-
ficial intelligence and big data. Customer Needs and Solutions, 5, 28–37.
Angwin, J., Tobin, A., & Varner, M. (2017). Facebook (still) let-
ting housing advertisers exclude users by race. ProPub-
lica. [Link]
discrimination-housing-race-sex-national-origin.
BBC News. (2012, June 26). Travel site Orbitz offers Mac users more costly
hotels. BBC News. [Link]
Berger, B., Adam, M., Rühr, A., & Benlian, A. (2021). Watch me improve—
Algorithm aversion and demonstrating the ability to learn. Business &
Information Systems Engineering, 63(1), 55–68.
Bond, R. R., Novotny, T., Andrsova, I., Koc, L., Sisakova, M., Finlay, D., ... &
Malik, M. (2018). Automation bias in medicine: The influence of auto-
mated diagnoses on interpreter accuracy and uncertainty when reading
electrocardiograms. Journal of Electrocardiology, 51(6), S6–S11.
Boomerang. (n.d.). Boomerang for Gmail homepage. Retrieved December 10,
2024, from [Link]
Castillo, D., Canhoto, A. I., & Said, E. (2021). The dark side of AI-powered
service interactions: Exploring the process of co-destruction from the cus-
tomer perspective. The Service Industries Journal, 41(13–14), 900–925.
Chen, C. (2024). How consumers respond to service failures caused by algorit-
hmic mistakes: The role of algorithmic interpretability. Journal of Busi-
ness Research, 176, 114610.
Chen, J. S., Tran-Thien-Y, L., & Florence, D. (2021). Usability and responsi-
veness of artificial intelligence chatbot on online customer experience in
e-retailing. International Journal of Retail & Distribution Management,
49(11), 1512–1531.
Crolic, C., Thomaz, F., Hadi, R., & Stephen, A. T. (2022). Blame the bot:
Anthropomorphism and anger in customer–chatbot interactions. Journal
of Marketing, 86(1), 132– 148.
Daase, C., Staegemann, D., Volk, M., Nahhas, A., & Turowski, K. (2020). Au-
tomation of customer-initiated back-office processes: A design science re-
search approach to link robotic process automation and chatbots. [Jour-
nal Name if available], [Volume if available], [Page numbers if available].
Danelid, F. (2024). Automation bias in public sector decision-making: A systematic
review (Master’s thesis). [Umeå University]
Dastin, J. (2018, [Month Day]). Amazon scraps secret AI recruiting tool that
showed bias against women. Reuters. [Link]
Ducheneaut, N., & Bellotti, V. (2001). E-mail as habitat: An exploration of
embedded personal information management. Interactions, 8(5), 30–38.
Eren, B. A. (2021). Determinants of customer satisfaction in chatbot use: Evidence from a banking application in Turkey. International Journal of Bank Marketing, 39(2), 294-311.
Farbod, S. (2024). Exploring the dark side of AI-enabled services: Impacts on custo-
mer experience and well-being [Manuscript in preparation].
Gavrila Gavrila, S., Blanco González-Tejero, C., Gómez Gandía, J. A., & de
Lucas Ancillo, A. (2023). The impact of automation and optimization
on customer experience: A consumer perspective. Humanities and Social
Sciences Communications, 10(1), 1– 10.
Ghalme, S., Shelke, K., Kadam, R., & Tupe, U. (2023, June). Automated ema-
ils and data segregation using Python. In 2023 International Conferen-
ce on Sustainable Computing and Smart Systems (ICSCSS) (pp. 1701–
1705). IEEE.
Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A syste-
matic review of frequency, effect mediators, and mitigators. Journal of the
American Medical Informatics Association, 19(1), 121–127.
Goldberg, K. (2011). What is automation? IEEE Transactions on Automation
Science and Engineering, 9(1), 1–2.
Grevet, C., Choi, D., Kumar, D., & Gilbert, E. (2014, April). Overload is over-
loaded: Email in the age of Gmail. In Proceedings of the SIGCHI Confe-
rence on Human Factors in Computing Systems (pp. 793–802).
Griffith, E. (2017, September 23). 10 embarrassing algorithm fails. PCMag. ht-
tps://[Link]/feature/356387/10-embarrassing-algorithm-fails
Groover, M. P. (2016). Automation, production systems, and computer-integra-
ted manufacturing (4th ed.). Pearson Education India.
Gupta, D., & Krishnan, T. S. (2020). Algorithmic bias: Why bother. California
Management Review, 63(3), 1–7.
Hagen, L., Uetake, K., Yang, N., Bollinger, B., Chaney, A. J., Dzyabura, D.,
& Zhu, Y. (2020). How can machine learning aid behavioral marketing
research? Marketing Letters, 31, 361–370.
He, X., Liao, L., Zhang, H., Nie, L., Hu, X., & Chua, T. S. (2017, April). Neu-
ral collaborative filtering. In Proceedings of the 26th international confe-
rence on world wide web (pp. 173-182).
Hitomi, K. (1994). Automation—Its concept and a short history. Technovati-
on, 14(2), 121–128.
Huseynov, F. (2023). Chatbots in digital marketing: Enhanced customer expe-
rience and reduced customer service costs. In Contemporary approaches
of digital marketing and the role of machine intelligence (pp. 46–72).
IGI Global.
Janssen, C. P., Donker, S. F., Brumby, D. P., & Kun, A. L. (2019). History and
future of human- automation interaction. International Journal of Hu-
man-Computer Studies, 131, 99– 107.
Kaczorowska-Spychalska, D. (2019). How chatbots influence marketing. Mana-
gement, 23(1), [251-268]. [Link]
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in
the land? On the interpretations, illustrations, and implications of artifi-
cial intelligence. Business Horizons, 62(1), 15–25.
Karami, A., Shemshaki, M., & Ghazanfar, M. (2024). Exploring the ethical
implications of AI-powered personalization in digital marketing. Data
Intelligence. Advance online publication.
Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review
and theoretical integration. Journal of Personality and Social Psychology,
65(4), 681–706.
94 | The Misleading Power of AI-Powered Automation

Kasperkevic, J. (2015, July 2). Google says sorry for racist auto-tag in pho-
to app. The Guardian. [Link]
jul/01/google-sorry-racist- auto-tag-photo-app
Kokkinou, A., & Cranage, D. A. (2013). Using self-service technology to redu-
ce customer waiting times. International Journal of Hospitality Manage-
ment, 33, 435– 445.
König, M., Bein, L., Nikaj, A., & Weske, M. (2020). Integrating robotic process
automation into business process management. In Business Process Ma-
nagement: Blockchain and Robotic Process Automation Forum: BPM
2020 Blockchain and RPA Forum, Seville, Spain, September 13–18,
2020, Proceedings 18 (pp. 132–146). Springer International Publishing.
Kupfer, C., Prassl, R., Fleiß, J., Malin, C., Thalmann, S., & Kubicek, B. (2023).
Check the box! How to deal with automation bias in AI-based personnel
selection. Frontiers in Psychology, 14, 1118723.
Kushmerick, N., & Lau, T. (2005, January). Automated email activity mana-
gement: An unsupervised learning approach. In Proceedings of the 10th
International Conference on Intelligent User Interfaces (pp. 67–74).
Kvale, K., Freddi, E., Hodnebrog, S., Sell, O. A., & Følstad, A. (2020, Novem-
ber). Understanding the user experience of customer service chatbots:
what can we learn from customer satisfaction surveys?. In International
Workshop on Chatbot Research and Design (pp. 205-218). Cham: Springer
International Publishing.
Lakshmi, N. D. R. T. V., Yadav, N. L. S., & Reddy, N. M. A. (2024). Robotic
process automation in banking for better customer experience. Deleted
Journal, 2(07), 2027–2029. [Link]
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, trans-
parent, and accountable algorithmic decision-making processes: The pre-
mise, the proposed solutions, and the open challenges. Philosophy &
Technology, 31(4), 611– 627.
Longoni, C., Cian, L., & Kyung, E. J. (2023). Algorithmic transference: People
overgeneralize failures of AI in the government. Journal of Marketing
Research, 60(1), 170–188.
Madakam, S., Holmukhe, R. M., & Jaiswal, D. K. (2019). The future digital
workforce: Robotic process automation (RPA). JISTEM-Journal of In-
formation Systems and Technology Management, 16, e201916001.
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., ... &
Söllner, M. (2019). AI-based digital assistants: Opportunities, threats,
and research perspectives. Business & Information Systems Engineering, 61,
535-544.
Mahmood, A., Fung, J. W., Won, I., & Huang, C. M. (2022, April). Owning
mistakes sincerely: Strategies for mitigating AI errors. In Proceedings of
Ali Sen | 95

the 2022 CHI Conference on Human Factors in Computing Systems


(pp. 1–11).
Mane, S., & Rayappa, B. V. (2022). Customized Automated Email Bot. In-
ternational Journal for Research in Applied Science & Engineering Te-
chnology (IJRASET), 10(1), 1656-1658. [Link]
ijraset.2022.40083
Mehrotra, S., Degachi, C., Vereschak, O., Jonker, C. M., & Tielman, M. L.
(2023). A systematic review on fostering appropriate trust in hu-
man-AI interaction. arXiv Preprint, arXiv:2311.06305. [Link]
abs/2311.06305
Mikalef, P., Conboy, K., & Krogstie, J. (2021). Artificial intelligence as an enab-
ler of B2B marketing: A dynamic capabilities micro-foundations approa-
ch. Industrial Marketing Management, 98, 80–92.
Mirwan, S. H. (2023). Using artificial intelligence (AI) in developing marke-
ting strategies. International Journal of Applied Research and Sustainable
Sciences (IJARSS), 1, 225–238.
Molnár, G., & Szüts, Z. (2018, September). The role of chatbots in formal
education. In 2018 IEEE 16th International Symposium on Intelligent
Systems and Informatics (SISY) (pp. 000197–000202). IEEE.
Mosier, K. L., & Skitka, L. J. (2018). Human decision makers and automated
decision aids: Made for each other? In Automation and human perfor-
mance (pp. 201–220). CRC Press.
Palanque, P., Campos, P. F., Nocera, J. A., Clemmensen, T., & Roto, V. (2019).
User experience in an automated world. In Human-Computer Interacti-
on–INTERACT 2019: 17th IFIP TC 13 International Conference, Pap-
hos, Cyprus, September 2–6, 2019, Proceedings, Part IV (pp. 706–710).
Springer International Publishing.
Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorith-
mic bias: implications for health systems. Journal of global health, 9(2),
020318.
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of
automation: An attentional integration. Human Factors, 52(3), 381–410.
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse,
disuse, abuse. Human Factors, 39(2), 230–253.
Park, S., Zhang, A. X., Murray, L. S., & Karger, D. R. (2019, May). Opportu-
nities for automating email processing: A need-finding study. In Proce-
edings of the 2019 CHI Conference on Human Factors in Computing
Systems (pp. 1– 12).
Patel, P. (2019, April 25). Women less likely to be shown ads for high-paying
jobs. IEEE Spectrum. [Link]
be-shown-ads-for- highpaying-jobs
96 | The Misleading Power of AI-Powered Automation

Personal Data Protection Commission Singapore. (2018). Discussion paper on


artificial intelligence (AI) and personal data – Fostering responsible deve-
lopment and adoption of AI. [Link]
Pricilla, C., Lestari, D. P., & Dharma, D. (2018, August). Designing interaction
for chatbot-based conversational commerce with user-centered design. In
2018 5th International Conference on Advanced Informatics: Concept
Theory and Applications (ICAICTA) (pp. 244–249). IEEE.
Rae, T. (2016). The effect of marketing automation on customer experience
(Bachelor’s thesis). [Aalto University],
Ribeiro, J., Lima, R., Eckhardt, T., & Paiva, S. (2021). Robotic process auto-
mation and artificial intelligence in Industry 4.0–A literature review. Pro-
cedia Computer Science, 181, 51–58.
Sen, S., Dasgupta, D., & Gupta, K. D. (2020, July). An empirical study on
algorithmic bias. In 2020 IEEE 44th Annual Computers, Software, and
Applications Conference (COMPSAC) (pp. 1189–1194). IEEE.
Shi, Y., Larson, M., & Hanjalic, A. (2014). Collaborative filtering beyond the
user-item matrix: A survey of the state of the art and future challen-
ges. ACM Computing Surveys (CSUR), 47(1), 1-45.
Shin, D., & Shin, E. Y. (2023). Data’s impact on algorithmic bias. Computer,
56(6), 90–94.
Singer, N. (2018, July 24). Amazon is pushing facial technology that a study
says could be biased. The New York Times. [Link]
[full-article-link-if-available]
Singh, S., & Namekar, S. (2020). A review on automation of industries. Inter-
national Journal of Engineering Applied Sciences and Technology, 4(12),
298–300.
Skitka, L. J., Mosier, K. L., Burdick, M., & Rosenblatt, B. (2000). Automati-
on bias and errors: Are crews better than individuals? The International
Journal of Aviation Psychology, 10(1), 85–97.
Srinivasan, R., & Sarial-Abi, G. (2021). When algorithms fail: Consumers’
responses to brand harm crises caused by algorithm errors. Journal of
Marketing, 85(5), 74–91.
Su, X., & Khoshgoftaar, T. M. (2009). A survey of collaborative filtering tech-
niques. Advances in artificial intelligence, 2009(1), 421425.
Thiem, A., Mkrtchyan, L., Haesebrouck, T., & Sanchez, D. (2020). Algorithmic
bias in social research: A meta-analysis. PLOS ONE, 15(6), e0233625.
Van Esch, P., & Stewart Black, J. (2021). Artificial intelligence (AI): Revo-
lutionizing digital marketing. Australasian Marketing Journal, 29(3),
199–203.
Ali Sen | 97

Vaughan, P. (2012, September 26). Email marketing automa-


tion: Examples and strategies. HubSpot Blog. Upda-
ted April 18, 2023. [Link]
marketing-automation-examples#what-is-email-marketing-auto
Vered, M., Livni, T., Howe, P. D. L., Miller, T., & Sonenberg, L. (2023). The
effects of explanations on automation bias. Artificial Intelligence, 322,
103952.
Vizard, S. (2017, December 30). The brand safety fallout: Three in four marke-
ters say brand reputation has taken a hit. Marketing Week. [Link]
[Link]/2017/09/26/brand-safety-fallout/
Volkmar, G., Fischer, P. M., & Reinecke, S. (2022). Artificial intelligence and
machine learning: Exploring drivers, barriers, and future developments
in marketing management. Journal of Business Research, 149, 599–614.
Weissman, C. G. (2018, October 10). Amazon created a hiring tool using A.I.
It immediately started discriminating against women. Slate. [Link]
com/business/2018/10/amazon-artificial-intelligence-hiring- d i s c r i m i -
[Link]
Wen, Y., & Holweg, M. (2024). A phenomenological perspective on AI ethical
failures: The case of facial recognition technology. AI & Society, 39(4),
1929– 1946.
Wertenbroch, K. (2019). From the editor: A manifesto for research on automa-
tion in marketing and consumer behavior. Journal of Marketing Behavi-
or, 4(1), 1–10.
Xu, Y., Shieh, C. H., van Esch, P., & Ling, I. L. (2020). AI customer service:
Task complexity, problem-solving ability, and usage intention. Australasi-
an Marketing Journal, 28(4), 189–199.
Yang, B., Han, C., Li, Y., Zuo, L., & Yu, Z. (2021). Improving conversational re-
commendation systems’ quality with context-aware item meta informati-
on. arXiv Preprint, arXiv:2112.08140. [Link]
Yıldıran, Y. D., & Erdem, Ş. (2024). Yapay zeka tabanlı chatbot hizmetinin kul-
lanıcı alışkanlık ve davranışları üzerine etkileri ve bir uygulama. Marmara
Üniversitesi İktisadi ve İdari Bilimler Dergisi, 46(1), [page numbers if
available].
Zhang, R. W., Liang, X., & Wu, S. H. (2024). When chatbots fail: Exploring
user coping following a chatbot-induced service failure. Information Te-
chnology & People, 37(8), 175–195.
Zhou, J., Wang, B., He, R., & Hou, Y. (2021, November). CRFR: Improving
conversational recommender systems via flexible fragments reasoning on
knowledge graphs. In Proceedings of the 2021 Conference on Empirical
Methods in Natural Language Processing (pp. 4324–4334).
98 | The Misleading Power of AI-Powered Automation
Chapter 7

AI, Addiction, and Consumer Well-Being

Dicle Yurdakul1

Abstract
The increasing integration of Artificial Intelligence (AI) in marketing has
transformed consumer engagement, enabling hyper-personalization and
predictive analytics. While AI-driven marketing enhances efficiency and user
experience, it also raises ethical concerns regarding consumer autonomy,
addiction and manipulation. This chapter explores the ethical implications of
AI in marketing, emphasizing the role of AI in shaping consumer behaviors
through personalized content, algorithmic decision-making and targeted
advertisements. It discusses the neurological and psychological mechanisms
underlying consumer addiction, the cognitive and emotional effects of AI-
driven marketing and the broader social and economic consequences.
To address these concerns, the chapter proposes solutions in three key areas:
corporate responsibility, consumer awareness and policy-level interventions.
Ethical AI marketing requires companies to adopt transparent algorithms,
mitigate biases and implement responsible data practices. Empowering
consumers through digital literacy initiatives and promoting digital well-
being strategies can enhance their ability to navigate AI-driven content
critically. Additionally, regulatory frameworks and industry-wide best
practices are necessary to establish accountability, ensure fair marketing
practices and protect consumer rights.
By fostering an ethical approach to AI in marketing, stakeholders can balance
innovation with consumer well-being, creating a sustainable and equitable
digital marketplace. This chapter highlights the need for collaborative efforts
among businesses, policymakers and researchers to ensure that AI technologies
promote ethical consumer interactions while mitigating potential harms.

1 Assoc. Prof. Dr., Altınbaş Üniversitesi, [Link]@[Link], ORCID ID 0000-0001-9026-8606

[Link]

1. Introduction
Artificial Intelligence (AI) has significantly transformed the marketing
landscape, enabling businesses to deliver unparalleled levels of personalization
and efficiency. Through advanced data analytics and machine learning
algorithms, AI allows companies to understand consumer behavior more
accurately and engage with their audiences in highly targeted ways. However, despite its benefits, AI-driven marketing raises significant concerns regarding
consumer addiction and ethical implications. As AI systems become more
sophisticated in predicting and influencing human behavior, questions
arise about the extent to which they manipulate consumer choices and
foster unhealthy consumption habits. This chapter explores the increasing
influence of AI-driven marketing on consumer addiction within digital
contexts, and examines the ethical considerations surrounding AI’s role in
shaping consumer behavior.

1.1. Overview of AI-driven marketing and its increasing influence


AI-driven marketing leverages machine learning algorithms and data
analytics to tailor strategies to individual consumer preferences, enhancing
both user experience and engagement. Unlike traditional marketing
approaches that rely on broad demographic categories, AI enables hyper-
personalization by analyzing vast amounts of consumer data in real time.
This capability allows marketers to anticipate consumer needs and deliver
personalized content, thereby increasing conversion rates and fostering
long-term customer relationships.
One of the key advantages of AI in marketing is its ability to optimize
decision-making. AI systems can process immense amounts of data, which allows marketers to adjust campaigns dynamically based on consumer
responses (Bhargava & Velasquez, 2020). This agility enhances marketing
effectiveness by ensuring that campaigns remain relevant and adaptive to
shifting consumer trends. Moreover, AI-powered automation streamlines
marketing processes, reducing operational costs while improving efficiency.
AI also plays a crucial role in predictive analytics, forecasting purchasing behaviors and tailoring marketing strategies accordingly. By identifying patterns in consumer behavior, AI can determine which products or services are most
likely to appeal to specific audiences, leading to more targeted advertising
efforts. This predictive capability not only enhances marketing efficiency but
also creates a seamless shopping experience for consumers.
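To make this mechanism concrete, the following minimal Python sketch illustrates one common form of predictive analytics: a purchase-propensity model scored per visitor. The features, synthetic data, and threshold logic are illustrative assumptions, not a description of any particular vendor's system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical behavioral features: [site visits, minutes on site, past purchases]
X = rng.poisson(lam=[5, 20, 1], size=(500, 3)).astype(float)
# Synthetic labels: purchase probability rises with engagement
logits = 0.15 * X[:, 0] + 0.04 * X[:, 1] + 0.8 * X[:, 2] - 3.0
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)
# Score a new visitor; a campaign might trigger a tailored offer above a cutoff
p_buy = model.predict_proba([[8, 35.0, 2]])[0, 1]
print(f"Predicted purchase propensity: {p_buy:.2f}")

A score of this kind is what allows campaigns to be adjusted dynamically for each visitor in real time.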
Furthermore, AI has transformed customer interactions through chatbots
and virtual assistants. These AI-driven tools reduce the burden on human
customer service representatives and improve overall customer satisfaction.
AI-powered recommendation engines enhance user engagement by
suggesting products or content based on past behavior, further demonstrating
AI’s growing influence in shaping consumer choices.
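Recommendation engines of this kind are often built on collaborative filtering. The sketch below, assuming a toy user-item interaction matrix, shows the core idea of item-based filtering: items similar to those a user has already engaged with are ranked highest.

import numpy as np

# Rows are users, columns are items; entries are interaction strengths (e.g., ratings)
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

# Cosine similarity between item columns
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / (np.outer(norms, norms) + 1e-9)

def recommend(user_row, top_k=1):
    scores = sim @ user_row          # weight items by similarity to the user's history
    scores[user_row > 0] = -np.inf   # exclude items the user already consumed
    return np.argsort(scores)[::-1][:top_k]

print(recommend(R[0]))  # recommends item 2 for user 0 (the only item not yet consumed)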
On the other hand, the increasing reliance on AI in marketing also raises
concerns about its impact on consumer autonomy. While AI enhances
marketing precision, it also has the potential to manipulate consumer
decision-making by exploiting psychological triggers. This raises ethical
questions about whether AI-driven marketing strategies prioritize business
profits over consumer well-being. The subsequent sections delve deeper
into the concept of consumer addiction in digital marketing and the ethical
dilemmas associated with AI’s influence.

1.2. Definition of consumer addiction in digital marketing


Consumer addiction in digital marketing refers to compulsive engagement
with digital platforms and content, often fueled by AI-driven strategies
designed to maximize user attention and interaction. AI algorithms play a
crucial role in this phenomenon by curating personalized content that aligns with individual preferences, creating an environment where consumers
remain engaged for long periods of time. This is evident in social media
where AI continuously analyzes user behavior to deliver tailored content,
reinforcing engagement and potential addiction (Bhargava & Velasquez,
2020).
One of the most concerning aspects of AI-driven marketing is its ability
to predict and influence consumer behavior. The concept of “predictive
buying” exemplifies this, where AI anticipates consumer needs and presents
products or services before the consumer actively searches for them (Kumar
et al., 2024). While this capability enhances convenience, it also fosters
compulsive purchasing behaviors by reducing the level of conscious decision-
making involved in the buying process. Consumers may find themselves
repeatedly engaging with digital platforms and making impulsive purchases due to the persuasive power of AI-driven recommendations.
The addictive nature of AI-powered marketing strategies is further
exacerbated by gamification techniques that encourage repeated
engagement. Many digital platforms incorporate reward-based systems,
such as personalized notifications, incentives and limited-time offers, all of
which leverage psychological principles to sustain user interaction. These
strategies create dopamine-driven reward loops that make it increasingly
difficult for users to disengage, which further contributes to behavioral addiction patterns.

2. The Science of Consumer Addiction

2.1. Neurological and Psychological Mechanisms


Consumer addiction encompasses behaviors such as compulsive shopping, excessive gambling, and overindulgence in digital media. As this relatively new form of addiction has become a significant concern in contemporary society, understanding the neurological and psychological mechanisms underlying these behaviors becomes crucial.
The brain’s reward system, and in particular the mesolimbic dopamine
pathway, is central to the development of addictive behaviors. This pathway originates in the ventral tegmental area (VTA) and projects to the nucleus accumbens (NAcc), a region critical for processing reward and reinforcement
(Volkow & Morales, 2015). Engaging in rewarding activities such as
shopping, gambling or consuming digital content stimulates dopamine
release in the NAcc, producing pleasurable sensations and reinforcing the
behavior (Kalivas, 2009). Repeated exposure to these stimuli strengthens
neural pathways associated with compulsive behavior and leads to
sensitization and habit formation (Robinson & Berridge, 1993).
Neuroadaptations in the reward system further contribute to consumer
addiction. Studies indicate that prolonged engagement with addictive
stimuli leads to an overexpression of ΔFosB, a transcription factor associated
with heightened sensitivity to rewards (Nestler, 2014). Increased ΔFosB
levels enhance motivation for addictive stimuli while diminishing interest in
naturally rewarding activities (Leyton & Vezina, 2013). This phenomenon is
particularly relevant in digital media addiction where individuals repeatedly
seek out social validation through likes and notifications, mirroring the
reinforcement mechanisms observed in substance addiction (Berridge &
Robinson, 2016).
Another key neurological factor is impairment of the prefrontal cortex, the region responsible for decision-making and impulse control. fMRI studies reveal that individuals with compulsive consumption tendencies exhibit decreased activity in the prefrontal cortex, which impairs their ability to regulate cravings and resist impulsive behaviors (Goldstein & Volkow, 2011). This dysfunction explains why individuals with shopping
addiction or gambling disorders continue engaging in excessive spending
despite negative consequences.
Additionally, the role of stress and negative affect in consumer addiction has been well-documented. The amygdala, a brain region
involved in processing emotions, interacts with the reward system to drive
compulsive behaviors in response to stress or anxiety (Koob & Volkow,
2016). This interaction explains why individuals often resort to retail
therapy or digital escapism as coping mechanisms for emotional distress.
Studies suggest that chronic stress increases susceptibility to addiction by
altering dopaminergic transmission, making individuals more prone to compulsive consumption (Everitt & Robbins, 2016).
Beyond neural mechanisms, psychological factors also play a crucial role
in consumer addiction. One of the primary drivers is emotional regulation,
where individuals engage in addictive behaviors to cope with stress, anxiety,
or depression (Kardefelt-Winther, 2014). For example, shopping addiction
is often linked to mood regulation, where individuals experience temporary relief from negative emotions through purchasing. However, this relief is short-lived and leads to a cycle of compulsive spending and subsequent guilt
(Dittmar, 2005).
Cognitive biases also contribute to addictive consumer behavior.
For example, the illusion of control, a common bias observed in gambling addiction, leads individuals to believe they can influence outcomes
despite random chance (Goodie & Fortune, 2013). Similarly, compulsive
shoppers exhibit optimism bias as they overestimate the long-term benefits
of purchases while underestimating the negative financial impact (Dittmar,
2005). These cognitive distortions reinforce addictive behaviors by justifying
repeated engagement with the addictive stimulus.
Furthermore, personality traits such as impulsivity and sensation-seeking
are strong predictors of consumer addiction. Studies indicate that individuals
high in impulsivity struggle with delayed gratification and are more likely to engage in compulsive consumption (Zuckerman & Kuhlman, 2000).
Sensation seekers, who crave novel and stimulating experiences, are particularly susceptible to marketing tactics that exploit their desire for
excitement (Leyton & Vezina, 2013). These personality-driven tendencies
explain why certain individuals are more vulnerable to compulsive shopping
and digital addiction.
Another psychological factor influencing consumer addiction is social
influence and peer pressure. Research suggests that social norms and
perceived expectations significantly impact consumption behavior (Dittmar,
2005). The rise of influencer culture and targeted digital advertising has
worsened this effect, making individuals more likely to engage in excessive consumption to conform to societal trends (Berridge & Robinson, 2016).
This phenomenon is particularly evident in social media-driven consumerism
where individuals seek validation through material possessions.

3. AI, Personalization, and Addiction Triggers

3.1. Personalized Marketing & Algorithmic Manipulation


The introduction of Artificial Intelligence (AI) has revolutionized
personalized marketing by enabling the analysis of vast datasets to tailor
content and advertisements to individual preferences. This personalization
enhances user engagement and drives consumer behavior. However, it also raises concerns about algorithmic manipulation, where AI systems exploit cognitive biases to influence consumer decisions, potentially leading to addictive behaviors.
AI-driven personalized marketing leverages machine learning algorithms
to predict consumer preferences and deliver targeted content. By analyzing
user behavior, purchase history, and online interactions, AI systems can
create detailed consumer profiles, which allow marketers to customize
advertisements and recommendations (Shin & Park, 2019). This level of
personalization can enhance user experience by presenting relevant products
or services, thereby increasing the likelihood of engagement and conversion.
However, the same algorithms that facilitate personalization can also be used to manipulate consumer behavior. Algorithmic manipulation involves designing AI systems to exploit cognitive biases and psychological vulnerabilities, nudging consumers toward decisions that may not align with their best interests (Zuboff, 2019). For instance, AI can identify users susceptible to impulse buying and strategically present limited-time offers to encourage immediate purchases. This practice raises ethical concerns, as it blurs the line between persuasive marketing and exploitation.
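To illustrate how simple such targeting logic can be, consider the hedged sketch below. The scoring rule, field names, and threshold are invented for exposition; real systems learn such rules from data, but the underlying pattern of flagging susceptible users and queueing a scarcity prompt is the same.

# Hypothetical sketch of the targeting logic described above: flag users whose
# behavior suggests impulse-buying susceptibility and queue a scarcity prompt.
# Weights, thresholds, and field names are illustrative assumptions only.
def should_show_limited_time_offer(profile: dict) -> bool:
    impulse_score = (
        0.5 * profile["late_night_sessions"]
        + 0.3 * profile["cart_adds_per_visit"]
        + 0.2 * profile["past_flash_sale_purchases"]
    )
    return impulse_score > 2.0  # arbitrary cutoff for illustration

user = {"late_night_sessions": 3, "cart_adds_per_visit": 2.5, "past_flash_sale_purchases": 1}
if should_show_limited_time_offer(user):
    print("Deploy countdown banner: 'Offer ends in 10:00'")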
Moreover, the opacity of many AI algorithms poses challenges in detecting and regulating such manipulative practices. Consumers are often unaware of the extent to which their data is collected and utilized to influence their decisions, and this lack of transparency undermines consumer autonomy (Pasquale, 2015). Furthermore, continuous exposure to personalized content may create echo chambers, which in turn contribute to addictive engagement with specific platforms or content types (Pariser, 2011).
3.2. Dark Patterns in Marketing


Dark patterns refer to user interface designs crafted to manipulate users
into actions they might not have intended, benefiting the service provider
at the user’s expense. These deceptive designs exploit human psychology,
leading to unintended subscriptions, purchases, or data sharing (Gray et
al., 2018). In the context of AI-driven personalization, dark patterns can be
seamlessly integrated into digital interfaces, making them more effective and
harder to detect.
Common examples of dark patterns include disguised advertisements
that appear as regular content and misleading opt-out options that make
declining services cumbersome (Mathur et al., 2019). When combined with
AI, these patterns can be personalized based on user behavior. For instance, an AI system might detect a user's hesitation during a purchase and deploy a pop-up offering a limited-time discount, pressuring the user into completing the transaction.
The integration of dark patterns in AI-driven marketing strategies raises
significant ethical and legal issues. Such practices can lead to consumer harm, including financial loss and compromised privacy. Moreover, they erode trust in digital platforms and can have long-term negative impacts on user well-being (Bösch et al., 2016). Regulatory bodies have begun to take action against the detrimental effects of these dark patterns. For example, the European Union's General Data Protection Regulation (GDPR) mandates that consent for data collection “must be freely given, specific, informed and unambiguous”. This regulation aims to curb the use of deceptive designs that force users into data sharing (Utz et al., 2019).

4. Mental Health Implications


As AI-driven consumer experiences become increasingly personalized
and engaging, the potential for addiction and its subsequent mental health
effects continues to grow. The intersection of technology, marketing
strategies, and consumer behavior has led to a landscape where individuals
are exposed to persistent stimuli designed to capture attention and influence
decisions. While these innovations offer convenience and personalization,
they also contribute to cognitive overload, emotional distress and long-term
psychological consequences. Understanding these mental health implications
is crucial for addressing the risks associated with AI-driven consumerism
and formulating interventions that promote healthier engagement.
4.1. Cognitive and Emotional Effects


The psychological impact of consumer addiction extends beyond mere habit, influencing cognitive functions and emotional well-being. One of the primary cognitive effects of excessive consumerism is decision fatigue, in which individuals become mentally exhausted from the constant bombardment of choices presented through algorithmic recommendations (Baumeister et al., 2007). AI-driven platforms optimize engagement by continuously suggesting products and services tailored to user preferences. However, this persistent exposure to choices can impair cognitive processing and reduce individuals' ability to make rational decisions over time (Schwartz, 2004).
Moreover, dopaminergic reinforcement mechanisms play a crucial role in the emotional impact of consumer addiction. Research has shown that online shopping and social media engagement stimulate the brain's reward system in a manner similar to substance addiction (Montag et al., 2019). The anticipation of a purchase, the act of acquiring a product, or receiving positive social feedback (such as likes and comments) triggers dopamine release, which may reinforce compulsive engagement. Over time, individuals may experience tolerance, requiring more frequent or intense engagement to achieve the same level of satisfaction (Volkow et al., 2016).
Additionally, mood disorders such as anxiety and depression are closely
linked to compulsive consumer behaviors. Studies indicate that individuals
who engage in excessive shopping or digital consumption often do so as
a coping mechanism for stress or negative emotions (Dittmar, 2005).
However, rather than alleviating distress, these behaviors often intensify
underlying psychological issues. The pleasure derived from consumer-driven
activities is frequently followed by post-purchase regret, financial guilt and
increased emotional distress (Ridgway et al., 2008). Problematic smartphone
use is associated with reduced attention span, memory deficits and difficulty
in emotional regulation (Elhai et al., 2017). Excessive engagement with
algorithmically curated content on social media platforms leads to a
dichotomy between real-life experiences and curated online personas,
fostering low self-esteem and social comparison (Vogel et al., 2014). This
negative self-perception can contribute to emotional dysregulation and may
reinforce cycles of compulsive digital engagement.

4.2. Social and Economic Consequences


Beyond the individual level, the mental health implications of consumer
addiction extend to broader societal and economic concerns. One of the most
profound social consequences is the erosion of meaningful interpersonal
relationships. As individuals become more absorbed in personalized digital
experiences, real-life social interactions often diminish, which may lead
to increased loneliness and isolation (Twenge et al., 2018). Research has
demonstrated that compulsive social media use can lead to withdrawal from
face-to-face communication, weakening the quality of personal relationships
and reducing overall well-being (Keles et al., 2020).
Financial distress is a direct consequence of compulsive consumer
behaviors with significant implications for mental health. AI-driven
marketing strategies that exploit impulse-driven purchasing can lead to
accumulated debt and chronic stress. Studies have found that individuals
struggling with compulsive buying disorder often experience depression and
anxiety exacerbated by the overwhelming burden of financial obligations
(Müller et al., 2015).
From a broader perspective, the socioeconomic gap exacerbated by AI-
driven consumerism is another growing concern. Personalized advertising
disproportionately targets low-income consumers who are more susceptible
to manipulative marketing tactics and financially risky consumption habits
(Newman et al., 2018). This dynamic contributes to a cycle of economic
inequality, in which financially vulnerable individuals are more likely to engage in addictive spending behaviors that further entrench them in financial instability (Himmelstein et al., 2019).

5. Moving Towards Ethical AI Marketing


The previous discussions have highlighted the profound implications
of AI-driven personalization and its potential to manipulate consumer behavior, exacerbate mental health issues, and widen social and economic inequalities. The addictive nature of personalized marketing, combined with
the exploitation of cognitive biases and the proliferation of dark patterns,
underscores the urgent need for ethical frameworks to govern the use of
AI in marketing. Without intervention, these practices risk deepening
consumer harm, eroding trust in digital platforms and perpetuating cycles of
inequality. By aligning technological advancements with ethical principles,
it is possible to mitigate the negative consequences of AI-driven marketing
while fostering a more equitable and sustainable digital ecosystem.

5.1. Corporate Responsibility & AI Ethics


The ethical use of AI in marketing begins with corporate accountability.
Businesses must adopt practices that prioritize transparency, fairness and
consumer well-being over short-term profits. One of the primary ethical
concerns in AI marketing is the “black box” nature of algorithms, which
poses significant challenges to consumer autonomy. Consumers are often
unaware of how their data is used to influence their decisions, undermining
their ability to make informed choices (Pasquale, 2015). To address this,
companies must adopt transparent AI systems that allow stakeholders
to understand how data is collected, processed, and used. Implementing
Explainable AI (XAI) tools can help demystify AI-driven decisions,
enabling consumers to understand why they are targeted with specific ads
or recommendations (Gunning et al., 2019). For example, providing users
with clear explanations for personalized content can enhance trust and
accountability. Additionally, companies should explicitly inform users when
AI is used in marketing campaigns and how their data is being utilized.
This can be achieved through transparent privacy policies and user-friendly
consent mechanisms (Martin, 2018).
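A minimal sketch of what such an explanation might look like in practice is given below, assuming a simple linear targeting model: the score is decomposed into per-feature contributions relative to the average user, which could then be surfaced to the consumer. The feature names and data are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.2, -0.4, 0.7]) + rng.normal(scale=0.5, size=300) > 0).astype(int)
features = ["recent_views", "price_sensitivity", "category_affinity"]  # illustrative names

model = LogisticRegression().fit(X, y)
x = X[0]
# Local contribution of each feature versus the average user
contrib = model.coef_[0] * (x - X.mean(axis=0))
for name, c in sorted(zip(features, contrib), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {c:+.2f}")

Ranked contributions like these are the kind of output an XAI layer could translate into a plain-language "why am I seeing this ad?" notice.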
Another critical aspect of corporate responsibility is mitigating bias in
AI systems. AI algorithms often reproduce biases present in the data used for training, which can result in discriminatory outcomes. For instance, certain demographics may be excluded from seeing job ads or offered higher-priced
products (Mehrabi et al., 2021). To combat this, companies must ensure that
their training data is representative of diverse populations. Regular audits of
data sets can identify and correct imbalances, while building diverse teams
of data scientists, marketers, and ethicists can help identify potential biases
during the development phase (Holstein et al., 2019).
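A routine audit of this kind can be quite simple. The sketch below, using synthetic data and hypothetical field names, compares ad-exposure rates across demographic groups and reports the gap, which a review process could check against a set tolerance.

import pandas as pd

# Synthetic targeting log; "group" and "shown_high_value_ad" are illustrative fields
log = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "shown_high_value_ad": [1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
})
rates = log.groupby("group")["shown_high_value_ad"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {gap:.2f}")  # flag for human review if above tolerance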
Respecting consumer privacy is another fundamental pillar of ethical
AI marketing. Companies must adopt robust measures to protect personal
data and prevent misuse. Integrating privacy considerations into the design
of AI systems, often referred to as Privacy by Design, ensures that data
protection is prioritized from the outset. This includes minimizing data
collection, anonymizing data and implementing strong encryption protocols
(Cavoukian, 2011). Moreover, marketers should obtain explicit consent
from consumers before collecting or using their data. Tools like cookie
consent banners and preference centers can empower consumers to control
their data, fostering a sense of agency and trust (Acquisti et al., 2015).
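The sketch below illustrates two of these principles, data minimization and pseudonymization, at the collection layer. The salt handling and field names are simplified assumptions; production systems would manage keys, rotation, and retention far more carefully.

import hashlib

SALT = b"rotate-me-per-dataset"  # illustrative; real deployments manage salts securely

def pseudonymize(user_id: str) -> str:
    # Replace a direct identifier with a salted hash before the event leaves collection
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

raw_event = {"user_id": "alice@example.com", "page": "/shoes", "ssn": "xxx", "ts": 1700000000}
# Keep only the fields the campaign actually needs (data minimization)
minimal_event = {"uid": pseudonymize(raw_event["user_id"]),
                 "page": raw_event["page"], "ts": raw_event["ts"]}
print(minimal_event)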
Establishing clear accountability frameworks is also essential for ethical
AI use. Developing a code of ethics for AI use in marketing provides a clear
framework for employees and stakeholders, while regularly monitoring AI
systems and publishing transparency reports can demonstrate a commitment
to ethical practices (Diakopoulos, 2016).
5.2. Consumer Awareness & Digital Well-being Strategies


Empowering consumers with knowledge and tools to navigate AI-driven
marketing is critical for fostering trust and ensuring digital well-being. Many
consumers are unaware of how AI is used in marketing or how their data
is being utilized, creating a knowledge gap that leaves them vulnerable to
manipulation. To bridge this gap, companies and policymakers must invest
in consumer education initiatives. Public awareness campaigns can inform
consumers about AI’s role in marketing, the benefits of personalization,
and the risks associated with data misuse (Zuboff, 2019). Additionally,
schools, nonprofits and governments can collaborate to teach digital literacy
skills, helping consumers understand how to protect their privacy and make
informed choices online (Livingstone, 2018).
Giving consumers control over their data and how it is used is another
key aspect of ethical AI marketing. Companies should provide tools that
allow consumers to manage their data preferences, such as opting out of
data collection or deleting their information. Transparency dashboards can
show users how their data is being used, enhancing consumer trust and
accountability (Binns et al., 2018). By empowering consumers with control,
businesses can foster a sense of agency and respect for individual autonomy.
Promoting digital well-being is also essential in the age of AI-driven
marketing. The addictive nature of personalized content can lead to
overconsumption, addiction, and mental health issues. To address this,
marketers should avoid manipulative tactics, such as exploiting cognitive
biases or creating addictive content. Instead, they should focus on providing
value and fostering positive consumer experiences (Fogg, 2009). Platforms
can incorporate features that promote digital well-being, such as screen time
limits, reminders to take breaks and tools to reduce exposure to harmful
content (Anderson & Jiang, 2018). By prioritizing consumer well-being,
businesses can create a healthier and more sustainable digital ecosystem.
Encouraging ethical consumption is another way to drive change.
Consumers can support companies that prioritize ethical AI practices,
rewarding businesses that align with their values. Advocacy groups can
also play a role by raising awareness, organizing campaigns, and holding
businesses accountable for unethical practices. By fostering a culture of
ethical consumption, it is possible to create a market that rewards responsible
behavior and drives positive change.
5.3. Industry & Policy-Level Solutions for Responsible AI Use


While individual companies and consumers play a crucial role, systemic
change requires collaboration across industries and the implementation
of robust policies. Industry associations can develop guidelines and best
practices for AI in marketing (IAB, 2020). Introducing certification
programs for ethical AI use can incentivize companies to adopt responsible
practices, recognizing organizations that prioritize ethical AI (IEEE, 2019).
Governments also have a critical role to play in ensuring the ethical use
of AI in marketing through legislation and oversight. For example, the General Data Protection Regulation (GDPR) sets standards for data privacy and consumer rights. Expanding such laws globally can create a
more consistent framework for ethical AI use (Voigt & Von dem Bussche,
2017). Additionally, governments should consider enacting laws specifically
addressing AI ethics, such as requiring transparency in algorithmic decision-
making or prohibiting discriminatory practices (Cath, 2018). By creating
a robust regulatory environment, governments can hold businesses
accountable and protect consumers from harm.
Collaboration between governments, businesses, and civil society
is essential for addressing the complex challenges of AI ethics. These
collaborations can facilitate the development of ethical AI frameworks,
share best practices, and fund research on AI ethics (Stilgoe et al., 2013).
International organizations, such as the United Nations and the World
Economic Forum, can also play a role by facilitating global cooperation on
AI ethics, ensuring that standards are consistent across borders (Jobin et al.,
2019).
Investing in research and innovation is another key strategy for
addressing ethical challenges and unlocking the full potential of AI in
marketing. Funding research on topics like bias mitigation, explainability
and the societal impact of AI can inform best practices and policy decisions
(Mittelstadt et al., 2016). Encouraging the development of innovative tools
such as privacy-preserving AI techniques (e.g., federated learning) and
ethical AI platforms can drive progress in the field (Yang et al., 2019).
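Federated learning merits a brief illustration because it shows how the privacy-preserving idea works: model updates are computed where the data lives, and only aggregated parameters are shared. The sketch below is a conceptual toy version of federated averaging with synthetic client data, not a production protocol.

import numpy as np

def local_update(weights, X, y, lr=0.1):
    # One gradient step of linear regression on a client's private data
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(2)
# Four clients, each holding its own private dataset
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(20):  # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)  # the server averages weights only, never sees raw data
print(w)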
The ethical use of AI in marketing is a business necessity. By embracing
corporate responsibility, empowering consumers, and advocating for
industry-wide solutions, businesses can build trust, foster innovation,
and create a more equitable digital landscape. While challenges remain, a
collaborative and proactive approach can ensure that AI is used to enhance,
rather than exploit, the consumer experience.
References
Acquisti, A., Brandimarte, L., & Loewenstein, G. (2015). Privacy and human
behavior in the age of information. Science, 347(6221), 509–514.
Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew
Research Center. Retrieved from [Link]
Baumeister, R. F., Vohs, K. D., DeWall, C. N., & Zhang, L. (2007). How
emotion shapes behavior: Feedback, anticipation, and reflection, rather
than direct causation. Personality and Social Psychology Review, 11(2),
167-203.
Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan,
K., ... & Zhang, Y. (2018). AI Fairness 360: An extensible toolkit for
detecting and mitigating algorithmic bias. IBM Journal of Research and
Development, 63(4/5), 4:1–4:15.
Berridge, K. C., & Robinson, T. E. (2016). Liking, wanting, and the incen-
tive-sensitization theory of addiction. American Psychologist, 71(8),
670-679.
Bhargava, V. R., & Velasquez, M. (2020). Ethics of the attention economy:
The problem of social media addiction. Business Ethics Quarterly, 30(3),
321-359.
Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N.
(2018). Like trainer, like bot? Inheritance of bias in algorithmic content
moderation. In Proceedings of the International Conference on Social
Informatics (pp. 405–415). Springer.
Bösch, C., Erb, B., Kargl, F., & Kopp, H. (2016). Tales from the dark side:
Privacy dark strategies and privacy dark patterns. Proceedings on Privacy
Enhancing Technologies, 2016(4), 237-254.
Cath, C. (2018). Governing artificial intelligence: Ethical, legal, and technical
opportunities and challenges. Philosophical Transactions of the Royal
Society A: Mathematical, Physical and Engineering Sciences, 376(2133).
Cavoukian, A. (2011). Privacy by design: The 7 foundational principles. Infor-
mation and Privacy Commissioner of Ontario, Canada. Retrieved from
[Link]
De Pelsmacker, P., Geuens, M., & Van den Bergh, J. (2018). Marketing com-
munications: A European perspective (6th ed.). Pearson Education.
Diakopoulos, N. (2016). Accountability in algorithmic decision making. Com-
munications of the ACM, 59(2), 56–62.
Dittmar, H. (2005). Compulsive buying—a growing concern? An examina-
tion of gender, age, and endorsement of materialistic values as predictors.
British Journal of Psychology, 96(4), 467-491.
Elhai, J. D., Levine, J. C., Dvorak, R. D., & Hall, B. J. (2017). Problematic
smartphone use: A conceptual overview and systematic review of rela-
tions with anxiety and depression psychopathology. Journal of Affective
Disorders, 207, 251-259.
Everitt, B. J., & Robbins, T. W. (2016). Drug addiction: Updating actions to
habits to compulsions ten years on. Annual Review of Psychology, 67,
23-50.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V.
& Vayena, E. (2018). AI4People—An ethical framework for a good AI
society: Opportunities, risks, principles, and recommendations. Minds
and Machines, 28(4), 689–707.
Fogg, B. J. (2009). A behavior model for persuasive design. In Proceedings of
the 4th International Conference on Persuasive Technology (pp. 1–7).
ACM.
Goldstein, R. Z., & Volkow, N. D. (2011). Dysfunction of the prefrontal cor-
tex in addiction: Neuroimaging findings and clinical implications. Nature
Reviews Neuroscience, 12(11), 652-669.
Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark
(patterns) side of UX design. In Proceedings of the 2018 CHI Confer-
ence on Human Factors in Computing Systems (pp. 1-14).
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019).
XAI—Explainable artificial intelligence. Science Robotics, 4(37),
eaay7120.
Himmelstein, D. U., Lawless, R. M., Thorne, D., Foohey, P., & Woolhandler,
S. (2019). Medical bankruptcy: Still common despite the Affordable
Care Act. American Journal of Public Health, 109(3), 431-433.
Holstein, K., Vaughan, J. W., Daumé, H., Dudík, M., & Wallach, H. (2019).
Improving fairness in machine learning systems: What do industry prac-
titioners need? In Proceedings of the 2019 CHI Conference on Human
Factors in Computing Systems (pp. 1–16). ACM.
IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-be-
ing with autonomous and intelligent systems. IEEE Standards Associa-
tion. Retrieved from [Link]
IAB. (2020). AI in marketing: Best practices and guidelines. Interactive Adver-
tising Bureau. Retrieved from [Link]
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics
guidelines. Nature Machine Intelligence, 1(9), 389–399.
Kalivas, P. W. (2009). The glutamate homeostasis hypothesis of addiction. Na-
ture Reviews Neuroscience, 10(8), 561-572.
Kardefelt-Winther, D. (2014). A conceptual and methodological critique of
internet addiction research: Towards a model of compensatory internet
use. Computers in Human Behavior, 31, 351-354.
Keles, B., McCrae, N., & Grealish, A. (2020). A systematic review: The influ-
ence of social media on depression, anxiety and psychological distress
in adolescents. International Journal of Adolescence and Youth, 25(1),
79-93.
Koob, G. F., & Volkow, N. D. (2016). Neurobiology of addiction: A neurocir-
cuitry analysis. The Lancet Psychiatry, 3(8), 760-773.
Kumar, V., Ashraf, A. R., & Nadeem, W. (2024). AI-powered marketing:
What, where, and how?. International Journal of Information Manage-
ment, 77, 102783.
Leyton, M., & Vezina, P. (2013). Striatal ups and downs: Their roles in vulner-
ability to addictions in humans. Neuroscience & Biobehavioral Reviews,
37(9), 1999-2014.
Livingstone, S. (2018). Media literacy and the challenge of new information
and communication technologies. The Communication Review, 7(1),
3–14.
Martin, K. (2018). Ethical implications and accountability of algorithms. Jour-
nal of Business Ethics, 160(4), 835–850.
Mathur, A., Acar, G., Friedman, M. J., Lucherini, E., Mayer, J., Chetty, M., &
Narayanan, A. (2019). Dark patterns at scale: Findings from a crawl of
11K shopping websites. Proceedings of the ACM on Human-Computer
Interaction, 3(CSCW), 1-32.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A
survey on bias and fairness in machine learning. ACM Computing Sur-
veys, 54(6), 1–35.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016).
The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2),
2053951716679679.
Montag, C., Lachmann, B., Herrlich, M., & Zweig, K. (2019). Addictive fea-
tures of social media/messenger platforms and freemium games against
the background of psychological and economic theories. International
Journal of Environmental Research and Public Health, 16(14), 2612.
Müller, A., Mitchell, J. E., & de Zwaan, M. (2015). Compulsive buying. The
American Journal on Addictions, 24(2), 132-137.
Nestler, E. J. (2014). Epigenetic mechanisms of drug addiction. Neuropharma-
cology, 76(Pt B), 259-268.
Newman, B. J., Shah, P., & Lauterbach, E. (2018). Who sees political ads?
A model of exposure via Facebook. Journal of Communication, 68(2),
207–231.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you.
New York, NY: Penguin Press.
Partnership on AI. (2020). About the Partnership on AI. Partnership on AI.
Retrieved from [Link]
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
Ridgway, N. M., Kukar-Kinney, M., & Monroe, K. B. (2008). An expanded
conceptualization and a new measure of compulsive buying. Journal of
Consumer Research, 35(4), 622-639.
Robinson, T. E., & Berridge, K. C. (1993). The neural basis of drug craving:
An incentive-sensitization theory of addiction. Brain Research Reviews,
18(3), 247-291.
Schwartz, B. (2004). The paradox of choice: Why more is less. New York, NY:
HarperCollins.
Shin, M., & Park, E. (2019). How do human motivations and characteristics
affect online shopping intention? Asia Pacific Journal of Marketing and
Logistics, 31(1), 25-41.
Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency
in algorithmic affordance. Computers in Human Behavior, 98, 277–284.
Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for
responsible innovation. Research Policy, 42(9), 1568–1580.
Twenge, J. M., Martin, G. N., & Spitzberg, B. H. (2018). Trends in U.S. adolescents' media use, 1976–2016: The rise of digital media, the decline of TV, and the (near) demise of print. Psychology of Popular Media Culture, 8(4), 329-345.
Utz, S., Muscanell, N., & Khalid, C. (2019). Snapchat elicits more jealousy than Facebook: A comparison of Snapchat and Facebook use. Cyberpsychology, Behavior, and Social Networking, 18(3), 141-146.
Vogel, E. A., Rose, J. P., Roberts, L. R., & Eckles, K. (2014). Social compari-
son, social media, and self-esteem. Psychology of Popular Media Culture,
3(4), 206-222.
Voigt, P., & Von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR): A practical guide. Springer.
Volkow, N. D., & Morales, M. (2015). The brain on drugs: From reward to addiction. Cell, 162(4), 712-725.
Volkow, N. D., Koob, G. F., & McLellan, A. T. (2016). Neurobiologic advances from the brain disease model of addiction. New England Journal of Medicine, 374(4), 363-371.
Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology, 10(2), 1–19.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
Zuckerman, M., & Kuhlman, D. M. (2000). Personality and risk-taking: Common biosocial factors. Journal of Personality, 68(6), 999-1029.
Chapter 8

Ethical Dilemmas in AI-Driven Advertising

Serim Paker1

Abstract
Artificial intelligence (AI) has transformed the advertising sector, improving efficiency, personalization, and consumer involvement. Combining big data analytics, programmatic advertising, and automated decision-making, AI-driven advertising creates tailored marketing campaigns with hitherto unheard-of accuracy. But these advances raise ethical concerns about consumer manipulation, privacy, bias, and transparency. The capability of artificial intelligence to employ consumer information for hyper-personalization raises questions about consumer autonomy and data protection laws. Algorithmic bias within AI-powered advertising also has the potential to reinforce social injustices, encouraging discriminatory practices. Deepfakes, chatbots, and voice assistants used in advertising likewise test ethical limits, as deceptive techniques can be employed to manipulate consumer behavior without transparent consent. Emphasizing the need for strong regulations and corporate responsibility, this chapter critiques the ethical issues raised by artificial intelligence-powered advertising. Core issues such as data privacy, algorithmic fairness, and disinformation are discussed alongside possible solutions, including ethical AI design principles, transparency requirements, and regulatory compliance measures. Finally, the chapter advocates a balanced strategy that capitalizes on the capabilities of artificial intelligence while adhering to ethical principles, fostering customer trust and sustainable advertising practices.

1. Introduction
AI has changed advertising, allowing for hyper-personalization, real-
time targeting, and automated decision-making. While these innovations
improve efficiency and consumer interaction, they raise ethical problems.
Issues such as data privacy, bias, and consumer manipulation demand a thorough evaluation of AI's involvement in advertising. This chapter investigates these ethical quandaries, provides real-world examples of AI misuse, and proposes a paradigm for responsible AI-driven advertising.

1 [Link] Dr., Dokuz Eylül University, Izmir Türkiye, [Link]@[Link], [Link]

[Link]

2. The Role of AI in Advertising


Artificial intelligence (AI) has transformed the advertising industry
by providing hyper-personalized marketing strategies and real-time data
analysis. AI-powered systems can process massive volumes of consumer
activity data, enabling firms to develop customized adverts that match
individual interests and browsing behaviors. Automation in ad placement
and campaign optimization has improved efficiency, lowering costs while
increasing engagement. AI also improves creative processes by creating
dynamic content, forecasting trends, and tailoring communications to specific
audiences. While AI provides unprecedented opportunities for advertisers, it also raises ethical questions about privacy, bias, and the manipulation of customer preferences.

2.1. AI-driven Targeting and Personalization


Artificial Intelligence (AI) is substantially transforming various
sectors, particularly e-commerce and healthcare, through its ability to
deliver targeted and personalized experiences based on vast datasets. In
the e-commerce industry, AI-powered personalization techniques employ
complex algorithms to scan consumer behavior and preferences to present
highly personalized product recommendations and content. This has the
advantage of enhancing customer satisfaction, engagement, and loyalty, ultimately shaping market trends within the industry. Raji et al. (2024) emphasize that AI's ability to generate personalized consumer experiences not only enhances sales but also builds brand loyalty by effectively meeting individual customer needs.
AI application in healthcare extends the use of conventional diagnostics
and therapy strategies to include personalized medicine and patient care.
AI algorithms analyze genetic and demographic data to identify patients best positioned to benefit from specific interventions, optimizing those interventions' efficacy (Pawar et al., 2023; Weerarathna et al., 2023). Weerarathna et al. (2023), for instance, describe the use of AI models to predict patient responses to chemotherapy, enabling personalized treatment
strategies. Furthermore, Kokudeva et al. (2024) also investigate the
use of AI to help determine targets and optimize the treatment protocol
using machine learning, showing AI’s capability to generate personalized
therapeutic strategies.
Despite the considerable advantages AI brings to targeting and personalization, challenges remain, particularly around ethics and data privacy. As AI marketing continues to evolve, the equilibrium between using consumer information to generate profit and preserving consumer trust remains delicate. Gupta et al. (2021) raise concerns about the ethics of AI marketing, citing the need for accountability and transparency from businesses that deploy AI tools. This view underscores the need for ethical frameworks as artificial intelligence becomes more deeply embedded in many fields, extending beyond business and marketing to tailored healthcare plans and drug development.
In addition, the integration of AI technologies has far-reaching implications for healthcare providers. AI's ability to perform real-time analysis using wearable devices and point-of-care tests enables healthcare providers to deliver better patient outcomes alongside system efficiency improvements (Yammouri & Lahcen, 2024). Such innovation points toward a future in which real-time analysis enables timely interventions, further individualizing healthcare delivery. However, these technologies also raise the need for robust data handling practices that safeguard patient privacy while making the best use of the data (Wasilewski et al., 2024).
In summary, the application of AI to targeting and personalization is revolutionizing industries through greater customer involvement and optimized patient care. But the intersection of technology, ethics, and consumer privacy has to be balanced carefully to ensure that the full benefit of AI can be achieved while safeguarding consumer interests.

2.2. Advertising with Big Data Analytics


The intersection of artificial intelligence (AI) with targeting and personalization has been studied extensively, particularly in the context of programmatic advertising and big data analytics. Programmatic advertising relies on AI algorithms and vast amounts of data to deliver customized advertising that makes marketing more effective and distinctive. AI-powered platforms scan consumer behavior, preferences, and demographic data to automate the buying and placement of advertising in real time, generating highly targeted marketing strategies that enhance the user experience and conversion rate. According to Holloway (2024), the use of AI and big data analysis in marketing campaigns can significantly enhance customer satisfaction through personalized offers that appeal to individual customers and generate lasting loyalty.
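To make the mechanics concrete, consider the real-time bidding decision at the heart of programmatic advertising: for each impression, the platform estimates the value of showing an ad to a given user and bids accordingly, typically within milliseconds. The following minimal sketch illustrates that decision; the segment names, predicted click-through rates, and bid-shading factor are hypothetical stand-ins for what a trained model and an advertiser's economics would supply.

```python
from dataclasses import dataclass

@dataclass
class AdRequest:
    user_segment: str   # inferred audience segment, e.g. "sports_fan"
    page_context: str   # topic of the page where the ad would appear
    floor_price: float  # minimum CPM bid the exchange will accept (USD)

# Hypothetical model output: predicted click-through rate per (segment, context).
# A production system would query a trained model, not a lookup table.
PREDICTED_CTR = {
    ("sports_fan", "sports"): 0.031,
    ("sports_fan", "news"): 0.012,
    ("bargain_hunter", "shopping"): 0.045,
}

def decide_bid(req: AdRequest, value_per_click: float) -> float | None:
    """Return a CPM bid, or None to sit out the auction.

    Expected value per 1,000 impressions = p(click) * value per click * 1000;
    bidding is only worthwhile above the exchange's floor price.
    """
    ctr = PREDICTED_CTR.get((req.user_segment, req.page_context), 0.0)
    expected_value_cpm = ctr * value_per_click * 1000
    if expected_value_cpm <= req.floor_price:
        return None  # this impression is not worth buying for this advertiser
    return round(0.8 * expected_value_cpm, 2)  # shade the bid to keep margin

request = AdRequest("sports_fan", "sports", floor_price=2.0)
print(decide_bid(request, value_per_click=0.50))  # -> 12.4
```

The point is structural: a value estimate derived from behavioral data, compared against a floor price, automatically decides whether a given consumer is worth targeting at all.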
Big data analytics plays a critical role in the effectiveness of programmatic advertising. Data-driven insights allow marketers to craft compelling campaigns that are precisely targeted based on consumer behavior analysis. The utilization of personal data gathered from various sources, including Internet of Things (IoT) devices, social media platforms, and online transactions, enables marketers to deliver advertisements tailored to the needs and preferences of their audience (Khrais, 2020; Oh et al., 2019). By effectively combining AI's capabilities with extensive data sources, marketers can maximize their outreach and engagement while minimizing advertising wastage.
However, the widespread use of personal information raises privacy-related ethical issues. As Oh et al. (2019) clarify, data brokers accumulate large amounts of personal information, creating a tension between delivering personalized marketing and protecting consumer privacy. Overcoming these challenges involves establishing a transparent framework that governs the collection and use of personal information, preserving consumer trust and ensuring regulatory compliance. The new landscape demands that businesses adopt robust data handling practices to ensure the ethical use of AI and consumer information in programmatic advertising. Ethical issues must be a top priority, as evidenced by research on the balance between privacy and personalization (Park et al., 2023).
Moreover, AI-driven personalization extends beyond mere advertisement targeting to impact consumer behavior on a broader scale. Studies suggest that personalized promotions, such as mobile coupons, can enhance customer engagement and positively influence purchasing decisions (An et al., 2021; Huang et al., 2023). The competitive advantage gained by tailoring marketing messages to consumer profiles aligns marketing strategies with individual preferences, ultimately driving higher conversion rates and more effective brand interactions (Bhuiyan, 2024; Sodiya et al., 2024).
To summarize, the confluence of AI-driven personalization and programmatic advertising with big data analytics offers enormous opportunities for marketers to improve their campaigns. Enterprises can use these technologies to provide individualized experiences while navigating the essential problems of ethical data usage and privacy protection, enabling a sustainable and responsible approach to marketing in the digital era.

2.3. AI-Enhanced Targeting and Personalization, Programmatic Advertising, AI-Generated Content, Chatbots, and Voice Assistants
The integration of artificial intelligence (AI) technologies has transformed many industries, particularly in targeting-based personalization, programmatic advertising, AI-generated content, chatbots, and voice assistants. AI combined with big data analytics offers effective marketing solutions and improves user interactions across digital platforms.

2.3.1. AI-driven Targeting and Personalization with Programmatic Advertising
AI-driven targeting has reshaped marketing strategies, particularly
programmatic advertising, which uses AI algorithms to automate ad buying
and placement in real time. This method enables businesses to analyze
consumer data and behaviors, resulting in targeted advertisements that
boost user engagement and conversion rates. Big data analytics improves
these capabilities by facilitating the extraction of relevant consumer insights,
allowing marketers to tailor their strategies more effectively based on
identified preferences and purchasing patterns. However, this reliance on
personal data requires strong ethical considerations, particularly concerning
privacy, data security, and consumer trust (Kokudeva et al., 2024; Pawar et
al., 2023; Weerarathna et al., 2023).
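As a hedged illustration of how behavioral data drives audience selection, the sketch below scores each user's purchase propensity with a logistic function and admits only those above a campaign threshold. The features, weights, bias, and threshold are all invented for illustration; a production system would learn them from labeled conversion data.

```python
import math

# Hypothetical per-user behavioral features gathered by an ad platform.
users = [
    {"id": "u1", "visits_7d": 9, "cart_adds": 2, "past_purchases": 1},
    {"id": "u2", "visits_7d": 1, "cart_adds": 0, "past_purchases": 0},
]

# Illustrative weights; a real system would learn these from conversion data.
WEIGHTS = {"visits_7d": 0.25, "cart_adds": 0.9, "past_purchases": 0.6}
BIAS = -3.0

def purchase_propensity(user: dict) -> float:
    """Logistic score: a probability-like estimate that the user will convert."""
    z = BIAS + sum(WEIGHTS[f] * user[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# Target only users whose propensity clears the campaign threshold.
audience = [u["id"] for u in users if purchase_propensity(u) > 0.5]
print(audience)  # -> ['u1']
```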
2.3.2. AI Content Generation and Its Consequences


AI-generated content (AIGC) has attracted extensive interest for its promise of automating the production of content in various forms, including text, imagery, and video. Research indicates that the quality of AI-generated content is directly connected to the extent to which it is used and accepted, especially in the learning sector, where it has been linked to greater satisfaction with learning tools (Holloway, 2024; Khrais, 2020). Research also indicates that AI can produce novel and personalized content in response to specific user requests, further cementing its role in personalized marketing strategies. The perception of AI-generated content is, however, complex; users are cautious about content labeled as AI-generated and tend to rate it lower than human-generated content, which can make them avoid AI tools despite their efficiency (Altay & Gilardi, 2024; Zhang & Gosline, 2023).
Detecting AI-generated content poses additional challenges, particularly in academic and professional writing contexts where integrity is paramount. Current studies demonstrate that while AI content detectors can identify machine-generated text with reasonable accuracy, they frequently misclassify human-generated content as AI, raising concerns about their reliability in educational assessments and publishing (Elkhatat et al., 2023; Yadav & Rathore, 2023). This misclassification highlights the need for more nuanced detection tools capable of differentiating between human and AI-generated texts, especially in contexts involving mixed authorship (Howard et al., 2024).
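The false-positive risk is easy to reproduce even with a deliberately naive, hypothetical detector: any system that thresholds a surface-level "AI-likeness" score will flag some human writing. Real detectors rely on model-based signals such as token likelihoods rather than phrase counts, but the toy sketch below exhibits the same failure mode the studies describe.

```python
def ai_score(text: str) -> float:
    """Toy signal: the fraction of tell-tale stock phrases present in the text."""
    tells = ["delve into", "in conclusion", "it is important to note"]
    hits = sum(phrase in text.lower() for phrase in tells)
    return hits / len(tells)

def classify(text: str, threshold: float = 0.34) -> str:
    return "AI-generated" if ai_score(text) >= threshold else "human"

# A formulaic but human-written sentence trips the detector: a false positive
# of exactly the kind that worries educational assessors and publishers.
human_text = "In conclusion, it is important to note that results vary."
print(classify(human_text))  # -> "AI-generated", despite human authorship
```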

2.3.3. Voice Assistants and Chatbots in Consumer Interactions


Chatbots and voice assistants are core components of AI-powered personalization. Chatbots communicate with users through textual conversations, often within a service-related scenario, whereas voice assistants employ voice recognition to create a more naturalistic user experience (Khedekar et al., 2023; Sezgin et al., 2020). These tools have been used across various industries, including education and healthcare, where they facilitate efficient communication and automate operations (Terzopoulos & Satratzemi, 2020). Research has shown that adults and children alike are attracted to the ease of use of these AI tools, which can perform functions from answering questions to managing routine operations (Terzopoulos & Satratzemi, 2020; Khedekar et al., 2023). Their increased usage signifies a shift toward more interactive and responsive digital experiences personalized to the individual needs of users.
In summary, the convergence of programmatic advertising, AI-powered personalization, voice assistants, AIGC, and chatbots has the transformative capability to benefit both users and businesses. That said, issues surrounding ethics, social attitudes, and content identification need to be addressed as these technologies mature. Navigating these complexities will be essential to unlocking the full benefits AI has to offer and delivering higher user satisfaction and engagement.

2.4. AI in Campaign Management and Budget Optimization


Artificial intelligence (AI) is increasingly implemented in marketing, transforming many aspects of campaign management and budget optimization. The synergy between AI and marketing activities has significantly improved operational efficiency, personalization, and the overall impact of marketing strategies.

2.4.1. AI in Campaign Management


AI technologies have demonstrated significant potential in revolutionizing
campaign management by optimizing resource allocation and enhancing
strategic decision-making processes. AI algorithms facilitate real-time
analysis and forecasting, allowing marketers to adjust campaign parameters
and allocate budgets efficiently in response to changing market conditions
(Egorenkov, 2022). The integration of AI not only streamlines campaign
execution but also improves the accuracy of marketing decisions, driving
better engagement and enhancing customer experiences (Lyndyuk et al.,
2024b). For instance, Țîrcovnicu and Haţegan (2023) discuss how AI-
driven data analytics refine customer interactions, fostering a more effective
marketing environment across various sectors, including retail.
The effectiveness of AI in these domains stems from its ability to process
vast amounts of data and uncover insights that would be challenging to
identify manually. Heins (2022) and Arbaiza et al. (2024) emphasize that
AI can enhance campaign personalization by analyzing consumer behavior
patterns, enabling advertisers to deliver tailored messaging that resonates
with specific audience segments. This capability not only enhances user
engagement but also optimizes advertising spend, leading to improved
return on investment (ROI) (Ledro et al., 2022).
2.4.2. Budget Optimization through AI


AI-driven budget optimization is another pivotal aspect that contributes
to the effectiveness of marketing campaigns. Businesses can utilize AI tools
to simulate different budget scenarios, allowing for data-driven decisions
on where to allocate resources for maximum impact (Egorenkov, 2022).
The predictive capabilities of AI can help organizations forecast campaign
performance based on historical data, leading to smarter financial decisions
that align with strategic marketing goals (Noranee & Othman, 2023). The
ability to forecast returns on marketing investments allows brands to allocate
budgets dynamically, ensuring funds are utilized in the most effective areas
of campaign execution.
Research shows that businesses leveraging AI for campaign management
not only maintain a competitive edge but also achieve higher levels of
automation and efficiency (Zancan et al., 2023). Organizations can automate
routine marketing processes, freeing up resources and enabling teams to
focus on strategic initiatives (Arbaiza et al., 2024). The insights garnered
from AI can inform everything from media buying strategies to content
creation, enhancing the precision of marketing actions and reducing wasted
spend in advertising.
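The scenario-simulation idea can be made concrete with a small allocation sketch. Under the toy assumption that each channel's revenue follows a square-root (diminishing-returns) response curve, a greedy routine hands each budget increment to the channel with the highest marginal predicted revenue. The channels and efficiency factors are invented; real media-mix models estimate response curves from historical data.

```python
import math

# Hypothetical channel efficiency factors feeding toy response curves:
# predicted_revenue = efficiency * sqrt(spend), i.e. diminishing returns.
CHANNELS = {"search": 3.0, "social": 2.2, "display": 1.1}

def predicted_revenue(channel: str, spend: float) -> float:
    return CHANNELS[channel] * math.sqrt(spend)

def allocate(total_budget: float, step: float = 100.0) -> dict:
    """Greedy simulation: hand each budget increment to the channel whose
    marginal predicted revenue is highest at its current spend level."""
    alloc = {c: 0.0 for c in CHANNELS}
    remaining = total_budget
    while remaining >= step:
        best = max(
            CHANNELS,
            key=lambda c: predicted_revenue(c, alloc[c] + step)
            - predicted_revenue(c, alloc[c]),
        )
        alloc[best] += step
        remaining -= step
    return alloc

print(allocate(10_000))  # most budget flows to the most efficient channel
```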

2.4.3. Ethical Considerations in AI-Driven Campaigns


Even though artificial intelligence brings benefits for budget control and campaign management, ethical issues must be resolved to guarantee appropriate AI application in marketing. Using personal data for targeted advertising raises questions about customer privacy and consent, so open data policies are necessary to build consumer confidence (Adebayo, 2024). Organizations must create thorough ethical rules to govern the use of artificial intelligence in marketing plans and maximize its advantages without endangering consumer privacy or trust, as Chang and Ke (2023) emphasize.
Finally, AI's support for budget control and campaign management greatly improves the capacity of marketing teams in today's competitive environment. AI technologies are becoming essential tools for marketers seeking accuracy, efficiency, and ethical standards in their campaigns because they can analyze enormous datasets, forecast results, and automate procedures. To fully realize AI's transformative power in marketing, ongoing research and practice must negotiate data security and its ethical consequences.
3. Ethical Dilemmas in AI-Driven Advertising


In the rapidly evolving landscape of AI-driven advertising, ethical
dilemmas have emerged as a subject of considerable concern among scholars
and practitioners alike. The integration of artificial intelligence in advertising
methods has led to significant advantages in personalization and efficiency,
but it also raises critical ethical issues, particularly surrounding consumer
privacy, algorithmic bias, and transparency. Camilleri (2024) argues that all those involved in the research, development, and maintenance of AI systems bear social and ethical responsibilities toward their consumers as well as other stakeholders in society.
One prominent ethical issue in AI-driven advertising is the lack of transparency regarding how AI algorithms function and make decisions. Consumers are sometimes unaware that they are interacting with artificial intelligence systems, which calls their autonomy into question and opens the possibility of manipulation without informed consent. Many people may not understand the consequences of being targeted by AI-driven adverts, and mistrust between customers and companies can result (Kumar & Suthar, 2024). Moreover, algorithmic bias is another alarming aspect, since artificial intelligence systems can reinforce the societal prejudices present in their training data. Such prejudices can result in biased advertising methods, supporting rather than questioning preconceptions (N. Singh, 2023; Ziakis & Vlachopoulou, 2023). Brands must take a proactive approach to managing and auditing these algorithms to ensure fairness and prevent the perpetuation of harmful biases (N. Singh, 2023).
Another critical aspect concerns consumer data privacy. The use of
vast datasets allows AI algorithms to tailor advertisements to specific
consumer behaviors and preferences, yet it raises pressing questions about
data ownership and consent. Ethical marketing frameworks must navigate
the tension between maximizing personalized consumer experiences
and respecting individual privacy rights. As AI technologies scale, the
implications for data protection and ethical governance become increasingly
complex (Camilleri, 2023; Sharma, 2023). Business practices need to
prioritize transparency and accountability, ensuring that consumers can
make informed decisions about their data.
Moreover, how AI depicts women in advertising raises moral issues related to gender portrayal and societal norms. The use of female-presenting chatbots and characters to generate advertising personas raises concerns about the reinforcement of gender stereotypes. AI has the capability to encourage diversity, but it must be employed prudently to avoid reinforcing gender stereotypes within advertising strategies (Greguš & Škvareninová, 2023; Kriaučiūnaitė-Lazauskienė, 2023). The responsibility lies with advertisers to ensure that AI tools are employed to encourage inclusivity rather than exclusivity (Kriaučiūnaitė-Lazauskienė, 2023).
Overall, AI advertising has the potential to revolutionize marketing strategies through enhanced personalization and efficiency, but it poses complex ethical challenges. These include defending consumer privacy, addressing
algorithmic bias, and ensuring gender equity in representations. Structuring
AI marketing strategies around ethical principles is crucial for maintaining
consumer trust and achieving sustainable advertising practices that reflect
societal values and norms.

3.1. Consumer Manipulation and Persuasion


Within the field of consumer behavior, the expression “Consumer
Manipulation and Persuasion” captures the tactics that marketers deploy
in order to influence the decisions that consumers make regarding their
purchases. There are two essential ideas that come to light under this
framework: “Exploiting Consumer Vulnerabilities” and “AI-Based Subconscious
Persuasion Techniques”.

3.1.1. Exploiting Consumer Vulnerabilities


Marketers often capitalize on consumer vulnerabilities, which can range from emotional states to cognitive biases. Vulnerable populations, including those experiencing stress, low self-esteem, or uncertainty, may be particularly susceptible to persuasive techniques. For instance, advertising can employ emotional appeals such as fear or guilt to prompt consumers to make purchasing decisions they would otherwise be unwilling to make (Hibbert et al., 2007). The effectiveness of such tactics depends on consumers' persuasion knowledge. If consumers are conscious that an attempt to manipulate them has been made, they are likely to become defensive, perhaps defusing the emotional appeal the advertisement was meant to create (Alenazi, 2015). Successful manipulation therefore tends to balance the presentation of emotional appeals against consumer defenses grounded in persuasion knowledge, the set of beliefs consumers hold about marketing tactics (Kirmani & Zhu, 2007).
Moreover, AI technologies have further changed the means through which consumer susceptibilities can be exploited. Machine learning algorithms scan consumer data to find emotional triggers and susceptibilities, allowing marketers to produce advertising that effectively targets the emotional states of specific consumers (Hacker, 2021). This targeting raises an ethical issue because the practice can create an exploitative situation in which consumers are steered toward specific choices through repeated reinforcement of their susceptibilities, unbeknownst to them. Hence, consumer vulnerability exploitation is a two-edged sword: it can be employed to promote sales, but it must be handled ethically to avoid manipulation.

3.1.2. AI-Based Subconscious Persuasion Techniques


AI-based subconscious persuasion techniques represent an innovative
intersection of technology and psychology that facilitates subtle influence
on consumer behavior. Marketers are increasingly deploying AI to create
environments where consumers are subtly guided toward specific products
without their conscious awareness. This approach often involves employing
persuasive cues that resonate unconsciously with consumers, thereby
bypassing their direct defenses (Isaac & Grayson, 2019). For example,
advertisements might utilize color psychology or other subliminal techniques
to provoke desired responses, influencing perceptions and purchasing
behavior at a subconscious level (Lim et al., 2020).
One fascinating aspect of AI-driven persuasion is its capacity to adapt
continuously based on real-time consumer behavior and interactions.
Algorithms can fine-tune advertising strategies by learning from consumer
reactions, optimizing the presentation of messages to elicit desired
subconscious responses (Hacker, 2021). However, this raises profound
ethical questions concerning informed consent and the potential for
manipulation. Consumers may not be aware of how their preferences are
shaped by AI, creating what some researchers term a “manipulation loop,”
where the technology’s influence perpetuates consumer vulnerabilities
without transparency (Ryu, 2024).
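One standard mechanism for this kind of continuous adaptation, offered here as an illustrative sketch rather than a technique named by the sources above, is a multi-armed bandit. The epsilon-greedy loop below shifts impressions toward whichever ad variant draws more clicks, converging on the most persuasive message without any human reviewing why it works; the variant names and click-through rates are invented.

```python
import random

# Clicks and impressions per ad variant, updated from live consumer reactions.
stats = {"variant_a": [0, 0], "variant_b": [0, 0]}  # [clicks, impressions]

def choose_variant(epsilon: float = 0.1) -> str:
    """Epsilon-greedy: mostly exploit the best-performing variant so far,
    occasionally explore, so the system keeps adapting in real time."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

def record(variant: str, clicked: bool) -> None:
    stats[variant][1] += 1
    stats[variant][0] += int(clicked)

# Simulated feedback loop: variant_b happens to resonate more (60% vs 20%),
# and the algorithm converges on it without anyone examining why.
true_ctr = {"variant_a": 0.2, "variant_b": 0.6}
for _ in range(1000):
    v = choose_variant()
    record(v, random.random() < true_ctr[v])
print(stats)  # variant_b accumulates the bulk of impressions and clicks
```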
Furthermore, the application of persuasion knowledge becomes crucial
in understanding these subconscious techniques. Consumers equipped
with high levels of persuasion knowledge may become more vigilant
against AI manipulations, potentially leading to a backlash against brands
perceived to exploit such technologies for manipulation. Conversely, lower
levels of persuasion knowledge may leave consumers more susceptible to
subconscious influences, illustrating a critical area for further research and
ethical debate (Kirmani & Zhu, 2007).
In conclusion, "Exploiting Consumer Vulnerabilities" and "AI-Based Subconscious Persuasion Techniques" highlight the nuanced and sometimes contentious relationship between marketing strategies and ethical consumer engagement. As AI continues to evolve, understanding and navigating these dynamics will be essential for both marketers and consumers, fostering a marketplace that prioritizes ethical transparency and consumer empowerment.

3.2. Privacy and Data Protection


In an increasingly digital society, privacy and data protection have become top priorities, especially with the development of artificial intelligence (AI) and its uses in many different fields. The vast capabilities of AI-driven tracking systems, which enable large-scale data collection, heighten the ethical stakes around consumer privacy. Furthermore, legislative protections against possible violations arising from artificial intelligence in advertising, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), are more important than ever.

3.2.1. AI-Driven Tracking Systems and the Risks of Mass Data Collection
AI-driven tracking systems have revolutionized how businesses interact
with consumers by allowing unparalleled access to data on user behavior,
preferences, and interactions. Such systems utilize various methods, including
cookies, mobile app tracking, and social media monitoring, to collect vast
amounts of personal data. This mass data collection raises significant privacy
concerns, especially regarding how this data is stored, analyzed, and utilized
without obtaining informed consent from consumers (Arbaiza et al., 2024;
Xu et al., 2024).
The risks associated with these practices include potential breaches of data privacy, unauthorized usage of personal information, and even identity theft. When organizations gather substantial troves of data using AI, they may inadvertently expose sensitive information, making it a target for cybercriminals. As noted by Huda et al. (2024), the regulatory landscape still struggles to keep pace with technological advancements, creating a gap where consumer privacy can be jeopardized and leading to systemic vulnerabilities in how personal data is safeguarded (Idoko et al., 2024). Thus, while AI facilitates better advertising and consumer engagement, it also invites ethical and legal dilemmas about the extent and manner of data collection (Eriksson, 2024).
3.2.2. GDPR, CCPA, and the Ethical Violations of AI in Advertising
The GDPR and CCPA represent critical legal frameworks designed to
protect consumer privacy rights. The GDPR, implemented in the European
Union, establishes stringent rules on data processing, requiring organizations
to collect personal data responsibly and transparently (Reddy et al., 2020).
Similarly, the CCPA provides California residents with the right to know
what personal information is collected, allowing them to opt out of data
sharing and enabling them to control the utilization of their data (Jin &
Skiera, 2022).
Despite these regulations, ethical violations often occur in the context
of AI in advertising. The potential for misalignment between consumer
expectations and how organizations utilize AI for targeted advertising
presents a significant challenge. AI systems may exploit loopholes in these
regulations or misinterpret consent, leading to instances where consumers
are unaware of how their data is being manipulated for commercial gain
(Hoxhaj et al., 2023). There is a growing concern that these advertising
practices can lead to ethical violations, where the consumer’s autonomy and
data rights are undermined, highlighting the need for stricter compliance
and accountability measures among organizations deploying AI technologies
(Williamson & Prybutok, 2024).
Furthermore, while health information, sensitive personal data, and
biometrics are particularly vulnerable, organizations often lack robust
mechanisms to safeguard this data effectively, leading to further ethical
and legal challenges (Murdoch, 2021) and raising questions about the
adequacy of current regulatory frameworks in addressing these complexities
comprehensively (Liu et al., 2024). Continuous monitoring and adaptation
of these laws are necessary to align with the rapidly advancing landscape of
AI technology and its impact on consumer privacy rights.
In conclusion, the intersection of AI and data protection is fraught
with challenges that necessitate a careful approach to consumer privacy.
Legal frameworks such as GDPR and CCPA must evolve alongside AI
advancements to ensure they effectively mitigate risks associated with data
collection and enhance ethical compliance in advertising practices.

3.3. Bias and Discrimination


Bias and prejudice within artificial intelligence (AI) pose substantial ethical dilemmas with far-reaching consequences for individuals and society at large. Algorithmic bias, a consequence of flawed machine-learning methodologies, arises when AI systems unintentionally mirror prevailing cultural preconceptions embedded in their training data. A systematic review of the field emphasizes the importance of ongoing research, highlighting the complex interplay between bias, technological advancements, and societal impacts, and the critical importance of addressing these issues in future developments (Fazil et al., 2023). This section explores two critical aspects: "Algorithmic Bias and Its Ethical Implications" and "The Risks of Excessive Personalization Narrowing Consumer Choices."

3.3.1. Algorithmic Bias and Its Ethical Implications


Algorithmic bias refers to the systematic and unfair discrimination that
occurs when AI systems produce outcomes that are prejudiced against
certain groups, often based on race, gender, or socioeconomic status. This
bias can originate from various sources, including the datasets used to train
AI models, which may be unrepresentative or reflect historical biases (Fazil
et al., 2024; Min, 2023). For instance, predictive policing algorithms have
been criticized for disproportionately targeting minority communities due
to the biased historical crime data upon which they were trained (Min,
2023). The implications of this bias extend to various sectors, including
hiring practices, loan approvals, and healthcare diagnoses, posing ethical
concerns about fairness and equal treatment (Drage & Mackereth, 2022;
Osasona et al., 2024).
The ethical ramifications of algorithmic bias are extensive. When certain
demographics receive less favorable outcomes due to biased algorithms,
issues of justice and equity are called into question. This undermines
trust in AI technologies and raises concerns regarding accountability—if
an AI system discriminates, who is responsible? Moreover, the continued
implementation of these biased systems perpetuates existing inequalities
and has a cascading effect on public perception of technology (Ferrara,
2023; Sreerama & Krishnamoorthy, 2022). Addressing algorithmic bias
thus necessitates a multifaceted approach that includes not only technical
interventions, but also ethical guidelines and regulatory frameworks aimed
at ensuring equitable treatment for all groups (Fazil et al., 2024; Jobin et
al., 2019).

3.3.2. The Risks of Excessive Personalization Narrowing Consumer Choices
Excessive personalization, while often touted as a means to enhance user experience, can inadvertently narrow consumer choices through algorithmic filtering. When AI systems are designed to curate content tailored to individual user preferences, they can create echo chambers in which consumers are exposed primarily to information and products that align with their existing beliefs and desires (Chu et al., 2022). This phenomenon can constrain the diversity of choices available to consumers, effectively limiting their exposure to novel ideas or alternatives that do not fit their predefined profiles (Aladeen, 2023).
The implications of this narrowing effect extend to areas such as
advertising, content consumption, and social media interaction. Research
has shown that as algorithms refine their targeting capabilities, they tend
to reinforce existing consumer behaviors rather than encourage exploration
and diversification (Christanto et al., 2024; Sun et al., 2020). As a result,
consumers may find themselves trapped within a narrow perception of
available options, which can adversely affect their decision-making processes
and overall satisfaction with their experiences (Xie & Huang, 2023).
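One engineering countermeasure to this narrowing, sketched below with invented item names and scores, is to re-rank recommendations with an explicit diversity bonus so that categories the consumer has not yet seen can compete with pure relevance. This is an illustrative technique rather than one prescribed by the studies cited above.

```python
# Candidate items: (name, personalization score, content category) - invented.
candidates = [
    ("running_shoes", 0.95, "sports"),
    ("gym_bag", 0.90, "sports"),
    ("cookbook", 0.55, "food"),
    ("novel", 0.50, "books"),
]

def rerank(items, diversity_weight: float, k: int = 3):
    """Greedy re-ranking: each pick trades relevance against a bonus for
    categories not yet shown, loosening the filter bubble that pure
    relevance ranking would otherwise create."""
    chosen, shown = [], set()
    pool = list(items)
    while pool and len(chosen) < k:
        best = max(
            pool,
            key=lambda it: it[1] + (diversity_weight if it[2] not in shown else 0.0),
        )
        chosen.append(best[0])
        shown.add(best[2])
        pool.remove(best)
    return chosen

print(rerank(candidates, diversity_weight=0.0))  # ['running_shoes', 'gym_bag', 'cookbook']
print(rerank(candidates, diversity_weight=0.5))  # ['running_shoes', 'cookbook', 'novel']
```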
Moreover, this excessive personalization raises ethical concerns regarding
autonomy and informed consent. Consumers may unknowingly surrender
their agency as algorithms dictate the scope of their choices (Rosales &
Fernández-Ardèvol, 2019). This is exemplified in the realm of targeted
advertising, where data-driven algorithms prioritize immediate sales over
delivering a broader spectrum of relevant alternatives. Ultimately, the risks
associated with excessive personalization necessitate a careful evaluation
of the balance between enhancing user experiences and maintaining the
integrity of consumer choice (Illia et al., 2022).
In conclusion, addressing bias and discrimination in AI involves a two-
pronged exploration of algorithmic bias and the risks posed by excessive
personalization. Both aspects highlight the need for comprehensive ethical
standards and regulatory measures to facilitate the deployment of AI
technologies in a manner that promotes fairness, transparency, and consumer
choice.

3.4. Autonomous Decision-Making in Advertising


Autonomous decision-making in advertising refers to the use of artificial
intelligence (AI) systems that operate independently or with limited human
input to create, manage, and optimize advertising campaigns. AI’s ability
to analyze vast amounts of data and execute complex strategies has led
to its growing use in advertising practices. This discourse focuses on two primary areas: "AI Surpassing Human Decision-Making in Advertising Strategies" and "Ethical Concerns Regarding Consumer Autonomy and Misinformation" (Lyndyuk et al., 2024a).

3.4.1. AI Surpassing Human Decision-Making in Advertising Strategies
AI technologies have the potential to surpass human decision-making
capabilities primarily through their proficiency in processing large datasets
at speeds and accuracies unattainable by humans. This computational
superiority allows AI to analyze consumer behavior, preferences, and
engagement patterns, leading to highly targeted advertising tactics. For
instance, AI systems can predict market trends and optimize media spending
in real-time, leading to enhanced advertising effectiveness and cost-efficiency
compared to traditional methods (Kumar & Suthar, 2024; N. Singh, 2023).
As stated by Arbaiza et al. (2024), AI's ability to handle predictive
analysis significantly improves campaign relevance by customizing content
to individual users’ needs and preferences. These advancements allow
marketers to segment audiences more effectively and deploy personalized
advertisements, maximizing engagement and conversion rates while
continuously learning from performance data (Kumar & Suthar, 2024;
N. Singh, 2023). Furthermore, AI’s capacity to manage multi-channel
campaigns ensures consistent messaging across platforms, an aspect that
can be challenging for human-managed strategies due to the complexity
involved. This shift toward AI-dominated decision-making is transforming
the advertising landscape, where speed and precision may outweigh human
intuition and experience.
However, there are limitations to AI’s effectiveness, particularly
concerning its reliance on historical data. If the training data is biased or
unrepresentative, the resultant advertising strategies could reinforce existing
stereotypes or create misleading narratives (Ziakis & Vlachopoulou, 2023).
Thus, while AI has the potential to enhance decision-making processes, it
is crucial to ensure that the data used for training is comprehensive and free
from bias to avoid undermining the ethical integrity of advertising outcomes.

3.4.2. Ethical Concerns Regarding Consumer Autonomy and Misinformation
The escalation of autonomous decision-making in advertising raises numerous ethical concerns, particularly regarding consumer autonomy and the potential for misinformation. One of the primary issues is that consumers may not be fully aware of how AI-driven advertisements are shaping their decisions. The overwhelming personalization capabilities of AI can effectively manipulate consumer choices, often steering them toward products or services that align with commercial interests rather than genuine consumer need or desire (Ziakis & Vlachopoulou, 2023). As a result, individuals might experience a diminished sense of agency as they encounter a narrowing range of choices predominantly influenced by AI algorithms (Ziakis & Vlachopoulou, 2023).
Moreover, the utilization of misleading advertisements poses significant
risks of misinformation, particularly when AI systems are programmed
to prioritize clicks and engagement over factual correctness. Research
indicates that inaccurate information in advertisements can lead to
misguided consumer behaviors, ultimately affecting purchase intentions
and undermining informed decision-making (Singh, 2023; Ziakis &
Vlachopoulou, 2023). The propensity for AI-generated advertisements to
propagate misinformation further complicates the landscape, necessitating
robust scrutiny.
To address these ethical concerns, it is imperative to incorporate transparency and accountability into AI-driven advertising practices. Businesses must ensure that consumers are aware of how their data is being used, adopting models that prioritize ethical marketing and empower consumer autonomy (Kumar & Suthar, 2024). Implementing explainable AI methods can enhance consumer comprehension of AI-generated recommendations and advertisements, fostering trust and improving decision-making processes. Furthermore, practicing stringent checks on the veracity of advertisement claims can mitigate misinformation risks and promote ethical standards in advertising (Camilleri, 2023; Sharma, 2023).
In conclusion, while autonomous decision-making in advertising
facilitated by AI offers remarkable advantages in efficiency and effectiveness,
it also necessitates a critical examination of ethical considerations related
to consumer autonomy and the dissemination of accurate information.
Balancing innovation with responsibility will be vital in ensuring that
advertising serves to enhance consumer experiences rather than compromise
them.

4. Cases: Unethical Uses of AI in Advertising


The exploration of unethical uses of artificial intelligence (AI) in advertising unveils troubling case studies that illustrate serious ethical dilemmas, privacy infringements, and manipulative practices. This analysis primarily delves into four significant cases: the Cambridge Analytica scandal involving political microtargeting, Facebook's AI-driven behavioral advertising mechanisms, the rise of deepfake technology in advertising and the associated ethical concerns, and the problematic use of AI chatbots that engage in deceptive marketing practices.

4.1. The Cambridge Analytica case, which came to light in 2018, is emblematic of the disturbing intersection of data privacy and political advertising (Corrêa et al., 2023). The company exploited personal data acquired from millions of Facebook users without their consent to create detailed psychological profiles and execute tailored political campaigns. This microtargeting strategy raised profound ethical questions regarding user consent and data ownership, exposing vulnerabilities within the regulatory frameworks governing data privacy (Chouaki et al., 2022).
The unethical manipulation of such data facilitated the dissemination
of targeted misinformation and played a significant role in shaping the
political landscape during elections (Ali et al., 2019). Cambridge Analytica’s
practices prompted increasing scrutiny and calls for reform in political
advertising, leading to legislative measures intended to protect users from
similar exploitative strategies in the future (Eriksson, 2024).
4.2. Facebook's AI-driven behavioral advertising mechanisms further complicate the ethical landscape of digital marketing. The platform's algorithms optimize advertisement delivery based on user engagement and preferences inferred from vast quantities of personal data. However, these practices often lack transparency, leading to issues such as discrimination and manipulation of user sentiment (Andreou et al., 2019). Research suggests that while Facebook aims to connect advertisers with relevant users, this often leads to echo chambers that reinforce existing beliefs and biases, fostering political polarization (Cotter et al., 2021). The opaque nature of ad targeting erodes trust among users and raises social responsibility concerns regarding how advertisers can exploit algorithmically derived data to shape perceptions and behavior without user awareness (Ali et al., 2019).

4.3. The advent of deepfake technology has introduced another layer of ethical complexity in advertising. Deepfakes, which employ AI to create hyper-realistic yet fictitious representations of individuals, present significant risks in terms of misinformation and deceptive advertising practices (Pizzi et al., 2023). This technology raises ethical concerns about authenticity and consumer trust, as advertisers might use deepfakes to present misleading narratives or endorsements from individuals without their consent (Wiese et al., 2020). The potential for deepfakes to fabricate celebrity endorsements or mislead consumers about product efficacy poses risks not just to individuals but also undermines the integrity of brands and the advertising industry as a whole (Pizzi et al., 2021). The deceptive nature of such representations necessitates strict regulations to address the ramifications of false advertising and protect consumers from manipulative practices (Kish, 2020). Manipulated multimedia content long predates AI: an early example occurred in 1860, when a portrait of southern politician John Calhoun was skillfully altered for propaganda purposes by replacing his head with that of the US President, and the practice has evolved rapidly ever since (Masood et al., 2022). The timeline of key developments can be seen in Figure 1.

Figure 1. Timeline of Key Developments.

4.4. AI chatbots, which have become increasingly prevalent in marketing, also present ethical dilemmas through manipulative practices. These chatbots, often programmed to engage users and facilitate transactions, may employ strategies to influence consumer behavior without disclosing their artificial nature. Studies indicate that chatbots can foster relationships with consumers that blur the lines between human interaction and AI engagement, often to the latter's advantage (Arbaiza et al., 2024). For example, the anthropomorphism of chatbots (giving them human-like traits) can lead consumers to lower their defenses, making them more susceptible to marketing tactics that might otherwise be viewed skeptically (Pizzi et al., 2023). This degree of manipulation necessitates a balanced approach that embraces technological advancement while upholding ethical standards in consumer interactions.
In conclusion, the unethical uses of AI in advertising highlight significant
challenges regarding privacy, transparency, and consumer protection. Each
case study demonstrates the necessity for frameworks that ensure ethical
practices in the use of AI technologies in marketing. The convergence of
technological innovation and ethical responsibility forms the basis for re-
evaluating advertising strategies, emphasizing the need for regulatory
oversight that protects consumers and fosters trust in digital marketing
landscapes. These illuminating case studies reflect critical issues that demand
scholarly attention and policy intervention to safeguard public interests in
the evolving world of AI-powered advertising.

5. Ethical AI Advertising Framework: Principles and Recommendations

The following recommendations could be taken into consideration.

5.1. Transparency & Explainability


As advertising increasingly incorporates AI-driven techniques,
transparency becomes paramount. Consumers should be empowered to
understand how algorithms influence the ads they encounter. Explainability
relates to the ability to uncover how decisions are made in AI systems,
allowing stakeholders to grasp the inner workings of these models (Cary et
al., 2024). Research emphasizes the importance of clear communications
from companies regarding the algorithms they employ, including the data
inputs that drive ad placements (Sreerama & Krishnamoorthy, 2022).
Practitioners must aim for transparency not merely as a compliance measure
but as a fundamental principle of ethical practice in AI advertising (Mehrabi
et al., 2019). By promoting informed consumer consent, organizations can
foster trust and accountability within the advertising ecosystem (Fletcher et
al., 2021).
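As a hedged illustration of what explainability can mean in practice: for a simple linear targeting model, the score decomposes exactly into per-feature contributions that can be surfaced to consumers or auditors. The feature names and weights below are invented; more complex models require approximation techniques such as SHAP to produce a comparable breakdown.

```python
# Hypothetical linear targeting model: score = sum of weight * feature value.
WEIGHTS = {"age_25_34": 0.8, "visited_product_page": 1.5, "newsletter_subscriber": 0.4}

def explain(features: dict) -> list:
    """Per-feature contributions to the targeting score, largest first.
    For a linear model this decomposition is exact; complex models need
    approximations (e.g., SHAP) to obtain a comparable breakdown."""
    contributions = [(f, WEIGHTS[f] * v) for f, v in features.items()]
    return sorted(contributions, key=lambda c: -abs(c[1]))

user = {"visited_product_page": 1, "age_25_34": 1, "newsletter_subscriber": 0}
for feature, contribution in explain(user):
    print(f"{feature}: {contribution:+.2f}")
# visited_product_page: +1.50  <- the main reason this ad was shown
# age_25_34: +0.80
# newsletter_subscriber: +0.00
```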

5.2. Fairness (Equity) & Bias Mitigation


Maintaining fairness is not an optional extra but a basic requirement of ethical artificial intelligence in marketing. The algorithms themselves and the data used for training are among the several sources of bias that might mirror societal prejudices (Bellamy et al., 2019; Ferrara, 2023). Methods such as fairness metrics make it possible to evaluate the equity of AI advertising systems and to spot and minimize disparities (Albaroudi et al., 2024). Attaining fair results requires effective strategies including data preprocessing, algorithmic transparency, and varied representation (Sreerama & Krishnamoorthy, 2022). To address the complexity of bias in artificial intelligence systems (Bellamy et al., 2019), an interdisciplinary approach comprising cooperation among technologists, advertisers, and social scientists is essential.
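One widely used fairness metric is the selection-rate comparison behind the so-called four-fifths rule: if one group receives a favorable outcome, such as exposure to a high-value job or housing ad, at well under 80% of another group's rate, the disparity warrants investigation. The sketch below computes this ratio on invented audit data; real audits apply several metrics, since no single number captures fairness.

```python
from collections import defaultdict

# Audit log of (protected group, whether the user was shown a high-value ad).
# The data is invented purely to illustrate the computation.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 50 + [("group_b", False)] * 50
)

def selection_rates(log):
    shown, total = defaultdict(int), defaultdict(int)
    for group, selected in log:
        total[group] += 1
        shown[group] += int(selected)
    return {g: shown[g] / total[g] for g in total}

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())  # disparate impact ratio
print(rates)            # {'group_a': 0.8, 'group_b': 0.5}
print(round(ratio, 2))  # 0.62 -> below 0.8, a disparity worth investigating
```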
5.3. Data Privacy and Consumer Control

Using artificial intelligence in advertising magnifies consumer data privacy concerns and calls for strict data protection policies. Consumers must retain control over their personal data and be able to make informed decisions about its usage (Zhao, 2024). Regulatory frameworks should mandate organizations to implement robust data governance practices that respect consumer preferences, minimizing risks related to data misuse (Cheong et al., 2023; Tillu et al., 2023b). When data is used for advertising personalization, integrating technologies such as anonymization and encryption strengthens privacy safeguards and helps maintain ethical standards (Tillu et al., 2023a). Maintaining consumer autonomy will not only increase the legitimacy of advertising markets but also encourage adherence to legal rules about data privacy (Padmanaban, 2024).
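A minimal sketch of one such safeguard, pseudonymization, is shown below: raw identifiers are replaced with a keyed one-way hash before entering analytics or personalization pipelines. The key handling and truncation length are illustrative choices, and pseudonymization alone does not amount to GDPR-grade anonymization, since pseudonymized data remains personal data under the regulation.

```python
import hashlib
import hmac

# Secret key held separately from the analytics store; rotating it severs
# the link between previously issued pseudonyms and real identifiers.
SECRET_KEY = b"replace-with-a-managed-secret"  # illustrative placeholder

def pseudonymize(user_id: str) -> str:
    """Keyed one-way hash: the same user maps to a stable pseudonym usable
    for analytics and personalization, while the raw identifier is never
    stored downstream. A minimal sketch; production systems add key
    rotation, access controls, and deletion workflows."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

event = {"user": pseudonymize("alice@example.com"), "action": "ad_click"}
print(event)  # the analytics record carries no directly identifying field
```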

5.4. Regulatory Compliance and Corporate Liability
Organizations must uphold ethical standards that go beyond simple compliance while aligning their AI advertising practices with current legal systems. Companies should proactively adapt their strategies to regulatory requirements and build a culture of corporate responsibility, as the landscape of AI regulation is changing fast (Chin et al., 2023; Tillu et al., 2023b). This includes following accepted principles of justice and responsibility as well as creating an environment that gives ethical issues top priority in the application of artificial intelligence technologies (Padmanaban, 2024). By actively pursuing compliance and demonstrating social responsibility, companies can reduce the risks related to algorithmic bias and help build public confidence in AI systems (Albaroudi et al., 2024; Mullankandy, 2024).

5.5. Ethical AI Design in Advertising Algorithms
The design of AI algorithms must prioritize ethical considerations from inception through to deployment. This involves incorporating fairness metrics and bias mitigation strategies within the algorithmic design process (C. Singh, 2023). A commitment to ethical AI design encourages the development of algorithms that reflect equitable values and serve diverse audiences without perpetuating discriminatory practices (Adeyelu et al., 2024). Additionally, the ongoing assessment and improvement of these systems are essential to adapt to societal changes and emerging ethical standards (Xu et al., 2022). By embedding ethical principles into the core framework of AI advertising technologies, organizations can enhance their competitiveness while contributing positively to the societal impact of advertising practices (Mehrabi et al., 2019).

6. Final Thoughts and Suggestions for the Future


Through improvements in personalization, automation, and data-driven
decision-making, advertising that is powered by artificial intelligence has
significantly altered the state of the marketing landscape. But the rapid pace
of its development has given rise to ethical concerns regarding the privacy
of consumers, the bias of algorithms, the dissemination of false information,
and transparency. To address these challenges, a balanced approach is
required, one that makes use of the potential of artificial intelligence while
also placing an emphasis on ethical standards and consumer trust.
Regulatory frameworks need to evolve in order to provide clearer guidelines for the responsible application of artificial intelligence in advertising. Proactive bias mitigation strategies, stronger data protection mechanisms, and transparency in AI-driven decision-making processes should be established as part of marketing practices. To ensure that artificial intelligence is deployed ethically, policymakers, AI developers, and marketers need to work together across disciplines.
The long-term societal impact of artificial intelligence in advertising
should be investigated in future research, with a particular focus on the
impact it has on the autonomy and decision-making of consumers. In order
to shape an advertising ecosystem that is both sustainable and responsible,
it will be essential to develop artificial intelligence systems that are in
accordance with ethical principles while maintaining efficiency.
References
Adebayo, A. A. (2024). Campaigning in the Age of AI: Ethical Dilemmas and Practical Solutions for the UK and US. International Journal of Social Science and Human Research, 07(12). [Link]v7-i12-65
Adeyelu, O. O., Ugochukwu, C. E., & Shonibare, M. A. (2024). Automating Financial Regulatory Compliance With AI: A Review and Application Scenarios. Finance & Accounting Research Journal, 6(4), 580-601. https://[Link]/10.51594/farj.v6i4.1035
Aladeen, H. (2023). Investigating the Impact of Bias in Web Search Algorithms: Implications for Digital Inequality. [Link]dmkar
Albaroudi, E., Mansouri, T., & Alameer, A. (2024). A Comprehensive Review of AI Techniques for Addressing Algorithmic Bias in Job Hiring. AI, 5(1), 383-404. [Link]
Alenazi, S. A. (2015). The Effects of Statement Persuasiveness, Statement Strength, and Regulatory Focus on Manipulative Intent Inference. Journal of Business & Economics Research (JBER), 13(1), 43. [Link]org/10.19030/jber.v13i1.9078
Ali, M., Sapieżyński, P., Korolova, A., Mislove, A., & Rieke, A. (2019). Ad Delivery Algorithms: The Hidden Arbiters of Political Messaging. https://[Link]/10.48550/arxiv.1912.04255
Altay, S., & Gilardi, F. (2024). People Are Skeptical of Headlines Labeled as AI-generated, Even if True or Human-Made, Because They Assume Full AI Automation. PNAS Nexus, 3(10). [Link]pgae403
An, Y., Chen, Y., & Li, S. (2021). Intention to Redeem M-Coupons and Intention to Disclose Personal Information: Based on Internet Using Motivation and M-Coupons Delivery Approach. 39-44. [Link]org/10.1145/3507485.3507492
Andreou, A., Silva, M., Benevenuto, F., Goga, O., Loiseau, P., & Mislove, A. (2019). Measuring the Facebook Advertising Ecosystem. [Link]org/10.14722/ndss.2019.23280
Arbaiza, F., Arias, J., & Robledo-Dioses, K. (2024). AI-Driven Advertising Activity: Perspectives From Peruvian Advertisers. Communication & Society, 273-292. [Link]
Bellamy, R., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilović, A., Nagar, S., Ramamurthy, K. N., Richards, J. T., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., & Zhang, Y. (2019). AI Fairness 360: An Extensible Toolkit for Detecting and Mitigating Algorithmic Bias. IBM Journal of Research and Development, 63(4/5), 4:1-4:15. [Link]
Bhuiyan, M. S. (2024). The Role of AI-Enhanced Personalization in Customer Experiences. Journal of Computer Science and Technology Studies, 6(1), 162-169. [Link]
Camilleri, M. A. (2023). Artificial Intelligence Governance: Ethical Considerations and Implications for Social Responsibility. Expert Systems, 41(7). [Link]
Camilleri, M. A. (2024). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, 41(7), e13406.
Cary, M. P., Bessias, S., McCall, J., Pencina, M., Grady, S. D., Lytle, K. S., & Economou-Zavlanos, N. (2024). Empowering Nurses to Champion Health Equity & BE FAIR: Bias Elimination for Fair and Responsible AI in Healthcare. Journal of Nursing Scholarship, 57(1), 130-139. https://[Link]/10.1111/jnu.13007
Chang, Y.-L., & Ke, J. (2023). Socially Responsible Artificial Intelligence Empowered People Analytics: A Novel Framework Towards Sustainability. Human Resource Development Review, 23(1), 88-120. [Link]org/10.1177/15344843231200930
Cheong, J., Kuzucu, S., Kalkan, S., & Güneş, H. (2023). Towards Gender Fairness for Mental Health Prediction. 5932-5940. [Link]ijcai.2023/658
Chin, M. H., Afsarmanesh, N., Bierman, A. S., Chang, C., Colón-Rodríguez, C. J., Dullabh, P., Duran, D. G., Fair, M., Hernandez-Boussard, T., Hightower, M., Jain, A., Jordan, W. B., Konya, S., Moore, R. H., Moore, T. T., Rodriguez, R., Shaheen, G., Snyder, L. P., Srinivasan, M.,…Ohno-Machado, L. (2023). Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Network Open, 6(12), e2345050. [Link]jamanetworkopen.2023.45050
Chouaki, S., Bouzenia, I., Goga, O., & Roussillon, B. (2022). Exploring the Online Micro-Targeting Practices of Small, Medium, and Large Businesses. [Link]
Christanto, H. J., Dewi, C., Sutresno, S. A., & Silalahi, A. D. K. (2024). Analyzing the Use of Chat Generative Pre-Trained Transformer and Artificial Intelligence. Revue D Intelligence Artificielle, 38(4), 1297-1304. https://[Link]/10.18280/ria.380423
Chu, C. H., Leslie, K., Shi, J., Nyrup, R., Bianchi, A., Khan, S. S., Rahimi, S. A., Lyn, A., & Grenier, A. (2022). Ageism and Artificial Intelligence: Protocol for a Scoping Review. JMIR Research Protocols, 11(6), e33211. [Link]
Corrêa, N. K., Galvão, C., Santos, J. W., Pino, C. D., Pinto, E. P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E., & Oliveira, N. d. (2023). Worldwide AI Ethics: A Review of 200 Guidelines and Recommendations for AI Governance. Patterns, 4(10), 100857. [Link]g/10.1016/[Link].2023.100857
Cotter, K., Medeiros, M., Pak, C., & Thorson, K. (2021). "Reach the Right People": The Politics of "Interests" in Facebook's Classification System for Ad Targeting. Big Data & Society, 8(1). [Link]org/10.1177/2053951721996046
Drage, E., & Mackereth, K. (2022). Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference". Philosophy & Technology, 35(4). [Link]
Egorenkov, D. (2022). AI-Powered Marketing Automation: Revolutionizing Campaign Management. International Journal for Multidisciplinary Research, 4(2). [Link]
Elkhatat, A. M., Elsaid, K., & Al-Meer, S. (2023). Evaluating the Efficacy of AI Content Detection Tools in Differentiating Between Human and AI-generated Text. International Journal for Educational Integrity, 19(1). https://[Link]/10.1007/s40979-023-00140-5
Eriksson, A. (2024). AI-Driven Advertising: Ethical Challenges, Frameworks, and Future Directions. [Link]
Fazil, A. W., Hakimi, M., & Shahidzay, A. K. (2023). A Comprehensive Review of Bias in AI Algorithms. Nusantara Hasana Journal, 3(8), 1-11.
Fazil, A. W., Hakimi, M., & Shahidzay, A. K. (2024). A Comprehensive Review of Bias in AI Algorithms. Nusantara Hasana Journal, 3(8), 1-11. https://[Link]/10.59003/nhj.v3i8.1052
Ferrara, E. (2023). Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies (Preprint). [Link]org/10.2196/preprints.48399
Fletcher, R., Nakeshimana, A., & Olubeko, O. (2021). Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health. Frontiers in Artificial Intelligence, 3. [Link]org/10.3389/frai.2020.561802
Greguš, Ľ., & Škvareninová, S. (2023). Opinion Balance of News in the Time of Mistrust in Media and Democratic Institutions. 133-142. [Link]org/10.34135/mmidentity-2023-13
Gupta, R., Srivastava, D., Sahu, M., Tiwari, S., Ambasta, R. K., & Kumar, P. (2021). Artificial Intelligence to Deep Learning: Machine Intelligence Approach for Drug Discovery. Molecular Diversity, 25(3), 1315-1360. [Link]
Hacker, P. (2021). Manipulation by Algorithms. Exploring the Triangle of Un-
fair Commercial Practice, Data Protection, and Privacy Law. European
Law Journal, 29(1-2), 142-175. [Link]
Heins, C. (2022). Artificial Intelligence in Retail – A Systematic Litera-
ture Review. Foresight, 25(2), 264-286. [Link]
fs-10-2021-0210
Hibbert, S., Smith, A., Davies, A., & Ireland, F. (2007). Guilt Appeals: Persua-
sion Knowledge and Charitable Giving. Psychology and Marketing, 24(8),
723-742. [Link]
Holloway, S. (2024). Exploring the Role of Digital Technologies in Enhan-
cing Supply Chain Efficiency and Marketing Effectiveness. [Link]
org/10.20944/preprints202406.1538.v1
Howard, F. M., Li, A., Riffon, M., Garrett-Mayer, E., & Pearson, A. T. (2024).
Characterizing the Increase in Artificial Intelligence Content Detection
in Oncology Scientific Abstracts From 2021 to 2023. Jco Clinical Cancer
Informatics(8). [Link]
Hoxhaj, O., Halilaj, B., & Harizi, A. (2023). Ethical Implications and Human
Rights Violations in the Age of Artificial Intelligence. Balkan Social Scien-
ce Review, 22(22), 153-171. [Link]
Huang, Z., Chen, G., & Zhao, G. (2023). The Effect of Personalization on Consumer
Behaviors. 389-405. [Link]
Huda, M., Awaludin, A., & Siregar, H. K. (2024). Legal Challenges in Regula-
ting Artificial Intelligence: A Comparative Study of Privacy and Data Pro-
tection Laws. Ijsh, 1(2), 116-125. [Link]
Idoko, I. P., Igbede, M. A., Manuel, H. N. N., Adeoye, T. O., Akpa, F. A.,
& Ukaegbu, C. (2024). Big Data and AI in Employment: The Dual
Challenge of Workforce Replacement and Protecting Customer Privacy
in Biometric Data Usage. Global Journal of Engineering and Technology Ad-
vances, 19(2), 089-106. [Link]
Illia, L., Colleoni, E., & Zyglidopoulos, S. C. (2022). Ethical Implications of
Text Generation in the Age of Artificial Intelligence. Business Ethics the
Environment & Responsibility, 32(1), 201-210. [Link]
beer.12479
Isaac, M. S., & Grayson, K. (2019). Priming Skepticism: Unintended Consequ-
ences of One-sided Persuasion Knowledge Access. Psychology and Marke-
ting, 37(3), 466-478. [Link]
Jin, Y., & Skiera, B. (2022). How Do Privacy Laws Impact the Value for Adver-
tisers, Publishers and Users in the Online Advertising Market? A Compa-
144 | Ethical Dilemmas in AI-Driven Advertising

rison of the EU, US and China. Journal of Creating Value, 8(2), 306-327.
[Link]
Jobin, A., Ienca, M., & Vayena, E. (2019). The Global Landscape of AI Ethi-
cs Guidelines. Nature Machine Intelligence, 1(9), 389-399. [Link]
org/10.1038/s42256-019-0088-2
Khedekar, B., Shinde, S., Bagave, S., & Belvalkar, H. (2023). Beacon: The Desk-
top Voice Assistant. International Research Journal of Modernization in En-
gineering Technology and Science. [Link]
Khrais, L. T. (2020). Role of Artificial Intelligence in Shaping Consumer
Demand in E-Commerce. Future Internet, 12(12), 226. [Link]
org/10.3390/fi12120226
Kirmani, A., & Zhu, R. (2007). Vigilant Against Manipulation: The Effect of
Regulatory Focus on the Use of Persuasion Knowledge. Journal of Mar-
keting Research, 44(4), 688-701. [Link]
Kish, K. (2020). Paying Attention: Big Data and Social Advertising as Bar-
riers to Ecological Change. Sustainability, 12(24), 10589. [Link]
org/10.3390/su122410589
Kokudeva, M., Vichev, M., Naseva, E., Miteva, D. G., & Velikova, T. (2024).
Artificial Intelligence as a Tool in Drug Discovery and Development.
World Journal of Experimental Medicine, 14(3). [Link]
wjem.v14.i3.96042
Kriaučiūnaitė-Lazauskienė, G. (2023). Grounded Theory in AI-enhanced
Women’s Image in Advertising. 228-240. [Link]
mmidentity-2023-23
Kumar, D., & Suthar, N. (2024). Ethical and Legal Challenges of AI in Mar-
keting: An Exploration of Solutions. Journal of Information Communi-
cation and Ethics in Society, 22(1), 124-144. [Link]
jices-05-2023-0068
Ledro, C., Nosella, A., & Vinelli, A. (2022). Artificial Intelligence in Custo-
mer Relationship Management: Literature Review and Future Research
Directions. Journal of Business and Industrial Marketing, 37(13), 48-63.
[Link]
Lim, D., Baek, T. H., Yoon, S., & Kim, Y. (2020). Colour Effects in Green
Advertising. International Journal of Consumer Studies, 44(6), 552-562.
[Link]
Liu, Z., Iqbal, U., & Saxena, N. (2024). Opted Out, Yet Tracked: Are Re-
gulations Enough to Protect Your Privacy? Proceedings on Privacy En-
hancing Technologies, 2024(1), 280-299. [Link]
popets-2024-0016
Lyndyuk, A., Havrylyuk, I., Tomashevskii, Y., Khirivskyi, R., & Kohut, M.
(2024a). The impact of artificial intelligence on marketing communica-
Serim Paker | 145

tions: New business opportunities and challenges. Economics of Develop-


ment, 4(23), 60-71.
Lyndyuk, A., Havrylyuk, I., Tomashevskii, Y., Khirivskyi, R., & Kohut, M.
(2024b). The Impact of Artificial Intelligence on Marketing Communi-
cations: New Business Opportunities and Challenges. Economics of Deve-
lopment, 23(4), 60-71. [Link]
Masood, M., Nawaz, M., Malik, K., Javed, A., Irtaza, A., & Malik, H. (2022).
Deepfakes generation and detection: state-of-the-art, open challenges,
countermeasures, and way forward. Applied Intelligence, 53, 1-53. https://
[Link]/10.1007/s10489-022-03766-z
Mehrabi, N., Morstatter, F., Saxena, N. A., Lerman, K., & Galstyan, A.
(2019). A Survey on Bias and Fairness in Machine Learning. [Link]
org/10.48550/arxiv.1908.09635
Min, A. (2023). Artifical Intelligence and Bias: Challenges, Implications, and
Remedies. Journal of Social Research, 2(11), 3808-3817. [Link]
org/10.55324/josr.v2i11.1477
Mullankandy, S. (2024). Transforming Data Into Compliance: Harnessing AI/
ML to Enhance Regulatory Reporting Processes. Jaigs, 3(1), 62-73. htt-
ps://[Link]/10.60087/jaigs.v3i1.66
Murdoch, B. (2021). Privacy and Artificial Intelligence: Challenges for Protec-
ting Health Information in a New Era. BMC Medical Ethics, 22(1). htt-
ps://[Link]/10.1186/s12910-021-00687-3
Noranee, S., & Othman, A. K. (2023). Understanding Consumer Sentiments:
Exploring the Role of Artificial Intelligence in Marketing. Jmm17 Jurnal
Ilmu Ekonomi Dan Manajemen, 10(1), 15-23. [Link]
jmm17.v10i1.8690
Oh, H., Park, S., Lee, G. M., Heo, H., & Choi, J. K. (2019). Personal Data
Trading Scheme for Data Brokers in IoT Data Marketplaces. Ieee Access,
7, 40120-40132. [Link]
Osasona, F., Amoo, O. O., Atadoga, A., Abrahams, T. O., Farayola, O. A., &
Ayinla, B. S. (2024). Reviewing the Ethical Implications of Ai in De-
cision Making Processes. International Journal of Management & Ent-
repreneurship Research, 6(2), 322-335. [Link]
v6i2.773
Padmanaban, H. (2024). Revolutionizing Regulatory Reporting Through AI/
ML: Approaches for Enhanced Compliance and Efficiency. Jaigs, 2(1),
71-90. [Link]
Park, H., Oh, H., & Choi, J. K. (2023). A Profit Maximization Model for Data
Consumers With Data Providers’ Incentives in Personal Data Trading
Market. Data, 9(1), 6. [Link]
146 | Ethical Dilemmas in AI-Driven Advertising

Pawar, V., Patil, A., Tamboli, F. A., Gaikwad, D., Mali, D., & Shinde, A. J.
(2023). Harnessing the Power of AI in Pharmacokinetics and Pharma-
codynamics: A Comprehensive Review. International Journal of Pharma-
ceutical Quality Assurance, 14(02), 426-439. [Link]
ijpqa.14.2.31
Pizzi, G., Scarpi, D., & Pantano, E. (2021). Artificial Intelligence and the New
Forms of Interaction: Who Has the Control When Interacting With
a Chatbot? Journal of Business Research, 129, 878-890. [Link]
g/10.1016/[Link].2020.11.006
Pizzi, G., Vannucci, V., Mazzoli, V., & Donvito, R. (2023). I, Chatbot! The Im-
pact of Anthropomorphism and Gaze Direction on Willingness to Disc-
lose Personal Information and Behavioral Intentions. Psychology and Mar-
keting, 40(7), 1372-1387. [Link]
Raji, M. A., Olodo, H. B., Oke, T. T., Addy, W. A., Ofodile, O. C., & Oyewole, A.
T. (2024). E-Commerce and Consumer Behavior: A Review of AI-powe-
red Personalization and Market Trends. GSC Advanced Research and Re-
views, 18(3), 066-077. [Link]
Reddy, S. V., Reddy, B., Dodda, S., Raparthi, M., & Maruthi, S. (2020). Ethical
Considerations in AI-Enabled Big Data Research: Balancing Innovation
and Privacy. [Link]
Rosales, A., & Fernández-Ardèvol, M. (2019). Structural Ageism in Big Data
Approaches. Nordicom Review, 40(s1), 51-64. [Link]
nor-2019-0013
Ryu, S. (2024). Brand Communications During a Global Crisis: Understanding
Persuasion Intent, Perceived Brand Opportunism and Message Sincerity.
Journal of Product & Brand Management, 33(1), 162-178. [Link]
org/10.1108/jpbm-11-2022-4230
Sezgin, E., Huang, Y., Ramtekkar, U., & Lin, S. (2020). Readiness for Vo-
ice Assistants to Support Healthcare Delivery During a Health Crisis
and Pandemic. NPJ Digital Medicine, 3(1). [Link]
s41746-020-00332-0
Sharma, S. (2023). Ethical Considerations in AI-Based Marketing: Balan-
cing Profit and Consumer Trust. TJJPT, 44(3), 1301-1309. [Link]
org/10.52783/tjjpt.v44.i3.474
Singh, C. (2023). Artificial Intelligence and Deep Learning: Considerations for
Financial Institutions for Compliance With the Regulatory Burden in the
United Kingdom. Journal of Financial Crime, 31(2), 259-266. https://
[Link]/10.1108/jfc-01-2023-0011
Singh, N. (2023). AI-Driven Personalization in eCommerce Advertising. Inter-
national Journal for Research in Applied Science and Engineering Technology,
11(12), 1692-1698. [Link]
Serim Paker | 147

Sodiya, E. O., Amoo, O. O., Umoga, U. J., & Atadoga, A. (2024). AI-dri-
ven Personalization in Web Content Delivery: A Comparative Study
of User Engagement in the USA and the UK. World Journal of Advan-
ced Research and Reviews, 21(2), 887-902. [Link]
wjarr.2024.21.2.0502
Sreerama, J., & Krishnamoorthy, G. (2022). Ethical Considerations in AI Add-
ressing Bias and Fairness in Machine Learning Models. Journal of Know-
ledge Learning and Science Technology Issn 2959-6386 (Online), 1(1), 130-
138. [Link]
Sun, W., Nasraoui, O., & Shafto, P. (2020). Evolution and Impact of Bias in
Human and Machine Learning Algorithm Interaction. Plos One, 15(8),
e0235502. [Link]
Terzopoulos, G., & Satratzemi, M. (2020). Voice Assistants and Smart Speakers
in Everyday Life and in Education. Informatics in Education, 473-490.
[Link]
Tillu, R., Muthusubramanian, M., & Periyasamy, V. (2023a). From Data to
Compliance: The Role of AI/ML in Optimizing Regulatory Repor-
ting Processes. Journal of Knowledge Learning and Science Technology Issn
2959-6386 (Online), 2(3), 381-391. [Link]
n3.p391
Tillu, R., Muthusubramanian, M., & Periyasamy, V. (2023b). Transforming Re-
gulatory Reporting With AI/ML: Strategies for Compliance and Efficien-
cy. Journal of Knowledge Learning and Science Technology Issn 2959-6386
(Online), 2(1), 145-157. [Link]
Țîrcovnicu, G.-I., & Haţegan, C.-D. (2023). Integration of Artificial Intelli-
gence in the Risk Management Process: An Analysis of Opportunities
and Challenges. Journal of Financial Studies, 8(15), 198-214. [Link]
org/10.55654/jfs.2023.8.15.13
Wasilewski, T., Kamysz, W., & Gębicki, J. (2024). AI-Assisted Detection of Bi-
omarkers by Sensors and Biosensors for Early Diagnosis and Monitoring.
Biosensors, 14(7), 356. [Link]
Weerarathna, I. N., Kamble, A. R., & Luharia, A. (2023). Artificial Intelligence
Applications for Biomedical Cancer Research: A Review. Cureus. https://
[Link]/10.7759/cureus.48307
Wiese, M., Martínez-Climent, C., & Botella-Carrubi, D. (2020). A Fra-
mework for Facebook Advertising Effectiveness: A Behavioral Perspec-
tive. Journal of Business Research, 109, 76-87. [Link]
jbusres.2019.11.041
Williamson, S., & Prybutok, V. R. (2024). Balancing Privacy and Progress: A
Review of Privacy Challenges, Systemic Oversight, and Patient Percep-
148 | Ethical Dilemmas in AI-Driven Advertising

tions in AI-Driven Healthcare. Applied Sciences, 14(2), 675. [Link]


org/10.3390/app14020675
Xie, Y., & Huang, Y. (2023). A Novel Personalized Recommendation Model for
Computing Advertising Based on User Acceptance Evaluation. Ieee Ac-
cess, 11, 140636-140645. [Link]
Xu, J., Xiao, Y., Wang, H., Ning, Y., Shenkman, E., Bian, J., & Wang, F.
(2022). Algorithmic Fairness in Computational Medicine. [Link]
g/10.1101/2022.01.16.21267299
Xu, Y., Liu, Y., Wu, J., & Zhan, X. (2024). Privacy by Design in Machine Le-
arning Data Collection: An Experiment on Enhancing User Experien-
ce. Applied and Computational Engineering, 97(1), 64-68. [Link]
org/10.54254/2755-2721/97/20241388
Yadav, K., & Rathore, P. (2023). AI-Generated Content Detectors: Boon or
Bane for Scientific Writing. Indian Journal of Science and Technology,
16(39), 3435-3439. [Link]
Yammouri, G., & Lahcen, A. A. (2024). AI-Reinforced Wearable Sensors and
Intelligent Point-of-Care Tests. Journal of Personalized Medicine, 14(11),
1088. [Link]
Zancan, C., Passador, J. L., & Passador, C. S. (2023). Integrating Python-Based
Artificial Intelligence for Enhanced Management of Inter-Municipal Tou-
rism Consortia: A Technological Approach. Caderno Virtual De Turismo,
23(3), 83. [Link]
Zhang, Y., & Gosline, R. (2023). Human Favoritism, Not AI Aversion: Peop-
le’s Perceptions (And Bias) Toward Generative AI, Human Experts, and
Human–GAI Collaboration in Persuasive Content Generation. Judgment
and Decision Making, 18. [Link]
Zhao, H. (2024). Fair and Optimal Prediction via Post-processing. Ai Magazi-
ne, 45(3), 411-418. [Link]
Ziakis, C., & Vlachopoulou, M. (2023). Artificial Intelligence in Digital Marke-
ting: Insights From a Comprehensive Review. Information, 14(12), 664.
[Link]
Chapter 9

The Erosion of Consumer Autonomy

Canan Yılmaz Uz1


Seda Arslan2

Abstract
Consumer autonomy refers to an individual’s capacity to make decisions
independently, based on their own values, needs, and informed evaluations,
free from external pressures. However, with digitalization, the use of big data,
and marketing strategies driven by algorithms, this autonomy is increasingly
eroding. Although today’s consumers believe they are making conscious
choices, they are, in fact, unknowingly manipulated through personalized
advertisements, AI-powered recommendation systems, and neuromarketing
techniques. The erosion of consumer autonomy is not limited to advertising
and marketing strategies but is also supported by algorithmic guidance,
digital ecosystems that encourage constant consumption, and psychological
manipulation tools. This phenomenon weakens consumers’ ability to
make rational decisions, promotes overconsumption, and fosters a sense of
dissatisfaction.
This study aims to highlight the significance of consumer autonomy
by examining its erosion process and its effects on consumer behavior.
Furthermore, it discusses the disruptions in consumers’ decision-making
mechanisms, ethical concerns, and potential violations of consumer rights.
The limited number of systematic studies on consumer autonomy in the
literature underscores the contribution of this research to the field and
highlights the relevance of the topic within the dynamics of contemporary
consumption.

1 Assoc. Prof. (Doç. Dr.), İskenderun Teknik Üniversitesi, [Link]@[Link], [Link]
2 Asst. Prof. (Dr. Öğr. Üyesi), İskenderun Teknik Üniversitesi, [Link]@[Link], [Link]

[Link]
1. Introduction
In today’s world, advancing technology has significantly transformed the concept of consumption, and consumer behavior has come to be shaped by a growing range of factors (Ertemel & Pektaş, 2018). This transformation may also reduce consumers’ ability to make autonomous choices (Sevastianova, 2023). Consumer autonomy refers
to individuals’ ability to access information and make free and informed
choices (Wertenbroch et al., 2020). However, in the contemporary era, this
freedom is increasingly eroded by various marketing strategies, algorithms,
and manipulative consumption practices. The erosion of consumer
autonomy is associated with the increasing presence of factors that hinder
individuals from making independent decisions. In this context, the erosion
of consumer autonomy can also occur through digital technologies, data-
driven advertisements, and personalized marketing activities (Cunningham,
2003).
Before the Industrial Revolution, consumers had to choose from a small
number of products and services. However, with technological advancements,
the diversification of options and the provision of personalized experiences
have enabled individuals to make choices aligned with their lifestyles.
This period of accelerating technological advancement can thus be seen as an era that expanded consumer autonomy. Nevertheless, in the
digital age, consumer autonomy is increasingly challenged, as individuals are
surrounded by manipulative practices that shape their free will. Particularly,
technological advancements have transformed consumer behavior, and
algorithm- and artificial intelligence-based systems now possess the ability to
predict and influence individuals’ preferences. This creates an environment
highly susceptible to manipulation. Guiding individuals in settings where they cannot make conscious decisions is not only an ethical concern but also a legal issue with significant implications. The erosion of consumer
autonomy extends beyond an individual concern, beginning to impact the
societal structure as well. The rapid expansion of the digital world may
deepen social inequalities, and as manipulated consumers become more
vulnerable, the foundations of a democratic consumer culture may be
seriously threatened.
This book chapter aims to comprehensively examine the concept
of consumer autonomy erosion, which has become a significant issue in
consumer behavior in recent years. In the first section, the concept of
autonomy is analyzed in detail, followed by an exploration of consumer
autonomy and its erosion from various theoretical and practical perspectives.
Additionally, solutions to prevent the erosion of consumer autonomy are proposed, and legal, ethical, and strategic measures in this field are discussed.

2. The Concept of Autonomy


Autonomy is a state that nurtures individuals’ desire to make choices
and their sense of freedom (Bendapudi & Leone, 2003). This concept is
associated with individuals’ ability to make independent decisions. In the
context of consumer behavior, autonomy can be defined as “the ability of
consumers to make decisions and implement them independently, without
external pressures and impositions” (Wertenbroch et al., 2020). Choices
made by consumers with intrinsic motivation and conscious awareness
constitute concrete examples of the autonomy experience, representing
conditions where no constraints exist in the decision-making process, and
free choice prevails (André et al., 2018; Aydın & Doğan, 2023).
Theoretically, the concept of autonomy can be linked to the Self-
Determination Theory. According to the Self-Determination Theory (Deci
& Ryan, 1985), humans are inherently predisposed toward development, and their social environment significantly shapes this process. The intrinsic motivation for development, combined with the opportunities provided by environmental factors, forms the fundamental determinant of an individual’s orientations and decision-making processes.
Sneddon (2001) categorizes autonomy into ‘shallow autonomy’ and
‘deep autonomy.’ Shallow autonomy refers to an individual’s ability to
freely choose among available options; however, this concept neglects the
cognitive processes underlying an individual’s choices and their connection
to personal identity and values. At this level, individuals may make decisions
based on external preferences but do not necessarily engage in deep
contemplation or questioning of the values, desires, and personal identity
underlying these decisions. Thus, while shallow autonomy offers superficial
freedom of choice, it does not integrate the decision-making process with
the individual’s deeper psychological and philosophical dimensions.
Deep autonomy, in contrast, is more complex and multidimensional.
This form of autonomy is not merely limited to the ability to make choices;
rather, it requires individuals to develop deep intrinsic awareness regarding
the values, goals, and identity that shape their choices, directing their lives
accordingly. Deep autonomy involves a process in which individuals critically
evaluate their beliefs, desires, and values. This process enables individuals to
act with internal coherence, independent of external influences. Individuals
do not simply make choices; they also question the alignment of these
choices with their personal values and identity. As a result, deep autonomy
transcends surface-level preferences, integrating decision-making with life
goals, the search for meaning, and personal development. This process
entails consciously reflecting on identity, values, and the meaning of life
and incorporating these reflections into daily life practices. While shallow
autonomy is limited to decision-making ability, deep autonomy involves
questioning how one’s choices align with personal identity and values and
assessing the coherence of these choices (Schneider-Kamp & Askegaard,
2020).
Schneider-Kamp and Askegaard (2020) emphasize that deep autonomy does not disappear entirely: individuals may occasionally make non-autonomous choices, be subjected to manipulation, experience external pressures, or make erroneous decisions, yet this does not eliminate their overall state of autonomy or indicate a loss of deep autonomy. Sneddon (2001) likewise notes that deep autonomy is particularly threatened by external factors such as advertising. By exerting a manipulative influence on individuals’ values and choices, advertisements can weaken free will and make decision-making more susceptible to external direction. The application of deep autonomy involves two fundamental components:
- Evaluation of values: The individual questions the consistency between their values and their primary desires.
- Assessment of the desirability of values: The individual analyzes the
extent to which their values are desirable or valid.
This process enables individuals to establish a stronger connection with
their identity and values, fostering a deeper sense of self-awareness.
Autonomy is a multidimensional and interdisciplinary concept, examined
from various perspectives in fields such as philosophy, psychology, sociology,
and law. Each discipline analyzes autonomy through its own lens, discussing
different aspects of the concept in diverse contexts. While this diversity allows
for a deeper analysis of autonomy, it may also lead to misunderstandings
when different conceptualizations of autonomy are used interchangeably
across disciplines (Wertenbroch et al., 2020).
Philosophy is one of the disciplines that examines the concept of autonomy
in the most profound manner, focusing on its relationship with free will. Free
will refers to an individual’s capacity to choose or reject a particular action.
From this perspective, autonomy is related to an individual’s ability to make
decisions based on their free will, maintain control over their own life, and
act according to their own values. Key philosophical questions include what
free will is, how it functions, and under what conditions it is valid. These
questions are central to ongoing, unresolved debates concerning the nature
of free will (André et al., 2018).
For instance, Kane (2011) explains free will by arguing that individuals
must have the capacity to “choose otherwise.” This approach emphasizes
not only the existence of choices but also the ability to make conscious and
rational decisions among them. In contrast, Frankfurt (1971) examines
free will from a more psychological perspective, focusing on an individual’s
“second-order desires.” According to Frankfurt, a person’s ability to regulate
their first-order desires (such as physical or impulsive wants) is essential for
autonomous will. This refers to an individual’s capacity to control their own
desires and act according to higher-order goals. This perspective encompasses
not only immediate impulses but also the ability to act in alignment with
long-term values and aspirations.
These philosophical discussions offer significant insights into
understanding consumer behavior. In the modern consumption landscape,
issues such as how individuals perceive their autonomy, the effects of
marketing strategies on these perceptions, and whether consumers can make
fully informed decisions are directly related to these philosophical debates.
Consumer autonomy should be examined not only in terms of individual
preferences and freedom of will but also within the framework of the social
and economic structures that influence individuals. Therefore, incorporating
philosophical analyses into studies on consumer autonomy can facilitate an
interdisciplinary understanding and contribute to a more comprehensive
exploration of the various dimensions of autonomy.

3. Consumer Autonomy
Consumer autonomy refers to the ability of individuals to make
consumption decisions independently and with minimal external influence.
As Bauman (1988) stated, consumer autonomy does not necessarily imply
strong self-determination or complete independence of individual will;
however, it delineates highly valuable boundaries for consumers. These
boundaries serve to protect consumers from the exploitation of powerful
corporations, misleading advertisements, coercion, and other unfair practices
(Bauman, 1988). Autonomous consumer choice refers to a self-determined
and independent decision-making process whereby an individual makes
purchasing decisions—whether to buy or not buy certain products—based
on their own will. This choice is entirely driven by the individual’s personal
beliefs and desires and is thus genuinely personal (Siipi & Uusitalo, 2008;
Zhu et al., 2024).
For a consumer’s choices to be autonomous, three fundamental conditions
must be met. First, the consumer must possess competence. Second, the
consumer must have genuine desires and beliefs. Third, the consumer
must have the capacity to apply these beliefs and desires to their choices
(Raikka, 1999; Beauchamp, 2005). Consumer competence refers to having
the necessary psychological and physical capacities for self-determination
and autonomous decision-making. This capacity encompasses the ability to
form beliefs and determine desires (Raikka, 1999; Pietarinen, 1994; Hyun,
2001; Oshana, 1998). The second condition for choice autonomy is that
the consumer’s beliefs and desires must be genuine and authentic to them.
For a consumer’s beliefs and desires to be considered authentic, they must
be free from coercion or constraints. In other words, authentic desires and
beliefs emerge without manipulation or excessive external influence (Hyun,
2001; Beauchamp & Childress, 2001). The third condition for autonomy
in decision-making is that the individual must have the capacity to act upon
their beliefs and desires. A person with this capacity not only holds authentic
beliefs and desires but is also able to make decisions based on them. That
is, the consumer can determine what to choose based on their own beliefs
and desires (Streiffer & Rubel, 2004; Oshana, 1998). The ability to make
a choice requires the existence of multiple alternatives; at the very least, the
individual must believe that alternatives are available. If no alternatives exist,
the individual cannot make a choice. Consequently, if a person is unable
to make a choice, their choices cannot be considered autonomous (Siipi &
Uusitalo, 2011).
Autonomy does not require consumers to be completely shielded from
persuasive marketing strategies. Instead, autonomy focuses on ensuring that
consumers have a fair opportunity to make informed and free decisions when
exposed to persuasive marketing tactics, without feeling coerced, deceived,
or misled. This entails that consumers should be able to make choices based
on their own desires and needs, free from external pressures or misleading
influences (Anker, 2020). In its classical sense, autonomy refers to an
individual’s capacity for self-governance, independent of external control or
manipulation. The marketing literature plays a
crucial role in conceptualizing consumer autonomy. However, as observed
in European Union regulations, the concept of autonomy has not been
adequately addressed in marketing ethics. Literature reviews on the subject
reveal that autonomy is generally defined as a concept encompassing control,
will, desire, choice, and self-reflection. Consumers are often not sufficiently
motivated to actively seek or engage with important product information
(Bakos et al., 2014). This presents a significant issue: across the European
Union, 24% of consumers never read contract terms and conditions, while
36% only partially read them (Eurobarometer, 2011). A recent study found
that out of 1,000 retail software purchasers, only one or two thoroughly
reviewed the licensing agreements, and most of those who did only read a
small portion (Bakos et al., 2014).
To understand the extent of information deficiency in consumer
decision-making processes, one must consider the critical point that terms
and conditions contain legally mandated information that sellers are
required to provide to consumers. However, the ineffectiveness of such
information stems from businesses overwhelming consumers with excessive
data, rendering the information unprocessable. This phenomenon, referred
to as “data dumping,” significantly weakens consumers’ ability to make
autonomous decisions when faced with an overload of textual information
(Zhu et al., 2024). Decision uncertainty points to fundamental ambiguities
affecting an individual’s autonomy, which complicate independent and
informed decision-making processes (Schneider-Kamp & Askegaard, 2020).
Consumer autonomy is influenced not only by the decisions consumers
make based on their own preferences and capacities but also by businesses’
marketing communication efforts and the actions and regulations of other
market actors (Hyman et al., 2023). Moreover, consumer autonomy is
directly linked to both internal (e.g., cognitive and volitional capacities) and
external factors (e.g., access to information and epistemic market conditions
such as consumer rights). In this context, decision-making processes
that impact consumer autonomy are shaped by the interaction between
individual capacity and environmental conditions. More importantly,
consumer autonomy is considered a critical prerequisite for legitimizing
marketing as a social system on an ethical foundation in capitalist societies
(Fassiaux, 2023). In light of existing research in marketing theory, Anker
(2024) examines consumer autonomy within the framework of internal and
external conditions. According to Anker, when consumers have access to the
information they need and possess the capacity for critical thinking aligned
with their values and goals, their level of autonomy increases. However, it is
widely accepted that consumer autonomy is also significantly influenced by
cognitive limitations and social contexts.
In the context of consumer autonomy, the need for autonomy pertains to
individuals’ sense of being able to make their own decisions independently,
enabling them to take an autonomous role in consumption decisions
(Gümüş & Gegez, 2017). Protecting consumer autonomy requires careful
consideration of the distinction between autonomy and the preservation of
informed choices. Autonomy does not imply completely isolating consumers
from marketing influences; rather, it seeks to ensure that individuals exposed
to marketing messages can make conscious, freely determined decisions
without being manipulated, misled, or deprived of crucial information. Anker
(2020) defines the protection of consumer autonomy as the establishment
of an environment where consumers can make informed and free choices. In
this regard, preventing manipulative or coercive strategies and ensuring that
consumers receive transparent and accurate information are of paramount
importance.

4. The Erosion of Consumer Autonomy


Discussions on the erosion of consumer autonomy, where consumers’
freedom of choice is subject to various external interventions and restrictions,
are increasingly gaining attention (Hyman et al., 2023). The erosion of
autonomy pertains to the growing influence of external factors (such as
marketing, social pressures, and digital algorithms) on consumer decisions. A
consumer whose autonomy is limited finds their choices significantly shaped
by external interventions, or their ability to negotiate or act in accordance
with their desires and beliefs is hindered by mental or physical barriers (Siipi
& Uusitalo, 2011).
The ability of consumers to make autonomous decisions enables the
legitimacy of marketing as a social practice within capitalist economies
(Cluley, 2019; Villarán, 2017). Marketing can be defined as a social
system shaped by the exchange of goods or services between providers and
consumers (Lusch & Watts, 2018; Lüedicke, 2006; Anderson et al., 1999;
Bagozzi, 1975; Houston & Gassenheimer, 1987). The ethical validity of
these exchanges is ensured when all parties consciously and voluntarily
accept the exchange (Brenkert, 2008; Caruana et al., 2008; Nixon &
Gabriel, 2016). However, many consumers report encountering incomplete
and misleading information, which weakens their ability to make informed
decisions and act autonomously (EC, 2015; Eurobarometer, 2011). The
lack of consumer information is not limited to situations where fairness is
absent; it can also be observed even where proper regulations exist. This issue
arises due to consumers’ difficulty in accessing information, the complexity
of product and service structures, or the influence of marketing strategies.
Some external factors have been identified in the EU “Unfair Commercial
Practices Directive” (EUR-LEX, 2005) as having the potential to threaten
personal autonomy through elements such as harassment and coercion. This
raises a crucial question: what are the distinctions between external influences
that threaten autonomy and those that align with it? In this context, the
debate on autonomy gains significant importance in terms of marketing
ethics. For instance, impulsive buying is a frequently encountered consumer
behavior that illustrates the conflict between autonomy and marketing
(Chan et al., 2017; Moser, 2018; Strack et al., 2006). Previous studies
have supported the strong relationship between impulsive buying and the
purchase of undesirable products, often leading to consumer regret (Hoch
& Loewenstein, 1991; Lee et al., 2015; Wood, 1998). This finding can be
considered a significant indicator of violations of consumer autonomy.
The concept of consumer autonomy offers a perspective that examines
the impact of marketing methods and practices on individuals’ independent
decision-making processes and the extent to which they align with these
processes (e.g., Anker et al., 2010; Arrington, 1982; Barrett, 2000;
Bishop, 2000; Crisp, 1987; Cunningham, 2003; Raley, 2006; Sneddon,
2001; Villarán, 2017). Factors contributing to the erosion of autonomy
include targeted advertisements, social media algorithms, pricing strategies,
psychological interactions, and recommendation systems. These factors can
hinder consumers’ ability to make informed decisions. Research on consumer
psychology suggests that impulsive buying has a psychological explanation
within the context of self-regulation and self-control deficiencies (Chen &
Wang, 2016; Verplanken & Sato, 2011; Yi & Baumgartner, 2011). In this
regard, impulsive buying emerges as a prevalent consumer behavior that
significantly weakens autonomy due to marketing strategies (Baumeister,
2002). Consequently, impulsive purchasing behaviors result from external
marketing factors manipulating consumer decisions and restricting their free
will.
Persuasive marketing strategies play a complex role in the erosion
of consumer autonomy. These strategies do not always pose a threat to
autonomy; on the contrary, they can serve as an essential tool in constructing
brand identities and symbolic values. For example, brands such as Nike in
sportswear or Apple in technology invest heavily in marketing strategies
to enhance the symbolic meanings of their products. Such strategies
encourage consumers to identify with products and develop brand loyalty.
In this context, consumer exposure to persuasive marketing messages can
sometimes be seen not as an interference with autonomy but as a means
of expressing individual preferences. However, a critical distinction must
be made: while persuasive marketing provides consumers with options and
supports their capacity to make informed choices, it also carries the risk of
eroding autonomy through manipulative and misleading tactics.

4.1. Ethical Perspectives: The Erosion of Consumer Autonomy


The marketing discipline has often been criticized for violating consumer
autonomy (Hackley, 2009). Consumers value the ability to choose products
and services that align with their personal preferences as an essential aspect
of autonomy (Anker, 2020). However, marketers’ infringement on this
autonomy raises ethical concerns. For instance:
- Violations of ethical transparency principles,
- Disrespect for consumer dignity and rights,
- Encouragement of the consumption of products that disregard
environmental sustainability.
Such instances give rise to serious ethical concerns regarding autonomy.
The various methods used by marketing professionals to influence
consumers’ decision-making processes highlight the central role of autonomy
in marketing ethics (Anker, 2020; Arrington, 1982; Crisp, 1987; Sunstein,
2016; Thaler & Sunstein, 2009). This underscores that the preservation of
consumer autonomy is not only a matter of individual preferences but also a
critical aspect of the ethical dimension of marketing.
Western Enlightenment thought regards individual free will and autonomy
as fundamental values. This understanding has been linked to economic
theories concerning consumers’ capacity for free choice. Consumers exercise
their autonomy by freely selecting from available options. However, this
autonomy is constrained by factors such as freedom, price, time, and lack
of information. Consumer behavior research has extensively examined
consumers’ efforts to overcome these limitations (Wertenbroch et al., 2020).
The erosion of consumer autonomy is a significant ethical issue, closely
associated with concepts such as consumer rights, information privacy, and
the fight against manipulation. More than one-third of consumers in the
European Union report feeling uninformed and unaware (Eurobarometer,
2011), and a substantial proportion lacks sufficient knowledge about
fundamental consumer rights (EC, 2015). These informational deficiencies
hinder consumers’ ability to exercise their autonomy effectively and make
informed decisions. In this context, marketing strategies and advertisements
may pose a threat to consumer autonomy, as they have the potential to
manipulate and mislead individuals.
According to the Kantian perspective, the actions of individuals lacking
autonomy are not ethically assessable. Kant (1999) argues that an individual’s
capacity to make decisions regarding their own actions is inherently linked
to moral responsibility. In this regard, autonomy is considered an ethical
responsibility. However, in contemporary society, particularly with the rise
of digital marketing and data-driven advertising, safeguarding consumer
autonomy has become increasingly complex. Pragmatist philosophers,
on the other hand, associate autonomy with ethical responsibility and
emphasize that for an individual to act with free will, others must respect
their autonomy (Hyman et al., 2023). This perspective frames autonomy
as an interdependent component of individual freedom and responsibility.
Thus, adopting a philosophical approach to understanding consumer
behavior is crucial when examining the effects of marketing strategies and
their potential interference in individuals’ decision-making processes.
Consumer autonomy should not be viewed solely through the lens of
individual preferences but rather within a broader framework shaped by
social and economic structures. In the modern consumer landscape, the
digitalization and personalization of marketing strategies may significantly
erode consumer autonomy. Specifically, algorithms, artificial intelligence,
and data-driven marketing techniques can obstruct consumers from making
conscious choices. This situation underscores the need to redefine ethical
boundaries to ensure the protection of consumer autonomy. From an ethical
standpoint, consumers should be provided with transparent, accurate, and
comprehensive information, allowing them to make choices free from
manipulation.
Modern digital environments contain significant elements that threaten
consumer autonomy. For instance, tracking personal data and utilizing it to
deliver personalized offers may constitute a violation of individual privacy.
The collection, use, and sharing of personal data often occur without
consumer consent or awareness. Ethically, such data usage should be entirely
transparent and based on consumer approval. This suggests that preserving
consumer autonomy is not only contingent upon corporate transparency
but also on enhancing consumers’ digital literacy, enabling them to
make informed choices. Consumer autonomy is threatened not only by
manipulation and deception but also by issues such as lack of information,
power imbalances, and privacy violations. The loss of consumer autonomy
represents a profound ethical issue, and addressing this challenge necessitates
the implementation of fairer, more transparent, and more conscientious
marketing strategies. Ethical responsibility requires both governments
and corporations to address these issues and take stronger steps toward
safeguarding consumer autonomy.

4.2. Consumer Autonomy in the Digital World and the Erosion of Consumer Autonomy
In digital platforms, particularly in areas such as e-commerce and
social media, the collection of personal data and the presentation of
customized content based on this data have the potential to influence
consumer preferences. Online consumers navigate increasingly complex
and information-dense environments in their decision-making processes.
However, these environments do not only weaken consumers’ ability to
make choices—and thus their autonomy—due to information overload and
cognitive stress; the deliberate manipulation strategies employed by digital
platforms further complicate this process (Mik, 2016).
In the modern consumer world, the nature of marketing strategies
includes numerous elements that may contribute to the erosion of consumer
autonomy. Cunningham (2003) asserts that a marketer cannot force
consumers to accept existing attitudes or change their preferences. However,
the rise of digital technologies and data-driven advertising has increasingly
blurred these boundaries. Algorithms and personalized marketing techniques
not only predict consumer preferences but also develop strategies to shape
them. Rather than supporting consumer autonomy, this situation holds the
potential to erode it. Consumers may believe they are making choices in line
with their own desires, yet due to the manipulations and directives they are
exposed to, they may unknowingly be steered toward certain preferences.
Even if a consumer initially has no interest in a product, continuous exposure
to advertisements and algorithmic recommendations may direct them toward
it. From this perspective, it can be argued that consumers do not always
engage in rational decision-making but rather act within the alternatives
presented to them. Consumers guided in this manner inadvertently become
trapped in specific consumption patterns, further restricting their ability to
make conscious choices. In this context, the question arises as to whether
personalized marketing truly serves the interests of consumers.
Marketing researchers study how consumers perceive the digital
ecosystem and how they behave within it. While artificial intelligence
(AI) provides significant advantages to consumers, research suggests that
the growing influence of AI and machine learning tools, along with the
increasing dominance of online platforms, threatens consumer freedom of
choice and personal autonomy. In particular, the delegation of decision-
making processes to AI has led to the emergence of a phenomenon known
in the literature as “modified consumers.” This concept implies that
choices in the shopping process are no longer entirely under individual
control, and consumption preferences play a diminishing role in identity
formation through conscious decisions and personal effort (Sevastianova,
2023). Although AI and machine learning technologies facilitate consumer
decision-making processes, they simultaneously threaten consumer
autonomy. By predicting consumer decisions based on past data, these
technologies may limit the ability to make free choices, thereby creating a
“lock-in effect.” For instance, even if a consumer wishes to adopt a healthier
lifestyle, AI may recommend unhealthy products based on past purchasing
habits. Additionally, AI-generated recommendations do not offer consumers
opportunities for negotiation or alternative choices, leading to the erosion
of autonomy. In the long run, this process may result in consumers losing
their ability to engage in independent thought and decision-making. The
weakening of choice and autonomy may cause consumers to make less
relevant decisions, develop hostility toward new technologies, or experience
a sense of learned helplessness. These reactions negatively impact consumers’
ability to make independent choices and further diminish their autonomy.
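To make this lock-in mechanism concrete, consider the following minimal Python sketch. It is a deliberately simplified, hypothetical illustration rather than a description of any real platform’s recommender, and all function, item, and category names are invented: items are ranked purely by how often their category appears in the consumer’s purchase history, so options from never-purchased categories (here, the healthier alternative) never surface.

from collections import Counter

def recommend(purchase_history, catalog, k=2):
    # Rank catalog items by how often their category appears in past purchases.
    # Categories the consumer has never bought score 0 and stay invisible.
    category_counts = Counter(item["category"] for item in purchase_history)
    ranked = sorted(catalog,
                    key=lambda item: category_counts[item["category"]],
                    reverse=True)
    return ranked[:k]

history = [{"name": "cola", "category": "soft_drink"},
           {"name": "chips", "category": "snack"},
           {"name": "soda", "category": "soft_drink"}]
catalog = [{"name": "energy drink", "category": "soft_drink"},
           {"name": "pretzels", "category": "snack"},
           {"name": "salad kit", "category": "health_food"}]

print([item["name"] for item in recommend(history, catalog)])
# -> ['energy drink', 'pretzels']; the salad kit never appears.

Real recommender systems are far more sophisticated, but the structural point holds: when rankings are driven entirely by past behavior, a consumer who wishes to change their habits is systematically denied visibility of the very options that would support that change.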
The impact of AI and machine learning tools on consumer autonomy
varies depending on the degree to which consumer decisions are linked to
personal identity. If a consumer bases decisions on personal identity, values,
or lifestyle, AI-generated recommendations may significantly undermine
autonomy. Additionally, cultural and individual differences must be
considered. Consumers’ trust in AI—particularly in human-like technologies
such as voice assistants and robots—plays a crucial role. The constraints
imposed by these tools on choice and autonomy influence how consumers
respond to such limitations; some consumers may react more strongly to
these restrictions (Sevastianova, 2023).
The legal literature frequently associates AI advancements with concerns about
technological dominance, where computers exert control over humans.
However, these concerns are often dismissed as exaggerated predictions,
overlooking the fundamental issue at hand. The core problem lies not in
technology serving as a mere tool but in its facilitation of power imbalances,
allowing certain actors to gain dominance over others. In the context of
online commerce, some entities have the potential to exert control over their
counterparts, reinforcing asymmetries in access to information and power. In
this regard, the growing control of specific actors over information and power
through technology leads to the erosion of consumer autonomy (Pasquale,
2007). This poses a critical issue that should not be overlooked. With the
proliferation of online commerce, consumer decisions now encompass not
only simple choices—such as purchasing books or electronic devices—but
also high-risk and complex financial transactions. The digital mediation of
decisions regarding insurance plans and financial products increases the risk
of consumers being influenced by technological guidance. It is crucial to
highlight that the issue is not merely about temporary consumer frustrations
stemming from paying higher prices for certain products. The fundamental
concern is that technological guidance directly affects individuals’ capacity
for conscious and autonomous decision-making, systematically shaping their
choices. This demonstrates that consumer autonomy is being significantly
eroded and that the manipulative potential of digital environments is
becoming an increasingly serious threat.
The erosion of consumer autonomy has become a significant dimension
of the dynamics of online commerce. Online environments, mediated by
technology, present various factors that directly influence consumer decision-
making processes. Manipulated information inherent in online commerce
weakens consumer autonomy. A consumer, expected to make informed
decisions, becomes dependent on algorithmically driven and marketing-
influenced content. The way consumers perceive online marketplaces
and products is largely shaped by the strategies employed by online
businesses. The design of online businesses is a deliberate effort to influence
consumer behavior. The prioritization of certain content while making
other content less accessible restricts consumer choices, thereby weakening
their autonomous decision-making abilities. Consumers make decisions
based solely on the options presented to them, encountering difficulties in
accessing alternatives and comprehensive information (De Mul and Van den Berg, 2011). Notably, consumer attention is becoming an increasingly scarce
resource in digital environments. Digital platforms employ various strategies
to direct consumers’ limited attention toward specific products or services,
leading to decisions influenced by external factors rather than independent
reasoning. As a result, consumers’ ability to think independently and make
autonomous choices diminishes, as external influences frequently intervene
in their decision-making processes. The strategies employed in online
commerce and digital marketing create an environment that erodes consumer
autonomy. The exposure of consumers to a limited range of content restricts
their decision-making processes, ultimately leading to the loss of individual
autonomy. In the long term, this may result in consumer behavior becoming
more predictable and controllable (Wertheimer, 2014).
Ostensibly, technology is employed to “optimize user experience” or to
create “frictionless transaction processes.” However, it is frequently overlooked
that these optimizations primarily serve businesses rather than consumers. In
theory, digital environments are expected to provide consumers with more
choices, greater information access, and lower prices. In practice, however,
these environments tend to limit choices, restrict access to information, and
reduce consumer surplus. Online businesses influence consumer behavior
through various technological interventions that determine how and when
information is presented. This results in an unprecedented power imbalance
between the parties involved in transactions, raising significant concerns
not only about the extent of procedural exploitation permitted by contract
law and the adequacy of existing consumer protection regulations but also
about the broader impact of technology on consumer autonomy. Ultimately,
technology is never neutral: depending on how it is utilized, it may either
preserve and enhance consumer autonomy by strengthening the ability to
make informed choices or restrict autonomy by imposing externally dictated
preferences (Mik, 2016).

5. Conclusion
The preservation of consumer autonomy necessitates the redefinition
of the ethical boundaries of persuasive marketing strategies. Creating a
consumer environment in which individuals are fully informed, their choices
remain independent of manipulative influences, and they can make decisions
of their own free will emerges as a critical requirement from both an ethical
perspective and the standpoint of long-term sustainability. In this regard, the
erosion of consumer autonomy should be considered not only as a matter of
individual freedom but also as a fundamental issue affecting the democratic
consumer culture.
In the past, technology was merely a tool used to achieve specific goals
and objectives; however, today, it has transcended this role, evolving into a
mechanism that grants certain actors access to information and power. This
transfer of power provides businesses and digital platforms with data that
influence consumer preferences, enabling them to utilize this information
in line with their own interests. It is evident that the power conferred
by technology is not always distributed equally and fairly, resulting in a
pronounced power asymmetry among various stakeholders. This imbalance
allows certain actors to exert a greater influence on consumers. Consequently,
rather than making conscious and independent choices, consumers exposed
to such mechanisms tend to act according to external directives shaped by
these influences. By collecting consumer data and analyzing behavioral
patterns, these actors can predict future consumer actions, thereby guiding
and manipulating decision-making processes. Consumers subjected to
such manipulation—whether consciously or unconsciously—experience a
weakening of their free will, and their capacity for independent decision-
making is significantly eroded. The resulting asymmetrical power structure
hinders consumers from making choices based on their own preferences and
aligning their decisions with their individual needs and desires. Data-driven
algorithms and targeted personalized advertisements restrict the number and
diversity of options available for consideration, thereby shaping decision-
making processes. Consequently, consumers’ ability to make informed and
autonomous choices is progressively weakened.
Consumer autonomy ensures that individuals can make conscious and
independent decisions based on their personal motivations and needs,
thereby strengthening their ability to accept or reject marketing offers
(Brenkert, 2008). This concept also holds a significant position in European
Union consumer law. Specifically, under the Unfair Commercial Practices
Directive (EUR-LEX, 2005), a commercial practice may be deemed unfair if
it significantly impairs, or has the potential to impair, the freedom of choice
or decision-making of the average consumer regarding a product and if such
an impairment results in, or is likely to result in, a transactional decision
that the consumer would not otherwise have made. This regulation aims to
protect consumer autonomy and ensure that consumers can make decisions
freely, without being subject to manipulative influences. In this context,
safeguarding consumer rights, implementing fair marketing strategies, and
preventing consumers from being rendered vulnerable to manipulation must
be reinforced through legal regulations and strategic policies. Preventing the
erosion of consumer autonomy requires a strong focus on the effectiveness
of legal measures and consumer protection policies in this domain. Defining
ethical and legal boundaries in marketing necessitates an approach that
supports fair and informed decision-making processes while safeguarding
consumer autonomy from potential threats.

References
Anderson, W.T., Challagalla, G.N., & McFarland, R.G. (1999). Anatomy of exchange. Journal of Marketing Theory and Practice, 7(4), 8–19.
André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D. H., Goldstein, W., Huber, J. C., van Boven, L., Weber, B., & Yang, H. (2018). Consumer choice and autonomy in the age of artificial intelligence and big data. Customer Needs and Solutions, 5(1–2), 28–37.
Anker, T. B. (2024). Meaningful choice: Existential consumer theory. Marketing Theory, 24(4), 591–609.
Anker, T. (2020). Autonomy as license to operate: Establishing the internal and external conditions of informed choice in marketing. Marketing Theory, 20(4), 527–545.
Anker, T.B., Kappel, K., & Sandøe, P. (2010). The liberating power of commercial marketing. Journal of Business Ethics, 93(4), 519–530.
Arrington, R.L. (1982). Advertising and behaviour control. Journal of Business Ethics, 1(1), 3–12.
Aydın, A. E. & Dogan, V. (2023). Kriz ortamında tüketici ataleti: Whatsapp kişisel veri düzenlemesi üzerine bir araştırma. Pazarlama ve Pazarlama Araştırmaları Dergisi, 16(1), 61–82.
Bagozzi, R.P. (1975). Marketing as exchange. Journal of Marketing, 39(4), 32–39.
Bakos, Y., Marotta-Wurgler, F., & Trossen, D.R. (2014). Does anyone read the fine print? Consumer attention to standard-form contracts. Journal of Legal Studies, 43(1), 1–35.
Barrett, R. (2000). Market arguments and autonomy. Journal of Philosophy of Education, 34(2), 327–341.
Bauman, Z. (1988). Freedom. Open University Press.
Baumeister, R.F. (2002). Yielding to temptation: Self-control failure, impulsive purchasing, and consumer behavior. Journal of Consumer Research, 28(4), 670–676.
Bendapudi, N. & Leone, R. P. (2003). Psychological implications of customer participation in co-production. Journal of Marketing, 67(1), 14–28.
Bishop, J.D. (2000). Is self-identity image advertising ethical? Business Ethics Quarterly, 10(2), 371–398.
Brenkert, G. (2008). Marketing ethics. Blackwell Publishing.
Caruana, R. & Crane, A. (2008). Constructing consumer responsibility: Exploring the role of corporate communications. Organization Studies, 29(12), 1495–1519.

Chan, T.K.H., Cheung, C.M.K. & Lee, Z.W.Y. (2017). The state of online impulse-buying research: A literature analysis. Information & Management, 54(2), 204–217.
Chen, Y.F. & Wang, R.Y. (2016). Are humans rational? Exploring factors influencing impulse buying intention and continuous impulse buying intention. Journal of Consumer Behaviour, 15(2), 186–197.
Cluley, R. (2019). The politics of consumer data. Marketing Theory, 20(1), 45–63.
Crisp, R. (1987). Persuasive advertising, autonomy, and the creation of desire. Journal of Business Ethics, 6(5), 413–418.
Cunningham, A. (2003). Autonomous consumption: Buying into the ideology of capitalism. Journal of Business Ethics, 48(3), 229–236.
Deci, E. L. & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. Plenum.
De Mul, J., & van den Berg, B. (2011). Remote control: Human autonomy in the age of computer-mediated agency. In Law, Human Agency and Autonomic Computing (pp. 46–63). Routledge.
EC (European Commission) (2015). Consumer Conditions Scoreboard: Consumer at Home in the Single Market. URL (consulted 7 June 2019): [Link]ard-consumers-home-single-market-2015-edition_en.
Ertemel, A. V. & Pektaş, G. Ö. E. (2018). Dijitalleşen dünyada tüketici davranışları açısından mobil teknoloji bağımlılığı: Üniversite öğrencileri üzerine nitel bir araştırma. Yıldız Sosyal Bilimler Enstitüsü Dergisi, 2(2), 18–34.
EUR-LEX (2005). Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 Concerning Unfair Business-to-Consumer Commercial Practices in the Internal Market (Unfair Commercial Practices Directive). URL (consulted 7 June 2019): [Link]LexUriServ/[Link]?uri=OJ:L:2005:149:0022:0039:en:PDF.
Eurobarometer (2011). Consumer Empowerment, TNS Opinion & Social, Special Eurobarometer 342/Wave 73.2 and 73.3. Survey requested by Eurostat and the Directorate-General for "Health and Consumers" (DG SANCO) and coordinated by the Directorate-General for Communication.
Fassiaux, S. (2023). Preserving consumer autonomy through European Union regulation of artificial intelligence: A long-term approach. European Journal of Risk Regulation, 14(4), 710–730.
Frankfurt, H.G. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68(1), 5–20.
Gümüş, B. & Gegez, E. E. (2017). Değişen tüketici kültüründe yeni trend: Ortak tüketim. Pazarlama ve Pazarlama Araştırmaları Dergisi, 10(20), 155–178.

Hackley, C. (2009). Marketing: A critical introduction. Sage Publications Inc.
Hoch, J.S. & Loewenstein, G.F. (1991). Time-inconsistent preferences and consumer self-control. Journal of Consumer Research, 17(4), 492–507.
Houston, F.S. & Gassenheimer, J.B. (1987). Marketing and exchange. Journal of Marketing, 51(4), 3–18.
Hyman, M. R., Kostyk, A. & Trafimow, D. (2023). True consumer autonomy: A formalization and implications. Journal of Business Ethics, 183(3), 841–863.
Hyun, I. (2001). Authentic values and individual autonomy. Journal of Value Inquiry, 35, 195–208.
Kane, R. (2011). The Oxford handbook of free will. New York: Oxford University Press.
Kant, I. (1999). The Cambridge edition of the works of Immanuel Kant: Practical philosophy. Cambridge University Press.
Lee, L., Lee, M. P., Bertini, M., Zauberman, G., & Ariely, D. (2015). Money, time, and the stability of consumer preferences. Journal of Marketing Research, 52(2), 184–199.
Lusch, R.F. & Watts, J.K.M. (2018). Redefining the market: A treatise on exchange and shared understanding. Marketing Theory, 18(4), 435–449.
Lüdicke, M. K. (2006). A theory of marketing: Outline of a social systems perspective. Deutscher Universitäts-Verlag.
Mik, E. (2016). The erosion of autonomy in online consumer transactions. Law, Innovation and Technology, 8(1), 1–38.
Moser, C. (2018). Impulse buying: Interventions to support self-control with e-commerce. Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, Canada, 1–4. https://doi.org/10.1145/3170427.3173026
Nixon, E. & Gabriel, Y. (2016). So much choice and no choice at all: A socio-psychoanalytic interpretation of consumerism as a source of pollution. Marketing Theory, 16(1), 39–56.
Oshana, M. (1998). Personal autonomy and society. Journal of Social Philosophy, 29(1), 81–102.
Pasquale, F. (2007). Technology, competition, and values. Minn. JL Sci. & Tech., 8, 607.
Pietarinen, J. (1994). Itsemäärääminen ja itsemääräämisoikeus. In J. Pietarinen, V. Launis, J. Räikkä, E. Lagerspetz, M. Rauhala & M. Oksanen (Eds.), Oikeus itsemääräämiseen (pp. 15–47). Painatuskeskus.
Räikkä, J. (1999). On morality of avoiding information. In V. Launis, J. Pietarinen, & J. Räikkä (Eds.), Genes and morality (pp. 63–75). Rodopi.

Raley, Y. (2006). Food advertising, education, and the erosion of autonomy. The International Journal of Applied Philosophy, 20(1), 67–79.
Schneider-Kamp, A., & Askegaard, S. (2020). Putting patients into the centre: Patient empowerment in everyday health practices. Health, 24(6), 625–645.
Sevastianova, V. N. (2023). Trademarks in the age of automated commerce: Consumer choice and autonomy. IIC – International Review of Intellectual Property and Competition Law, 54(10), 1561–1589.
Siipi, H. & Uusitalo, S. (2011). Consumer autonomy and availability of genetically modified food. Journal of Agricultural and Environmental Ethics, 24(2), 147–163.
Sneddon, A. (2001). Advertising and deep autonomy. Journal of Business Ethics, 33(1), 15–28.
Strack, F., Werth, L., & Deutsch, R. (2006). Reflective and impulsive determinants of consumer behavior. Journal of Consumer Psychology, 16(3), 205–216.
Sunstein, C. R. (2016). Fifty shades of manipulation. Journal of Marketing Behavior, 1(3–4), 214–244.
Thaler, R. H. & Sunstein, C. R. (2009). Nudge: Improving decisions about health, wealth, and happiness. Penguin.
Verplanken, B. & Sato, A. (2011). The psychology of impulse buying: An integrative self-regulation approach. Journal of Consumer Policy, 34(2), 197–210.
Villarán, A. (2017). Irrational advertising and moral autonomy. Journal of Business Ethics, 144(3), 479–490.
Wertenbroch, K., Schrift, R. Y., Alba, J. W., Barasch, A., Bhattacharjee, A., Giesler, M., Knobe, J., Lehmann, D.R., Matz, S., Nave, G., Parker, J.R., Puntoni, S., Zheng, Y., & Zwebner, Y. (2020). Autonomy in consumer choice. Marketing Letters, 31, 429–439.
Wertheimer, A. (2014). Against autonomy? Journal of Medical Ethics, 40(5), 351–352.
Wood, M. (1998). Socio-economic status, delay of gratification, and impulse buying. Journal of Economic Psychology, 19(3), 295–320.
Yi, S. & Baumgartner, H. (2011). Coping with guilt and shame in the impulse buying context. Journal of Economic Psychology, 32(3), 458–467.
Zhu, T., Zhang, L., Deng, H., Liu, C. & Liu, X. (2024). Consumer autonomy: A strategy to alleviate the self-serving bias in tourism value co-creation. Journal of Hospitality and Tourism Management, 60, 72–81.
Chapter 10

Artificial Intelligence and the Unfairness of Pricing Strategies

Aylin Atasoy1

Abstract
The rapid advancement of artificial intelligence (AI) and digital technologies
has transformed pricing strategies, enabling firms to implement algorithmic
and dynamic pricing models. While these strategies enhance efficiency and
profitability by leveraging big data and predictive analytics, they also raise
significant ethical concerns. This study explores the fairness of AI-driven
pricing, particularly in the context of personalized pricing strategies that
adjust prices based on consumer data. Drawing from theoretical frameworks
such as price fairness, distributive justice, and trust theory, the study examines
consumer reactions to algorithmic pricing and the implications for long-term
business-consumer relationships.
Empirical evidence suggests that personalized pricing can lead to perceptions
of unfairness, especially when consumers are unaware of price differentiation
or feel manipulated. While businesses argue that data-driven pricing
enhances market efficiency, critics highlight risks such as privacy violations,
algorithmic biases, and economic discrimination. Furthermore, AI-driven
pricing strategies may exacerbate social inequalities, particularly when used
in essential services such as transportation and healthcare.
This study underscores the need for balancing profit-driven pricing
models with ethical considerations to maintain customer trust and social
responsibility. As AI continues to shape market dynamics, a responsible
approach to algorithmic pricing will be essential in fostering ethical business
practices and ensuring long-term sustainability.

1 İstanbul Gelişim Üniversitesi, İİSBF Havacılık Yönetimi

[Link]

1. Introduction
The development of digital technologies has changed many dynamics
in the business world, and marketing has been part of this change. First
and foremost, digital information technologies have made it much easier to
access consumer data and use it to make decisions much more quickly. It is
understood that marketing will have to be data-driven, using predictive and contextual models and the capabilities of artificial (or augmented) intelligence and augmented reality, thereby becoming "augmented marketing" (Reis, 2022, p. 8).
In its Digital Marketing 2025 report, the global audit, consulting, and research firm Deloitte (2024, p. 4) finds that, in the face of potential economic challenges, CMOs' top three priorities are: first, accelerating the transition to new technologies such as AI; second, growth and expansion into new markets, segments, and geographies; and third, implementing systems and/or algorithms to improve customer personalisation.
It is noteworthy that two of the top three priorities of senior marketing
executives are digitalisation and improving customer personalisation.
The use of digital technologies has facilitated the tracking of customers’
consumption habits and purchasing behaviour, enabling the provision of
special offers, particularly with the use of personalised prices to enhance their
appeal. Information technologies enable businesses to collect vast quantities
of customer data at negligible cost and on a full-time basis (DalleMule
& Davenport, 2017, p.112). This data can then be analysed to create
sophisticated pricing strategies and personalised price recommendations
based on these strategies (Priester et al., 2020, p.99).
In contemporary business organisations, there is a growing prevalence of
units dedicated to the management of data, in addition to the establishment of
marketing departments. These departments facilitate the creation of bespoke
and personalised offers for customers, with these offers being informed
by the data collected about the customers in question. The utilisation
of sophisticated software and applications facilitates the aggregation of
internet search behaviour, GPS location, and the diverse digital footprints
emanating from individuals’ digital devices. Through the analysis of this
data, organisations are able to personalise advertisements, products, and
services, particularly in regard to pricing, aligning with the specific needs
and preferences of customers (Dubus, 2024, p.1).
The present study will focus on the extent to which the use of artificial
intelligence in dynamic pricing systems is fair and ethical in terms of
personalised price offers to consumers. Recent research on this subject will
be referenced. The first section will emphasise the concept of price fairness.
The second section will discuss research results on dynamic pricing and the
use of artificial intelligence. The third section will discuss pricing strategies
created by using data collected through artificial intelligence from an ethical
perspective.

2. Price Fairness Concept


In early 2000, consumers noticed that Amazon was listing a DVD at different prices for different users. They complained extensively on the company's chat boards and pressured the company to stop offering customers different prices for the same product (Lyn Cox, 2001, p. 264). A more recent case of differential pricing occurred on a travel platform: on the same day, a customer requested quotes for the same hotel accommodation using three different brands of phones and received a different offer on each. Interestingly, the iPhone was quoted a higher price than the others. Similarly, in 2022 a passenger tried to book seats in the same class on the same flight for himself and his mother, yet the system offered his mother a cheaper price. Faced with different prices for the same product, the customer expressed dissatisfaction with the unequal treatment as well as the unequal pricing (Ying et al., 2024, p. 1).
Price fairness perceptions are influenced by multiple theoretical frameworks. The Dual Entitlement Principle (Kahneman, Knetsch, & Thaler, 1986) suggests that consumers expect fairness in transactions, accepting price increases due to rising costs but rejecting those driven solely by profit maximization. Equity Theory (Adams, 1965) and Distributive Justice (Homans, 1961) emphasize fairness based on input-output comparisons, where paying more than others for the same product is perceived as unjust. Procedural Justice (Thibaut & Walker, 1975) highlights the role of transparent and logical pricing mechanisms in shaping fairness perceptions. Similarly, Social Comparison Theory (Major & Testa, 1989) suggests that consumers judge fairness by comparing their price with others'. Attribution Theory (Weiner, 1985) explains that perceived fairness depends on whether price changes are attributed to controllable or external factors. Trust Theory (Mayer, Davis, & Schoorman, 1995) posits that consumer trust moderates reactions to pricing, with loyal customers being more tolerant unless they feel betrayed. Finally, work on perceived fairness and emotions (Campbell, 2004) highlights the emotional dimension of fairness: perceived price unfairness can trigger negative reactions such as anger, complaints, or negative word-of-mouth (Xia et al., 2004, p. 1).
In the contemporary context, customers encountering varied prices for similar products on travel platforms may recognise that these fluctuations stem from numerous factors. Nevertheless, the rationales underlying price changes are frequently opaque, as transparent pricing practices remain uncommon in the travel industry (Chung & Petrick, 2013). As a result, the perception of price fairness becomes a pivotal issue for both consumer experience and business interests. Personalised pricing has been shown to erode consumer loyalty and diminish purchase intentions by eliciting feelings of unfairness (Richards et al., 2016), so in the long term such practices can adversely affect corporate interests. Furthermore, while some tourists may exhibit self-protective or vindictive behaviour in response to price injustice, others may choose to remain indifferent. For instance, a 2024 study on the perception of price fairness on online travel platforms concluded that the pricing practices of travel platforms are not yet aligned with customers' expectations of market fairness, and suggested that platforms should act in accordance with industry norms and ethical standards to maintain consumer trust (Ying et al., 2024, p. 9).
Taken together, these theoretical foundations make it evident that fairness judgments are not based solely on price levels but also on the rationale behind price changes, transparency, social comparisons, and emotional responses. Consumers accept price increases when they are justified by external factors, such as rising costs, but view them as unfair when they appear to be driven purely by profit motives. Social and comparative dimensions also play a crucial role, as individuals evaluate fairness relative to what others pay. Additionally, procedural aspects, such as transparency in pricing, influence fairness perceptions, and trust in the seller moderates consumer reactions, with loyal customers showing more tolerance unless they feel deceived (Khandeparkar et al., 2020). Ultimately, fairness perceptions are not only cognitive but also emotional, meaning that unfair pricing can lead to strong negative responses, such as complaints and negative word-of-mouth (van Boom et al., 2020). This interpretation highlights the complexity of price fairness judgments and their implications for consumer behavior.
In the contemporary context, pricing strategies are guided by digital information technologies that process data through machine learning-based algorithms supported by artificial intelligence. These technologies are both faster and more accurate than human analysis and are capable of managing customers' perceptions; at the same time, they also affect the perception of price fairness. In this context, it is important to know which factors shape that perception. The following components, as outlined in the framework developed by Xia et al. (2004, pp. 1–2), have been identified as influential factors in determining consumers' perceptions of price fairness:
i) transaction similarity and the selection of the comparison party;
ii) the allocation of costs and profits with the concomitant attribution of
responsibility;
iii) the status of the buyer-seller relationship (trust); and
iv) knowledge, beliefs, and social norms.
The interplay of these factors collectively shapes consumers' cognitive and emotional perceptions of price fairness. Depending on perceived value and the emotions involved, they also affect consumers' decision-making: some consumers may take no action, while others may consider taking revenge, and some even publicly report instances of price unfairness and socially unfair behaviour (Martin et al., 2009).

3. Dynamic Pricing and Artificial Intelligence


The concept of dynamic pricing, which gained prominence in the 1980s
following its successful implementation by American airlines, also resulted
in the adoption of algorithmic pricing. While the mathematical concepts
and models underpinning dynamic pricing can be traced back to the mid-
twentieth century, it was the seminal scientific papers of Peter Belobaba
(1987, 1989) in the late 1980s and early 1990s that generated increased
interest in practical studies (Seele et al. 2019, 700).
Personalised pricing is predicated on the utilisation of algorithmic
pricing, a practice that airline companies have employed in revenue
management software for a considerable duration. In its nascent form, the
software's pricing mechanisms were governed by instructions provided by
a human programmer. Contemporary algorithms, however, are driven by artificial
intelligence and exhibit a marked increase in autonomy when compared with
their antecedents. These advanced algorithms have evolved to formulate
pricing strategies through active experimentation and in accordance with
the evolving or changing environment. They demonstrate a high degree
of autonomy and require minimal or no instruction from an external
programmer. However, the employment of algorithms in pricing strategies
gives rise to legal and ethical concerns. These algorithms may be designed
to orchestrate price increases or diminished competition, obviating the need
for direct communication or agreement (Calvano et al., 2020, pp.3-4). This
may present challenges in terms of competition law and consumer rights,
potentially necessitating the establishment of regulatory frameworks to
promote the development of more transparent and auditable algorithms.
There are different types of algorithmic pricing. The best known of these
is dynamic pricing. Dynamic pricing, sometimes referred to as surge, yield,
or real-time pricing, refers to the practice of dynamically adjusting prices to
realise revenue gains when responding to a specific market situation with
uncertain demand. Personalised pricing, also described as first-degree price discrimination or customised or targeted pricing, represents a pricing strategy in which "firms charge different prices to different consumers according to their willingness to pay" (Seele et al., 2019, p. 699).
Demand forecasting, flexibility and the willingness to pay are pivotal to
a profitable pricing strategy. For instance, a study conducted in the context
of grocery retailers (Srinivasan et al., 2008) found that demand assessments,
rather than changing prices from week to week based on wholesale costs and
competition, lead to higher profits. Dynamic pricing also micro-segments
the market by person, product, period and location in order to adjust the
price. Prices are adjusted as these four basic dimensions change (Kopalle et al., 2023, p. 581).
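To make this distinction concrete, the following minimal sketch (in Python) contrasts the two mechanisms just described. It is an illustration only: the function names, demand index, willingness-to-pay estimates, and price floor are hypothetical values invented for this example, not parameters from the studies cited above.

    # Minimal sketch contrasting dynamic and personalised pricing.
    # All prices, demand indices, and WTP estimates are hypothetical.

    def dynamic_price(base_price: float, demand_index: float) -> float:
        """One market price, adjusted over time: an index above 1.0
        signals excess demand (surge), below 1.0 signals slack demand."""
        return round(base_price * demand_index, 2)

    def personalized_price(estimated_wtp: float, price_floor: float) -> float:
        """A per-consumer price tied to estimated willingness to pay
        (first-degree price discrimination), bounded by a cost-based floor."""
        return round(max(estimated_wtp * 0.95, price_floor), 2)

    # Dynamic pricing: everyone sees the same price, which varies by period.
    print(dynamic_price(100.0, 1.3))        # peak period   -> 130.0
    print(dynamic_price(100.0, 0.8))        # off-peak      -> 80.0

    # Personalised pricing: different consumers see different prices at the
    # same moment, based on inferred willingness to pay.
    print(personalized_price(180.0, 60.0))  # high-WTP user -> 171.0
    print(personalized_price(70.0, 60.0))   # low-WTP user  -> 66.5

The call sites make the difference visible: dynamic pricing varies one market price over time, whereas personalised pricing quotes different consumers different prices at the same moment.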
Industries such as supermarkets, airlines and credit card companies collect the traces left by individual consumer transactions in large databases to examine purchasing patterns and to make personalised price offers through targeted marketing strategies. On the one hand, there are those who
argue that consumers served at higher prices have the potential to affect
competition and that this situation will lead companies to abandon dynamic
pricing. However, an analysis of actual market behaviour reveals that price
adjustments based on customer segments do not necessarily result in reduced
profits for companies, even when consumers are aware of these strategies
(Laussel & Resende, 2022).
Algorithmic pricing has emerged as a crucial aspect of dynamic pricing
in response to changes in customer segments. Wang, Li, and Kopalle (2022)
define algorithmic pricing as the use of artificial intelligence algorithms by
businesses to identify, analyze, and offer personalized prices to consumers.
Today, companies equipped with advanced big data analytics can effortlessly
track consumers’ digital footprints to determine their preferences. In the
retail sector, Target analyzes customers’ past shopping behaviors to provide
personalized discount coupons. Similarly, in the travel and hospitality sector, Orbitz engages in price discrimination by tracking online browsing activities.
The use of consumer data is not limited to online retailers but extends to
physical stores as well. For instance, Amazon’s cashierless “Amazon Go”
stores utilize cameras and sensors to identify customers, monitor their in-
store movements and product interactions, and offer personalized discounts
(Vandervoort, 2024). In China, particularly among luxury brands, stores
employ facial recognition technology at entrances to identify individuals and
implement personalized pricing strategies (Wong, 2018).

4. Ethical Aspects of Pricing Strategies


The use of algorithmic pricing and data-driven personalisation in
competitive markets has ethical implications for customer privacy. While
competition requires independent decision-making, Gal (2019) highlights
how algorithms now enable autonomous price coordination, potentially
leading to implicit collusion among competitors. This raises legal concerns,
particularly when algorithms are designed to react to competitors’ pricing
decisions in a way that maintains coordinated market outcomes.
Simultaneously, businesses leverage vast data resources to enhance
personalized marketing strategies, shifting from broad customer
segmentation to individualized targeting. However, Turow (2017, pp. 247–
248) in his book “The aisles have eyes: How retailers track your shopping, strip
your privacy, and define your power” warns of ethical dilemmas in this practice,
as algorithms may facilitate social discrimination by tailoring messages and
prices based on consumer profiles, often without the individuals' awareness or
consent. These developments underscore the tension between technological
advancements, market fairness, and ethical considerations in modern digital
economies.
Gal (2019) also addresses the legal accountability of algorithm designers
and users in cases of potential anti-competitive behavior. The European
Commissioner for Competition emphasizes that businesses remain
responsible for the consequences of the algorithms they implement. Legal
liability arises when a company is aware of the algorithm’s pricing effects, as
demonstrated in the Eturas case, where 30 Lithuanian travel agencies used
a shared booking system that restricted discounts. The European Court of
Justice ruled that awareness of the algorithmic restriction was necessary to
establish a cartel agreement, though indirect awareness—such as ignoring
the algorithm’s potential effects—could also be relevant. However, the
legal framework remains unclear regarding situations where algorithms
autonomously determine pricing strategies and facilitate collusion without
explicit human intervention. This ambiguity raises ongoing legal and ethical
challenges in regulating algorithmic decision-making in competitive markets
(Gal, 2019, p.20).
Personalised pricing is a tool utilised across various sectors, with its
efficacy in enhancing business profitability being particularly pronounced in
contexts characterised by minimal marginal costs of production (Coker &
Izaret, 2021, p.387). To illustrate this point, consider the observation made
by Shiller (2016, p.7), who asserts that Netflix could potentially augment its
profits by up to 15% through the strategic tailoring of its pricing structure
to customers’ web browsing histories.
Steinberg (2020) critiques big-data-driven personalized pricing, arguing
that its exclusive use for profit maximization disrupts the fair distribution
of economic benefits. He asserts that such pricing strategies deepen power
asymmetries between consumers and firms, undermining relational equality
in market transactions. By making it prohibitively difficult for consumers to
compare prices or negotiate, personalized pricing diminishes their agency
as market participants, effectively limiting their ability to make informed
purchasing decisions. This perspective highlights the ethical concerns
surrounding the practice, suggesting that personalized pricing may be
morally indefensible if it violates principles of fairness, equal treatment, and
market accessibility.
It is the right of consumers to demand transparency regarding the benefits
they accrue from specific market practices, particularly in terms of price.
Previous research suggests that consumers’ acceptance of a price depends
on their perception of its fairness, which is judged by whether a transaction
is reasonable, acceptable, or just. Unfair pricing practices trigger negative
consumer reactions, including distrust, reduced purchase intentions, and
increased likelihood of switching to competitors. Moreover, perceived price
unfairness leads to negative word-of-mouth, both privately and publicly,
further harming a company’s reputation and customer loyalty (Hufnagel
et al. 2022, p.347). For consumers, lack of transparency in pricing leads
to the perception of arbitrary pricing, which may lead to scepticism and
questioning of the firm’s credibility.
The increasing role of digitalisation and algorithmic decision-making in
dynamic pricing highlights the technological advances that are transforming
pricing strategies. The rise of online retail, digital travel booking, and
mobile commerce, accelerated by the COVID-19 pandemic, has enabled
real-time, personalized pricing. Innovations such as electronic shelf labels
in physical stores allow retailers to adjust prices dynamically, bridging the
gap between online and offline pricing. Additionally, the shift from human-
driven to algorithmic-driven pricing decisions has led to autonomous
pricing agents setting prices without direct managerial intervention. This
automation reduces the cost of price adjustments, making dynamic pricing
more accessible and widely adopted (Kopalle et al., 2023, p. 589).
On the other hand, it must be noted that dynamic pricing can enable price collusion, which can lead to monopolistic or oligopolistic practices. Legal cases, such as the 2015 "Poster Cartel" case on Amazon, have demonstrated how pricing algorithms can be used to maintain price parity among vendors, effectively preventing price competition. While some cases involve explicit collusion, in which vendors coordinate pricing strategies, more concerning is tacit collusion, where autonomous pricing algorithms unintentionally synchronize prices without direct human intervention. This occurs due to advanced machine learning techniques, such as reinforcement learning, which allow algorithms to adjust prices in response to competitors' pricing patterns. Two key challenges arise from this: first, existing legal frameworks focus on human collusion, making algorithm-driven collusion difficult to regulate; second, the complexity and speed of algorithmic pricing make collusion difficult to detect and analyze, requiring extensive computational resources. These factors present significant ethical and regulatory challenges in the use of dynamic pricing (Nunan & Di Domenico, 2022, pp. 454–455).
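The mechanics of such tacit coordination can be sketched in a few lines of Python. The toy model below pits two independent Q-learning agents against each other in a repeated pricing game with no communication channel; the price grid, demand function, and learning parameters are hypothetical choices made for readability and do not reproduce the richer demand systems used in the cited studies.

    # Two independent Q-learning pricing agents; state = last price pair.
    # All demand, cost, and learning parameters here are illustrative toys.
    import random
    from collections import defaultdict

    PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # discrete price grid
    ALPHA, GAMMA = 0.15, 0.95            # learning rate, discount factor
    EPISODES = 200_000

    def profit(own, rival):
        """Toy demand: the cheaper firm takes the market, ties split it;
        marginal cost is zero, so profit equals price times market share."""
        if own < rival:
            return own
        if own > rival:
            return 0.0
        return own / 2

    Q = [defaultdict(lambda: [0.0] * len(PRICES)) for _ in range(2)]
    state = (0, 0)
    for t in range(EPISODES):
        eps = max(0.01, 1.0 - t / EPISODES)          # decaying exploration
        acts = [random.randrange(len(PRICES)) if random.random() < eps
                else Q[i][state].index(max(Q[i][state])) for i in range(2)]
        rewards = [profit(PRICES[acts[i]], PRICES[acts[1 - i]])
                   for i in range(2)]
        nxt = tuple(acts)
        for i in range(2):                           # standard Q-update
            target = rewards[i] + GAMMA * max(Q[i][nxt])
            Q[i][state][acts[i]] += ALPHA * (target - Q[i][state][acts[i]])
        state = nxt

    print("greedy prices in final state:",
          [PRICES[Q[i][state].index(max(Q[i][state]))] for i in range(2)])

Because each agent conditions its next price on the rival's last price, reward-punishment patterns can emerge without any explicit agreement, which is precisely why such behaviour is hard to fit into legal frameworks built around human collusion.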
There are studies that argue against the ethics of price customization. Marcoux (2006) and Elegido (2011) argue that it is more ethical to offer the same product to different consumers at a unitary price set under open market conditions than to charge them different prices through price customization. A comprehensive review by Coker and Izaret (2021) opposes these studies and argues that price customization is more ethical than unitary pricing. Through a structured example involving two consumer types, they evaluate price personalization using four Social Welfare Functions (SWFs): utilitarian, egalitarian, prioritarian, and leximin. Their findings indicate that price personalization enhances overall social welfare across all four SWF perspectives. Ultimately, they conclude that personalized pricing not only increases total welfare but also benefits consumers, challenging traditional ethical concerns associated with differential pricing strategies (Mazrekaj et al., 2024).
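The four SWFs mentioned above are straightforward to operationalize. The sketch below applies each to two hypothetical consumer-surplus vectors; the numbers are invented purely to show how the comparison works and carry no empirical weight.

    # Four Social Welfare Functions over consumer utility (surplus) vectors.
    # The utility numbers below are hypothetical illustrations only.

    def utilitarian(u):
        return sum(u)                     # total welfare

    def egalitarian(u):
        return min(u)                     # welfare of the worst-off

    def prioritarian(u, k=0.5):
        return sum(x ** k for x in u)     # concave weights favour the worse-off

    def leximin_key(u):
        return tuple(sorted(u))           # compare the minimum first, then
                                          # the second-lowest, and so on

    # Hypothetical surplus per consumer (high-, mid-, low-WTP):
    unitary      = [5.0, 3.0, 0.0]   # one price; low-WTP consumer priced out
    personalized = [3.5, 3.0, 2.0]   # top surplus extracted, access extended

    for name, swf in [("utilitarian", utilitarian),
                      ("egalitarian", egalitarian),
                      ("prioritarian", prioritarian)]:
        print(f"{name:12s} unitary={swf(unitary):.2f} "
              f"personalized={swf(personalized):.2f}")
    print("leximin prefers:",
          "personalized" if leximin_key(personalized) > leximin_key(unitary)
          else "unitary")

Under these invented numbers, personalization ranks higher on all four criteria, mirroring Coker and Izaret's conclusion; as the next paragraph shows, relaxing the underlying utility assumptions can reverse that ranking.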
Besides these studies, Mazrekaj et al. (2024) evaluate the ethical implications of unitary versus personalized pricing through the lens of four consequentialist Social Welfare Functions (SWFs). Their findings challenge
the conclusions of Coker and Izaret (2021), who argued that personalized
pricing is ethically superior due to its ability to increase both utility and
equity. The authors caution that this conclusion is contingent on the
assumption that wealthier individuals derive higher utility from a product.
When this assumption is relaxed, the ethical advantage of personalized
pricing diminishes, particularly if consumers perceive it as unfair or feel their
privacy is violated by AI-driven willingness-to-pay (WTP) estimations. The
study suggests that unitary pricing may often be preferable if personalized
pricing results in a welfare loss, especially when product utility is significant
for lower-income consumers. More broadly, the findings highlight the need
for a nuanced approach to ethical evaluations, as different economic and
behavioral conditions can lead to unexpected reversals in outcomes.
Algorithmic price personalization has an impact on consumer perceptions
of fairness. Zuiderveen Borgesius & Poort (2017, p.354) argue that
consumers feel wronged when charged higher prices than others, perceiving
such practices as unfair or manipulative, which can lead to reduced demand.
Hermann (2022, p.52) further emphasizes the ethical dilemmas associated
with algorithm-driven pricing, particularly its potential to reinforce social
inequalities. When algorithms segment populations based on demographic
factors, they may unintentionally favor or disadvantage certain customer
groups. Biases in algorithmic predictions can stem from skewed data,
including disproportionate representation of certain groups, misleading
proxy variables, or insufficient data, leading to unfair and discriminatory
outcomes. Mazrekaj et al. (2024) reinforce this concern by stressing that
these biases can result in unequal treatment of individuals, raising significant
ethical and fairness-related challenges in algorithmic pricing strategies.
Empirical research consistently demonstrates that consumers perceive personalised pricing as unfair or manipulative (Anderson & Simester, 2010; Krämer et al., 2018; Turow et al., 2005). Turow and colleagues (2005) surveyed 1,500 U.S. adults about online and offline shopping and price discrimination. The study revealed that 76% of respondents expressed concern about others paying less for the same product. At the same time, 64% of American adults who have used the internet for shopping did not know that it is legal for "an online store to charge different people different prices at the same time of day," and 71% did not know that it is legal for an offline store to do so. Moreover, 75% did not know that, even if a website has a privacy policy, it may share visitors' information with other websites and companies (Turow et al., 2005, p. 3).
Price discrimination is frequently perceived negatively, even when it benefits
the consumer, as evidenced by the fact that 72% of respondents disagreed
with the notion that stores should offer them lower prices to retain their
loyalty. The perception of unfair pricing has been demonstrated to have
significant consequences, with Anderson and Simester (2010, p.729)
finding in a randomised field experiment involving over 50,000 customers
that consumers who discovered price disparities were less likely to make
future purchases from the retailer. These findings highlight the potential
negative impact of personalised pricing on consumer trust and long-term
business relationships.
According to the study by Krämer et al. (2018) on airline pricing driven by low-cost carriers, consumer knowledge about personalized pricing is crucial in determining whether a deal is perceived as fair. Resistance to personalized pricing is expected due to concerns over privacy, data sharing, and perceived price manipulation. In the short term, airlines that refrain from using personalized dynamic pricing may gain a competitive edge if customers feel exploited. However, if all major carriers adopt personalized dynamic pricing, customers may have no alternative but to accept it, much as revenue management and advance purchase restrictions became industry norms despite initial resistance.
Nevertheless, gaining customer acceptance for personalized dynamic pricing will be more challenging than implementing traditional revenue management practices. To succeed, airlines must effectively communicate and justify personalized pricing as fair, especially as privacy and discrimination concerns become widespread. Two key risks require further analysis: first, whether personalized pricing provides meaningful value to customers despite its economic advantages, and second, whether the short-term revenue gains from real-time willingness-to-pay estimation outweigh the long-term risks of damaging customer relationships. Ultimately, consumer perceptions of fairness (Alderighi et al., 2022) will be crucial in determining the viability and success of personalized dynamic pricing in the airline industry.
An important discussion point is the ethical concern surrounding digital surveillance and privacy in the context of personalized pricing. Unlike the "access-view" of privacy, where individuals simply relinquish their data, people selectively share information with third parties while maintaining expectations about its scope, access, and usage. Ethical concerns arise when consumers feel coerced into sharing their data, such as when insurance companies charge higher premiums to those unwilling to disclose personal information; Loi et al. (2022, p. 8) argue that such a practice constitutes psychological coercion. This form of digital surveillance not only undermines
privacy preferences but also limits individual autonomy, authenticity,
and spontaneity in decision-making. Since personalized pricing relies on
algorithms that estimate a consumer’s willingness-to-pay using collected
data, it may create a sense of being monitored, leading to a perceived loss
of utility (Priester et al., 2020; Turow et al., 2015; Zuiderveen Borgesius
& Poort, 2017). Some individuals may reject data-sharing entirely, not
because of specific consequences, but because they intrinsically value privacy
(Loi et al., 2022). These concerns highlight the ethical and psychological
implications of data-driven pricing strategies.

5. Conclusion
As artificial intelligence and data-driven strategies continue to reshape
pricing mechanisms, the ethical and practical implications of algorithmic
pricing become increasingly significant. While dynamic pricing offers firms
a powerful tool to optimize revenue and balance supply and demand, its
implementation must be approached with caution. The intersection of AI
and pricing strategies presents both opportunities and challenges—ranging
from increased efficiency to concerns over fairness, transparency, and
consumer trust.
Striking a balance between profitability and ethical responsibility is
crucial for businesses aiming to maintain long-term customer relationships.
As discussed, algorithmic pricing can inadvertently lead to consumer
dissatisfaction, particularly when price adjustments appear exploitative or
opaque. In industries where pricing directly affects essential services, such as
transportation and healthcare, the need for responsible governance becomes
even more pronounced. Regulatory oversight, corporate self-regulation, and
interdisciplinary collaboration between scholars and practitioners will play
a pivotal role in shaping the future of fair and effective pricing strategies.
Moving forward, businesses must not only refine their AI-driven
pricing models to enhance accuracy and adaptability but also integrate
ethical considerations into their decision-making processes. Transparent
communication, consumer education, and proactive policy-making will be
essential in ensuring that AI-powered pricing benefits both businesses and
society at large. By fostering a responsible approach to pricing strategy,
firms can harness the advantages of AI while mitigating risks, ultimately
creating a more sustainable and consumer-centric marketplace.

References
Adams, J.S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in Experimental Social Psychology (Vol. 2, pp. 267–299). New York: Academic Press.
Alderighi, M., Nava, C.R., Calabrese, M., Christille, J.M., & Salvemini, C.B. (2022). Consumer perception of price fairness and dynamic pricing: Evidence from [Link]. Journal of Business Research, 145, 769–783.
Allender, W.J., Liaukonyte, J., Nasser, S., & Richards, T.J. (2021). Price fairness and strategic obfuscation. Marketing Science, 40(1), 122–146.
Anderson, E. T., & Simester, D. I. (2010). Price stickiness and customer antagonism. The Quarterly Journal of Economics, 125, 729–765.
Belobaba, P. P. (1987). Air travel demand and airline seat inventory management. Ph.D. dissertation. Cambridge, MA: Flight Transportation Laboratory, Massachusetts Institute of Technology. Retrieved from https://dspac[Link]/handle/1721.1/68077.
Belobaba, P. P. (1989). OR practice: Application of a probabilistic decision model to airline seat inventory control. Operations Research, 37(2), 183–197. https://doi.org/10.1287/opre.37.2.183
Bettray, J., Suessmair, A., & Dorn, T. (2017). Perceived price fairness in pay-what-you-want: A multi-country study. American Journal of Industrial and Business Management, 7, 711–734.
Calvano, E., Calzolari, G., Denicolo, V., & Pastorello, S. (2020). Artificial intelligence, algorithmic pricing, and collusion. American Economic Review, 110(10), 3267–3297.
Campbell, M. (2004). Who says? How the source of price information and the direction of price change influence perceptions of price fairness. Working paper, Department of Marketing, University of Colorado, Boulder.
Chung, J. Y., & Petrick, J. F. (2013). Price fairness of airline ancillary fees: An attributional approach. Journal of Travel Research, 52(2), 168–181. https://doi.org/10.1177/0047287512457261
Coker, J., & Izaret, J.-M. (2021). Progressive pricing: The ethical case for price personalization. Journal of Business Ethics, 173, 387–398.
DalleMule, L., & Davenport, T.H. (2017). What's your data strategy? Harvard Business Review, 95(3), 112–121.
Deloitte Digital (2024). Embracing change and gearing up for the future. [Link]
Dubus, A. (2024). Behavior-based algorithmic pricing. Information Economics and Policy, 66, 101081.
Elegido, J. M. (2011). The ethics of price discrimination. Business Ethics Quarterly, 21(4), 633–660.
Gal, M.S. (2019). Illegal pricing algorithms. Communications of the ACM, 62(1), 18–20. https://doi.org/10.1145/3292515
Gerlick, J.A. & Liozu, S.M. (2020). Ethical and legal considerations of artificial intelligence and algorithmic decision-making in personalized pricing. Journal of Revenue and Pricing Management, 19, 85–98.
Hermann, E. (2022). Leveraging artificial intelligence in marketing for social good - An ethical perspective. Journal of Business Ethics, 179, 43–61. https://doi.org/10.1007/s10551-021-04843-y
Homans, G.C. (1961). Social Behavior: Its Elementary Forms. New York: Harcourt, Brace & World.
Hufnagel, G., Schwaiger, M., & Weritz, L. (2022). Seeking the perfect price: Consumer responses to personalized price discrimination in e-commerce. Journal of Business Research, 143, 346–365.
Kahneman, D., Knetsch, J.L., & Thaler, R. (1986). Fairness and the assumptions of economics. Journal of Business, 59(4), 285–300.
Khandeparkar, K., Maheshwari, B., & Motiani, M. (2020). Why should I pay more? Testing the impact of contextual cues on perception of price unfairness for the price-disadvantaged segment in dual pricing. Tourism Management, 78, Article 104075. https://doi.org/10.1016/j.tourman.2020.104075
Kopalle, P.K., Pauwels, K., Akella, L.Y., & Gangwar, M. (2023). Dynamic pricing: Definition, implications for managers, and future research directions. Journal of Retailing, 99, 580–593.
Krämer, A., Friesen, M., & Shelton, T. (2018). Are airline passengers ready for personalized dynamic pricing? Journal of Revenue and Pricing Management, 17, 115–120.
Laussel, D. & Resende, J. (2022). When is product personalization profit-enhancing? A behavior-based discrimination model. Management Science, 68(12), 8872–8888.
Loi, M., Hauser, C., & Christen, M. (2022). Highway to (digital) surveillance: When are clients coerced to share their data with insurers? Journal of Business Ethics, 175, 7–19. https://doi.org/10.1007/s10551-020-04668-1
Lyn Cox, J. (2001). Can differential prices be fair? The Journal of Product and Brand Management, 10(5), 264–275. https://doi.org/10.1108/10610420110401829
Major, B. & Testa, M. (1989). Social comparison processes and judgments of entitlement and satisfaction. Journal of Experimental Social Psychology, 25(2), 101–120.
Marcoux, A. (2006). Much ado about price discrimination. Journal of Markets and Morality, 9(1), 57–69.
Martin, W. C., Ponder, N., & Lueg, J. E. (2009). Price fairness perceptions and customer loyalty in a retail context. Journal of Business Research, 62(6), 588–593. https://doi.org/10.1016/j.jbusres.2008.05.017
Mayer, R.C., Davis, J.H., & Schoorman, F.D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.
Mazrekaj, D., Verhagen, M.D., Kumar, A., & Muzio, D. (2024). Does price personalization ethically outperform unitary pricing? A thought experiment and a simulation study. Journal of Business Ethics. https://doi.org/10.1007/s10551-024-05828-3
Nunan, D. & Di Domenico, M.L. (2022). Value creation in an algorithmic world: Towards an ethics of dynamic pricing. Journal of Business Research, 150, 451–460.
Priester, A., Robbert, T. & Roth, S. (2020). A special price just for you: Effects of personalized dynamic pricing on consumer fairness perceptions. Journal of Revenue and Pricing Management, 19, 99–112.
Reis, J. L. (2022). Artificial intelligence impact in marketing. In Silva, P. & Teixeira, F. (Orgs.), Digital Marketing Trends (pp. 7–10). Porto: CEOS Publicações.
Richards, T. J., Liaukonyte, J., & Streletskaya, N. A. (2016). Personalized pricing and price fairness. International Journal of Industrial Organization, 44, 138–153. https://doi.org/10.1016/j.ijindorg.2015.11.004
Seele, P., Dierksmeier, C., Hofstetter, R., & Schultz, M. D. (2019). Mapping the ethicality of algorithmic pricing: A review of dynamic and personalized pricing. Journal of Business Ethics, 170(4), 697–719.
Shiller, B.R. (2016). Personalized price discrimination using big data. Brandeis University Working Paper Series, 108, 1–39.
Srinivasan, S., Pauwels, K., & Nijs, V. (2008). Demand-based pricing versus past-price dependence: A cost–benefit analysis. Journal of Marketing, 72(2), 15–27.
Steinberg, E. (2020). Big data and personalized pricing. Business Ethics Quarterly, 30(1), 97–117.
Thibaut, J.W. & Walker, L. (1975). Procedural Justice: A Psychological Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates.
Turow, J., Feldman, L., & Meltzer, K. (2005). Open to Exploitation: America's Shoppers Online and Offline. A Report from the Annenberg Public Policy Center of the University of Pennsylvania. Retrieved from http://repository.upenn.edu/asc_papers/35
Turow, J. (2017). The aisles have eyes: How retailers track your shopping, strip your privacy, and define your power. New Haven: Yale University Press.
van Boom, W. H., van der Rest, J. P. I., van den Bos, K., & Dechesne, M. (2020). Consumers beware: Online personalized pricing in action! How the framing of a mandated discriminatory pricing disclosure influences intention to purchase. Social Justice Research, 33(3), 331–351. https://doi.org/10.1007/s11211-020-00348-7
Vandervoort, O. (2024). Amazon Go stores: How the 'Just Walk Out' cashierless tech works. [Link]work/ (Access date: 17.03.2025)
Wang, X., Li, X., & Kopalle, P.K. (2022). When does it pay to invest in pricing algorithms? Production and Operations Management.
Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92(4), 548–573.
Wong, T. (2018). China's retailers turn to real-world surveillance to track big spenders. Wired. [Link] (Access date: 17.03.2025)
Xia, L., Monroe, K. B., & Cox, J. L. (2004). The price is unfair! A conceptual framework of price fairness perceptions. Journal of Marketing, 68(4), 1–15. https://doi.org/10.1509/jmkg.68.4.1.42733
Ying, T., Zhou, B., Ye, S., Shihan (David) M., & Tan, X. (2024). Oops, the price changed! Examining tourists' attribution patterns and blame towards pricing dynamics. Tourism Management, 103, 104890.
Zuiderveen Borgesius, F., & Poort, J. (2017). Online price discrimination and EU data privacy law. Journal of Consumer Policy, 40, 347–366.
Chapter 11

Fake Reviews and Ratings Undermining Consumer Trust

Haydar Özaydın1

Abstract
With Internet technologies and e-commerce systems, consumers can access information about products, services and brands. One of the most effective of these tools is the ability to read and write reviews about the products and services they have purchased. These reviews and ratings in various online channels such as social media, e-commerce websites or
online evaluation platforms have become important in directing consumers’
purchasing processes and feelings of trust and can significantly affect
consumer decisions. However, with the development of artificial intelligence
technologies, the creation of fake reviews has become easier and widespread.
Therefore, the prevalence of reviews manipulated to influence consumers
may cause scepticism and distrust towards online platforms. This situation
negatively affects consumer trust and creates information asymmetry between
e-commerce platforms, businesses and consumers. Fake reviews and ratings
create doubts about the accuracy and reliability of the information provided
about the product/service. Online reviews and ratings cease to be a real source
of business feedback. In addition, the research also discusses the contribution
of artificial intelligence tools to avoid and detect the adverse effects of fake
reviews and ratings. This research aims to examine the effects of fake reviews
and ratings on consumer trust and evaluate how fake reviews and ratings are
produced, their characteristics, and the research on detecting fake reviews.

1 Assistant Professor, Bolu Abant İzzet Baysal Üniversitesi, [Link]@[Link], 0000-0003-0274-7143

[Link]

Introduction
With digital processes, consumer comments and feedback have become an important reference tool in consumers' purchasing decisions. Although genuine consumer reviews and feedback exist in digital media, consumers' purchasing decisions can be manipulated through fake reviews or ratings generated with artificial intelligence. As technology continues to develop, the ability of artificial intelligence to manipulate digital media is also improving. It is becoming increasingly difficult to distinguish what is real from what is artificially produced or manipulated, and this poses significant challenges for consumers trying to make informed decisions based on accurate information. With the help of artificial intelligence, it is possible to produce fake reviews and ratings that appear to come from real users and to mislead others into trusting false information. Unfortunately, with the rise of technology comes the possibility of manipulation: AI tools enable sophisticated algorithms that can generate fake reviews and ratings, convincing consumers that a product or service is better than it is.
These situations weaken consumers' trust and thus negatively affect their online shopping experience. When a purchase based on fake reviews or ratings results in a negative experience, it creates dissatisfied consumers. Such manipulations can threaten not only individual consumers' experiences but also businesses' reputations. Fake reviews and ratings therefore prevent consumers from accessing accurate information about products and services, increasing information asymmetry and disrupting market order (Malbon, 2013). The reliability of these reviews, which affect consumers' online shopping decisions, is important for both businesses and consumers. As Mathews Hunt (2015) states, online reviews play an important role in consumers' evaluations of products and services. However, the increase in fake reviews and the involvement of artificial intelligence in this process call the reliability of these reviews into question and weaken consumers' trust in online platforms. In this context, detecting and preventing fake reviews is critical for consumer protection and market order (Mohawesh et al., 2021).
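As a rough indication of how such detection research proceeds, the sketch below (in Python, using scikit-learn) trains a text classifier on a tiny invented corpus. The reviews, labels, and feature choices are hypothetical; real detection systems rely on large labelled datasets and behavioural signals such as reviewer history, timing, and rating patterns, not on text alone.

    # Minimal sketch of a supervised fake-review detector:
    # TF-IDF text features plus logistic regression.
    # The toy corpus and labels are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    reviews = [
        "Amazing product best ever buy now five stars",            # fake
        "Incredible must buy best price best quality",             # fake
        "Battery lasts about two days, case scratches easily",     # genuine
        "Shipping took a week; fits my 2019 model after a tweak",  # genuine
    ]
    labels = [1, 1, 0, 0]   # 1 = fake, 0 = genuine

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(reviews, labels)

    # Probability that an unseen review is fake, per the toy model.
    print(model.predict_proba(["best product ever amazing buy now"])[:, 1])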
The effects of fake reviews on consumer trust have become an important research topic in digital marketing and e-commerce. Online consumer reviews are among the critical elements influencing consumers' purchasing decisions, but the increase in fake reviews can damage consumer trust, which therefore needs to be measured and analysed. To rebuild consumer trust, it is important to understand the effects of fake reviews and to take adequate measures. In this study, online consumer reviews and fake reviews are defined first; the concepts of fake reviews and consumer trust are then examined; and finally, studies in the literature on detecting fake reviews are discussed.

1. Online Consumer Reviews and Fake Reviews


Online consumer reviews can be described as user-generated opinions posted online by purchasers of products or services (Ma & Lee, 2014). The components of online consumer reviews include an overall star rating, clearly stated pros and cons, and free-text comments; the consistency of these components provides an important data source for product assessments (Schindler & Decker, 2013). With the development of web technology and e-commerce, consumers increasingly rely on online reviews before making purchasing decisions (Song, Wang, Zhang, & Hikkerova, 2023). The first indicator in the consumer decision-making process is usually the rating, which summarizes users' evaluation of a product and is typically expressed as stars. Ratings condense large amounts of information, are easy to process, and help identify selection criteria; stars or scores are effective because they are easily accessible when selecting a product (Karaca & Gümüş, 2020). Therefore, an online consumer review can be defined as any positive, negative or neutral comment, rating, or ranking assumed to be made by a former customer about a product, service, brand or person and shared with other consumers in an unstructured format such as a blog post or an independent consumer review website (Filieri, 2016).
Today’s consumers essentially see online consumer comments as a form
of eWOM (electronic word-of-mouth) in the online and offline product
purchase decision process. Electronic word-of-mouth can be defined as all
positive or negative comments about the business, product or service on
online platforms and all kinds of communication based on them. Compared
to traditional word-of-mouth communication, online reviews and ratings
have the potential to reach more people through the internet (Fong,
2010). Positive reviews can lead the consumer to purchase the product,
while negative reviews can cause them to change their purchase decision.
Thus, positive reviews result in significant product sales, financial gains or
reputation for businesses and individuals. Online consumer reviews enable
people to obtain detailed information with high credibility and reputation
compared to information marketers provide (Akdeniz & Özbölük, 2019;
Park & Nicolau, 2015). These advantages of consumer reviews can be
an opportunity for many malicious practices (Algur, Patil, Hiremath, &
Shivashankar, 2010).
Spam, fake, misleading and even fraudulent online reviews are rapidly
growing and becoming widespread on the internet (Zhang, Du, Yoshida,
& Wang, 2018). Fake reviews are attempts to manipulate consumers into
thinking more favourably about a product or service than they otherwise
would and to influence their purchasing decisions (Costa Filho, Nogueira
Rafael, Salmonson Guimarães Barros, & Mesquita, 2023). From a business
perspective, the purpose of online review manipulation is to strengthen
the business’s online reputation, attract consumers’ attention and increase
their tendency to purchase from the business (Sop, Atasoy, & Günaydin,
2024). Fake reviews are inconsistent with authentic reviews of products or
services: they are false and deceptive, often written by reviewers with
little or no experience of the products or services being reviewed, with the
aim of misleading consumers in their purchasing decisions. The defining
characteristic of fake reviews is whether
they mislead consumers (Wu, Ngai, Wu, & Wu, 2020; Zhang, Zhou,
Kehoe, & Kilic, 2016). The aim of fake reviewers, or deceivers in general,
is to deceive others while trying to avoid detection. Motivated by financial
gains or other benefits, fake reviewers can continuously improve themselves
through previous experience to increase their chances of success (Zhang et
al., 2016). Fake reviews reduce informativeness, information quality and the
effective use of online product reviews. They can also damage the credibility
of reviews, negatively impacting the benefits that reviews can provide.
Moreover, since consumers have little knowledge of who the reviewers are,
it is natural for them to distrust both online platforms and reviews. The
trustworthiness of the reviewer is important in consumers’ perceptions of
trust in online reviews and ratings (Evans, Stavrova, & Rosenbusch, 2021).
In addition, fake reviews seriously negatively impact the development
of online product reviews and create information asymmetry between
merchants and customers. Online sellers may create positive fake reviews
for their products or negative fake reviews for their competitors’ products
for financial gain (Sahut, Laroche, & Braune, 2024). A well-functioning
review marketplace also benefits companies, as they can obtain genuine
customer feedback that can be analysed to improve products and services
(Salminen, Kandpal, Kamel, Jung, & Jansen, 2022).
There are two types of fake reviews: those created by humans and those
generated by computers. The main methods of creating fake reviews are as
follows (Ross, 2020; Salminen et al., 2022). Firms can use paid review
services to post fake reviews for their products and services; these paid
reviews are mostly seen on digital platforms such as Google, Yelp or
Amazon. Creating fake reviews through tools
such as artificial intelligence and bots is also possible; with natural
language processing and machine learning, such reviews can be produced more
cheaply than paid human reviews. Finally, fake accounts can be used to post
negative comments on competitors’ products and services and positive
comments on one’s own. Other more complex forms of
online review manipulation also exist, including ranking information
provided to consumers by search engines. A wide range of manipulations
of the loading time of web pages, their design and the way they present
information are also among the deliberate practices to which consumers are
unwittingly exposed (Malbon, 2013).

2. Consumer Trust and Fake Reviews


Consumer trust underpins the long-term commercial relationship between
seller and buyer. At its core, trust is about believing in, relying on or
having faith in an organisation, its staff and its services. It helps reduce perceived risk
and is a valuable component of a business strategy as it positively influences
buyers’ purchasing decision by generating word-of-mouth communication
(Bauman & Bachmann, 2017; Flanagan, Johnston, & Talbot, 2005). Trust
is an important factor in the influence of online reviews and ratings (Fong,
2010). Online reviews and ratings have become an important and trusted
source of information for consumers’ decision-making processes (Evans et
al., 2021).
Zhang, Chen, and Sun (2010) found that sellers’ reputation, information
openness, and online consumer reviews positively affect consumer trust.
This study emphasises that high-quality online reviews increase the seller’s
reputation and thus reinforce consumer trust. Utz, Kerkhof, and Van Den
Bos (2012) examined the effect of online store reviews on consumer trust.
The study results show that consumer reviews are an important element
in evaluating the trustworthiness of online stores. The authors state that
consumer reviews are a more effective determinant of trust than store
reputation. Lee, Park, and Han (2011) found that the effect of online consumer
reviews increases with higher trust in online shopping sites. In addition,
the authors stated that online consumer reviews made by independent users
affect consumers’ purchase intention more than consumer reviews directly
integrated into sellers’ advertisements. The quality and number of online
reviews are important factors affecting consumer trust. A study by Zeng,
Cao, Lin, & Xiao (2020) examined the relationship between the quality of
online reviews and consumer behaviour. The research shows that the quality
of online reviews directly impacts consumer intentions.
However, fake reviews and ratings have a negative impact on consumers’
purchase decision processes and their sense of trust (Costa Filho et al.,
2023). Wu and Qiu (2016) state that low-quality sellers tend to write more
fake reviews than high-quality sellers. This makes it difficult for
consumers to evaluate the genuine quality of products, thus weakening their
sense of trust. Song et al. (2023), however, found that fake reviews for
products with high brand awareness do not affect consumers’ purchase
intention, whereas as the fake review rate of products with low brand
awareness increases, purchase intention decreases.
He, Hollenbeck, and Proserpio (2022), in their research on Amazon, found
that fake reviews are purchased mainly for products with few reviews, low
ratings or newly released products. They note that Amazon’s countermeasures
do detect and delete such reviews, but because this process takes time,
sellers still gain short-term unfair advantages. The authors argue that
fake reviews and ratings should not be perceived as an advertising activity
but as a manipulation tool that damages consumers’ trust.
Sop et al. (2024) reported that hotel managers intervene in negative
reviews about their businesses through various methods. The
authors stated that managers resort to various unethical service compensation
methods, including having staff make comments as if they were customers,
to prevent consumers from making negative reviews about the hotel and its
services.
Fake reviews threaten the credibility of marketing and e-commerce. They
erode consumer trust in online reviews, which can in turn disrupt the
market order, and they can artificially raise or lower the ranking of
products. Their impact is not limited to reputational damage; they can also
bring financial losses (Salminen et al., 2022). Measuring and analysing the effects of fake reviews
on consumer trust requires a multidimensional approach. The research
studies above provide different perspectives necessary to understand the
adverse effects of fake reviews on consumer trust. The following section
presents studies on detecting fake reviews and evaluations.

3. Fake Reviews Detection


Fake reviews are also known as deceptive opinions or spam reviews, and
their authors are called spammers. They can cause financial loss for product
manufacturers and service providers, as negative fake reviews can damage
their brand reputation (Cardoso, Silva, & Almeida, 2018).
Some important features of fake reviews are the following (Alsubari et
al., 2021; Hussain, Turab Mirza, Hussain, Iqbal, & Memon, 2020); a simple
rule-based sketch combining these signals follows the list:
• Insufficient information about the reviewer: People who interact
little in the relevant channel or comment without profile information
are often flagged as fake reviewers.
• Similar review content: Fake reviewers often share similar reviews
on the relevant channels.
• Short reviews: Since fake reviewers are interested in fast returns,
they tend to share short reviews, often with spelling and grammar mistakes.
• Sharing reviews at similar times: Posting time is a useful signal, as
fake reviews are sometimes posted collectively at the same time.
• Exaggerated reviews: Fake reviewers often use overly positive or
negative statements.
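To make these signals concrete, the sketch below combines them into a simple rule-based suspicion score. It is illustrative only: the field names (profile_complete, posted_at) and all thresholds are hypothetical assumptions, not taken from the cited studies.

```python
# Illustrative rule-based scorer for the red flags listed above.
# Field names and thresholds are hypothetical, not from the cited studies.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Review:
    reviewer_id: str
    profile_complete: bool  # does the reviewer have profile information?
    text: str
    rating: int             # star rating, 1-5
    posted_at: datetime

def suspicion_score(review: Review, burst_size: int) -> int:
    """Count how many red flags a review triggers.

    burst_size is the number of reviews posted for the same product
    in the same time window (computed elsewhere).
    """
    score = 0
    if not review.profile_complete:     # insufficient reviewer information
        score += 1
    if len(review.text.split()) < 15:   # suspiciously short review
        score += 1
    if review.rating in (1, 5) and "!" in review.text:
        score += 1                      # exaggerated, extreme tone
    if burst_size > 10:                 # posted within a collective burst
        score += 1
    return score  # e.g. route to human moderation when score >= 3
```

In practice, such heuristics can serve only as a first filter; the machine learning approaches discussed below are needed for reliable detection.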
Hassan and Islam (2021) investigated a sentiment-analysis-based model for
detecting fake reviews online. They observed that fake reviews tend to sit
at the emotional extremes, either strongly positive or strongly negative.
To attract consumers’ attention, markers of extreme emotion, such as
exclamation marks and words like great, excellent, terrible or awful, are
often used (Banerjee & Chua, 2023).
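As a rough illustration of this idea, the sketch below flags reviews whose sentiment sits near the extremes, using NLTK’s off-the-shelf VADER analyzer. The 0.9 cut-off is an arbitrary assumption; Hassan and Islam’s (2021) actual model is considerably more elaborate.

```python
# Sketch: flag emotionally extreme reviews with NLTK's VADER analyzer.
# The threshold is illustrative, not taken from the cited study.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

def is_emotionally_extreme(text: str, threshold: float = 0.9) -> bool:
    """True if the compound sentiment score is near -1 or +1."""
    compound = analyzer.polarity_scores(text)["compound"]
    return abs(compound) >= threshold

print(is_emotionally_extreme("Great!!! Excellent, perfect, amazing!"))         # likely True
print(is_emotionally_extreme("Delivery took four days; packaging was fine."))  # likely False
```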
In Moon, Kim, and Iacobucci’s (2021) study, linguistic factors were
identified using word patterns to distinguish between fake and real reviews
of hotel services. Their research found that fake reviews exhibit features
such as lack of detail, present- and future-tense-oriented language, and
emotional exaggeration.
Plotkina, Munzel, and Pallud (2020), in research with 1,041 participants,
created two separate data pools of fake and real reviews and found that a
micro-linguistic automatic detection tool identified fake reviews with 81%
accuracy, while the human detection rate was only 57%. They noted that this
rate remained the same even when participants were given clues for
recognising fake reviews. The authors emphasised the need for more advanced
filtering methods for online consumer reviews.
Salminen et al. (2022) showed that fake reviews generated by artificial
intelligence tools can, in turn, be detected with very high accuracy by
artificial intelligence. They also found that such detectors can
effectively identify not only AI-generated reviews but also fake reviews
written by humans.
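A minimal supervised-learning sketch in the spirit of these studies is shown below: a TF-IDF representation of the review text feeds a logistic regression classifier. The four inline reviews and their labels are invented for illustration; a real detector would need thousands of labeled examples and richer behavioral features.

```python
# Minimal text-classification sketch: TF-IDF features + logistic regression.
# The tiny inline dataset is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Great great great! Best product ever!!!",             # fake-looking
    "Amazing!!! Perfect!!! Everyone must buy this now!",    # fake-looking
    "Battery lasts about two days with normal use.",        # genuine-looking
    "The zipper broke after a month; support replaced it.", # genuine-looking
]
labels = [1, 1, 0, 0]  # 1 = fake, 0 = genuine

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Excellent excellent, best ever, buy now!!!"]))  # expected: [1]
```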

Conclusions
The effects of fake reviews and evaluations on consumer trust in online
channels have become an important research topic. This effect is not only
limited to individual consumers but also damages the reputation of businesses.
Since fake reviews and evaluations do not reflect consumer experiences, they
negatively affect consumers’ perception of reliability towards businesses
and reviews. One of the main reasons why fake reviews significantly affect
consumer trust is that they mislead potential buyers by providing false
information about a product or service. These reviews do not reflect real
experiences and may exaggerate certain features to manipulate consumer
perception. Fake reviews also negatively affect the trust in the review system
itself. Consumers begin to question the accuracy and reliability of all reviews
and ratings, whether real or fake, making it difficult to make informed
decisions. This lack of trust can have a negative impact on businesses as they
may see a decline in sales and revenue due to sceptical consumers.
These artificially generated reviews and ratings are designed to mimic
real consumer sentiments, making it difficult for consumers to detect fake
reviews. Indeed, Costa Filho et al. (2023) found that fake reviews are much
more likely to go unnoticed by consumers if they are not equipped with
the tools to detect them. This manipulation can have serious consequences
for consumers, who may purchase low-quality products or services, and
businesses, which may experience a loss of trust and reputation.
In addition to AI-generated reviews, there are also cases where businesses
hire people or agencies to write fake reviews to boost their ratings and
attract more customers (Malbon, 2013). These unethical practices deceive
consumers and go against fair competition between businesses. Although
policymakers and regulators have started to actively address the issue of fake
reviews, legal actions against deceivers are complex and challenging due to
the difficulty of identifying perpetrators. Therefore, consumers and
review platforms should consider taking active steps to filter out deceptive
online reviews. For this, both review platforms and individuals should be
able to detect opinion spam (Plotkina et al., 2020). Negative fake reviews
can damage a company’s image and tarnish its brand, making it difficult to
attract new customers or retain existing ones. Therefore, companies
must actively monitor and address fake reviews to protect consumer trust
and reputation (Fong, 2010). Negative comments are critical for businesses
to identify their weak points and see them as opportunities for improvement.
They can continuously improve their business processes by recognising
this feedback as a valuable learning opportunity (Öztürk, 2024). On the
other hand, consumers should be cautious when relying on online reviews
and only make informed decisions after thorough research and evaluation.
In this digital age where information is easily accessible, businesses and
consumers need to be vigilant and act responsibly to combat the problem of
fake reviews. In conclusion, while digital channels offer new opportunities
for consumers to make informed purchasing decisions, they also pose
challenges with the increasing presence of manipulated reviews. Businesses
should prioritise ethical practices and transparency in online marketing to
maintain consumer trust and the integrity of digital platforms.

References
Akdeniz, P. C., & Özbölük, T. (2019). Online yorumların tüketici satın alma kararına etkisi: Kullanıcı özellikleri açısından bir değerlendirme [The effect of online reviews on consumer purchase decisions: An evaluation in terms of user characteristics]. İşletme Araştırmaları Dergisi, 11(4), 3104–3119.
Algur, S. P., Patil, A. P., Hiremath, P. S., & Shivashankar, S. (2010). Conceptual level similarity measure based review spam detection. 2010 International Conference on Signal and Image Processing, 416–423. https://doi.org/10.1109/ICSIP.2010.5697509
Alsubari, S., Deshmukh, S., Alqarni, A., Alsharif, N., H, T., Alsaade, F., & Khalaf, O. (2021). Data analytics for the identification of fake reviews using supervised learning. Computers, Materials & Continua, 70(2), 3189–3204. [Link]
Banerjee, S., & Chua, A. Y. K. (2023). Understanding online fake review production strategies. Journal of Business Research, 156, 113534. https://doi.org/10.1016/j.jbusres.2022.113534
Bauman, A., & Bachmann, R. (2017). Online consumer trust: Trends in research. Journal of Technology Management & Innovation, 12(2), 68–79. [Link]
Cardoso, E. F., Silva, R. M., & Almeida, T. A. (2018). Towards automatic filtering of fake reviews. Neurocomputing, 309, 106–116. https://doi.org/10.1016/j.neucom.2018.04.074
Costa Filho, M., Nogueira Rafael, D., Salmonson Guimarães Barros, L., & Mesquita, E. (2023). Mind the fake reviews! Protecting consumers from deception through persuasion knowledge acquisition. Journal of Business Research, 156, 113538. [Link]
Evans, A. M., Stavrova, O., & Rosenbusch, H. (2021). Expressions of doubt and trust in online user reviews. Computers in Human Behavior, 114, 106556. [Link]
Filieri, R. (2016). What makes an online consumer review trustworthy? Annals of Tourism Research, 58, 46–64. https://doi.org/10.1016/j.annals.2015.12.019
Flanagan, P., Johnston, R., & Talbot, D. (2005). Customer confidence: The development of a “pre-experience” concept. International Journal of Service Industry Management, 16(4), 373–384. https://doi.org/10.1108/09564230510614013
Fong, A. (2010). The influence of online reviews: Case study of TripAdvisor and the effect of fake reviews. Journal of Digital Research and Publishing, 6, 106–113.
Hassan, R., & Islam, Md. R. (2021). Impact of sentiment analysis in fake online review detection. 2021 International Conference on Information and Communication Technology for Sustainable Development (ICICT4SD), 21–24. [Link]
He, S., Hollenbeck, B., & Proserpio, D. (2022). The market for fake reviews. Marketing Science. [Link]
Hussain, N., Turab Mirza, H., Hussain, I., Iqbal, F., & Memon, I. (2020). Spam review detection using the linguistic and spammer behavioral methods. IEEE Access, 8, 53801–53816. https://doi.org/10.1109/ACCESS.2020.2979226
Karaca, Ş., & Gümüş, N. (2020). Tüketicilerin online yorum ve değerlendirme puanlarına yönelik tutumlarının online satın alma davranışlarına etkisi [The effect of consumers’ attitudes towards online reviews and rating scores on online purchasing behaviour]. Sakarya İktisat Dergisi, 9(1), 52–69.
Lee, J., Park, D., & Han, I. (2011). The different effects of online consumer reviews on consumers’ purchase intentions depending on trust in online shopping malls. Internet Research, 21(2), 187–206. https://doi.org/10.1108/10662241111123766
Ma, Y. J., & Lee, H.-H. (2014). Consumer responses toward online review manipulation. Journal of Research in Interactive Marketing, 8(3), 224–244. [Link]
Malbon, J. (2013). Taking fake online consumer reviews seriously. Journal of Consumer Policy, 36(2), 139–157. https://doi.org/10.1007/s10603-012-9216-7
Mathews Hunt, K. (2015). Gaming the system: Fake online reviews v. consumer law. Computer Law & Security Review, 31(1), 3–25. https://doi.org/10.1016/j.clsr.2014.11.003
Mohawesh, R., Xu, S., Tran, S. N., Ollington, R., Springer, M., Jararweh, Y., & Maqsood, S. (2021). Fake reviews detection: A survey. IEEE Access, 9, 65771–65802. [Link]
Moon, S., Kim, M.-Y., & Iacobucci, D. (2021). Content analysis of fake consumer reviews by survey-based text categorization. International Journal of Research in Marketing, 38(2), 343–364. https://doi.org/10.1016/j.ijresmar.2020.08.001
Öztürk, İ. (2024). Bodrum’da faaliyet gösteren 5 yıldızlı konaklama işletmelerinde çevrimiçi tüketici yorumlarının duygu analizi [Sentiment analysis of online consumer reviews in five-star accommodation businesses operating in Bodrum]. Journal of Gastronomy Hospitality and Travel (JOGHAT), 7(2), 397–405. https://doi.org/10.33083/joghat.2024.409
Park, S., & Nicolau, J. L. (2015). Asymmetric effects of online consumer reviews. Annals of Tourism Research, 50, 67–83. https://doi.org/10.1016/j.annals.2014.10.007
Plotkina, D., Munzel, A., & Pallud, J. (2020). Illusions of truth—Experimental insights into human and algorithmic detections of fake online reviews. Journal of Business Research, 109, 511–523. https://doi.org/10.1016/j.jbusres.2018.12.009
Ross, L. (2020, May 31). The state of fake reviews – Statistics and trends [2025] – Invesp. Retrieved 1 March 2025, from [Link]/blog/fake-reviews-statistics/
Sahut, J. M., Laroche, M., & Braune, E. (2024). Antecedents and consequences of fake reviews in a marketing approach: An overview and synthesis. Journal of Business Research, 175, 114572. https://doi.org/10.1016/j.jbusres.2024.114572
Salminen, J., Kandpal, C., Kamel, A. M., Jung, S., & Jansen, B. J. (2022). Creating and detecting fake reviews of online products. Journal of Retailing and Consumer Services, 64, 102771. https://doi.org/10.1016/j.jretconser.2021.102771
Schindler, D., & Decker, R. (2013). Some remarks on the internal consistency of online consumer reviews. Australasian Marketing Journal, 21(4), 221–227. [Link]
Song, Y., Wang, L., Zhang, Z., & Hikkerova, L. (2023). Do fake reviews promote consumers’ purchase intention? Journal of Business Research, 164, 113971. [Link]
Sop, S. A., Atasoy, F., & Günaydin, Y. (2024). Resort otellerde çevrim içi yorum manipülasyonu [Online review manipulation in resort hotels]. GSI Journals Serie A: Advancements in Tourism Recreation and Sports Sciences, 7(1), 16–31. [Link]/atrss.1302316
Utz, S., Kerkhof, P., & van den Bos, J. (2012). Consumers rule: How consumer reviews influence perceived trustworthiness of online stores. Electronic Commerce Research and Applications, 11(1), 49–58. https://doi.org/10.1016/j.elerap.2011.07.010
Wu, Y., Ngai, E. W. T., Wu, P., & Wu, C. (2020). Fake online reviews: Literature review, synthesis, and directions for future research. Decision Support Systems, 132, 113280. [Link]
Zeng, G., Cao, X., Lin, Z., & Xiao, S. H. (2020). When online reviews meet virtual reality: Effects on consumer hotel booking. Annals of Tourism Research, 81, 102860. [Link]
Zhang, D., Zhou, L., Kehoe, J. L., & Kilic, I. Y. (2016). What online reviewer behaviors really matter? Effects of verbal and nonverbal behaviors on detection of fake online reviews. Journal of Management Information Systems, 33(2), 456–481. https://doi.org/10.1080/07421222.2016.1205907
Zhang, H., Chen, D., & Sun, R. (2010). The study of consumer trust in C2C e-commerce based on reputation score, information disclosure, online consumer review quality. 2010 International Conference on Management of E-Commerce and e-Government, 184–187. https://doi.org/10.1109/ICMeCG.2010.46
Zhang, W., Du, Y., Yoshida, T., & Wang, Q. (2018). DRI-RCNN: An approach to deceptive review identification using recurrent convolutional neural network. Information Processing & Management, 54(4), 576–592. [Link]
Chapter 12

Consumer Manipulation With Artificial Intelligence: Dark Patterns and Hidden Techniques

Kadir Deligöz1

Abstract
Technological advancements and artificial intelligence (AI)-assisted digital
transformation offer significant opportunities to consumers, while at the
same time paving the way for the development of manipulative design
strategies. Among these strategies, ‘Dark Patterns’ are deceptive UI/UX
(User Interface / User Experience) design techniques that direct users to
perform certain actions without their awareness. Artificial intelligence makes
these techniques more complex, personalized and effective, thus guiding
users’ decision-making processes.
Artificial intelligence-supported Dark Patterns have negative effects at
both the individual and societal levels. These techniques undermine consumer
autonomy, leading to financial losses, privacy violations, and reduced trust
in digital platforms. In terms of social justice, low-income users may be
exposed to more hidden costs and dynamic pricing. Therefore, it is crucial to
adopt ethical design principles, increase user awareness and strengthen legal
regulations. Raising consumer awareness, promoting transparent digital
marketing practices and tightening algorithmic controls by regulatory bodies
will be critical steps in the fight against Dark Patterns.

Introduction
The digitalization process, together with technological developments,
offers unprecedented opportunities to consumers, while at the same time
paving the way for the emergence of new manipulation techniques (influence,
deception). One of the most prominent examples of these techniques is
“Dark Patterns”, which are used in user

1 Associate Professor, Atatürk University, [Link]@[Link], ORCID ID: 0000-0003-3247-9223

[Link]
interface (UI) design and direct users to perform certain actions with
deceptive or manipulative methods. Dark patterns are deceptive design
strategies designed to manipulate users on digital platforms. These
strategies can cause users to unwittingly sign up for subscriptions, make
unwanted purchases or share personal data. Artificial intelligence
technologies are increasingly being used to enhance, personalize and scale
the effectiveness of these patterns. This raises important debates about
consumer rights and digital ethics (Ducato & Marique, 2018).
In this section, a comprehensive analysis of AI-powered dark patterns is
presented. First, the main types of dark patterns and their AI integration
are examined; then the ethical and social implications of these practices
are discussed. Legal regulations and technological solutions are also
treated in detail, and finally, potential future developments are
evaluated. This study aims to take an in-depth look at what AI-driven dark
patterns are, how they work, their impact on consumers, and the measures
that can be taken against them.

1. The Concept of Dark Patterns


Today, as digital transformation accelerates, consumers are increasingly
engaging with online platforms and digital services. These interactions are
largely realized through interface designs that shape the user experience.
However, these designs do not always serve the user’s benefit; on the
contrary, they can be used for manipulative purposes. These manipulative
design patterns, known as “dark patterns”, are defined as insidious tactics
that direct users to make decisions against their own interests (Brignull
& Darlo, 2019).
Dark patterns are deceptive elements intentionally crafted to make users
take actions they would not otherwise take. These techniques serve the
interests of various stakeholders and are embedded in web products used
worldwide, such as social media platforms, popular apps and web services.
The concept is well known among practitioners (Cara,
2019: 105). With the development and proliferation of artificial intelligence
technologies, the effectiveness and sophistication of dark patterns has also
increased significantly. Machine learning algorithms are able to develop
personalized manipulation strategies by analyzing user behavior, thereby
guiding consumers more effectively (Zhang et al., 2021). Dark pattern
tactics are user interfaces that benefit an online service by leading consumers
to make irrational decisions they might not otherwise make (Narayanan et
al., 2020) or tricking or manipulating consumers into purchasing products
or services (Federal Trade Commission, 2022). This raises serious concerns
about digital ethics and consumer rights, and calls for new regulations in
this area.

2. Historical Development of Dark Patterns


Dark patterns are defined as manipulative design strategies that direct
users to perform certain actions. The concept was first introduced in 2010
by user experience (UX) designer Harry Brignull, who drew attention to the
ethical risks of such design patterns (Brignull, 2010). Brignull
systematically categorized dark patterns and emphasized the aspects of
these practices that negatively affect the user experience, and his
categorization remains foundational.
With the development of digital platforms, the use of dark patterns
has become more sophisticated and widespread, especially in electronic
commerce, social media and mobile applications. In this process, machine
learning and artificial intelligence-supported algorithms analyze user
behavior to develop personalized manipulation strategies and increase the
impact of dark patterns.
Understanding the history of dark patterns helps us better understand
how these techniques have evolved and their impact on user experience. Table
1 below provides a detailed overview of the development and milestones of
dark patterns.

Table 1. Historical Process of Dark Patterns

1990s - The Emergence of Digital Manipulation: Early e-commerce platforms, such as Amazon and eBay, introduced basic manipulative techniques, including targeted product placements, dynamic pricing, and early-stage pop-up ads (Nielsen, 1994; Wilson, 1997).

2000s - Privacy Policy Complexification & Hidden Consent Strategies: Websites began implementing complex, lengthy privacy policies that obscured data collection practices, making it easier for users to give implicit consent (Cranor, 2000). Subscription traps and forced continuity techniques also became more prevalent.

2010 - Introduction of the Term "Dark Patterns": UX researcher Harry Brignull coined the term "Dark Patterns" and categorized deceptive UI/UX tactics on his website [Link] (now [Link]) (Brignull, 2010).

2012-2015 - Expansion of Dark Patterns in Social Media & Mobile Apps: Social media algorithms and mobile apps began integrating dark patterns, such as manipulative notifications, in-app purchase traps, and personalized engagement techniques (Gray et al., 2018).

2016-2019 - AI-Driven Dark Patterns & Algorithmic Manipulation: The rise of artificial intelligence enabled highly personalized dark patterns. Machine learning algorithms began predicting user behavior, optimizing engagement tactics, and reinforcing compulsive digital habits (Anderson et al., 2020).

2018 - Regulatory Intervention: GDPR & Consumer Protection Debates: The European Union's General Data Protection Regulation (GDPR) took effect, addressing dark patterns related to privacy, data transparency, and user consent (European Commission, 2018).

2020s - Ethical Design, Legal Reforms & AI Transparency Debates: Growing awareness of AI-driven manipulation prompted legal and ethical discussions on banning deceptive practices. Countries and regulatory bodies introduced laws against dark patterns in digital marketing (Federal Trade Commission, 2022; Zuiderveen Borgesius, 2018).

2023 & Beyond - Future Directions: AI Ethics & Transparent User Experience: Ethical design principles and AI transparency guidelines emerged to counteract dark patterns. Researchers and policymakers continue to advocate for fair, user-centric digital environments (Weinberg, 2018).

The development of dark patterns started with the spread of the internet
and the birth of e-commerce platforms, and became more complex with
the development of digital marketing techniques. Combined with artificial
intelligence and big data analytics, algorithmic manipulation techniques
have become increasingly effective. Important milestones in this process can
be summarized as follows:
1990s: First Traces of Techniques - The Beginning of Digital Manipulation
With the widespread use of the Internet, online commerce platforms
have started to develop strategies to influence users’ purchasing behavior.
Pioneering e-commerce sites such as Amazon and eBay have transferred
product placement, price display and promotion techniques used in
traditional retailing to the digital environment (Nielsen, 1994). The
manipulation techniques that emerged in this period are as follows:
• Ensuring that users see specific products with product placement
algorithms,
• Offering different prices to different user groups with dynamic pricing
strategies,
• The first pop-up ads were developed to direct users’ attention to
specific actions (Wilson, 1997).
2000s: Complexification of Privacy Policies and Commercial Use of User
Data
With the rapid spread of the Internet, users’ data privacy has become an
increasingly significant issue. Websites and digital service providers have
developed complex privacy policies that lead users to give uninformed
consent to the collection of their personal data (Cranor, 2000). In this period:
• User agreements and privacy policies were made long and complex,
allowing users to give consent without careful reading.
• “Forced continuity” and subscription traps have been developed
to make it easier for users to sign up for services while making the
cancellation process more difficult.
2010: Emergence and Systematization of Dark Patterns
Harry Brignull introduced the concept of dark patterns into the literature
and began to systematically analyze such manipulative design strategies. He
established the website [Link] (now [Link]) and contributed to raising awareness (Brignull, 2010).
During this period, common examples of dark patterns include:
- “Roach Motel” strategies: methods that allow users to subscribe easily
while making it difficult to unsubscribe,
- “Privacy Zuckering” techniques: interface designs that encourage users
to share more data,
- “Social Proof” manipulations: strategies that encourage users to
imitate the behavior of others.
2012-2015: Spread of Dark Patterns and the New Era of Digital
Marketing
The rapid growth of social media platforms and mobile applications has
enabled dark patterns to reach a wider user base. In particular, personalized
recommendation systems and algorithms have been used more effectively to
direct users to specific content (Gray et al., 2018). During this period:
- Dark patterns became widespread in mobile apps (e.g., in-app purchase
traps).
- Social media algorithms have developed manipulative strategies to
ensure that users are exposed to certain content.
2016-2019: Artificial Intelligence Assisted Algorithmic Manipulation
Machine learning and artificial intelligence have led to more advanced
techniques for predicting and guiding user behavior. Algorithms that
analyze user data have started to create personalized dark patterns strategies
at the individual level (Anderson et al., 2020). The prominent developments
in this period are as follows:
- Dynamic pricing strategies started to be optimized according to
individual purchase history.
- Algorithmic content recommendations included guidance mechanisms
that encouraged users to spend more time.
2020 and Beyond: Legal Regulations and Ethical Debates
The proliferation of dark patterns has necessitated the development of
legal regulations to protect user rights. Regulations such as the European
Union General Data Protection Regulation (GDPR) and the California
Consumer Privacy Act (CCPA) aim to limit dark patterns (Zuiderveen
Borgesius, 2018). Today:
- Ethical design and transparency requirements are on the agenda.
- Laws have been developed to protect consumer rights.
- User awareness has increased and platforms containing dark patterns
have been criticized.
Dark patterns have become more complex with the development of
the internet and digital commerce, and have evolved into personalized
manipulation techniques with artificial intelligence-supported algorithms.
Although legal regulations and ethical design principles aim to limit these
manipulations, the increasing sophistication of artificial intelligence leads to
the emergence of new types of dark patterns.

3. Types of Dark Patterns and Artificial Intelligence Integration


Dark patterns are manipulative design strategies that direct users to
perform certain actions without their awareness. These strategies usually
target the weak points of human psychology and aim to bypass conscious
decision-making. Today, artificial intelligence is being used to make
these manipulative techniques more effective, analyzing user behavior and
developing tailored manipulation strategies. Below, we examine common
types of dark patterns in the literature and how AI optimizes these patterns.

3.1. Privacy Zuckering


Privacy Zuckering, also described as privacy undermining, is a type of dark pattern that relies on encouraging
individuals to share their personal data or manipulating them into
unconsciously giving up their privacy rights. This strategy involves design
techniques and persuasion methods that induce users to unwittingly share
more data. Social media platforms, e-commerce sites, and mobile apps use
methods such as making default privacy settings less protective, presenting
permission requests in ambiguous language, or deliberately complicating the
process of changing privacy settings to carry out this manipulation (Böhm,
2018).
Artificial intelligence uses various algorithms to make privacy-undermining
techniques more effective and personalized. Behavioral analysis, timing
optimization, and personalized persuasion strategies are among the
frequently used methods in AI-assisted privacy manipulation.
- Behavioral Analysis: By analyzing users’ privacy preferences, previous
decisions, and sensitivities, AI algorithms can determine which privacy
settings certain individuals are more sensitive to. This analysis enables the
creation of personalized manipulative strategies.
- Personalized Persuasion: Customized privacy policies or persuasive
messages are offered based on the user’s profile and past interactions. For
example, a social media platform may display tailored and emotionally-
charged alerts to encourage a user to change their privacy settings.
- Timing Optimization: Artificial intelligence analyzes user behavior to
determine the optimal moment to ask for privacy permissions. For example,
privacy consent requests can be shown when the user is busy or making a
purchase, increasing the likelihood that the user will accept without paying
attention to the details.
Facebook’s facial recognition feature can be examined from the perspective
of privacy undermining. The messages Facebook uses to enable facial
recognition are considered an example of AI-assisted privacy undermining.
For example, users are presented with a statement such as “Your friends can
tag you more easily in photos”, aiming to minimize privacy concerns. Such
messages encourage individuals to share more data by making the process of
changing privacy settings seem user-friendly and advantageous (Waldman,
2020).
As a result, privacy undermining is an ethically controversial manipulation
strategy that can lead users to unwittingly violate their right to protect their
data. The role of artificial intelligence in this process is to analyze individuals’
personal preferences to persuade them at the most opportune moments and
develop customized techniques to induce them to share more data voluntarily.
This raises serious ethical and legal questions in terms of protecting user
privacy and necessitates the tightening of regulatory frameworks.

3.2. Forced Continuity


Forced continuity is a type of dark pattern that steers users into paid
subscriptions without their full awareness and deliberately complicates the
cancellation process. This usually involves initiating automatic payments
after a free trial period, complicating the cancellation process, and developing
dynamic strategies that encourage users to continue their subscription. For
example, a digital publishing platform may automatically charge users after
the trial period expires by requesting their credit card information in advance
and deliberately complicate the cancellation process (Følstad, 2020b).
Artificial intelligence uses advanced data analysis methods to make
forced continuity strategies more complex and effective. Techniques such
as predicting user behavior, providing personalized offers, and making the
cancellation process harder for the individual are widely used in AI-assisted
subscription manipulation (Zuboff, 2019).
- Churn Prediction: AI algorithms analyze previous usage habits and
interactions to predict which users tend to unsubscribe. For example, by
identifying users who frequently check settings for unsubscribes or who are
less interested in content, specific interventions can be planned for them.
- Dynamic Retention Strategies: AI tries to keep users who tend to
unsubscribe by offering them special offers and incentives. For example,
a video streaming platform can show a user who wants to unsubscribe a
message saying “You got a special 50% discount this month!” or try to
keep the user’s interest alive by offering special recommendations based on
content that the user was previously interested in.
- Optimization of the Cancellation Process: Artificial intelligence can
make the subscription cancellation process more complex for the user. For
example, some users may be presented with cancellation processes that
include more steps, while others may be presented with different screens or
distracting surveys such as “Why do you want to cancel your subscription?”.
At times when the user is impatient or tends to make quick decisions,
deliberately lengthening the process may cause them to delay their decision
to cancel (Cara, 2019; Coussement & Van den Poel, 2006; Gkikas &
Theodoridis, 2022). A toy sketch of these churn-driven tactics follows.
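The following sketch simulates the mechanism just described: a toy churn score computed from two behavioral signals gates an escalating retention offer during cancellation. All names, weights and thresholds are hypothetical; the point is to clarify how such targeting works, not to endorse it.

```python
# Toy simulation of churn-prediction-driven retention offers.
# Weights, thresholds and messages are invented for illustration.
def churn_risk(days_since_last_use: int, opened_cancel_page: bool) -> float:
    """Toy churn score in [0, 1] built from two behavioral signals."""
    risk = min(days_since_last_use / 30.0, 1.0) * 0.6
    if opened_cancel_page:
        risk += 0.4
    return min(risk, 1.0)

def retention_message(risk: float) -> str:
    """Escalate the intervention as the predicted churn risk rises."""
    if risk > 0.8:
        return "You just received a 50% discount! Still want to cancel?"
    if risk > 0.5:
        return "Content you follow returns next week. Stay with us?"
    return ""  # low risk: do not interrupt the user

risk = churn_risk(days_since_last_use=25, opened_cancel_page=True)
print(round(risk, 2), "->", retention_message(risk))  # 0.9 -> discount offer
```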
Research shows that AI-assisted forced continuity techniques cause 67%
of users to continue their unwanted subscriptions (Chen et al., 2021). This
shows how effective manipulative subscription strategies are and that users
tend to continue their subscriptions without realizing it. In particular, services
that initiate automatic payments at the end of the trial period, platforms that
require users to contact a customer representative at certain hours to cancel
subscriptions, or user interfaces that hide cancellation buttons are examples
of forced continuity techniques.
On digital streaming platforms, we can see forced continuity manipulation
supported by artificial intelligence. In particular, video streaming platforms
use artificial intelligence to personalize unsubscribe processes and offer
suggestions that encourage users to continue their subscription. When a
user tries to unsubscribe before completing a movie, the system may display
messages such as “Your subscription will remain active until you complete
the content you are watching” as a strategy to delay the cancellation process.
As a result, forced continuity strategies are an ethically questionable way
of deliberately manipulating the user experience in order to keep people
subscribed. Artificial intelligence makes these processes more effective by
analyzing users’ habits and weaknesses, and creates personalized challenges
for individuals when unsubscribing. This situation raises important debates
in terms of consumer rights, ethical design and digital marketing principles,
and is among the issues that regulatory authorities should carefully address.

3.3. Social Proof


Social proof is a psychological and social phenomenon wherein people
copy the actions of others when choosing how to behave in a given
situation (Wikipedia, 2025). The peer pressure technique takes advantage of
consumers’ tendency to follow the decisions of their social circles to encourage
them to take certain actions. For example, an e-commerce platform can
influence users’ purchasing decisions with statements such as “1000 people
have bought this product” or “One of your friends liked this product”.
Artificial intelligence-supported systems analyze individuals’ social networks
and offer personalized content and recommendations, thereby increasing
social pressure and strengthening the tendency to purchase (Cialdini, 2009).
This strategy is commonly applied through the following methods.
- Popularity Indicators: By using phrases such as “1000 people have
bought this product” or “500 people have made a reservation in the last 24
hours”, users develop a positive perception of the product or service.
- Friend Recommendations: By showing users that their friends have
purchased a certain product or used a service, social network pressure is
created.
- Real-Time Notifications: Notifications such as “Elanur just bought this
ticket!” or “10 people are currently reviewing this hotel” encourage users to
make quick decisions (Convertize, 2025; Deceptive Design, 2025).
Today, many online shopping sites offer dynamic recommendations that
show which products similar users have purchased using an AI-powered
system that triggers peer pressure. For example, when a user wants to buy a
particular book, the system will suggest other products with the statement
“Users who bought this book also bought these products”. This technique
influences the user’s individual decision-making process and creates the
perception that “If people like me are buying it, it’s probably a good choice”.
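Mechanically, such “bought together” suggestions can be driven by simple co-occurrence counts over past orders, as the illustrative sketch below shows; real recommender systems are far more sophisticated, and the order data here is invented.

```python
# Sketch of the co-purchase counting behind "users who bought this also
# bought...". Orders and item names are invented for illustration.
from collections import Counter
from itertools import permutations

orders = [
    {"book_a", "book_b"},
    {"book_a", "book_b", "mug"},
    {"book_a", "poster"},
]

co_counts: dict[str, Counter] = {}
for order in orders:
    for item, other in permutations(order, 2):  # every ordered pair in the basket
        co_counts.setdefault(item, Counter())[other] += 1

# The item most often bought together with book_a:
print(co_counts["book_a"].most_common(1))  # -> [('book_b', 2)]
```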
Digital movie streaming sites have developed recommendation systems
that emphasize popular content, especially by using artificial intelligence-
supported algorithms. They utilize social proof principles by showing users
messages such as “This series is currently the 5th most watched content in
Turkey” or “Your friends have watched this content”. Furthermore, when
the user wants to cancel their subscription, the system can encourage them
to continue their subscription by increasing the social proof effect with
messages such as “Don’t miss the most watched content this month!”.
Peer pressure strategies are a powerful manipulation technique based
on the principle that people make decisions under the influence of their
social environment. Artificial intelligence makes this process more complex,
allowing users to be manipulated through their social circles. While some
of these techniques aim to improve the user experience, they carry risks
of personal data misuse and unethical manipulation. When consumers are
unconsciously manipulated into making a purchase decision or encouraged
to use a service under social pressure, it is necessary to question the ethical
limits of these techniques. Adopting transparency and ethical design
principles in the digital marketing world and developing legal frameworks
to protect users from deceptive or manipulative manipulation is of great
importance.

3.4. Scarcity Principle


The scarcity principle is a type of dark pattern that forces users to make
quick decisions by creating the perception that products or services are
limited. Human psychology tends to perceive scarce resources as more
valuable and desirable. Therefore, digital platforms use scarcity strategies
to accelerate users’ rational decision-making processes, forcing them to
make hasty purchases or reservations (Nodder, 2009). This technique is
particularly common in electronic commerce, hotel booking sites and
ticketing platforms. Users are shown messages such as “Last 2 rooms left!”,
“This product is about to run out of stock!” or “10 people are currently
looking at this ticket!” to make a quick and impulsive purchase decision.
Artificial intelligence uses advanced data analysis methods to reinforce
the sense of scarcity and manipulate users’ decision-making process. These
techniques are as follows:
- Dynamic Scarcity Perception: By analyzing the user’s previous searches,
past purchase data and location information, AI can create personalized
scarcity messages. For example, a hotel booking site may show the message
“90% occupancy is now reached!” to a user who has previously searched for
a hotel in a specific city, but this message may be different for another user.
- Real-Time Scarcity Simulation: AI algorithms can optimize scarcity
messaging by analyzing the amount of time a user spends looking at a
particular product and page interactions. For example, when an airfare
search site notices that a user has looked at a flight several times, it can show
the message “Last 3 tickets left!”, thus making the user hurry.
- FOMO (Fear of Missing Out) Triggers: By taking advantage of users’
“fear of missing out” (FoMO) psychology, the perception of scarcity is
increased by informing them that other users have already purchased the
product. For example, a fashion e-commerce site shows messages such as
“20 people have bought this product in the last 15 minutes!” to help the
consumer make a quick decision.
- Fake or Exaggerated Stock Information: By analyzing the past
behavior of the user, artificial intelligence can present stock information in
a personalized manner. For example, when an online shopping platform
notices that a particular user frequently makes price comparisons, it can
display the message “Only 1 product left at this price!”, thus enabling the
user to make a quick purchase decision (Kim et al., 2023; Deceptive Design,
2025). A toy sketch of such repeat-view targeting follows.
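As a toy illustration of the repeat-view targeting described above, a personalized scarcity banner might be gated as follows; the counts, thresholds and wording are all invented.

```python
# Toy simulation of real-time, personalized scarcity messaging.
# Thresholds and wording are invented for illustration.
def scarcity_banner(view_count: int, stock: int) -> str:
    """Show an urgency message only to users who keep returning to an item."""
    if view_count >= 3 and stock < 10:
        return f"Only {stock} left in stock - order soon!"
    return ""  # no pressure applied on a first, casual visit

print(scarcity_banner(view_count=4, stock=3))  # "Only 3 left in stock - order soon!"
print(scarcity_banner(view_count=1, stock=3))  # "" (first view: no banner)
```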
Booking websites, in particular, use artificial intelligence algorithms to
apply scarcity tactics that rush users through their accommodation searches.
When a user enters a hotel page, messages such as “Last 2 rooms left!” or
“50 people booked this hotel today!” are displayed.
AI-powered scarcity strategies are a powerful manipulation tool that
uses consumer psychology to push users into quick decisions. Unlike real
scarcity situations, these strategies put users under artificial pressure, causing
them to make unnecessary or hasty purchasing decisions. Raising consumer
awareness of these manipulative techniques and forcing platforms to
provide transparent inventory information can create a fairer digital trading
environment against scarcity illusion tactics (Cialdini, 2009).

3.5. Hidden Costs


Hidden costs are a type of dark pattern that manipulates consumers by
exposing them to additional fees, taxes or service charges that they did not
initially see during the purchase process. This strategy is particularly prevalent
in online shopping platforms, airline ticketing systems and subscription-
based services. It is used to deliberately steer the decision-making process by
exposing the user to additional costs that are not predetermined during the
purchase process (Weinberg, 2018). For example, on an online shopping
platform, the price of the product is attractively displayed, but at the
checkout stage, shipping fees, transaction fees or additional taxes are added.
In another example, an airline may initially offer a low-priced ticket, but then
add additional costs such as seat selection, baggage allowance or transaction
fees later in the ticketing process.
By analyzing users’ price sensitivity, purchase history and payment
trends, AI can dynamically determine which hidden costs to apply to which
customers. This may result in some customers facing more hidden costs,
while others may see different discounts.
Artificial intelligence-assisted hidden cost manipulation is realized
through the following methods:
- Price Sensitivity Analysis: Artificial intelligence analyzes a user’s past
purchase data to determine how price sensitive they are. Customers with less
price sensitivity may be charged more hidden costs, as they are predicted to
be more inclined to complete the transaction.
- Dynamic Pricing: By analyzing the user’s geographic location, previous
shopping habits and browsing history, different surcharges can be displayed
for each customer. For example, a customer who has previously shopped in
the luxury category may be charged a higher shipping fee.
- Timing and Buying Psychology: Adding surcharges at the last stage,
when the user is most likely to make a purchase, can reduce the likelihood of
changing the purchase decision. For example, hidden costs can be added to
users who are shopping during a sale period, as they are less likely to cancel
the purchase because they think they’ve already gotten a deal.
- Cross-selling and Adding Additional Fees: Once users are in the buying
process, “additional services offered with this product” can be offered to
increase the total price unnoticed. For example, a user buying an airplane
ticket is told that “it is recommended that you buy extra baggage allowance
with this ticket”, thus gradually increasing the additional fees (Deceptive
Design, 2025; Binns, 2018). The drip-pricing arithmetic behind these tactics is sketched below.
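The arithmetic of such drip pricing can be made explicit with a small sketch: an advertised price grows as fees are revealed step by step, with an extra charge for users profiled as less price sensitive. All fee names and rates are hypothetical.

```python
# Drip-pricing sketch: the advertised price grows at checkout.
# Fee names, rates and the sensitivity rule are invented for illustration.
def checkout_total(advertised: float, price_sensitive: bool) -> float:
    total = advertised
    total += 4.99               # shipping fee, revealed at the last step
    total += advertised * 0.05  # 5% "service fee"
    if not price_sensitive:     # profiled-insensitive users absorb more fees
        total += 2.50           # "handling" surcharge
    return round(total, 2)

print(checkout_total(advertised=29.99, price_sensitive=False))  # 38.98
print(checkout_total(advertised=29.99, price_sensitive=True))   # 36.48
```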
Hidden costs are a manipulative technique that weakens the consumer’s
control over the shopping process and is becoming more complex with
artificial intelligence. When faced with hidden costs, users are often
manipulated into accepting additional charges rather than returning, thus
making them spend more.

3.6. Roach Motel Technique


The Roach Motel technique is a type of manipulative dark pattern that
allows users to easily sign up for services or subscriptions, while deliberately
making the cancellation process difficult. The basic logic of this strategy is
based on making the user’s entry process simple and fast, but the exit process
complex and cumbersome. It is widely used especially in subscription-
based services, digital platforms and applications that require membership
(Brignull, 2010). The most common applications of this technique are as
follows:
- Easy online registration, but complex cancellation procedure: While
signing up for a gym membership or digital streaming service can be done
in a few clicks, canceling can only be done through a phone call to customer
service, visiting the office at certain hours, or filling out lengthy forms.
- Intentionally hiding or redirecting the cancel button: The user may
have to navigate through multiple menus and sub-pages to find the cancel
option. For example, when trying to cancel a subscription, messages such
as “Are you sure you really want to cancel?” or “Keep your subscription to
continue enjoying special offers” may appear.
- Applying psychological pressure to get the user to back down: Users who
want to unsubscribe are shown phrases such as “You will lose these great
benefits!” or “Most users are happy with our service, why are you leaving?”
to create indecision (Deceptive Design, 2025).
By analyzing users’ tendency to cancel, AI develops personalized strategies
to get them to postpone their decision or abandon the cancellation process
altogether. These techniques aim to keep the user connected to the service
by making the cancellation process more difficult:
- User Behavior Analysis: Artificial intelligence algorithms can predict
which users are inclined to cancel. For example, users who have not used the
service for a long time or who have changed their payment methods may be
perceived as more likely to cancel, and specific intervention strategies can be
implemented for these people.
- Personalized Persuasion Messages: By analyzing the user’s previous
behaviors and interests, artificial intelligence can present the most appropriate
messages to persuade them. For example, when a music listening platform
notices that the user’s favorite artist has just released a new album, it can
say “Your favorite artist’s album will be released soon! Don’t cancel your
subscription!”.
- Deliberately Prolonging the Cancellation Process: When the user wants
to cancel, AI-powered systems can guide them through a multi-step process.
For example, a digital publishing platform may require the user to fill out a
questionnaire during the cancellation process, so that the user may get tired
and leave the process halfway through.
- Last Minute Special Offers: When AI algorithms realize that a user is
about to complete the cancellation process, they can offer them a special
discount. For example, “You just received a 50% discount! Do you want to
continue canceling your subscription?” (Deceptive Design, 2025).
Some online movie streaming and music streaming platforms try to keep
users on the service by deliberately complicating the unsubscribe process.
For example: The user may have to click through multiple submenus to
find the cancel option. When the user tries to cancel, they may be presented
with special discount offers or suggestions for future content. At the final
stage, additional steps such as “Please fill out a survey before canceling your
membership” can be added to prolong the cancellation process.
To counter this manipulative tactic, it is crucial that more transparent consumer policies are developed, that users are made aware of their rights, and that digital service providers adopt ethical design principles. Making users more conscious of such manipulations and turning to regulatory bodies when necessary will be among the most effective defenses against unethical marketing strategies such as the Roach Motel.

3.7. Bait and Switch


Bait and switch is a type of manipulative dark pattern that lures users
to take a certain action by tempting them with an attractive offer, but
results in a worse option being offered later in the process instead of the
promised one. This strategy, which is based on deliberately misleading
consumer expectations, is widely used in various digital domains, especially
e-commerce, financial services, subscription-based platforms, and mobile
applications (Gray et al., 2018).
While traditional bait-and-switch techniques target the general user
audience, artificial intelligence makes this process much more personalized
and offers manipulative content based on users’ individual tendencies.
AI-powered bait and switch strategies include the following:
- Behavioral Data Analysis: By analyzing users’ past shopping habits, price sensitivities and interests, AI can identify the offers that will attract each user the most.
- Dynamic Content Manipulation: AI can show attractive offers or discounts when a user logs into the platform, only to remove them at the point of purchase and offer higher prices. For example, an airline ticket platform may show the ticket the user is looking for at a low price on first login, but claim that the price has increased later in the purchasing process, pressuring the user into a quick decision (a sketch of such trigger logic follows the example below).
- Customized Alternative Presentation: When it is determined that the user wants to buy a specific product, AI-powered systems can direct them to a more expensive alternative by claiming that the product is out of stock or that the discount period has expired.
- Timing and Urgency Manipulation: By detecting the moments when
the user tends to make urgent decisions, AI can create a manipulative time
pressure with messages such as “Don’t miss this opportunity!”.
An online shopping site may announce a campaign such as “Big discounts!
Deals up to 70%!”. However, when the user visits the site, it may turn out
that the actual discount rates are much lower or that the most demanded
products have been excluded. The user is attracted to a service with a free
trial period, but later in the process may be hit with mandatory subscription
fees or unexpected additional costs.
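The dynamic manipulation just described reduces to a few conditional rules around inferred intent and urgency. The sketch below is purely illustrative: the function, field names, and thresholds are hypothetical, intended only to make the trigger logic recognizable, not to document any real platform.

from dataclasses import dataclass

@dataclass
class Session:
    visits_to_same_item: int          # repeated views signal commitment
    minutes_to_departure_search: int  # e.g., a last-minute flight search
    base_price: float

def quote(session: Session) -> tuple[float, str]:
    """Return the (price, message) shown at checkout.

    Hypothetical bait-and-switch logic: once the user looks committed,
    the displayed price rises and urgency messaging appears."""
    price = session.base_price
    message = ""
    if session.visits_to_same_item >= 3:
        price *= 1.15  # "the price has just gone up"
        message = "Prices have increased since your last visit!"
    if session.minutes_to_departure_search < 48 * 60:
        message = "Don't miss this opportunity! Only a few seats left."
    return round(price, 2), message

price, msg = quote(Session(4, 600, 120.0))
print(price, msg)  # 138.0 plus an urgency message

Seen from the consumer side, the tell-tale signature of such logic is a price that changes with browsing behavior rather than with supply, which is also what regulators and auditors look for.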
Dark patterns are strategies that manipulate the user experience and direct
individuals’ conscious decision-making processes. Artificial intelligence
increases the effectiveness of these techniques, analyzing user behavior more
precisely and taking the manipulation to a personalized level. This situation
poses a significant problem in terms of consumer rights and ethical debates
and reveals the need to update regulatory frameworks and raise awareness
(Brignull, 2010; Gray and Kou, 2021; Deceptive Design, 2025).

4. Ethical and Social Implications of Artificial Intelligence-Powered Dark Patterns
AI-driven dark patterns stand out as manipulative design strategies that
undermine consumers’ autonomy, freedom of choice and privacy. Digital
platforms use artificial intelligence algorithms to analyze user behavior,
influence individuals’ decision-making processes and direct them to perform
certain actions. These manipulative approaches have important consequences
not only on an individual level, but also on social and ethical dimensions
(Zuiderveen Borgesius, 2018). We can summarize these consequences as
follows:
- Declining Trust in Digital Platforms: As users encounter misleading
and manipulative experiences, they may lose trust in digital platforms and
online services. In the long run, this can undermine customer loyalty in the
e-commerce, digital media and online service sectors (Koops, 2018).
- Impacts on Digital Literacy: When users are unwittingly exposed to
manipulative strategies, they may struggle to understand how to act in the
digital world. This can undermine their ability to use the internet responsibly
and safely.
- Damage to Social Justice: AI-powered dark patterns can increase social
and economic inequalities. For example, some consumers may be subjected
to dynamic pricing strategies, while individuals with lower income levels
may face higher prices. Such practices are contrary to the principles of
fairness and equality in terms of consumer rights.
AI-driven dark patterns raise serious ethical concerns, violating users’
rights and influencing individuals’ decision-making processes through
manipulation (Bostrom, 2019). From an ethical perspective, the problems
that these applications may create are as follows:
- Invasion of Privacy: By analyzing users’ online behavior, AI algorithms
can identify their weakest moments and use this information for manipulative
purposes. Obtaining users’ consent without their knowing exactly what data is being collected and how it is used points to an unethical data management process.
- Restriction of Consumer Freedom: AI-powered dark patterns can disrupt
users’ rational and informed decision-making processes, causing them to
suffer economically and psychologically. Deliberately restricting individuals’
freedom of choice should be considered an unethical practice.
- Normalization of Manipulation: The proliferation of AI-assisted
manipulation techniques may lead to the normalization of manipulative user
experiences. This may lead to social acceptance of systems that unconsciously
manipulate users’ decisions.
The fight against dark patterns requires a multifaceted approach. Raising
consumer awareness, adopting ethical design principles, establishing legal
regulations and developing technological solutions are of great importance
in this struggle.
In this context, consumer awareness-raising activities can be carried out first. Consumers’ knowledge about dark patterns will enable them to recognize these manipulative techniques and act consciously against them. Digital literacy training and public awareness campaigns can play an important role in this regard.
In addition, adopting ethical design principles will give consumers a more transparent and fair user experience. User-friendly privacy settings and clear information processes should be implemented instead of techniques such as “Privacy Thinning”.
Legal regulations, together with ethical design principles, can limit dark patterns (European Commission, 2020). In particular, banning manipulative practices that do not obtain the explicit consent of the consumer, requiring digital platforms to strengthen their transparency policies, and auditing and sanctioning unethical algorithms are among the measures that can be taken.
Conclusions and Recommendations


While the process of digitalization and the rapid development of artificial
intelligence technologies provide many advantages by personalizing the user
experience, they also offer new tools for consumer manipulation. Manipulative
design strategies, referred to as dark patterns, are deceptive techniques that
lead users to unconsciously perform certain actions. Today, AI-powered dark
patterns have become more sophisticated and effective, leading to negative
consequences such as forcing consumers into subscriptions, encouraging
them to share their personal data, leading them to unwanted purchases, and
causing financial losses.
The ethical and social consequences of dark patterns jeopardize the
credibility of the digital ecosystem. These AI-powered manipulative practices
reduce trust in digital platforms, cause consumers to suffer economic
losses, and violate individuals’ personal privacy. In addition, the creation of personalized manipulations using artificial intelligence algorithms puts especially low-income individuals at greater risk. For all these reasons,
consumer rights need to be protected, ethical design principles need to be
adopted, and regulatory frameworks need to be strengthened.
Consumers, designers, regulators and technology companies have a
shared responsibility to minimize the harms of dark patterns. The following
measures should be taken to create a more conscious, ethical and transparent
digital ecosystem against these manipulative techniques:
Ø Consumer Awareness and Increasing Digital Literacy
One of the most effective measures against dark patterns is to raise
consumers’ awareness and digital literacy. If users can recognize which
manipulative techniques they are exposed to and make informed decisions,
the effectiveness of such strategies will decrease.
Ø Adoption of Ethical Design Principles and Transparency in UX/UI
Practices
In order to prevent the spread of dark patterns, ethical design principles
should be adopted and transparency policies should be implemented in UI/
UX. Digital platforms should prefer transparent and user-friendly designs
instead of techniques that manipulate user experiences.
Ø Strengthening Legal Regulations and Establishing Audit Mechanisms
In order to bring dark patterns in line with ethical and fair trade rules,
consumer protection laws need to be developed and regulators need to
conduct effective audits.
Ø Establishing Ethical Standards for Artificial Intelligence Developers
To prevent the proliferation of AI-enabled dark patterns, AI developers
should fulfill their ethical responsibilities and create user-friendly algorithms.
Ø Detecting and Preventing Dark Patterns with Technological Solutions
New technological solutions should be developed and offered to
consumers in the fight against dark patterns.
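As one illustration of such a solution, a browser extension or audit tool could flag manipulative interface copy with simple pattern matching. The sketch below is a hypothetical starting point: the cue list is illustrative and far from exhaustive, and a production tool would need trained classifiers and human review.

import re

# Illustrative cues drawn from the patterns discussed in this chapter:
# confirmshaming, false urgency, and retention pressure.
MANIPULATIVE_CUES = [
    r"are you sure you really want to",
    r"don't miss this opportunity",
    r"only \d+ \w+ left",
    r"you will lose these .* benefits",
    r"why are you leaving",
]

def flag_dark_pattern_text(ui_text: str) -> list[str]:
    """Return the manipulative cues matched in a page's visible text."""
    text = ui_text.lower()
    return [cue for cue in MANIPULATIVE_CUES if re.search(cue, text)]

hits = flag_dark_pattern_text(
    "Are you sure you really want to cancel? "
    "You will lose these great benefits!"
)
print(hits)  # two cues matched: warn the user before they proceed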
Artificial intelligence-supported dark patterns are manipulative techniques
that undermine consumers’ free will and pose serious ethical problems. In the
fight against such practices, it is critical to raise consumer awareness, adopt
ethical design principles, establish legal regulations, ensure that artificial
intelligence developers fulfill their ethical responsibilities, and prevent these
manipulations with technological solutions. All stakeholders need to take
responsibility to create a fairer, more transparent and ethical digital ecosystem.
References
Anderson, J., Sarma, K. M., & Borning, A. (2020). Deceptive Design. Commu-
nications of the ACM, 63(10), 126-133.
Binns, R. (2018). Fairness in machine learning: Lessons from political philo-
sophy. Proceedings of the 2018 Conference on Fairness, Accountability, and
Transparency, 149–159. [Link]
Bostrom, N. (2019). Süper zekâ: Yapay zekâ uygulamaları, tehlikeler ve stratejiler (F. B. Aydar, Trans.). Alfa Yayıncılık. (Original work published 2014)
Böhm, K. (2018). Privacy by Design. In: Encyclopedia of Big Data Technologies.
Springer, Cham.
Brignull, H. (2010). Dark patterns: Deception vs. honesty in UI design. Dark
Patterns. Retrieved January 10, 2025, from [Link]
Brignull, H., & Darlo, C. (2019). The Dark Patterns Tip Sheet. Retrieved from
[Link]
Cara, C. (2019). Dark patterns in the media: A systematic review. Network In-
telligence Studies, 7(14), 105.
Chen, L., Lee, K., & Nissenbaum, H. (2021). Bias in Algorithmic Transparen-
cy. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1),
1-32.
Cialdini, R. B. (2009). Influence: Science and practice. Pearson Education.
Convertize. (2025). Social proof & dark patterns: How marketers manipulate
consumers. Convertize. Retrieved January 7, 2025, from [Link]
[Link]/social-proof-dark-patterns/
Coussement, K., & Van den Poel, D. (2006). Churn prediction in subscripti-
on services: An application of support vector machines while comparing
two parameter-selection techniques. Expert Systems with Applications,
34(1), 313–327. [Link]
Cranor, L. F. (2000). Internet privacy. Communications of the ACM, 43(9),
29-31.
Deceptive Design. (2025). Dark Patterns. Retrieved February 7, 2025, from
[Link]
Ducato, R., & Marique, E. (2018). Come to the dark side: We have patterns.
Choice architecture and design for (un)informed consent. Choice Archi-
tecture and Design for (Un) Informed Consent.
European Commission. (2018). General Data Protection Regulation (GDPR).
European Commission. (2020). Shaping Europe’s Digital Future. Retrieved
from [Link]
-digital-future
Federal Trade Commission. (2022). Bringing dark patterns to light. Accessed September 26, 2022, [Link] bringing-dark-patterns-light.
Følstad, A. (2020a). Dark Patterns in UX Design. In: Proceedings of the 2020
CHI Conference on Human Factors in Computing Systems. ACM.
Følstad, A. (2020b). What do users consider as dark patterns?. In Proceedings
of the 2020 CHI Conference on Human Factors in Computing Systems
(pp. 1-14).
Gkikas, D. C., & Theodoridis, P. (2022). AI in consumer behavior. In Ad-
vances in artificial intelligence-based technologies (Chapter 10). Springer
Cham. [Link]
Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The
Dark (Patterns) Side of UX Design. In: Proceedings of the 2018 CHI
Conference on Human Factors in Computing Systems. ACM.
Gray, C., & Kou, Y. (2021). “The Ethics of Dark Patterns in UX Design.” Journal
of Business Ethics, 169(3), 425-440. doi:10.1007/s10551-020-04443-3.
Kim, K. (K.), Kim, W. G., & Lee, M. (2023). Impact of dark patterns on con-
sumers’ perceived fairness and attitude: Moderating effects of types of
dark patterns, social proof, and moral identity. Tourism Management,
98, 1–14. [Link]
Koops, B. J. (2018). The Concept of Privacy in the Digital Age. In: Proceedin-
gs of the 2018 International Conference on Information Systems. AIS.
Narayanan, A., Mathur, A., Chetty, M., & Kshirsagar, M. (2020). Dark pat-
terns: Past, present, and future: The evolution of tricky user interfaces.
ACM Queue, 18(2), 67–92.
Nielsen, J. (1994). Usability engineering. Morgan Kaufmann. [Link]
[Link]/books/usability-engineering/
Nodder, C. (2009). Evil by Design: Interaction Design to Lead Us into Temp-
tation. Wiley.
Waldman, M. (2020). Privacy as Trust. Cambridge University Press.
Weinberg, B. D. (2018). The Dark Side of UX Design. In: Proceedings of the
2018 CHI Conference on Human Factors in Computing Systems. ACM.
Weinberg, G. (2018). Dark patterns in the design of digital interfaces.
Wikipedia. (2025). Social proof. Wikipedia, The Free Encyclopedia. Retrieved
March 7, 2025, from [Link]
Wilson, B. (1997). The Annoying Pop-Up Ads.
Zeller, W. (2014). Mobile application usability. Morgan & Claypool.
Zhang, Y., & Liu, Z. (2021). The Addictive Nature of Social Media. Journal of
Computer-Mediated Communication, 26(5), 325-342.
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human
Future at the New Frontier of Power. New York: Public Affairs.
Zuiderveen Borgesius, F. J. (2018). Dark Patterns in the Digital Economy. In:
Proceedings of the 2018 International Conference on Information Sys-
tems. AIS.
Chapter 13

Social Responsibility and Ethical Approaches in the Management of Artificial Intelligence

Nesibe Kantar1

Abstract
Artificial intelligence, whose technological foundations were laid by the
end of the 20th century, is on the one hand progressing towards taking
away the intellectual competence of human beings, and on the other hand,
it is redesigning our lives from production to marketing, from health to
our cultural acquisitions. Unlike traditional transformations, this restructuring process is most felt in the field of culture and values. While the economic and
commercial success that artificial intelligence has demonstrated in the field
of production and marketing satisfies our desire to earn and produce more,
on the other hand, ignoring human-centered ethics in societies, institutions
or communities that do not use technology or do not have advanced
artificial intelligence technologies can cause a number of ethical problems.
Regardless of its purpose or field, the use of artificial intelligence in line with
ethical responsibility and ethical principles in a way that will contribute to
the ethical development and progress of humanity is not only a matter of a
society or community, but of all humanity. It is an unethical situation known
to everyone that marketing activities or technology producing companies
manipulate the actions and activities of the end user. On the other hand,
the unethical sharing of user information by social media companies with
other organizations or companies regarding the special vulnerabilities or
needs of individuals, and the violation of data privacy have made the use of
ethical artificial intelligence one of the most important issues in the world
of informatics. Every private or legal entity in the production-distribution
segment of companies must assume ethical responsibility in their actions.
This study first presents a historical perspective with the aspects that brought
artificial intelligence, the strongest argument of the informatics revolution,
to the present day. Secondly; ethical concepts and methods that can help in
coping with social and ethical difficulties caused by non-human factors such as robots, softbots and artificial intelligence devices in the informatics society are explained. All economic activities of artificial intelligence, including local or global marketing, should be shaped around humans and their ethical needs, since this technology has the potential to shape the development of the world. Finally, the study draws attention to the importance of human-centered trustworthy artificial intelligence in the context of social responsibility.

1 Asst. Prof. Dr., Kırşehir Ahi Evran University, nesibekantar@[Link], ORCID ID: 0000-0003-3179-2314

[Link]

1. An Overview of Artificial Intelligence from the Information Revolution
The information revolution refers to the period in which technology
was designed with information, cybernetics and technology studies after the
Industrial Revolution, and at the same time information was designed and
produced with technology. Norbert Wiener’s Cybernetics (1948), Claude
Shannon’s Information Theory (1949), and developments in computer
technologies in the late 19th and early 20th centuries are the characteristic
disciplines of the information revolution.
In the beginning, electronic computers were large and cumbersome because they used many vacuum tubes (valves that control electronic flow). Their replacement by transistors, and the development of integrated circuits in the 1960s and microprocessors in the 1970s, brought computers to an ergonomic structure. Improvements in integrated circuits and silicon chip technologies have made it easier to use computers almost everywhere.
The impact of the information revolution has been realized on the widest scale worldwide with the Hypertext Transfer Protocol (HTTP), which is designed to receive, transmit and display data. The World Wide Web (WWW) was developed in 1989 and became official with the protocol signed at CERN (Kizza, 2017: 8). The effect of the
information revolution has deepened by eliminating the space constraints of
access to information resources such as online music, digital health, internet
television, digital telephone, digital communication systems, e-shopping,
and e-government through the internet, which allows the creation of a
virtual atmosphere parallel to physical reality by connecting to each other via
internet protocols (TCP/IP). Our world has become increasingly globalized with the Internet, the infrastructure through which all devices and servers are connected to each other, with the World Wide Web, and with the social, cultural and economic impact of the information revolution.
Norbert Wiener, one of the founders of cybernetics, and his colleagues
developed a computerized calculation method that tracked fighter planes
in the air and predicted the trajectory of enemy aircraft during World War
II (Bynum, 2000). The opportunities provided by cybernetic studies that
enable communication between human-machine and machine-machine have
initiated the “automatic age”, as the term used to refer to unmanned systems
(Bynum, 2009: 25-48). The ethical discussions of automata and intelligent
systems were also initiated by Wiener during the rise of cybernetic studies
(Wiener, 1960).
Cybernetics is undoubtedly one of the most important developments
of the information revolution. Cybernetics is the branch of science that
establishes the law of communication that is equally valid for living beings
and machines (Porush, 1987: 54). Cybernetics is a discipline that enables
interaction between humans, machines and society through control and
communication theories (Ashby, 1956: 1). As seen in Figure 1, thanks to cybernetics, data and information from different branches of science such as biology, physics, mathematics, social sciences and engineering are integrated to produce a new output. In addition to mediating the interaction of living and non-living systems, cybernetics, as a surveillance and control system, is an important actor in the scientific information revolution and an important milestone on the path that has led to today’s artificial intelligence technologies.

Figure 1. Fundamentals of Cybernetics (Novikov, 2016: 10)
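The control-and-communication loop at the heart of cybernetics can be made concrete with a minimal sketch. The following hypothetical thermostat-style proportional controller, written in Python purely for illustration, shows the feedback principle Wiener formalized: the system measures its output, compares it with a goal, and feeds the error back into its next action.

def feedback_loop(target: float, current: float,
                  gain: float = 0.4, steps: int = 10) -> float:
    """Proportional feedback: each step corrects a fraction of the error."""
    for _ in range(steps):
        error = target - current   # communication: measure the deviation
        current += gain * error    # control: act to reduce it
    return current

# A room at 15 degrees converges toward a 21-degree target.
print(round(feedback_loop(target=21.0, current=15.0), 2))  # ~20.96

The same measure-compare-correct loop, scaled up from thermostats to anti-aircraft trajectory prediction and on to learning algorithms, is what links cybernetics to today’s artificial intelligence.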

Claude Shannon’s “Information Theory” constitutes the architectural


structure of the information revolution technically, where data transmission
is carried out as a portable type between the message receiver and transmitter
via telephone wires, TV cables, radio signals and digital computers (Shannon,
1948: 2).
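The quantitative core of that theory can be stated in one formula. For a source whose symbols occur with probabilities p(x), Shannon defined the information content, or entropy, as

H(X) = -\sum_{x} p(x) \log_2 p(x)

measured in bits per symbol; it sets the limit on how compactly a message can be encoded for transmission over the channels listed above.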
Quinn defines all devices that enable the creation, storage, processing,
exchange and distribution of data, audio or images through information
technologies as the actors of the information revolution (Quinn, 2006: 39). A strong reflection of this revolution today is artificial intelligence
technologies.
Artificial intelligence is one of the popular technologies that the information revolution has brought to our agenda. Although artificial intelligence is defined in different ways, it is
possible to define artificial intelligence in its most well-known form as the
integration of the ability to learn and solve problems related to all kinds of
acquisitions specific to the human species into information technologies and
systems. Artificial intelligence refers to the ability of intelligent computational
machines with algorithms and mathematical calculations to perform tasks of
human intelligence in a human-like manner.
AI technologies simulate cognitive functions such as solving problems,
learning for expected output, understanding language that enables
communication between human-machine and machine-machine, and
creative thinking with data and data sets. Techniques such as machine
learning, deep learning, and natural language processing constitute the basic
elements of artificial intelligence. Figure 2 shows the basic working logic of
AI.

Figure 2. How AI Works ([Link], 2024)

Although artificial intelligence refers to a technical field, it has a complex structure that cannot be attributed to a single branch of science. In fact,
the main purpose of artificial intelligence is to solve problems without
focusing on a specific field. AI, which models the mental activity that distinguishes humans as thinking beings from other creatures, focuses on the problem itself with algorithms and mathematical models and carries out different optimizations for the solution of this problem with human-
like experience acquisitions. It is closely related to different branches of
science and disciplines, from philosophy to biology, from mathematics to marketing and business, because the central concept in artificial intelligence, as in other positive sciences and social disciplines, is knowledge. In its most basic definition, data is modeled with different methods such as machine learning, natural language processing and expert systems in the stages it passes through as input, process and output.
The development of AI systems begins with data collection, in which data is gathered through sensors or human-generated sources to train and evaluate the AI model. In the data preprocessing stage, the collected data is processed, classified and cleaned; in the feature engineering stage, field experts select the features to be used in training the model, using statistical analysis or automatic feature selection techniques. In the model development phase, the model architecture and algorithm for the relevant field are selected from problem-oriented statistical models, machine learning algorithms or deep learning architectures, and the chosen model is trained using the prepared data. In the model evaluation and model optimization stages, the resulting model is assessed and improved (Weka, 2024).
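A minimal, hypothetical sketch of this development cycle, using the open-source scikit-learn library (the bundled sample dataset and the model choice are illustrative assumptions, not prescribed by the stages above):

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# 1. Data collection: a bundled sample dataset stands in for
#    sensor- or human-generated data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2-3. Data preprocessing and feature engineering: scaling plus
#      automatic feature selection.
# 4. Model development: a problem-oriented algorithm is chosen.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(k=2)),
    ("model", LogisticRegression(max_iter=200)),
])

# 5-6. Model evaluation and optimization: cross-validated grid search
#      tunes the model; held-out accuracy evaluates the result.
search = GridSearchCV(pipeline, {"model__C": [0.1, 1.0, 10.0]}, cv=5)
search.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, search.predict(X_test)))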
Although the emergence of artificial intelligence in its modern form, inspired by the structure and functions of the human brain, dates back to 1956, the earliest research on machines being able to think was developed
through the collaborative work of scientists specializing in different fields
in the late 1930s, 1940s and 1950s. In fact, its foundations were laid at the
Macy Conferences moderated by McCulloch, titled “Circular Causal and
Feedback Mechanisms in Biological and Social Systems” between 1946 and
1948 (Pias, 2016: 12). In neuroscience research, the definition of the brain
as an electrical network of neurons has brought about the artificial modeling
of the human brain. Norbert Wiener’s cybernetics and Claude Shannon’s
theory of information have made it possible to describe digital signals. Alan
Turing showed with his theory of computation that any computation could
be described digitally. All these ideas suggested that it might be possible
to design an “electronic brain”. In the process, these studies were brought
to the agenda again at the Dartmouth conference in the summer of 1956; under the leadership of scientists such as Marvin Minsky, John McCarthy, and Carnegie Mellon’s Allen Newell and Herbert Simon, a new science-technology field called artificial intelligence took its current form (McCorduck, 2004: 51-57).
In 1950, Alan Turing proposed the ‘imitation game’, a study that tested the ability of a machine to exhibit intelligent behavior equivalent to that of a human. Turing’s study framed natural language conversation as a human-like communication model between human and machine (Turing, 1950). This work, which adapted the electronic brain model to human spoken language, took artificial intelligence to a different dimension and made him one of the pioneers of today’s artificial intelligence studies. The scope
and foundations of artificial intelligence are composed of expert systems for solving data-related problems, robotics, natural language processing technologies based on speech and understanding that enable machine communication, optical instrument technologies that are independent of physical interaction, computer vision that includes data perception, collection and classification activities, machine learning, and deep learning models produced by stacking multiple artificial neural networks.
specific abilities such as learning and problem solving through information
technologies, has played an important role in producing meaningful results
through algorithms aimed at solving inputs from large data sets, as well as
in the optimization of existing ones and the emergence of new inventions.
The information revolution has affected our social life, our habits, and also our scientific methods and ways of thinking. The philosophy of artificial intelligence is one of its products. Through new technologies, it has caused us to reconsider
human life and scientific methods, the nature of intelligence and reason,
which are the characteristics that distinguish us from other beings, their
limits, our consciousness, moral concepts such as will, decision-making,
and freedom, and to reach different philosophical conclusions with new
definitions. The philosophy of artificial intelligence, which addresses the
ethical, epistemological, ontological and social dimensions of artificial
intelligence technology, has brought to the philosophical field the issue
of whether a machine can have consciousness, the possibility of ethical
decisions with machines, and the effects of developing a technical solution
to technology-based ethical problems on human nature. This study, which
addresses ethical problems in the ecosystem where artificial intelligence
technologies, which are the subject of the current study, are created and
ethical approaches to solving these ethical problems, is the result of such an
impact.
2. Ethical Responsibility for the Information Society


During the Second World War, and almost immediately thereafter,
several powerful information technology advancements were made. After
that, during the 1950s and later decades, information technology advanced
rapidly. By the mid 1990s, worldwide use of the Internet had already
produced major impacts upon political, social, and economic circumstances.
More and more people found themselves living in a “cyber-world” created
and sustained by a vast network of interconnected digital devices. The world
today has become a place with innumerable inter-cultural interactions, and
the “Information Age” has arrived.
Today, whether they like it or not, nearly everyone is becoming a member
of the worldwide “cyber-community”. So, in the comfort of their own
specific culture, without traveling in a car or train or airplane, people can
easily interact with other people in many different cultures. Because of this,
the “Information Revolution” is changing traditional habits and generating
new and profound ethical questions and challenges. For example: Are there
common ethical values and principles shared by all human beings, or does
each specific culture or subculture have its own ethical values? To address
such questions, an effective ethical theory for living in a massively interconnected
multicultural world is needed! One such theory is Flourishing Ethics put
forward by American philosopher Terrell Ward Bynum in 2006 (Bynum,
2006). The most important feature of this theory is that humanity assumes
the responsibility for the ethical development of the individual and society.
The first important responsibility that individuals and society should undertake in their commercial activities or daily work is to flourish ethically. The most important mission of artificial intelligence technology manufacturers and other smart technologies should be to strengthen, support or provide opportunities for the ethical flourishing of humans. Focusing on the benefits of a technology only in terms of economic activities, and evaluating it with measurements of those activities alone, will impair the ethical flourishing of humans and society, so the responsibility for ethical development should be at the core of these technologies. Indeed, the power of any technological product, be it artificial intelligence or anything else, is directly proportional to the ethical responsibility it assumes.
Let’s take a closer look at the theory that addresses human flourishing as a
kind of ethical responsibility in societies designed with artificial intelligence
and intelligent computational technologies.
Bynum’s Flourishing Ethics Theory includes both Human-Centered
Flourishing Ethics and General Flourishing Ethics. This theory is an
“umbrella-like” overarching conception of ethics, which is broad enough
to include not just traditional Western values and principles—like those of
Virtue Theory, Utilitarianism, Deontology, and Social Justice Theory—
but also values and principles of major Eastern traditions like Buddhism,
Confucianism, and Taoism. According to this theory, social responsibility, from an ethical point of view, is a significant concept in technological societies.
In addition to providing a means of ethically evaluating human actions,
Flourishing Ethics also can be used to guide and govern decisions and
actions of newly-emerging nonhuman agents like robots, softbots, and AI
devices that are currently being created and deployed in many different
societies. If nonhuman agents contribute to human flourishing without damaging it, and if these technologies take on the responsibilities of individuals and societies for ethical development, they can be considered appropriate and useful tools.
Of course, Flourishing Ethics is not a panacea that can easily answer
all ethical questions in our increasingly complex interconnected world. The
important point here is that it provides promising and powerful ethical
concepts and methods to help with a growing number of social and ethical
challenges of the Information Age.

2.1. What is Flourishing Ethics as an ethical theory with potential for an ethical perspective on artificial intelligence and social responsibility?
In his article “Flourishing Ethics”, Bynum said this:
I call the new theory ‘Flourishing Ethics’ because of its Aristotelian roots,
though it also includes ideas suggestive of Taoism and Buddhism. In spite
of its roots in ancient ethical theories, Flourishing Ethics is informed and
grounded by recent scientific insights into the nature of living things, human
nature and the fundamental nature of the universe—ideas from today’s
information theory, astrophysics and genetics. . . . Rather than replacing
traditional ‘great ethical theories,’ Flourishing Ethics is likely to deepen and
broaden our understanding of them (Bynum, 2006, p. 157).
Bynum’s Flourishing Ethics assumes that people in every culture share a
common human nature, and also that human flourishing is the highest ethical
value. These assumptions, taken together, yield a set of ethical values and
principles that apply to every human being in every culture. In addition,
since individual cultures and subcultures typically include culture-specific
values and traditions, human flourishing within a given culture can depend
also upon the culture-specific values of that culture. So Bynum’s Flourishing
Ethics accommodates culture-specific values when they do not harm human
flourishing elsewhere.
To determine what is required for humans to flourish, Bynum adopted
the strategy of asking this question: For all humans, what deficiencies would
make it impossible for them to flourish? The results were these (see Kantar and
Bynum 2022):
1. Autonomy—the ability to make significant choices and carry them
out—is a necessary condition for human flourishing. For example, if
someone is in prison, or enslaved, or severely pressured and controlled
by others, such a person is not flourishing.
2. To flourish, people need to be included in a supportive community.
Knowledge and science, wisdom and ethics, justice and the law are all
social achievements. Also, psychologically, humans need each other to
avoid loneliness and feelings of isolation.
3. The community must provide—at least reasonably well—security,
knowledge, opportunities, and resources. Without these, a person
might be able to make choices, but nearly all of the possible choices
could be bad ones, and a person could not flourish under those
conditions.
4. To maximize flourishing within a community, justice must prevail.
Consider the traditional distinction between “distributive justice” and
“retributive justice”: if goods and benefits are unjustly distributed,
some people will be unfairly deprived, and flourishing will not be
maximized. Similarly, if punishment is unjustly meted out, flourishing,
again, will not be maximized.
5. Respect—including mutual respect between persons—plays a
significant role in creating and maintaining human flourishing.
Lack of respect from one’s fellow human beings can generate hate,
jealousy, and other very negative emotions, causing harmful conflicts
between individuals—even wars within and between countries. Self-
respect also is important for human flourishing in order to preserve
human dignity and minimize the harmful effects of shame, self-
disappointment, and feelings of worthlessness.
2.2. General Flourishing Ethics, “smart” technology, and emerging global ethics
In the article Flourishing Ethics, Bynum made the following important
prediction:
Flourishing Ethics has a significant potential to develop into a powerful
‘global ethics’—one that is rooted in the ultimate nature of the universe
and all the entities that inhabit it—one that will shed new light upon ‘the
great ethical theories’ of the world, while providing novel insights and
contributions of its own (Bynum 2006, p. 171).
A helpful contribution of Bynum himself was his recognition that
Flourishing Ethics should be broadened and divided into two “types”:
The first type is Human-Centered Flourishing Ethics, which recognizes the
dignity and worth of human beings as the top ethical values. The second
type is General Flourishing Ethics, which continues to keep human worth and
dignity at the top, but also acknowledges the intrinsic ethical value of other
existing entities. Such broadening of ethical respect actually began to occur
years ago with developments like the environmental ethics movement, the
animal rights movement, efforts to limit global warming, and so on.
Bynum’s ethical explanations and analyses are informed and grounded by
recent scientific insights into the nature of living things, human nature and
even the fundamental nature of the universe—ideas from today’s information
theory, astrophysics and genetics. As a result, his “broader view” is this:
From the point of view of Flourishing Ethics, it is not unreasonable to
place a strong emphasis upon the flourishing of human beings and their
societies. . . . On the other hand, besides humans and their communities,
there are other intrinsically good entities in the universe. . . . Flourishing
Ethics takes these into account as well. Non-human animals, plants,
ecosystems, even certain machines decrease entropy in their local regions of
space-time, and thereby preserve and increase the good. Even ‘inert’ objects
like stones, mountains, planets, stars and galaxies are persisting patterns of
Shannon information. [So] Flourishing Ethics fosters respect for all of these
sources of the good (Bynum 2006, p. 172).
Of special interest in today’s world is the growing number and complexity
of information technology devices like robots, softbots, and chatbots.
Such devices sometimes make decisions and carry them out without
human intervention. At the present time people worldwide are especially
concerned about artificially intelligent chatbots, which can learn from their
“experiences” and change their behavior in unexpected ways.
3. Ethical Responsibility and Management of Artificial Intelligence


The agent is a very important factor in collaborative activities in which multiple artificial intelligence technologies come together to achieve a goal. For example, in the ecosystems formed by artificial intelligence collaborations, who will be authorized to use the data, how unauthorized use of data by other parties will be prevented, what limitations will be applied to data manipulation, and who will assume the ethical responsibility of the system in resolving these issues are quite controversial questions.
There are differing views on who should assume the ethical responsibilities of artificial intelligence and on what principles it should be governed by; some argue that ethical responsibilities should be shared among users. Indeed, in ecosystems created by more than one artificial intelligence, machines should make joint decisions and cooperate on actions. Being in the same ecosystem also means sharing responsibility. According to Stahl, who advocates a shared responsibility model in the artificial intelligence ecosystem, sharing responsibility among different actors such as software developers, users, and institutions is necessary for achieving ethical outcomes (Stahl, 2023).
One of the important issues here is that the agents in the AI ecosystem are designed according to the purpose of the ecosystem. The components in the AI ecosystem develop together for the same purpose and feed off
each other (Ritala and Almpanopoulou, 2017: 39-40). The data that is the
output of one system can be the input of another system. This can be a
strength of complex AI management, but it can also be a source of social
and ethical problems.
Ultimately, although artificial intelligence models are produced for economic purposes in commerce, education or marketing, they produce social, ethical and cultural results, since they determine and influence the actions of individuals and society. For this reason, the management of artificial intelligence systems and the ecosystems they build is of vital importance in terms of social responsibility, not just economic importance.

4. Trustworthiness in the Management of Artificial Intelligence and Human-Centered AI
Despite criticisms of the techniques machine learning uses in the data processing stage, of the lack of transparency and the bias these methods produce, and of the wrong answers they can give, artificial intelligence is one of the most important technical developments of the century and has the potential to shape the future of the world. Whether this potential produces good outcomes depends on the degree to which we create AI that contributes to the development of humanity. This vital issue still sits on the table of all humanity as a problem waiting to be solved. For this reason, all studies investigating ethical problems and seeking solutions in artificial intelligence are an important effort on behalf of humanity. The issue of AI and ethics does not belong to a single country or culture; it is the common problem of all humanity in the face of developing technology. The trustworthiness of AI and the other problems it produces worry all countries, making it almost imperative to develop minimum common solutions that can be valid on a local and global scale. Since we share ethical problems whose dimensions affect us all, why should it not be possible to address these issues from a common ethical perspective and seek appropriate solutions by agreeing at least on minimum common principles? While this may seem difficult in practice - at least for now - it is not impossible.
There are several controversies in determining AI ethical principles and appropriate ethical statements. At the beginning of these discussions is the application of topics and concepts such as human autonomy, human agency and oversight, and diversity, non-discrimination and fairness. As the application of these principles differs between nations and countries, it is getting harder to reach a consensus on which principles and rules, drawn from which culture, belief system and body of law, should be integrated into AI. According to what, and how, should nations act in establishing AI ethical principles?
Moreover, the ethical declarations issued for the development of trustworthy AI technologies differ from each other, which is itself a source of controversy.
These and other questions regarding the development and management of artificial intelligence reveal the necessity of a reliable ethical declaration that is accepted by everyone and contributes to the ethical development of humans. It is clear that artificial intelligence management that is not transparent, or that does not promise models that can be explained to the parties, will lack ethical sensitivity. The correct and ethical application and management of artificial intelligence models that prioritize the ethical development of humans by focusing on humans and ensuring the reliability of data will affect the commercial, cultural and all other activities of the information society (European Commission, 2019).

Conclusion
Artificial intelligence systems consist of codes and hardware designed and
written by humans to achieve a purpose. They are systems that can receive
thousands of highly complex data points from the external environment, redesign and reshape the data they hold, and collect and analyze visual, auditory and textual data. More than one artificial intelligence can come together for a similar purpose to form a system; this larger environment is called the artificial intelligence ecosystem.
In the ecosystem created by artificial intelligence systems, new data can be obtained and structured data can be reinterpreted. These activities are important for processing data and making optimal decisions based on it. Although artificial intelligence rests on technical achievements such as machine learning, symbolic processing and image processing, it is an interdisciplinary field in which social responsibility takes priority, since it is the work of modeling human actions (Stahl et al., 2020).
In artificial intelligence systems, it is an extremely important issue that
the model product should prioritize human development in an ethical
context beyond its economic benefit. In artificial intelligence systems and ecosystems, regardless of the product output, human ethical development should be at the center. The components of Flourishing Ethics theory as a framework for ethical responsibility in artificial intelligence management are therefore explained in detail in this study. It is an issue on which everyone agrees, without a doubt,
that non-human-centered technologies will harm the organic structure
of the individual and society. The ethical development of humans should
be the main concern of state administrators and policy makers as a social
responsibility. As a matter of fact, as we explained in the first section on the evolution of artificial intelligence, technology is developing more and more
rapidly and expanding its scope of application. This makes it difficult to
manage artificial intelligence and its ecosystems.
Trustworthy AI design in artificial intelligence management will contribute to the ethical flourishing of humans. In the context of social responsibility, decisions about human life should not be left to machine reasoning alone; this should be formulated through ethical principles and rules and interdisciplinary practices.
Companies that use artificial intelligence, including in marketing, need to be aware of the impacts they create on society. Ecosystems designed with ethical approaches are needed to create sustainable and fair marketing strategies.
References
Aristotle (2009). On the Movement of Animals; On the Soul: Nicomachean Ethics;
and Eudemian Ethics.
Ashby, W. R. (1956). ‘‘An Introduction to Cybernetics’’. John Wiley and Sons.
[Link]
Bynum, T. W. (2006). “Flourishing Ethics”, Ethics and Information Technology,
Vol. 8, No. 4, pp. 157-173.
Bynum, T. W. (2009). Milestones in the History of Information and Computer
Ethics. In: Kenneth E. Himma and H.T. Tavani (eds). The Handbook
of Information and Computer Ethics. 1st ed. New Jersey, John Wiley &
Sons Ltd. 25-48.
Bynum, T. W. (2000). A Very Short History of Computer Ethics. The Research Center on Computing & Society. [Link] edu/~ear/cs349/Bynum_Short_History.html. Accessed 02.08.2024.
European Commission. (2019, April 8). Ethics Guidelines for Trustworthy AI. Shaping Europe’s Digital Future. [Link] eu/en/library/ethics-guidelines-trustworthy-ai (Accessed 01.05.2024).
Kantar, N. and Bynum, T. W. (2021). “Global Ethics for the Digital Age—
Flourishing Ethics.” Journal of Information, Communication and Ethics in
Society, Vol. 19, No. 3, pp. 329– 44.
Kantar, N. and Bynum, T. W. (2022). “Flourishing Ethics and identifying ethi-
cal values to instill into artificially intelligent agents.” Metaphilosophy, Vol.
53, No. 5, pp. 599-604.
Kizza, J. M. (2017). Ethical and Social Issues in the Information Age. (6th Ed.)
Springer International Publishing.
McCorduck, P. (2004). Machines Who Think (2nd ed.), Natick, MA: A. K.
Peters, Ltd.
Moor, J. H. (1999). “Just Consequentialism and Computing”, Ethi-
cs and Information Technology, Vol. 1, No. 1, pp. 61-65. DOI
10.1023/A:1010078828842
Novikov D.A. (2016). Cybernetics: From Past to Future. Heidelberg: Springer.
Pias, C. (2016). Cybernetics: The Macy Conferences 1946-1953. The Comple-
te Transactions, Diaphanes.
Porush, D. (1987). Reading in the ServoMechanical Loop. Berkeley, Calif. Dis-
course, 1987-04-01, 9: 53-63.
Quinn, M. J. (2006). Ethics for the Information Age. (2nd Ed.). Pearson Inter-
national Edition Inc.
Shannon, C. E. (1948). A Mathematical Theory of Communication. The Bell
System Technical Journal, 27(3): 379-423.
Stahl, B. C. (2023). Embedding Responsibility in Intelligent Systems: From AI Ethics to Responsible AI Ecosystems. Scientific Reports, 13(1), 7586.
[Link]
Stahl, B., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K.,
Shaelou, S., Patel, A., Ryan, M., and Wright, D. (2020). ‘‘Artificial In-
telligence for Human Flourishing -Beyond Principles for Machine Le-
arning’’. Journal of Business Research, 124. [Link]
jbusres.2020.11.030
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Weka. [Link] (Accessed 27.01.2024).
Wiener, N. (1948). Cybernetics: or Control and Communication in the Animal and the Machine. Technology Press.
Wiener, N. (1950). The Human Use of Human Beings: Cybernetics and Society.
Houghton Mifflin. (Second Edition Revised, Doubleday Anchor, 1954.)
Wiener, N. (1960). Some Moral and Technical Consequences of Automation,
Science, 131: 1355–1358.
Chapter 14

Unrealistic Beauty Ideals: Artificial Intelligence and Consumers’ Self-Image Perceptions

Feyza Nur Özkan1

Abstract
Digital transformation and the rapid enhancement of artificial intelligence (AI)
technologies cause unprecedented changes in the marketing environment.
As the technology evolved and AI tools were diversified, AI became more
effective in facilitating consumers’ lives and was adopted quickly by large
masses. While this technology offers numerous opportunities, it also poses
a serious threat to consumers’ well-being by shaping society’s beauty ideals.
People judge others according to their appearance, and beautiful-looking
people have a competitive advantage. Thus, beauty is perceived as important
and highly demanded by consumers due to its influential power. Although
beauty perceptions of consumers were culture-dependent and constantly
changed throughout history, they have become similar nowadays, with the
increase in communication and the effects of globalization.
The unrealistic and unattainable beauty ideals shaped and disseminated by AI
may damage consumers’ self-image perceptions, fill them up with insecurities,
and eventually result in serious health and consumption-related problems.
Therefore, this chapter aims to explain AI’s role in shaping beauty ideals
and AI’s adverse effects on consumers’ self-image perceptions and intends
to contribute to the literature on the dark side of AI in consumers’ beauty
and self-image perceptions context. This study is descriptive in nature and is
guided by the self-theory and social comparison theory. The present study
also discusses AI’s health-related and consumption-related effects and the
mindful use of AI for consumer well-being and building an inclusive society.

1 Ph.D., Istanbul University, School of Business, [Link]@[Link], ORCID: 0000-0003-1346-3963

[Link]

1. Introduction
Digitalization and the advent of artificial intelligence (AI) irreversibly
changed how consumers live and perceive the world. This paradigm shift
caused remarkable changes in consumer behavior and the marketing
landscape. AI has gradually integrated into consumers’ lives and finds
a place in almost every sphere of life with social media, chatbots, voice
assistants, recommendation systems, and Internet of Things (IoT) devices
(Barari et al., 2024). As AI technologies evolved and were widely adopted
by consumers, their role in enhancing customer experience (Grewal et al.,
2023) was discovered by the companies, and more engaging AI technologies
such as virtual reality (VR) try-on technologies and augmented reality (AR)
face filters were introduced. As digital transformation and AI evolve, the
technologies they incorporate offer numerous opportunities in the marketing
environment for consumers, companies, and society. However, AI also has
a dark side that leads to adverse, harmful, or unintended outcomes for the
actors in the marketing environment.
One of AI’s significant risks is its potential to negatively affect consumers’
self-image perceptions by shaping beauty ideals. AI enables consumers to
create and enhance visuals with a few clicks and companies to offer more
personalized and engaging customer experiences (Ameen et al., 2021).
The image editing tools for self-presentation in the digital world have been
diversified and enhanced with the power of AI and quickly adopted by
consumers who desire a perfect appearance. However, constant exposure to
AI-generated flawless faces and perfect-looking bodies may promote hyper-
idealized beauty standards and distort consumers’ self-image perceptions by
triggering social comparisons.
Desire for beauty and interest in beautiful people is a worldwide and
transhistorical phenomenon. Therefore, consumers’ desire for a perfect
appearance and their effort to conform to society’s beauty expectations
is not new. Beauty ideals were culture-specific before globalization and
the widespread use of web-based technologies. However, increased
communication and worldwide adoption of communication technologies
have removed the geographical and cultural barriers, and consumers’
perceptions of beauty have become similar. Consumers’ beauty perceptions are
experiential and influenced by the environment (Dimitrov & Kroumpouzos,
2023). Recent reports demonstrated that adult internet users’ average time
spent online is 6 hours and 38 minutes each day, and they spend 2 hours and
21 minutes of this time only on social media (We Are Social & Meltwater,
2025). Thus, consumers are exposed to beauty content daily on the internet
and social media for a considerable amount of time, which is more than
enough to shape and change their perceptions of beauty. Related literature
demonstrates that consumers’ exposure to appearance-oriented content is
damaging (Yan & Bissell, 2014) and may cause negative health-related and
consumption-related consequences.
AI poses a serious threat to the well-being of consumers by generating
and enhancing idealized beauty visuals that may fill consumers up with
insecurities. Although consumers’ beauty and self-image perceptions in the
social media context attracted considerable scholarly attention (e.g., Ando
et al., 2021; Fioravanti et al., 2022; Laughter et al., 2023; Xie, 2024),
research investigating AI’s effect on consumers’ self-image perceptions is
limited. However, the influence of emerging technologies on consumers’
beauty perceptions is a prominent research theme in health sciences and
marketing (Singer & Papadopoulos, 2024). Therefore, AI’s role in shaping
beauty ideals deserves more attention in today’s digital landscape.
This chapter aims to explain AI’s role in shaping beauty ideals and its
adverse effects on consumers’ self-image perceptions in light of self-theory,
social comparison theory, and previous study findings. In addition, the
present study also discusses AI’s health-related and consumption-related
effects and the mindful use of AI for consumer well-being and building an
inclusive society.

2. The Evolution of Beauty Ideals in the Digital Age


Beauty is a complex concept. Due to its subjective nature, no commonly
accepted definition exists, and beauty is still discussed from philosophical,
historical, biological, and social perspectives (Wong et al., 2021). Some
scholars argue that beauty is easier to feel and recognize than to describe
or define (Alam & Dover, 2001; Dayan, 2011). Although no clear definition of beauty
exists, its effects on our lives are undeniable. People tend to judge others
according to their appearance. This phenomenon is known as beauty bias, an
attributional bias that indicates positive perceptions toward attractive people
rather than unattractive ones (Struckman-Johnson & Struckman-Johnson,
1994). Consumers’ beauty perceptions not only affect mate selection but
also social interactions and self-esteem (Singer & Papadopoulos, 2024).
Beautiful people face fewer difficulties in life compared to ordinary people.
For example, they are treated better, hired at higher salaries, and even
receive less severe punishments than unattractive people, even when they are
in the same position or have similar qualifications (Frederick et al., 2015).
Therefore, it is unsurprising that consumers try to achieve better looks and
conform to society’s beauty perceptions and expectations.
According to Georgievskaya et al. (2025), beauty is the combination of
attributes that make a person subjectively perceived as aesthetically appealing
in a given cultural environment. However, beauty standards have constantly
changed throughout history. Early Greeks defined aesthetic perfection
with numeric symmetries and proportions (Alam & Dover, 2001). Mayan
culture linked beauty to food resources and tried to change their hair and
facial structures to look like corn (Frederick et al., 2015). Some cultures
valued body fat, while others valued thinness. While pale skin is considered
an essential beauty standard in Asian cultures, where consumers invest
heavily in skin-lightening products and medications (Dimitrov &
Kroumpouzos, 2023), solariums, skin bronzers, and sun-tanning products and
services are quite popular and in high demand in other cultures. Although
there were specific differences in beauty perceptions in different cultures in
history, global consumer culture has emerged with the irresistible effect of
globalization (Cleveland & Laroche, 2007), and the differences in beauty
perceptions have blurred. Global consumer culture creates global consumer
segments that assign the same or similar meanings to certain things (such as
beauty), and differences coming from culture become less important (Alden
et al., 1999; Keillor et al., 2001). Therefore, traditional cultural beauty
ideals have transformed into international norms through globalization.
These beauty standards in mass culture promoted flawless skin, symmetrical
faces, slim bodies, youthfulness, and Western looks (Georgievskaya et al.,
2025; Grech et al., 2024).
Alongside globalization and global consumer culture, digital technologies
have also altered how we perceive the world. Media plays a significant role in
this process and acts as a tool for the dissemination of certain beauty ideals
throughout the world (Yan & Bissell, 2014). Before the rise of the internet,
traditional media forms were dominantly shaping consumers’ beauty ideals
with celebrities in advertisements, movies, and TV series. The thin ideal
was promoted at that time, and below-average-weight female bodies were
portrayed in traditional media (Lewallen & Behm-Morawitz, 2016).
After the widespread use of the internet and the popularization of social
media, consumers became content creators and producers of images of their
own lives. Until the twentieth century, beauty was considered a distinctive
characteristic of a closed, prosperous elite beyond ordinary people's
reach. However, consumers now have unlimited access to endless ideas for
developing tastes, opportunities to be beautiful, and communities in which to
demonstrate their perceptions of beauty (Kuipers, 2022). This shift caused
social democratization, and beauty was also democratized. Body positivity
and naturalism have gained momentum, and consumers even criticize brands
for using unrealistically perfect-looking models in their advertisements.
Leading fashion and personal care brands recognized this movement and went
a step further by promoting natural women in their advertisements and
launching natural beauty campaigns (Mabry-Flynn & Champlin, 2018).
Then, natural beauty trends went viral on social media, and something went
wrong. For example, the “no-makeup makeup” and “clean girl” trends took
social media by storm. At first glance, these trends seem minimalist,
effortless, and attainable for everyone. However, consumers who follow
them must spend considerable amounts of money on expensive new clothes,
cosmetic products, decorative objects, and the like. Besides, following these
beauty trends demands a great deal of time, which is not feasible for the
majority of working women. As we know, trends come and go; they are only
popular for a finite period. Although trends are
subject to change, social media’s effect on consumers’ beauty perceptions
stays the same. Thus, even though social media has a prominent role in
democratizing beauty, it remains a powerful source of appearance pressure
and perpetuates certain beauty ideals simultaneously (Bell et al., 2022).
Digitalization, internet adoption, and social media use have revolutionized
consumers’ perception and presentation of beauty. When AI was introduced
and gained popularity, consumers increasingly integrated it into their daily
lives, especially to make their lives easier. Companies also started to use AI
when they realized its essential role in shaping customer experience (Ameen
et al., 2021). However, despite numerous advantages, AI poses significant
risks by shaping consumers' perceptions of beauty and promoting unrealistic
beauty ideals more forcefully than ever before.

3. AI’s Role in Shaping Beauty Ideals


AI’s role in shaping beauty ideals can be viewed from three angles:
consumer - AI interactions, AI in company-consumer interactions, and
algorithmic bias in AI. Consumers and companies may integrate AI into
their digital activities with different motives, yet they both contribute to
creating and disseminating certain beauty ideals shaped by AI.
Consumer - AI interactions
Social media has connected consumers all over the world. Consumers
willingly engage in social media and generate content with rational and
emotional motives of knowledge-sharing, advocacy, social connection, and
self-expression (Krishnamurthy & Dou, 2008). In visual user-generated
content, aesthetic concerns arise, and consumers want to leave a good
impression on their social connections (Ayar et al., 2025). Consumers’ desire
to look attractive led them to use image editing tools to beautify their virtual
appearance. They used Photoshop at first, but as technology evolved, the
tools for altering images became more complex and powerful. Face filters and
VR apps have come into play. Identifying images altered with Photoshop
was not difficult for trained eyes; however, once AI was involved, it became
almost impossible to distinguish with the naked eye whether content was
authentic, AI-touched, or AI-generated (Hashemi et al., 2024).
Consumers may use AI to generate and enhance visual content on social
media. AR face filters and VR beauty applications are AI technologies that
enable consumers to change their appearance in videos and photographs.
These AI-based technologies can be used for makeup, hair color changes,
face and body touch-ups, and beautification such as skin smoothing, skin tone
correction, eye and teeth whitening, waist slimming, breast enlargement, and lip
filling, etc. Large audiences have quickly adopted these AI-based
technologies, and consumers are increasingly bombarded with idealized
and unrealistic images on social media. Although image editing tools
enable consumers to alter their faces with just a few clicks, to conform to
society's beauty ideals with minimum effort (Castillo-Hermosilla et al.,
2023), and to generate content for perfectionist self-presentation, constant
exposure to this kind of beauty content on social media may trigger
consumers' social comparisons and appearance concerns (Boursier et al.,
2020). In addition, constant exposure to visuals containing AI-altered
images has blurred consumers' perceptions of what is normal, good enough,
or perfect (MacCallum & Widdows, 2018). As consumers get used to these images,
what was exceptional and outstanding is perceived as normal, and what used
to be normal is now perceived as substandard (Kuipers, 2022).
Beauty filters negatively affect consumers’ well-being. Prior to the
widespread use of AI face filters, consumers desired to look like celebrities.
However, the desired ideal look has shifted from celebrities to one's own
filtered self (Castillo-Hermosilla et al., 2023).
Consumers now dream of looking like their filtered
appearance. This phenomenon has been conceptualized as Snapchat
Dysmorphia, which causes consumers to lose perspective on their actual
appearance and poses significant risks to their mental health (Ramphul
& Mejias, 2018; Abbas & Dodeen, 2022). Figure 1 demonstrates how far
AI retouching apps can change a person’s appearance.
Figure 1: Real/AI-filtered face comparison. Source: Dove (2021)

AI in company-consumer interactions
Companies may also use AI-generated content in digital marketing
communications due to its high potential to attract consumers’ interest and
cost-effectiveness. However, using AI-generated flawless faces and perfect-
looking bodies in brand communications may promote hyper-idealized
beauty standards. Companies that aim to enhance customer experience
adopted and offered VR try-on technologies, VR beauty applications, virtual
models and influencers, AR mirrors, and AR live streaming technologies.
VR try-on technologies help consumers visualize products in a real-world
setting. They enable consumers to try makeup (see Figure 2), contact lenses,
nail polish, jewelry, clothes, and similar products virtually before purchasing.
This technology even lets consumers visualize how their appearance
will change after undergoing specific plastic surgery with an AI plastic
surgery simulator. Although virtual try-on technologies improve customer
engagement and shopping satisfaction (Ajiga et al., 2024), they may also
negatively affect consumers' perceptions of self-image.
Figure 2: Virtual makeup try-on technology. Source: Androic (2023)

Another AI-enabled technology is virtual models and influencers.
Companies may want to employ virtual models and influencers to represent
broader consumer segments at lower costs. For example, Levi Strauss &
Co (2023) announced their partnership with [Link]. This company
specializes in creating customized AI-generated hyper-realistic models of
every body type, size, age and skin tone. One of these AI-generated virtual
models can be seen in Figure 3 as an example.

Figure 3: An AI-generated virtual model (Levi Strauss & Co. / [Link]). Source: Levi Strauss & Co (2023)
Prada is another company that uses virtual models in digital marketing
communications such as Rethink Reality: Prada Candy (Prada, 2021).
Prada employed Candy (see Figure 4), an AI-generated virtual model, as
an ambassador of their “Prada Candy” fragrance. The previous ambassador
of this fragrance was a real human, the French actress Léa Seydoux, in
2011. As time passed, the fragrance and its target consumer group (young
women) stayed the same. However, the majority of young women today are
tech-savvy Gen Z. Thus, Prada decided to replace the ambassador with a
virtual model and successfully reached its target consumers (Pesonen,
2022).

Figure 4: An AI-generated virtual model (Rethink Reality: Prada Candy). Source: Prada (2021)

LG also uses AI-based technologies. The company has its own virtual
influencer named Reah Keem, whom LG first introduced to the public in
2021 (LG, 2021). Reah identified herself as the first virtual artist in Korea
and even made her debut as a singer in 2022 (LG, 2022).
Figure 5: A virtual influencer (Reah Keem, LG). Source: LG (2022)

Although using AI-generated virtual models and influencers has numerous
advantages for companies, such as enhanced customer experience, better
representation of some consumer groups, cost-effectiveness, and attracting
the attention of especially young consumers, its adverse effects should also
be considered. Even though AI models and influencers are not real humans,
it is nearly impossible to differentiate them from real people, especially hyper-
realistic AI models (see Figure 3). Young consumers find AI models and
influencers more relatable (Pesonen, 2022), which means they will compare
themselves with these models, and this comparison may fill them with
insecurities and distort their self-image perceptions, because real humans
cannot have the flawless look that virtual models and influencers always have.
AI-based technologies are not limited to VR try-on technologies, VR
beauty applications, virtual models and influencers, AR mirrors, and AR live
streaming. Some companies even use AI skin analysis to offer personalized
cosmetic and skincare recommendations tailored to the individual skin
concerns of consumers. AI skin analysis includes image processing
algorithms and deep learning models that use previously collected consumer
data for training. Algorithmic bias raises concerns that specific consumer
groups are underrepresented and that their skin conditions, appearance, and
ethnicity-specific differences are marginalized (Grech et al., 2024). This kind
of AI-based technology also promotes skin-related beauty trends on social
media, such as “glass skin,” a term used to describe glass-like smooth,
flawless, clear, poreless, and shiny skin.
Glass skin is a hard-to-achieve beauty ideal that demands a considerable
amount of time and money. Consumers trying to achieve and maintain this
look must invest in a vast array of skincare products and follow specific
skincare routines that take a lot of time every day. Another risk of this
kind of skin-related beauty trend is that it promotes a youthful appearance
as the norm. Unrealistic expectations regarding youthful appearances may
distort consumers' aging-related perceptions and prevent the embrace of the
natural beauty of women of all ages.
Algorithmic bias in AI
Algorithmic bias in AI is also a significant factor in shaping beauty ideals.
AI biases related to beauty may arise from algorithmic design, inadequate
training, or biased datasets (Grech et al., 2024). Algorithmic bias occurs
when an algorithm produces outputs that benefit or disadvantage certain
individuals or a group of consumers without justified reasoning (Kordzadeh
& Ghasemaghaei, 2022). Although recommendation systems are essential
tools for enhancing customer experience in a digital environment (Shokeen
& Rana, 2020), AI algorithms may reinforce social biases that already exist
in society (O'Neil, 2016). AI algorithms learn from data. If the data used to
train an algorithm lacks cultural diversity, the algorithm cannot generate
suggestions that reflect different cultures. This may
result in the marginalization of diverse beauty representations and reinforce
harmful stereotypes. In addition, the algorithm also learns from consumers'
behavior (Sethi & Gujral, 2022). For example, recommendation systems on
social media generate content suggestions according to consumers' user
profiles, which are based on their browsing behavior. If the algorithm's
training data does not include people who have no hair, then the content
suggestions it generates will not include images of hairless people.
AI algorithms are engagement-focused and designed to attract consumers'
interest. Therefore, when the algorithm is fed similar visuals, it comes to
recommend similar visual content over time. Overspecialization may then
occur, and consumers become constantly exposed to similar content
that lacks diversity (Lashkari & Sharma, 2023). Accordingly, algorithmic
bias in AI may disseminate certain beauty trends and cause the idealization of
certain face and body characteristics. For example, if a consumer is interested
in makeup videos and visuals, the social media algorithm will constantly
generate makeup-related content suggestions. When the consumer keeps
engaging with suggested content, the suggestions will narrow toward a
specific makeup trend over time. In another example, when the AI algorithm
learns from the user profile that the consumer likes to see content
featuring people whose faces have been beautified and rejuvenated by aesthetic
surgery, all the suggested content will come to include people who underwent
similar aesthetic surgeries and have similar face and body characteristics.
As exposure to such content increases, consumers' perceptions of what
is normal, good enough, or perfect may become blurred (MacCallum &
Widdows, 2018), and consumers may perceive these beauty ideals as
attainable and normal, and feel obliged to follow these aesthetic trends to
conform to society's beauty ideals. A toy simulation of this feedback loop is
sketched below.

4. AI’s Effects on Consumers’ Self-Image Perceptions


AI offers numerous advantages for companies and consumers; however,
using AI-based technologies and constant exposure to AI-generated
content, especially appearance-oriented content, may distort consumers’
perceptions of self-image. Although the present study is not an empirical
study relying on its own data, it analyzes and discusses
AI's effects on consumers' self-image perceptions in light of self-theory,
social comparison theory, and previous study findings. Therefore, this
section provides theoretical underpinnings of AI’s effects on consumers’ self-
image perceptions. In this vein, self-theory and social comparison theory are
defined and discussed in detail.

4.1. Self-Theory
Self-theory is a personality theory mainly focusing on the real self and the
ideal self (Rogers, 1959). According to the theory, the self comprises people's
perceptions of their own characteristics, their perceptions of their
relationships with other people and with life, and the values attached to all
these perceptions. The ideal self is the self-concept to which a person attaches
the highest value and which the person wants to achieve. Self-concept is
multi-dimensional in nature and has various facets. The real self, defined as
the actual or objective self, indicates who the person really is; self-image, as
the subjective self, refers to how the person perceives herself/himself; the
ideal self is self-actualization, who the person would like to be; and the social
self is the way the person thinks others regard him/her (Onkvisit & Shaw,
1987). Therefore, we can define self-image as who consumers perceive
themselves to be, and the ideal self as who consumers really want to be.
Self-theory is a widely used theory to explain consumer behavior in the
marketing literature (e.g., Onkvisit & Shaw, 1987; Ekinci & Riley, 2003;
Kressmann et al., 2006; He & Mukherjee, 2007). Marketing research focuses
mostly on the actual self and ideal self, especially self-image congruence,
which means the cognitive match between consumers’ self-concepts (e.g.,
actual self and ideal self) (Hosany & Martin, 2012). Incongruence occurs
if a discrepancy develops between the actual and ideal self. This state
is associated with tension and internal confusion, resulting in neurotic
behaviors (Rogers, 1959). Similarly, Higgins (1987) defines this concept as
self-discrepancy, which occurs when consumers compare different self-states
and find discrepancies between the two. According to him, three self-states
exist: actual self, ideal self, and ought self. Ought self indicates the attributes
a person thinks he/she is obliged to have. These self-concepts may reflect the
person's own perspective or the perspectives of significant others (Vartanian,
2012).
According to self-discrepancy theory, six types of self-concepts can be
experienced: actual/own, actual/other, ideal/own, ideal/other, ought/
own, and ought/other (Higgins, 1987). These self-state representations
are important due to their motivational significance. When this theory is
applied to the context of consumers' beauty and body image perceptions,
actual/own indicates consumers' own perceptions of their body, ideal/own
refers to how consumers would ideally like their body to be, and ideal/other
indicates consumers' internalization of society's beauty ideals. Consumers' discrepancy
perceptions between these self-states may cause serious problems such as
dissatisfaction, depression, anxiety, and guilt (Vartanian, 2012). According
to MacCallum and Widdows (2018), if the discrepancy between the actual
self and the ideal self occurs, consumers feel disappointment and sadness. In
addition, the discrepancy between the actual self and the ought self leads to
anxiety and guilt.
Appearance-oriented content may affect consumers’ ideal self-perceptions
and lead to appearance-changing behaviors such as eating disorders by
causing actual-ideal discrepancy (Grogan, 2007). In addition, exposure
to idealized beauty images may also heighten the discrepancy between the
actual and the ought self and may cause restricted eating in public, so as to be
perceived as a person who is trying to conform to society's appearance-
related expectations (Hefner et al., 2014; MacCallum & Widdows,
2018). Consumers' food choices may also be affected. In order to give
other people the impression of someone trying to achieve a healthy, fit, and
thin appearance, consumers may prefer lower-calorie foods in the presence of
others. In addition, perceived self-discrepancy between the actual and ideal
appearance may cause serious health concerns such as disordered eating,
depression, body surveillance, and body dysmorphic disorder (Castillo-
Hermosilla et al., 2023).
4.2. Social Comparison Theory


Social comparison theory focuses on the idea that people evaluate their
abilities and opinions according to outside images (Festinger, 1954). As the
name suggests, social comparison theory indicates an inner drive of people
to evaluate and understand themselves in comparison with similar others in
a social environment. In today’s digital age, this comparison is stronger than
ever with the widespread use of social media platforms and the increasing
use of AI. Prior to the advent and adoption of web-based technologies,
consumers’ social surroundings were limited to friends, colleagues,
neighbors, and the people they met physically. However, with the rise of social
media, consumers were suddenly exposed to people's lives, appearances,
experiences, and thoughts, even from the distant corners of the world. Then
AI came into our lives, and social comparison extended to virtual humans
who do not even exist.
Comparison motives of consumers include evaluation, improvement, and
enhancement (Gibbons & Buunk, 1999). Social comparison theory helps us
understand the motivations of consumers’ self-evaluation and improvement
and how these motivations shape consumer behavior (Caliskan et al.,
2024). According to the theory, comparisons can be made in upward and
downward directions (Wills, 1981). Upward comparison refers to
consumers' comparison of themselves with superior others, while
downward comparison indicates comparison with inferiors. AI-generated
beauty images and beauty-related content cause upward comparison because
such content consists entirely of perfect-looking images that promote and
disseminate unrealistic and unattainable beauty ideals. Upward comparison
may result in feelings of inadequacy and envy (Caliskan et al., 2024), and in
body dissatisfaction when the comparison is made with idealized media
images (Tiggemann & Polivy, 2010).
According to Wood (1996), social comparison does not have to involve
careful or even conscious thought. Comparisons can be made with an image,
a person, or a group of people in relation to the self. Therefore, consumers
may unconsciously make social comparisons when constantly exposed to AI
beauty content on social media. According to Festinger (1954), people do
not tend to compare themselves with others whose abilities or opinions are
distinctively different. When this theory is applied to the context of body
image perceptions and ideal beauty, people do not usually compare themselves
with supermodels because the difference from their own appearance is too great.
However, a friend using AI filters, a virtual influencer, or an AI-generated
better version of himself/herself can be easily subjected to comparison.
Comparisons may also occur when the attraction of the compared groups
is strong. Another reason for comparisons is consumers' desire to remain
a member of a specific group (Festinger, 1954). When considering
the advantages of being a member of an attractive group and the adverse
effects of not conforming to beauty ideals, comparison is almost inevitable,
especially for young women (Yan & Bissell, 2014).
Social comparison research in consumers’ beauty perception context
demonstrates that women’s self-image perceptions negatively change when
they perceive the comparison target as extremely attractive (Birkeland et al.,
2005; Brown et al., 1992). According to Yan and Bissell (2014), constant
exposure to appearance-oriented content, such as extremely thin and
unrealistically perfect-looking bodies, is even more destructive for young
women. This kind of content may lower consumers’ self-esteem and cause
body image dissatisfaction, eating disorders, and depression (Harrison &
Cantor, 1997; Lavine et al., 1999).

5. AI’s Consumption-Related Effects


AI’s adverse effects on consumers’ self-image perceptions may cause
serious health problems as discussed in the previous sections. In addition,
AI’s adverse effects on consumers’ self-image perceptions may also have
consumption-related effects. Social comparison theory suggests that
consumers try to eliminate the perceived discrepancy when a difference
between desired others and perceived self is recognized (Yan & Bissell,
2014). Mandel et al. (2017) introduced compensatory consumer behavior
model to explain consumer behaviors in regulating self-discrepancies.
The compensatory consumer behavior model suggests direct resolution,
symbolic self-completion, dissociation, escapism, and fluid compensation
as consumers’ strategies for coping with self-discrepancies. In addition to
these strategies, this model also suggests that consumption may reduce self-
discrepancies.
AI-enhanced visuals include face and body touch-ups, such as face
filters changing face features and putting desired makeup on consumers’
faces. When consumers are exposed to these AI-enhanced visuals of others
without any disclosure of AI use, they cannot tell whether the visual
is real or AI-enhanced. If AI-generated visuals are known to be artificial,
consumers would no longer perceive the image as a comparison target.
Therefore, awareness of digital enhancement is expected to cause less social
comparisons and less body dissatisfaction (MacCallum & Widdows, 2018).
Otherwise, consumers may perceive the content as real and feel insecure
about their appearance. In order to reduce this self-discrepancy, consumers'
demand for cosmetics, skincare, and beauty products is increasing (Fardouly
et al., 2015; Grech et al., 2024; Singer & Papadopoulos, 2024). To reach
the desired appearance, consumers’ demand for fitness foods, beauty and
dietary supplements, and even aesthetic surgery heightens (Yan & Bissell,
2014; Nobile et al., 2023; Krywuczky & Kleijnen, 2024). Although
increased demand for beauty products is favorable for the beauty industry,
it has detrimental effects on consumers and society. It raises concerns about
health issues related to the excessive use of skincare or cosmetic products and
appearance-related concerns due to the excessive demand for plastic surgery
and similar aesthetic procedures that transform people into look-alike masses.

6. Conclusion
AI has an undeniably significant role in shaping beauty ideals in today’s
digital age. Even though AI offers countless benefits, there is a dark side of
AI that involves risks. AI’s role in shaping beauty ideals can be threefold:
consumer – AI interactions, AI in company-consumer interactions,
and algorithmic bias in AI. AR face filters, VR beauty applications, VR
try-on technologies, virtual models and influencers, AR mirrors, AR
livestreaming technologies, AI skin analysis, and algorithmic bias constitute
the tools, technologies, and characteristics of AI shaping, perpetuating, and
disseminating beauty ideals.
Consumers’ discrepancy between the actual self and the ideal self increases
when they are constantly exposed to AI-generated beauty content. While
this content damages their self-image perceptions, it also promotes
almost perfect and unattainable beauty ideals. Even though social comparison
theory suggests that a comparison target considered irrelevant to one's
present status will not affect the self (Strahan et al., 2006), consumers'
perceived difference from the comparison target is not that high when AI
is involved. Constant exposure to AI-generated or AI-enhanced visuals
means constant observation of numerous comparison targets, which normalizes
perfect-looking bodies and makes them seem attainable over time. In
addition, AI itself also decreases perceived differences from the comparison
target because the comparison target is usually not a supermodel but
a friend using AI filters, or even an AI-powered better version of consumers
themselves. Therefore, beauty content's adverse effects on consumers are
more serious and dangerous than ever before.
The detrimental effects of AI-generated or AI-enhanced visual content
on consumers are not limited to the distortion of self-image perceptions.
It also raises serious health and consumption-related concerns. Health-
related effects include eating disorders, restricted eating in public,
body dissatisfaction, body surveillance, body dysmorphic disorder, and
depression. Besides, consumption-related effects include increasing demand
for cosmetics, skincare, beauty products, fitness foods and products, beauty
and dietary supplements, and even aesthetic surgery.
Even though serious risks are involved, AI is a widely used technology in
almost every sphere of life, and we should find a way to minimize the risks
while enjoying its advantages. Consumers and companies may integrate AI
into their digital activities with different motives; however, considering its
consequences is the responsibility of every actor in the marketing environment
for the welfare of society. Although AI offers various tools and technologies
to make life easier for everyone, responsible use and implementation are
key for consumer protection and preventing possible harm. Transparency
is also important. Consumers should know whether AI is used to generate
or enhance visual content. Awareness of digital enhancement is expected
to prevent consumers from unnecessary and irrelevant social comparisons
to achieve unattainable and unrealistic beauty ideals. In addition, diversity
in beauty should be protected and encouraged for a more inclusive society.
Broadening our perspectives of beauty by embracing the beauty of all
ages, all body shapes, all skin types, and colors would help to build a more
empathetic world.
References
Abbas, L., & Dodeen, H. (2022). Body dysmorphic features among Snapchat users of “Beauty-Retouching of Selfies” and its relationship with quality of life. Media Asia, 49(3), 196-212.
Ajiga, D. I., Ndubuisi, N. L., Asuzu, O. F., Owolabi, O. R., Tubokirifuruar, T. S., & Adeleye, R. A. (2024). AI-driven predictive analytics in retail: A review of emerging trends and customer engagement strategies. International Journal of Management & Entrepreneurship Research, 6(2), 307-321.
Alam, M., & Dover, J. S. (2001). On beauty: Evolution, psychosocial considerations, and surgical enhancement. Archives of Dermatology, 137(6), 795-807.
Alden, D. L., Steenkamp, J. B. E., & Batra, R. (1999). Brand positioning through advertising in Asia, North America, and Europe: The role of global consumer culture. Journal of Marketing, 63(1), 75-87.
Ameen, N., Tarhini, A., Reppel, A., & Anand, A. (2021). Customer experiences in the age of artificial intelligence. Computers in Human Behavior, 114, 106548.
Ando, K., Giorgianni, F. E., Danthinne, E. S., & Rodgers, R. F. (2021). Beauty ideals, social media, and body positivity: A qualitative investigation of influences on body image among young women in Japan. Body Image, 38, 358-369.
Androic, I. (2023). How virtual makeover technology helps brands sell real-world makeup. Retrieved February 24, 2025 from [Link] [Link]/virtual-makeover-technology/
Ayar, D., Aksu, Ç., Polat, F., & Elkoca, A. (2025). The effects of popularity perceptions and social appearance anxiety on the desire of young people to have aesthetic procedures on social media. Current Psychology, 1-10.
Barari, M., Casper Ferm, L. E., Quach, S., Thaichon, P., & Ngo, L. (2024). The dark side of artificial intelligence in marketing: Meta-analytics review. Marketing Intelligence & Planning, 42(7), 1234-1256.
Bell, B. T., Taylor, C., Paddock, D., & Bates, A. (2022). Digital bodies: A controlled evaluation of a brief classroom-based intervention for reducing negative body image among adolescents in the digital age. British Journal of Educational Psychology, 92(1), 280-298.
Birkeland, R., Thompson, J. K., Herbozo, S., Roehrig, M., Cafri, G., & Van den Berg, P. (2005). Media exposure, mood, and body image dissatisfaction: An experimental test of person versus product priming. Body Image, 2(1), 53-61.
Boursier, V., Gioia, F., & Griffiths, M. D. (2020). Do selfie-expectancies and social appearance anxiety predict adolescents’ problematic social media use? Computers in Human Behavior, 110, 106395.
Brown, J. D., Novick, N. J., Lord, K. A., & Richards, J. M. (1992). When Gulliver travels: Social context, psychological closeness, and self-appraisals. Journal of Personality and Social Psychology, 62(5), 717-729.
Caliskan, F., Idug, Y., Uvet, H., Gligor, N., & Kayaalp, A. (2024). Social comparison theory: A review and future directions. Psychology & Marketing, 41(11), 2823-2840.
Castillo-Hermosilla, M. P., Tayebi-Jazayeri, H., & Williams, V. N. (2023). Breaking the filtered lens: A feminist examination of beauty ideals in augmented reality filters. In EAI International Conference on AI for People, Democratizing AI (pp. 95-101). Cham: Springer Nature Switzerland.
Cleveland, M., & Laroche, M. (2007). Acculturation to the global consumer culture: Scale development and research paradigm. Journal of Business Research, 60(3), 249-259.
Dayan, S. H. (2011). What is beauty, and why do we care so much about it? Archives of Facial Plastic Surgery, 13(1), 66-67.
Dimitrov, D., & Kroumpouzos, G. (2023). Beauty perception: A historical and contemporary review. Clinics in Dermatology, 41(1), 33-40.
Dove. (2025). The Selfie Talk: Social media & self-esteem. Retrieved February 25, 2025 from [Link] [Link]
Ekinci, Y., & Riley, M. (2003). An investigation of self-concept: Actual and ideal self-congruence compared in the context of service evaluation. Journal of Retailing and Consumer Services, 10(4), 201-214.
Fardouly, J., Diedrichs, P. C., Vartanian, L. R., & Halliwell, E. (2015). Social comparisons on social media: The impact of Facebook on young women’s body image concerns and mood. Body Image, 13, 38-45.
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117-140.
Fioravanti, G., Bocci Benucci, S., Ceragioli, G., & Casale, S. (2022). How exposure to beauty ideals on social networking sites influences body image: A systematic review of experimental studies. Adolescent Research Review, 7, 419-458.
Frederick, D., Forbes, M., Jenkins, B., Reynolds, T., & Walters, T. (2015). Beauty standards. The International Encyclopedia of Human Sexuality, 1, 113-196.
Georgievskaya, A., Tlyachev, T., Danko, D., Chekanov, K., & Corstjens, H. (2025). How artificial intelligence adopts human biases: The case of cosmetic skincare industry. AI and Ethics, 5, 105-115.
Gibbons, F. X., & Buunk, B. P. (1999). Individual differences in social comparison: Development of a scale of social comparison orientation. Journal of Personality and Social Psychology, 76(1), 129-142.
Grech, V. S., Kefala, V., & Rallis, E. (2024). Cosmetology in the era of artificial intelligence. Cosmetics, 11(4), 135.
Grewal, D., Benoit, S., Noble, S. M., Guha, A., Ahlbom, C. P., & Nordfält, J. (2023). Leveraging in-store technology and AI: Increasing customer and employee efficiency and enhancing their experiences. Journal of Retailing, 99(4), 487-504.
Grogan, S. (2007). Body image: Understanding body dissatisfaction in men, women and children (2nd ed.). Hove: Routledge.
Harrison, K., & Cantor, J. (1997). The relationship between media consumption and eating disorders. Journal of Communication, 47(1), 40-67.
Hashemi, A., Shi, W., & Corriveau, J. P. (2024). AI-generated or AI touch-up? Identifying AI contribution in text data. International Journal of Data Science and Analytics, 1-12.
He, H., & Mukherjee, A. (2007). I am, ergo I shop: Does store image congruity explain shopping behaviour of Chinese consumers? Journal of Marketing Management, 23(5-6), 443-460.
Hefner, V., Woodward, K., Figge, L., Bevan, J. L., Santora, N., & Baloch, S. (2014). The influence of television and film viewing on midlife women’s body image, disordered eating, and food choice. Media Psychology, 17(2), 185-207.
Higgins, E. T. (1987). Self-discrepancy: A theory relating self and affect. Psychological Review, 94(3), 319-340.
Hosany, S., & Martin, D. (2012). Self-image congruence in consumer behavior. Journal of Business Research, 65(5), 685-691.
Keillor, B. D., D’Amico, M., & Horton, V. (2001). Global consumer tendencies. Psychology & Marketing, 18(1), 1-19.
Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388-409.
Kressmann, F., Sirgy, M. J., Herrmann, A., Huber, F., Huber, S., & Lee, D. J. (2006). Direct and indirect effects of self-image congruence on brand loyalty. Journal of Business Research, 59(9), 955-964.
Krishnamurthy, S., & Dou, W. (2008). Note from special issue editors: Advertising with user-generated content: A framework and research agenda. Journal of Interactive Advertising, 8(2), 1-4.
Krywuczky, F., & Kleijnen, M. (2024). Consumer decision-making in cosmetic surgery: An interdisciplinary review identifying key challenges and implications for marketing theory. Psychology & Marketing, 41(12), 3182-3201.
Kuipers, G. (2022). The expanding beauty regime: Or, why it has become so important to look good. Critical Studies in Fashion & Beauty, 13(2), 207-228.
Lashkari, S., & Sharma, S. (2023). Recommender systems and artificial intelligence in digital marketing. In 2022 OPJU International Technology Conference on Emerging Technologies for Sustainable Development (OTCON) (pp. 1-8). IEEE.
Laughter, M. R., Anderson, J. B., Maymone, M. B., & Kroumpouzos, G. (2023). Psychology of aesthetics: Beauty, social media, and body dysmorphic disorder. Clinics in Dermatology, 41(1), 28-32.
Lavine, H., Sweeney, D., & Wagner, S. H. (1999). Depicting women as sex objects in television advertising: Effects on body dissatisfaction. Personality and Social Psychology Bulletin, 25(8), 1049-1058.
Levi Strauss & Co. (2023). LS&Co. Partners with [Link]. Retrieved February 25, 2025 from [Link]lsco-partners-with-lalaland-ai/
Lewallen, J., & Behm-Morawitz, E. (2016). Pinterest or Thinterest?: Social comparison and body image on social media. Social Media + Society, 2(1), 1-9.
LG. (2021). Getting real with virtual influencer Reah Keem. Retrieved February 25, 2025 from [Link]getting-real-with-virtual-influencer-reah-keem/
LG. (2022). Virtual artist Reah Keem ready to take the stage. Retrieved February 25, 2025 from [Link]virtual-artist-reah-keem-ready-to-take-the-stage/
Mabry-Flynn, A., & Champlin, S. (2018). Leave a comment: Consumer responses to advertising featuring “real” women. In Mediating Misogyny: Gender, Technology, and Harassment (pp. 229-245). Springer.
MacCallum, F., & Widdows, H. (2018). Altered images: Understanding the influence of unrealistic images and beauty aspirations. Health Care Analysis, 26, 235-245.
Mandel, N., Rucker, D. D., Levav, J., & Galinsky, A. D. (2017). The compensatory consumer behavior model: How self-discrepancies drive consumer behavior. Journal of Consumer Psychology, 27(1), 133-146.
Nobile, V., Schiano, I., Germani, L., Cestone, E., Navarro, P., Jones, J., & Caturla, N. (2023). Skin anti-aging efficacy of a four-botanical blend dietary ingredient: A randomized, double blind, clinical study. Cosmetics, 10(1), 16.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
Onkvisit, S., & Shaw, J. (1987). Self-concept and image congruence: Some research and managerial implications. Journal of Consumer Marketing, 4(1), 13-23.
Pesonen, L. (2022). How Prada Candy and its digital muse is changing the fashion and beauty landscape. Retrieved February 25, 2025 from https://[Link]/articles/how-prada-candy-and-its-digital-muse-is-changing-the-fashion-and-beauty-landscape
Prada. (2021). Rethink Reality: Prada Candy. Retrieved February 25, 2025 from [Link]da-candy/[Link]
Ramphul, K., & Mejias, S. G. (2018). Is “Snapchat Dysmorphia” a real issue? Cureus, 10(3), e2263.
Rogers, C. R. (1959). A theory of therapy, personality, and interpersonal relationships: As developed in the client-centered framework. In S. Koch (Ed.), Psychology: A study of a science (Vol. 3, pp. 184-256). New York: McGraw-Hill.
Sethi, V., & Gujral, R. K. (2022). Survey of different recommendation systems to improve the marketing strategies on e-commerce. In 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON) (Vol. 1, pp. 119-125). IEEE.
Shokeen, J., & Rana, C. (2020). Social recommender systems: Techniques, domains, metrics, datasets and future scope. Journal of Intelligent Information Systems, 54(3), 633-667.
Singer, R., & Papadopoulos, T. (2024). There is no universal standard of beauty. Aesthetic Plastic Surgery, 48(24), 5273-5282.
Strahan, E. J., Wilson, A. E., Cressman, K. E., & Buote, V. M. (2006). Comparing to perfection: How cultural norms for appearance affect social comparisons and self-image. Body Image, 3(3), 211-227.
Struckman-Johnson, C., & Struckman-Johnson, D. (1994). Men’s reactions to hypothetical female sexual advances: A beauty bias in response to sexual coercion. Sex Roles, 31, 387-405.
Tiggemann, M., & Polivy, J. (2010). Upward and downward: Social comparison processing of thin idealized media images. Psychology of Women Quarterly, 34(3), 356-364.
Vartanian, L. R. (2012). Self-discrepancy theory and body image. In T. F. Cash (Ed.), Encyclopedia of body image and human appearance (Vol. 2, pp. 711-717). Elsevier Academic Press.
We Are Social & Meltwater. (2025). Digital 2025 Global Overview Report. Retrieved February 24, 2025 from [Link]digital-2025-the-essential-guide-to-the-global-state-of-digital/
Wills, T. A. (1981). Downward comparison principles in social psychology. Psychological Bulletin, 90(2), 245-271.
Wong, C. H., Wu, W. T., & Mendelson, B. (2021). Invited discussion on: What is beauty? Aesthetic Plastic Surgery, 45(5), 2177-2179.
Wood, J. V. (1996). What is social comparison and how should we study it? Personality and Social Psychology Bulletin, 22(5), 520-537.
Xie, Z. (2024). The influence of social media on perception of body image and beauty standards on young people. Transactions on Social Science, Education and Humanities Research, 5, 143-148.
Yan, Y., & Bissell, K. (2014). The globalization of beauty: How is ideal beauty influenced by globally published fashion and beauty magazines? Journal of Intercultural Communication Research, 43(3), 194-214.
Chapter 15

Artificial Intelligence Marketing (AIM): Digital Transformation and Consumer Behaviour

Ahmet Songur1

1 Assistant Professor, Süleyman Demirel University, Faculty of Economics and Administrative Sciences, Department of Business Administration, ahmetsongur@[Link], ORCID: 0000-0002-9869-5394

Abstract
Artificial intelligence (AI) plays an important role in analyzing consumer
behavior and in creating marketing strategies based on it. AI facilitates the
analysis of large amounts of data, providing deep insights into consumer
preferences. In this way, companies can increase customer satisfaction by
offering personalized experiences that align with consumer expectations.
Through machine learning and data analytics, consumer behavior can be
better understood, increasing the effectiveness of marketing campaigns.
AIM’s ability to analyze data such as social media interactions, purchase
history, and online behavior makes it possible to predict consumers’ future
buying tendencies. This enables marketers to target and quickly adapt to
consumer behavior accurately. In addition, the personalization capabilities
offered by AIM increase brand loyalty and strengthen purchase intent by
providing consumers with personalized experiences. These benefits of AIM in
understanding consumer behavior allow marketers to develop more effective
and targeted strategies. The ability of technology to optimize marketing
processes by responding to customer needs in real time allows brands to
gain a competitive advantage. AIM has great potential to predict consumer
behavior and create personalized marketing campaigns more accurately.

1. Introduction
The impact of artificial intelligence (AI) is becoming increasingly
widespread, radically changing global marketing dynamics (Jain &
Aggarwal, 2020). This technology, which transforms how businesses
operate, holds significant potential for innovation in marketing. Identifying
the most suitable AI solutions for marketing processes is a critical focus
for practitioners. AI is expected to become an organization's key business
partner in the long term. AI applications have already become a core element
of global marketing teams. In interviews with 100 senior marketers across
various industries, it was found that 55 percent of companies are either using
or actively researching AI-based marketing applications (Smartinsights,
2018). This finding underscores the transformational impact of AI on the
marketing industry.
Artificial Intelligence Marketing (AIM) refers to the effective
application of technology to enhance the customer experience. Artificial
intelligence facilitates big data analysis, enabling personalized sales strategies
and better alignment with customer expectations. Additionally, AIM has
the potential to improve the performance and return on investment (ROI)
of marketing campaigns by providing rapid and in-depth customer insights
(Jain & Aggarwal, 2020).
In the marketing landscape, big data analytics and artificial intelligence
applications are becoming increasingly crucial. By leveraging machine
learning technologies, marketers analyze the relationships between data
points to gain deep insights into customer behavior and enhance operational
efficiency. These systems can detect emotions by analyzing speech, visualize
social media trends, and make various predictions through data processing
(Thiraviyam, 2018).
AI holds significant potential in the field of marketing, fundamentally
transforming the interaction between brands and consumers. The
application of AI varies based on the type of business and the characteristics
of the website. The data generated by AI allows for the rapid and effective
identification of content and channel preferences within the target audience,
enabling marketers to meet customer needs in real time. Furthermore, the
personalization capabilities offered by AI provide a sustainable competitive
advantage by analyzing the performance of competing businesses while
enhancing consumers’ purchasing tendencies (Haleem et al., 2022).
New technologies offer a competitive advantage to businesses by making
their product and service offerings more accessible to customers (Rouhani et
al., 2016). A customer-centered approach that focuses on addressing customer
needs on a global scale is essential for organizational growth (Vetterli et al.,
2016). AI plays a significant role, particularly in digital marketing. Through
tools such as chatbots, intelligent email marketing, interactive web design,
and other digital marketing technologies, AI helps guide users in alignment
with business objectives, providing a more personalized experience. These
technologies optimize customer interactions, making marketing strategies
more effective for businesses.
With the advancement of AI, artificial intelligence applications in
marketing have become increasingly important for enhancing customer
satisfaction, expanding market share, and boosting profitability. In this
context, key questions arise regarding how AI technologies can be most
effectively integrated into marketing strategies and what future research
directions will emerge. Artificial intelligence offers marketers significant
opportunities by providing advantages such as the ability to analyze customer
behavior, deliver personalized experiences, and accelerate decision-making
processes. Future research will focus on exploring more in-depth applications
of AI, examining its impact on marketing strategies, and investigating ways
to integrate these technologies more sustainably and ethically (Wisetsri et
al., 2021).

2. Artificial Intelligence (AI) Concept


In artificial intelligence (AI), the term ‘artificial’ refers to the capacity
of machines to function independently of human intervention, while the
concept of ‘intelligence’ is more nuanced and complex (Wirth, 2018).
Alan Turing addressed this issue in his 1950 paper, Computing Machinery
and Intelligence, which preceded John McCarthy’s Dartmouth Artificial
Intelligence Research Project in 1956. In his discussion of the question “Can
machines think?”, Turing emphasized the need to first define the concepts
of ‘machine’ and ‘thinking’ (Turing, 1950). The Turing test is designed to
assess the intelligence of computers and determine whether a machine can
achieve human-level performance across all cognitive tasks (Thiraviyam,
2018). In 1959, Cahit Arf addressed the issue in a public conference
at Atatürk University in Erzurum under the title 'Can a machine
think, and how can it think?' (Arf, 1959).
Artificial intelligence, from an idealized perspective, can be seen as an
artificial operating system that demonstrates the higher cognitive functions
or autonomous behaviors typically associated with human intelligence. This
system should be capable of perceiving, learning, linking multiple concepts,
thinking, reasoning, problem-solving, communicating, and making decisions.
Additionally, such an AI system should be able to generate responses based
on its reasoning (agentive artificial intelligence) and physically express these
reactions (Wikipedia, 2025).
Wirth (2018) defines AI in three distinct categories: narrow AI, hybrid
AI, and strong AI. Narrow AI, also known as weak AI, refers to systems
optimized for specific tasks, with limited flexibility and an inability to adapt
to different domains. Despite this limitation, the development of narrow
AI systems is highly complex. While these systems lack the broad cognitive
capabilities of human intelligence, they can excel in their areas of expertise
and even surpass humans in some cases. Notable examples include AlphaGo
and DeepBlue. The majority of AI systems in use today fall under the narrow
AI category, such as Siri, Google Assistant, and Alexa. Narrow AI is widely
applied across industries like healthcare, defense, and marketing. The terms
strong AI, full AI, and general artificial intelligence are used interchangeably
to describe systems that possess the same level of power and flexibility as
human intelligence. Unlike narrow AI, strong AI is not designed for specific
tasks but aims to replicate human cognitive capabilities. However, strong AI
has not yet been realized (Greenwald, 2011). The development of narrow AI
has highlighted the need for more precise terminology. Emerging solutions
that integrate multiple narrow AI systems to address a broader range of tasks
are referred to as hybrid AI. This field is rapidly expanding, though these
systems still do not qualify as strong AI (Martínez de Pisón et al., 2017).
In today’s rapidly evolving digital landscape, data-driven strategies are
becoming increasingly vital in marketing. In this context, technologies such
as Big Data and Machine Learning are transforming marketing practices. Big
Data refers to the process of collecting extensive data on customers’ buying
behaviors and trends. It also involves the ability of marketers to effectively
combine and analyze large data sets. This data is leveraged in marketing
strategies to ensure that the right message reaches the right person at the
right time and through the appropriate channel. Machine Learning, on
the other hand, enables the creation and application of models based on
identified patterns. This technology provides the opportunity to uncover
trends, analyze insights, and predict behaviors by extracting valuable
information from large data sets. As a result, marketers can optimize their
strategic decisions by better assessing the likelihood of certain actions being
repeated and understanding the key factors influencing these processes (Jain
& Aggarwal, 2020).
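
As a rough illustration of this idea, the sketch below fits a simple
purchase-propensity model in Python. It is a hypothetical example: the
behavioral features, the synthetic data, and the choice of scikit-learn's
LogisticRegression are assumptions for demonstration, not a method
described in this chapter.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    # Invented behavioral features: visit frequency, basket value, recency.
    X = np.column_stack([
        rng.poisson(5, n),           # site visits per month
        rng.gamma(2.0, 30.0, n),     # average basket value
        rng.exponential(20.0, n),    # days since last purchase
    ])
    # Synthetic label: frequent, recent, higher-spend customers repurchase more.
    logit = 0.4 * X[:, 0] + 0.01 * X[:, 1] - 0.05 * X[:, 2] - 1.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))
    print("repurchase probability, first customer:",
          model.predict_proba(X_test[:1])[0, 1])

The point of the sketch is the workflow rather than the model: patterns are
learned from historical behavior, and the fitted model then scores the
likelihood that a given action will be repeated.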
In the realm of artificial intelligence and machine learning, few topics
are as intriguing as generative models. These models stand out for their
ability to generate new data that closely resembles real-world examples.
Generative models tackle one of the most complex challenges in AI: creating
new data that is indistinguishable from authentic data. They can generate
realistic images, music, or text without human intervention. Among
these models, Generative Adversarial Networks (GANs) and Variational
Autoencoders (VAEs) are particularly notable architectures. Generative
Adversarial Networks (GANs), developed by Ian Goodfellow and his team
in 2014, consist of two opposing neural networks: the Generator and the
Discriminator (Goodfellow, 2014). The Generator creates realistic data
samples starting from random noise, while the Discriminator analyzes the
generated data and attempts to distinguish between real and fake samples.
These two networks continuously challenge each other— as the Generator
produces more convincing fake data, the Discriminator becomes increasingly
adept at detecting it. This adversarial process enables the Generator to
create samples so realistic that they are almost indistinguishable from real
data (Alqahtani et al., 2021). Variational Autoencoders (VAEs) enhance
the dimensionality reduction and feature learning capabilities of traditional
Autoencoders by introducing a probabilistic framework for generative
modeling. In this model, the Encoder maps the input data into a low-
dimensional latent space, creating a probabilistic representation, while the
Decoder reconstructs the original data by sampling from this latent space
(Wei et al., 2020). The key feature of VAEs is that they treat the latent
space as a probability distribution, allowing for more flexible and controlled
data generation. This probabilistic approach provides a statistical structure
that enables the model to generate new data samples. Generative models
have found widespread applications across various fields, including art
production, anomaly detection, drug discovery, super-resolution image
processing, and data transfer (Sruthy, 2023).
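A compressed sketch of the adversarial loop described above is given below, using PyTorch and a toy one-dimensional data distribution standing in for real samples; the layer sizes, learning rates, and data are illustrative assumptions, not a reproduction of any cited architecture.

```python
# Minimal GAN sketch: a Generator maps noise to samples, a Discriminator
# tries to tell generated samples from "real" ones (after Goodfellow et al., 2014).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # toy "real" distribution
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the Discriminator output 1 on fakes.
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

As training proceeds, the two losses push against each other, which is exactly the adversarial dynamic that lets the Generator produce increasingly convincing samples.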

3. AIM and Consumer Behavior


AI is a technology that enables businesses to monitor real-time data,
allowing them to quickly analyze and respond to customer needs (Wirth,
2018). Marketers can use AI to assess consumer behavior, identify patterns,
predict future outcomes, and adjust advertising strategies accordingly.
AI offers significant advantages for marketers, particularly in forecasting
consumer behavior and enhancing customer satisfaction. As consumer
preferences evolve, brands are increasingly investing in AI-powered solutions
to maintain a competitive edge. AI tools are being effectively applied in
areas such as web metrics analysis, optimization of reach, and conversion
strategies. AI branches, including machine learning, natural language
processing, expert systems, robotics, and data analytics, help marketers
classify customer needs, personalize demand, and improve sales forecasts.
AI-powered ‘intelligent’ systems are designed to boost customer loyalty and
sales performance while reducing uncertainties in decision-making processes
(Gkikas & Theodoridis, 2022).
Traditionally, forecasting consumer behavior relied on statistical
techniques and rule-based systems. While these methods can be useful to
some extent, contemporary consumer markets are increasingly analyzed
using generative artificial intelligence (AI) methods, which have significantly
enhanced the accuracy and depth of consumer behavior predictions. By
utilizing large datasets and complex machine learning algorithms, these
models can uncover hidden trends and relationships that traditional methods
often overlook. For example, Generative Adversarial Networks (GANs)
create realistic customer profiles based on existing data, enabling marketers
to simulate how new products or services will be perceived by different
populations (Prosvetov, 2019). Similarly, Variational Autoencoders (VAEs)
analyze the latent variables behind consumer preferences, providing a more
comprehensive understanding of individual choices and how these decisions
are influenced by external factors (Higgins et al., 2017).
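The following minimal sketch illustrates, under stated assumptions, how a VAE of the kind described above could be trained on consumer preference vectors and then used to sample synthetic profiles; the ten-dimensional "preference" inputs, network sizes, and hyperparameters are all hypothetical.

```python
# Minimal VAE sketch: the encoder maps inputs to a latent Gaussian
# (mu, log-variance); the decoder reconstructs from a latent sample.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, n_features=10, n_latent=2):
        super().__init__()
        self.enc = nn.Linear(n_features, 2 * n_latent)  # outputs mu and logvar
        self.dec = nn.Linear(n_latent, n_features)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar):
    recon = ((x - x_hat) ** 2).sum()                        # reconstruction error
    kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum()   # KL to N(0, I) prior
    return recon + kl

# Hypothetical "preference vectors" (e.g., normalized category spend shares).
x = torch.rand(256, 10)
vae = VAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(500):
    x_hat, mu, logvar = vae(x)
    loss = loss_fn(x, x_hat, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# New synthetic consumer profiles: decode samples drawn from the prior.
synthetic = vae.dec(torch.randn(5, 2))
```

Because the latent space is a probability distribution, sampling from the prior yields plausible new profiles rather than copies of the training data, which is what makes VAEs useful for simulation.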
The increasing availability of consumer data and artificial intelligence
(AI) systems developed for large-scale processing of this data has accelerated
data-driven decision-making processes in marketing. Generative AI models
provide an effective tool to improve traditional consumer behavior models
by offering more accurate and complex insights. Transformer models in particular have shown great success in recommender systems; for example, they can accurately predict future behavior and preferences by analyzing customer interaction data (Yenduri et al., 2024; Gupta et al., 2024; Yoon & Jang, 2023).
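As a rough illustration of such sequence-based recommendation, the sketch below scores candidate next purchases from a customer's purchase history using a small transformer encoder; the item vocabulary, layer sizes, and data are hypothetical stand-ins, and the training loop is omitted.

```python
# Sketch of a transformer-style next-item recommender over purchase
# sequences; vocabulary size and sequence data are hypothetical.
import torch
import torch.nn as nn

n_items, d = 1000, 64
embed = nn.Embedding(n_items, d)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(d, n_items)  # scores every item as the possible next purchase

# A batch of (hypothetical) purchase histories, each a sequence of item ids.
histories = torch.randint(0, n_items, (32, 20))
h = encoder(embed(histories))            # contextualized sequence states
next_item_logits = head(h[:, -1, :])     # predict from the last position
top5 = next_item_logits.topk(5).indices  # top-5 recommendations per customer
```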
The use of generative artificial intelligence (AI) in marketing has created
new opportunities and influenced consumer behavior. These technologies
enable companies to create customized marketing plans, evaluate big customer
data more effectively, and gain new insights. Personalized marketing, in
particular, is an important area where generative AI is having an impact.
While traditional methods rely on rule-based algorithms that sort users into broad segments, generative AI can create unique customer profiles and detect subtle trends by analyzing behavioral data. By modeling how consumers react
to marketing stimuli, GANs enable companies to deliver more accurately
personalized messages. This level of personalization can increase consumer
engagement and loyalty (Gavilanes et al., 2018; Harmeling et al., 2017).
Furthermore, transformer-based models can be used to predict consumer behavior. For
example, in e-commerce, a model can provide timely and relevant product
recommendations by predicting a consumer’s future purchases. Generative
AI models are also effective in sentiment analysis and recognition and play
an important role in understanding factors such as customer satisfaction and
dissatisfaction. This information can be used to improve customer service
and build stronger consumer relationships (Higgins et al., 2017).
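As an illustration of the sentiment-analysis use case, the short sketch below applies a pretrained sentiment classifier to customer reviews. The reviews are invented examples, and the default model loaded by the library stands in for whatever production model a company might actually deploy.

```python
# Sketch of sentiment analysis on customer reviews with a pretrained
# model; the review texts are invented examples.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # loads a default pretrained model

reviews = [
    "Delivery was fast and the product exceeded my expectations.",
    "The app keeps crashing and support never replied.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(result["label"], round(result["score"], 2), "-", review)
```

Aggregating such labels over thousands of reviews is one simple way the satisfaction and dissatisfaction signals mentioned above can be quantified.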
Madanchian (2024) examines the impact of AI models on consumer
behavior prediction, marketing, and customer engagement. By systematically
analyzing 31 studies across areas such as e-commerce, energy data modeling,
and public health, he identifies the contributions of these models to
personalized marketing, inventory management, and customer retention.
The study highlights the ability of transformer models to process complex data, as well as the advantages of certain AI models (e.g., GANs and VAEs)
in predicting customer behavior. Additionally, challenges related to data
privacy, computing resources, and the application of these models in real-
world scenarios are discussed.

4. Use of Artificial Intelligence in Marketing


The primary applications of AI in marketing encompass key marketing
mix elements, including strategy and planning, product development,
pricing, distribution, and promotion. The use of AI-based systems in these
areas is strategically significant (Han et al., 2021).

Figure 1. Several Segments for AI Applications in the Marketing Domain


Source: Haleem et al., 2022: p. 121.
4.1. Strategy and Planning


AI can assist marketers in determining the strategic direction of a
company (Huang & Rust, 2017). Additionally, AI supports marketers in
segmentation, targeting, and positioning, enabling them to develop effective
marketing strategies and plan their activities. AI applications are particularly
useful in identifying profitable customer segments across various industries,
especially in retail. By combining data optimization techniques, machine
learning, and causal forests, marketers can refine customer targeting (Chen
et al., 2020; Dekimpe, 2020; Pitt et al., 2020). These capabilities help
marketers create more effective and efficient planning strategies.
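A minimal sketch of such data-driven segmentation is shown below, clustering customers on recency-frequency-monetary (RFM) features with k-means; the synthetic data and the choice of four clusters are illustrative assumptions rather than a prescription from the cited works.

```python
# Sketch of ML-based customer segmentation using RFM-style features
# (recency, frequency, monetary value); the data here is synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
rfm = np.column_stack([
    rng.exponential(40, 2000),   # recency in days
    rng.poisson(6, 2000),        # purchase frequency
    rng.gamma(2.0, 50.0, 2000),  # monetary value
])

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(rfm)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Profile each segment to pick targets, e.g., high-value recent buyers.
for s in range(4):
    print(f"segment {s}:", rfm[segments == s].mean(axis=0).round(1))
```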
Huang and Rust (2021) developed a three-stage framework for
strategic marketing planning that incorporates the advantages of artificial
intelligence (AI). This framework describes different types of AI used to
optimize marketing processes: Mechanical AI is used to automate repetitive
marketing functions and activities; Thinking AI is used to process data to
make decisions; and Feeling AI is used to analyze interactions and human
emotions. This three-stage framework illustrates how AI can be used for
marketing research, strategy (segmentation, targeting, positioning), and
actions. In the marketing research phase, mechanical AI can be used to
collect data, thinking AI can be used to analyze the market, and feeling
AI can be used to gain customer insight. In the marketing strategy phase,
mechanical AI can be used for segmentation, thinking AI for targeting, and
feeling AI for positioning. At the marketing action stage, mechanical AI can
be used for standardization, thinking AI for personalization, and feeling AI
for relationalization. This framework is mapped onto various areas of marketing and onto the 4P/4C marketing mix framework to illustrate the strategic use of AI (Shree, 2024).
The impact of AI on the marketing mix lies at the core of digital
transformation, revolutionizing marketing strategies. This influence allows
brands to create more targeted, effective, and personalized strategies by
offering innovative, data-driven solutions in product, price, distribution,
and promotion. The table below highlights the impact and application areas
of artificial intelligence on the marketing mix.
Table 1: The Impact of Artificial Intelligence on Key Marketing Mix Strategies

Product:
• Development of new products
• Personalization of the product
• Automatic suggestions to buyers
• Creating added value for the customer

Price:
• Creating prices in accordance with the buyer's purchasing power

Promotion:
• Creating a unique customer experience
• Personalization of communication
• Creating new value and benefits for customers
• Decreasing the disappointment effect

Place:
• New distribution channels
• Continuous customer support
• Automation of sales

Source: Buntak et al., 2021: p. 410.

4.2. Product Management


AI can tailor offers to meet customers’ needs. An AI-based marketing
analytics tool can enhance customer satisfaction by evaluating how well
product designs align with customer preferences (Dekimpe, 2020). During
product searches, preference weights assigned to product attributes help
marketers better understand the product recommendation system and adjust
marketing strategies for effective product management. Topic modeling
improves the system’s ability to innovate and design services, while deep
learning personalizes interest recommendations, helping to discover new
places (Antons & Breidbach, 2018; Dzyabura & Hauser, 2019; Guo et al.,
2018).

4.3. Price Management


Pricing is a computationally intensive process that involves considering
multiple factors to determine the final price. The complexity of this process
is further heightened by real-time price adjustments driven by fluctuating
demand. In such a dynamic environment, a multi-armed bandit algorithm
powered by artificial intelligence can adjust prices in real-time (Misra
et al., 2019). For environments where prices change frequently, such
as e-commerce platforms, Bayesian inference within machine learning
algorithms can quickly align price points with competitor prices (Bauer &
Jannach, 2018). The most effective pricing algorithms integrate customer
preferences, competitor strategies, and supply networks to optimize dynamic
pricing (Dekimpe, 2020). On the application side, AI-powered tools such as
big data analytics are used for price adjustments and forecasting.
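The sketch below illustrates the bandit idea in its simplest form: Thompson sampling over a small set of candidate price points, balancing exploration of untested prices against exploitation of the apparent revenue maximizer. The demand curve and price grid are invented for illustration and are far simpler than the settings studied by Misra et al. (2019).

```python
# Sketch of dynamic pricing as a multi-armed bandit: Thompson sampling
# over candidate price points; the hidden demand curve is invented.
import numpy as np

rng = np.random.default_rng(1)
prices = np.array([19.0, 24.0, 29.0, 34.0])
buy_prob = np.array([0.30, 0.22, 0.15, 0.08])  # hidden true conversion rates

successes = np.ones(len(prices))  # Beta(1, 1) priors on conversion
failures = np.ones(len(prices))

revenue = 0.0
for visitor in range(10_000):
    # Sample a conversion rate per price, then pick the price with the
    # highest sampled expected revenue (rate * price).
    sampled = rng.beta(successes, failures)
    arm = int(np.argmax(sampled * prices))

    bought = rng.random() < buy_prob[arm]
    successes[arm] += bought
    failures[arm] += not bought
    revenue += prices[arm] * bought

print("total revenue:", round(revenue, 2))
print("estimated conversion:", (successes / (successes + failures)).round(3))
```

In practice, such a policy would also condition on context (customer segment, time, competitor prices) rather than treating demand as fixed.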
4.4. Place Management


Product access and availability are critical components of the marketing
mix for enhancing customer satisfaction. Product distribution is a largely
mechanized and iterative process that depends on network relationships,
logistics, inventory management, storage, and transportation. AI is an
ideal solution for location management, with technologies such as cobots
for packaging, drones for delivery, and IoT systems for order tracking and
fulfillment (Huang & Rust, 2018). Both suppliers and customers benefit
from the standardization and automation of the distribution process. In
addition to its advantages in distribution management, AI also presents
opportunities for customer engagement in service contexts. Service robots,
programmed with emotional AI, can enhance customer interactions (Wirtz
et al., 2018). While tangible robots engage with customers, human elements
remain essential in complementing the service environment to ensure
customer satisfaction. AI-driven service process automation further provides
opportunities for performance and productivity improvements (Gür, 2022).

4.5. Promotion Management


Promotion is the marketing mix element where AI's impact is most visible to consumers. As Table 1 indicates, AI supports promotion by creating unique customer experiences, personalizing communication, and generating new value and benefits for customers (Buntak et al., 2021). AI-driven advertising tools analyze user data to tailor promotional messages to individual recipients and to optimize the timing, channel, and content of campaigns, which helps companies reduce advertising waste and maximize return on investment (Haleem et al., 2022).
Table 2 outlines the key applications of AI in marketing and illustrates
the new opportunities it presents for companies to achieve a sustainable
competitive advantage in an increasingly digital and data-driven world.
Table 2. Major Implementations of AI in Marketing


Digital marketing
• Analyzing consumer behavior, actions, and key indicators
• Accurate targeting and optimal timing
• Data processing across social media, email, and websites
• Marketing automation: data flow, interactions, and business outcomes
• Data collection, insights generation, predictions, and automated decision-making
References: Yang et al., 2022; Paschen et al., 2021; Shah & Shay, 2020; Syam & Sharma, 2018

Reduction of human mistakes
• Reduces human error in marketing processes
• Content development and optimization, such as email format personalization
• Minimizes the risk of human error in decision-making
• Addresses data security concerns and safeguards against breaches
• Enhances employee competence in protecting customer and company data
• Adapts to tackle cybersecurity challenges
• Optimizes marketing strategies, reducing the need for excessive resources
References: Ekramifard et al., 2020; Kitsios & Kamariotou, 2021; Tan et al., 2016

Connect business processes
• Connects end-to-end business processes for a seamless experience
• Marketers using AI achieve exceptional performance
• Enables the creation of customized, human-centered marketing strategies
• Transforms customers into passionate brand advocates
• AI enhances interaction designs, making them more engaging
• Offers organizations the opportunity to elevate marketing to a superior experience
References: Mer & Virdi, 2022; Grewal et al., 2020; Sadriwala & Sadriwala, 2022; Yablonsky, 2019

Analyse massive amounts of market data
• Analyzes vast amounts of market data to predict user behavior
• Understands billions of search queries to assess purchase intent
• Identifies gaps and facilitates appropriate actions
• AI and machine learning extend beyond basic tools
• Fundamentally transforms business operations
• Increases business efficiency nearly threefold
References: De Bruyn et al., 2020; He et al., 2020; Moudud-Ul-Huq, 2014; Rutskiy et al., 2021

Deliver valuable information
• Analyzes new data to provide customers with more relevant information
• Acts as a tool to drive marketing campaigns toward higher goals
• Combines advanced technology and human intelligence for hyper-personalized, engaging interactions
• Delivers instant personalized advertising
• Continuously collects data to guide future ad content changes
• Enables sellers to focus on results using personal and behavioral data
• Provides deeper insights into customer goals, aspirations, and buying patterns
References: Brooks, 2022; Makarius et al., 2020; Purwanto, Kuswandi, & Fatmah, 2020

Enable convenient customer support
• Provides intelligent, simple, and convenient customer support at every stage for an optimal experience
• Essential for ensuring a seamless and efficient customer experience
• Automates repetitive marketing tasks to enhance marketing automation
• Captures real-time customer data and scales its application
• Simplifies sorting, organizing, and prioritizing data
• AI-powered marketing automation tools are transforming marketing strategies
• Next-generation platforms strengthen strategies by addressing evolving needs like hyper-personalized offers
References: Buntak et al., 2021; Sirajuddin & P, 2020; Jatobá et al., 2019; Fish & Ruby, 2009

Better marketing automation tools
• Enables marketers to identify qualified leads, refine nurturing tactics, and create relevant content by integrating with marketing automation tools
• Dynamic content emails, particularly one-to-one messages, effectively reinforce a brand's message by delivering contextual emails that capture subscribers' attention
• Dynamic content strategies ensure email relevance by considering factors such as geographic location, psychographics, behavioral data, and insights
References: Alyoshina, 2020; Jarek & Mazurek, 2019; Tanase & Cosmin, 2018

Ease workload
• Provides actionable insights from complex data within a short time frame
• Has the potential to significantly impact marketing activities through predictive analytics
• AI-driven predictive analytics unlocks substantial value from existing data
• Predictive lead scoring offers an innovative method for ranking and evaluating leads
References: Huang & Rust, 2021; Javaid et al., 2022; Vlačić et al., 2021; Wirth, 2018

Speeds up data processing
• Enhances data processing speed, accuracy, and security, allowing teams to focus on strategic objectives while creating effective campaigns
• Collects and tracks real-time tactical data, enabling immediate decision-making
• Facilitates smarter, more objective decision-making through data-driven reports
• AI automates repetitive and time-consuming tasks, completing them efficiently and error-free
• Substantially reduces recruitment costs through automation and AI-driven efficiencies
References: Kumar et al., 2019; Davenport et al., 2020; Raiter, 2021

Make customer-centered choices
• AI enhances consumer understanding, enabling more customer-centric decision-making
• Provides external market intelligence by analyzing social media and web content
• Enables marketers to quickly build detailed consumer profiles using big data
• Consumer profiles encompass interactions, campaign responses, habits, and other relevant factors
References: Rekha et al., 2016; Paschen et al., 2019; Feng et al., 2021

Examine data about customers
• Machine learning identifies the optimal times, frequency, engaging content, and effective email subject lines for customers
• Complex algorithms personalize the web experience for individual users
• Data is analyzed to provide more relevant offers tailored to each user
• Predictive models estimate the likelihood of leads converting into customers
• These models can also determine the price needed to convert leads or identify customers likely to make repeat purchases
References: Mustak et al., 2021; Olson & Levy, 2018; Vishnoi et al., 2018

Improve stock control
• AI enhances inventory control during peak demand, preventing over-buying and maximizing revenue
• Every business has unique dynamic pricing and demand forecasting requirements
• Tailoring solutions based on specific products and customer types is often the best approach to meet business goals
References: Aladayleh, 2020; Wierenga, 2010; Nalini et al., 2021; Rodgers & Nguyen, 2022

Customise shopping processes
• AI can create simulation models, personalize shopping experiences, and offer product recommendations
• Companies like Amazon utilize AI to engage customers by suggesting products based on past purchases and searches
• Intelligent technologies are evolving rapidly and can outperform humans in certain areas
• AI is surpassing humans in recognizing marketing trends due to its vast data analysis capabilities
• These systems analyze data to predict consumer buying patterns and enhance the user experience
References: Pedersen & Duin, 2022; Khrais, 2020; Kumar, 2020

Digital advertising
• AI helps achieve success in digital advertising by improving targeting on platforms like Facebook, Google, and Instagram
• Ads are tailored by analyzing user data such as gender, age, and interests
• Marketers can leverage AI to analyze micro-trends and forecast future trends
• This enables more informed and strategic decision-making
• Companies can maximize return on investment by minimizing digital advertising waste
• AI, combined with IoT and connected devices, is shaping the future of digital marketing
References: Martínez-López & Casillas, 2013; Boz & Kose, 2018; Kietzmann & Pitt, 2020; Alawaad, 2021

Better customer experience
• AI is used to enhance operational efficiency and improve the consumer experience
• Marketers can leverage these platforms to gain a deeper, more comprehensive understanding of their target audience
• The collected data helps boost conversions while reducing the effort required from marketing teams
References: Rust & Huang, 2014; Güngör, 2020; Dwivedi et al., 2021

Assisting marketers
• AI enables more effective customer interactions
• AI marketing components analyze large customer datasets and provide tech-driven solutions for future actions
• With the rise of digital media, big data has expanded, allowing marketers to analyze campaigns in greater depth and transfer data across multiple channels
• Effective AI solutions offer marketers a centralized platform to manage large data volumes efficiently
References: Bader & Kaiser, 2019; Enholm et al., 2021; Loureiro et al., 2021

Increased customer satisfaction and revenue
• AI reduces risk, boosts speed, enhances customer satisfaction, and increases marketing revenue
• It enables fast decisions on media channel spend allocation, maximizes campaign value, and fosters interaction
• AI improves customer experience by delivering personalized messages at optimal times
• It identifies high-risk customers and suggests strategies to re-engage them
• It analyzes strategy effectiveness and ensures appropriate resource allocation
References: Popova, 2017; Deggans et al., 2019; Tchelidze, 2019; Sajid et al., 2021

Development of a predictive model
• AI assists with data collection, predictive modeling, and testing
• It sends personalized emails and enhances customer experience
• Identifies customer groups at risk of abandonment or switching to competitors
• Analyzes multi-channel activity to predict abandonment and improve engagement
• Keeps users engaged with relevant offers, alerts, and emails
• Combining AI-powered abandonment prediction with personalized content boosts engagement and revenue
References: Yawalkar, 2019; Sahai & Goel, 2021; Vrontis et al., 2022

Learning about customer preferences
• AI helps marketing teams understand customer preferences and demographics
• This allows for personalized experiences tailored to each customer
• Data can create detailed customer profiles, such as how they respond to headlines or visuals
• These insights inform and improve future marketing messages
References: Siau & Wang, 2018; Chatterjee et al., 2021; Spreitzenbarth et al., 2021

Make better decisions
• AI enhances insights by analyzing both quantitative and qualitative data
• In Google Ads, AI helps focus on high-level decisions, like campaign planning
• It enables more targeted campaigns and better ROI by processing large data sets
• Agencies can leverage AI to analyze data, predict trends, and improve brand quality
• AI fosters the creation of innovative, targeted ads
• Agencies can use AI to increase revenue while reducing costs
References: Prabowo et al., 2019; Farrokhi et al., 2020; Boddu et al., 2022

Target audience
• Companies must understand their customers' needs and expectations
• AI marketing helps deliver more personalized experiences
• It enhances the efficiency of conversion management solutions
• Marketers can address strategic challenges through the analysis of sophisticated communications
• As consumer expectations evolve, there is increasing demand for customized experiences
References: Daqar & Smoudy, 2019; Lies, 2019; Dubey et al., 2021; Giroux et al., 2022

Deliver the right message in time
• Helps marketers gain deeper insights into their customers
• Building a comprehensive profile involves collecting data from all customer interactions
• Enables the creation of personalized content and improved campaigns
• Facilitates the creation of innovative digital advertising using online data
References: Pangkey, Furkan, & Herman, 2019; Li et al., 2021; Zhao & Cai, 2021

Assist businesses
• Helps businesses understand their customers and deliver personalized experiences
• Companies can target using purchase history and social media data
• Plays a key role in optimizing ad performance
• Social media platforms use AI to automate ads and analyze performance
• Improves campaign performance by optimizing targeting and ad spending
References: Haleem & Javaid, 2019; Singh et al., 2019; Kaiyp & Alimanova, 2020; Ahmed & Ganapathy, 2021

Source: Haleem et al., 2022: pp. 124-127.


5. Examples of Artificial Intelligence in Marketing


A survey found that 46% of respondents reported that their interaction
with technology increased their trust in a brand and fostered a positive
perception. Alibaba has integrated artificial intelligence and smart clothing
labels into fashion retail by launching a store called ‘FashionAI’ in Hong Kong.
This system uses product-recognition tags and smart mirrors that suggest
complementary items, complete with garment descriptions. Alibaba’s next
goal is to allow customers to create a virtual wardrobe from clothes they’ve
tried on or touched while visiting the store. This innovative technology has
been developed in response to evolving consumer expectations (Norris,
2024).
Netflix offers personalized content recommendations through artificial
intelligence applications. The platform analyzes users’ viewing history,
preferences, and reactions to various series, documentaries, and films. This
AI-driven system processes billions of data transactions to recommend
content, forming a significant portion of the content users discover
(Pegasusone, 2025).
Starbucks has developed an AI strategy based on predictive analytics,
utilizing loyalty cards and mobile applications to collect and analyze
consumer data. As part of this strategy, announced in 2016, personalized
marketing messages and recommendations were sent to customers.
Additionally, voice-command ordering was integrated into the mobile app
to enhance the user experience. These AI applications contributed to the
company’s 11% annual revenue growth in 2018, compared to the previous
year (Macrotrends, 2025).
Unilever has leveraged artificial intelligence to align insights from
reference pools, such as popular media content and music, with food
consumption trends. Through these analyses, a connection was discovered
between breakfast and ice cream consumption. Seizing this trend as an
opportunity, Unilever developed cereal-flavored ice cream and the concept
of “Breakfast for Desserts,” which has since become an industry standard
(Spiceworks, 2019).

6. Conclusion
Innovations such as personalization, speech and image recognition,
chatbots, churn prediction, dynamic pricing, and customer insights powered
by artificial intelligence marketing are increasingly making traditional
marketing techniques less effective. While AIM (Artificial Intelligence
Marketing) technologies are rapidly reshaping marketing strategies and
business models, some traditional areas of market research may be replaced
by machines. This shift is leading to the elimination of jobs requiring fewer
technical skills and the emergence of new business areas demanding high
potential and advanced expertise. Today, AIM is not only transforming
marketing strategies but also influencing customer behavior. This change
marks a significant evolution in the marketing world, and it will be fascinating
to see how the field continues to evolve in the future.
Data analytics is one of the most significant benefits of AI in marketing.
This technology analyzes large volumes of data and provides marketers
with real and actionable insights. Artificial intelligence (AI) is emerging as
a crucial tool for enhancing the customer experience. For instance, a Harley-Davidson dealership in New York tripled its revenue and generated 2930%
more leads by using predictive analytics on an AI-based marketing platform,
highlighting the potential of AI in marketing (Anoop, 2021).
Several studies have highlighted the effectiveness of AI in enhancing customer experiences. For example, Nguyen and Sidorova (2018) demonstrated that AI-powered chatbots improve customer interactions. Gacanin and Wagner (2019) discussed the potential of AI and machine learning to generate significant business value, while addressing the challenges of implementing autonomous customer experience management. Chatterjee et al. (2019) emphasized how AI analyzes customer habits, buying behavior, and preferences to deliver personalized experiences. Seranmadevi and Kumar (2019) outlined AI's key role in customer relationship management and user interface applications. Sujata et al. (2019) noted the transformation of traditional stores into "smart stores" through AI applications, leading to supply chain efficiencies and enhanced customer experience. Sha and Rajeswari (2019) pointed out that AI-supported technologies in the e-commerce sector have strengthened consumer-brand relationships and product interactions by monitoring consumers' five senses. Maxwell et al. (2011) found that AI enhances marketing decision-making by improving the efficiency of data processing. Wisetsri et al. (2021) conducted a systematic review of the literature
on AI in marketing research. Their bibliometric analysis, which covered
more than 500 articles published between 1995 and 2020, highlighted the
key contributors, sources, and scientific actors in the field, and explored the
impact of AI on marketing. In their study, Davenport et al. (2020) proposed
a framework for understanding AI’s influence on marketing strategies and
customer behavior. They noted that while the short- and medium-term
impact of AI may be more limited, its effectiveness will increase when it
augments human managers rather than replacing them. Soni et al. (2020)
explored the impact of AI on business, offering a comprehensive perspective
from innovation and research to market adoption and future business model
changes. They identified two key drivers behind AI’s emergence as the
primary technology for over-automation and discussed the concept of the
“AI divide,” or the “dark side of AI.” Shahid and Li (2019) conducted a
qualitative study with marketing professionals from various companies to
emphasize the benefits of integrating AI into marketing strategies, while
also highlighting technical compliance as one of the biggest challenges in
this process. Overgoor et al. (2019) provided a detailed explanation of how
an industry-standard data mining framework can be applied to develop AI
solutions for marketing problems, supported by a compelling case study on
automated image scoring for digital marketing. Quasim and Chattopadhyay
(2015) explored various types of forecasting and artificial intelligence (AI)
techniques used in business forecasting, providing insights into promising
AI approaches for this field. Kim (2014) conducted in-depth interviews
with 20 marketing executives to examine the topology and characteristics of
big data marketing strategies, emphasizing the business implications of big
data analytics. Amado et al. (2018) evaluated the application of big data in
marketing, noting the growing interest in this area and urging companies
to enhance their efforts in developing big data capabilities. Özçelik and
Varnalı (2019) examined the psychological aspects and consumer behaviors
related to the effectiveness of customized online advertising using behavioral
targeting. They concluded that consumers’ promotional focus significantly
influences their perceptions of the informativeness and entertainment value of
tailored ads. Simon (2019) discussed the key trends in artificial intelligence,
highlighting the uncertainty surrounding demand from both business and
consumer sides, along with the legal, ethical, and socio-economic challenges
that may impede the widespread deployment of AI technologies.
Modern technology exists as a holistic system, with all its components
interconnected. It is impossible to embrace only the positive aspects of
technology while avoiding its negative consequences. As a powerful and
influential force, technology inherently presents trade-offs, often leading to
the gradual erosion of individual freedoms. In many instances, society is
compelled to adapt to these changes by integrating new technological tools
(Kaczynski, 2013).
AI data must be protected and assessed within its ethical context. As
advancements in artificial intelligence (AI) significantly transform marketing
strategies, the question of how these changes will fit within personal data
protection frameworks becomes increasingly critical. These regulations are
designed to strengthen data protection and give individuals greater control
over their personal information, which directly impacts AI-driven strategies
like targeted advertising, customer analytics, and personalized marketing.
AI’s capacity for large-scale data processing and automated decision-making
is an area that requires careful reassessment in the context of personal
data protection. Regulatory principles, such as data minimization, explicit
consent, and accountability for algorithmic decisions, require marketers to
make their AI-driven solutions more transparent and accountable. In this
context, ethical principles such as transparency, fairness, non-maleficence,
responsibility, and privacy will play a key role in shaping the adoption
and implementation of AI in marketing. For instance, fundamental ethical
requirements for AI usage include ensuring that algorithms are free from
bias and that consumer data is used responsibly. The level of ethical practice
in AI will depend on factors such as an individual’s awareness of data rights,
the ethical policies of companies, and the regulatory oversight mechanisms
in place. When these ethical principles are upheld, AI can evolve into
a trustworthy and sustainable marketing tool that benefits consumers,
businesses, and other stakeholders.
References
Ahmed, A. A. A., & Ganapathy, A. (2021). Creation of automated content with
embedded artificial intelligence: A study on learning management system
for educational entrepreneurship. Academy of Entrepreneurship Journal,
27(3), 1–10.
Alqahtani, H., Kavakli-Thorne, M., & Kumar, G. (2021). Applications of generative adversarial networks (GANs): An updated review. Archives of Computational Methods in Engineering, 28, 525–552. [Link]/s11831-019-09388-y
Arf, C. (1959). Makine düşünebilir mi ve nasıl düşünebilir? [Can a machine think, and how could it think?]. Atatürk Üniversitesi – Üniversite Çalışmalarını Muhite Yayma ve Halk Eğitimi Yayınları Konferanslar Serisi No: 1, 91–103. Erzurum.
Aladayleh, K. (2020). A framework for integration of artificial intelligence into
digital marketing in Jordanian commercial banks. Journal of Innovation in
Digital Marketing, 1(1), 22–27.
Alawaad, H. A. (2021). The role of artificial intelligence (AI) in public relations
and product marketing in modern organizations. Turkish Journal of Com-
puter and Mathematics Education (TURCOMAT), 12(14), 3180–3187.
Alyoshina, I. V. (2019). Artificial intelligence in an age of digital globalization. In Proceedings of the International Conference on Technology & Entrepreneurship in Digital Society (pp. 26–30). [Link]/teds-2019-26-30
Amado, A., Cortez, P., Rita, P., & Moro, S. (2018). Research trends on Big
Data in Marketing: A text mining and topic modeling based literature
analysis. European Research on Management and Business Economics, 24(1),
1–7. [Link]
Anoop, M. R. (2021). Artificial intelligence and marketing. 12(4), 1247–1256.
Antons, D., & Breidbach, C. F. (2018). Big Data, Big Insights? Advancing
service innovation and design with machine learning. Journal of Service
Research, 21(1), 17–39. [Link]
Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26(5), 655–672.
Bauer, J., & Jannach, D. (2018). Optimal pricing in e-commerce based on sparse and noisy data. Decision Support Systems, 106, 53–63. [Link]/10.1016/[Link].2017.12.002
Boddu, R. S. K., Santoki, A. A., Khurana, S., Koli, P. V., Rai, R., & Agrawal, A. (2022). An analysis to understand the role of machine learning, robotics and artificial intelligence in digital marketing. Materials Today: Proceedings, 56, 2288–2292.
Boz, H., & Kose, U. (2018). Emotion extraction from facial expressions by using artificial intelligence techniques. BRAIN. Broad Research in Artificial Intelligence and Neuroscience, 9(1), 5–16.
Brooks, R., Nguyen, D., Bhatti, A., Allender, S., Johnstone, M., Lim, C. P., & Backholer, K. (2022). Use of artificial intelligence to enable dark nudges by transnational food and beverage companies: Analysis of company documents. Public Health Nutrition, 1–9.
Brooks, T. (2022). Introduction. In T. Brooks (Ed.), Political Emotions. Palgrave Macmillan. [Link]
Buntak, K., Kovačić, M., & Mutavdžija, M. (2021). Application of artificial
intelligence in the business. International Journal for Quality Research,
15(2), 403–416. [Link]
Chatterjee, S., Chaudhuri, R., Vrontis, D., Thrassou, A., & Ghosh, S. K.
(2021). Adoption of artificial intelligence-integrated CRM systems in
agile organizations in India. Technological Forecasting and Social Change,
168, 120783.
Chatterjee, S., Ghosh, S. K., Chaudhuri, R., & Nguyen, B. (2019). Are CRM systems ready for AI integration?: A conceptual framework of organizational readiness for effective AI-CRM integration. Bottom Line, 32(2), 144–157. [Link]
Daqar, M. A. A., & Smoudy, A. K. (2019). The role of artificial intelligence on enhancing customer experience. International Review of Management and Marketing, 9(4), 22.
Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42. [Link]/s11747-019-00696-0
De Bruyn, A., Viswanathan, V., Beh, Y. S., Brock, J. K. U., & von Wangenheim, F. (2020). Artificial intelligence and marketing: Pitfalls and opportunities. Journal of Interactive Marketing, 51, 91–105. [Link]/10.1016/[Link].2020.04.007
Deggans, J., Krulicky, T., Kovacova, M., Valaskova, K., & Poliak, M. (2019). Cognitively enhanced products, output growth, and labor market changes: Will artificial intelligence replace workers by automating their jobs? Economics, Management, and Financial Markets, 14(1), 38–43.
Dekimpe, M. G. (2020). Retailing and retailing research in the age of big data
analytics. International Journal of Research in Marketing, 37(1), 3–14.
[Link]
Dubey, R., Bryde, D. J., Blome, C., Roubaud, D., & Giannakis, M. (2021). Facilitating artificial intelligence powered supply chain analytics through alliance management during the pandemic crises in the B2B context. Industrial Marketing Management, 96, 135–146.
Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., & Williams, M. D. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice, and policy. International Journal of Information Management, 57, 101994.
Dzyabura, D., & Hauser, J. R. (2019). Recommending products when consumers learn their preference weights. Marketing Science, 38(3), 365–541.
Ekramifard, A., Amintoosi, H., Seno, A. H., Dehghantanha, A., & Parizi, R. M. (2020). A systematic literature review of integration of blockchain and artificial intelligence. In K.-K. R. Choo, A. Dehghantanha, & R. M. Parizi (Eds.), Blockchain cybersecurity, trust and privacy (pp. 147–160). Advances in Information Security, Vol. 79. Springer. [Link]/10.1007/978-3-030-38181-3_8
Enholm, I. M., Papagiannidis, E., Mikalef, P., & Krogstie, J. (2021). Artificial
intelligence and business value: A literature review. Information Systems
Frontiers, 1(26).
Farrokhi, A., Shirazi, F., Hajli, N., & Tajvidi, M. (2020). Using artificial intelligence to detect crisis related to events: Decision making in B2B by artificial intelligence. Industrial Marketing Management, 91, 257–273.
Feng, C. M., Park, A., Pitt, L., Kietzmann, J., & Northey, G. (2021). Artificial intelligence in marketing: A bibliographic perspective. Australasian Marketing Journal, 29(3), 252–263.
Fish, K., & Ruby, P. (2009). An artificial intelligence foreign market screening method for small businesses. International Journal of Entrepreneurship, 13(1), 65–91.
Gacanin, H., & Wagner, M. (2019). Artificial intelligence paradigm for customer experience management in next-generation networks: Challenges and perspectives. IEEE Network, 33(2), 188–194. [Link]/MNET.2019.1800015
Gavilanes, J. M., Flatten, T. C., & Brettel, M. (2018). Content strategies for digital consumer engagement in social networks: Why advertising is an antecedent of engagement. Journal of Advertising, 47(1), 4–23. [Link]/10.1080/00913367.2017.1405751
Giroux, M., Kim, J., Lee, J. C., & Park, J. (2022). Artificial intelligence and
declined guilt: Retailing morality comparison between human and AI.
Journal of Business Ethics, 1(15).
Gkikas, D., & Theodoridis, P. (2022). AI in consumer behavior. In M. Virvou, G. A. Tsihrintzis, L. H. Tsoukalas, & L. C. Jain (Eds.), Advances in artificial intelligence-based technologies: Learning and analytics in intelligent systems (pp. 147–176). Springer. [Link]
Greenwald, T. (2011, October 13). How smart machines like iPhone 4S are quietly changing your industry. Forbes. [Link]/sites/tedgreenwald/2011/10/13/how-smart-machines-like-iphone-4s-are-quietly-changing-your-industry/
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial networks. Advances in Neural Information Processing Systems, 27. [Link]/10.1145/3422622
Grewal, D., Hulland, J., Kopalle, P. K., & Karahanna, E. (2020). The future of technology and marketing: A multidisciplinary perspective. Journal of the Academy of Marketing Science, 48(1), 1–8. [Link]/s11747-019-00711-4
Guo, J., Zhang, W., Fan, W., & Li, W. (2018). Combining geographical and social influences with deep learning for personalized point-of-interest recommendation. Journal of Management Information Systems, 35(4), 1121–1153. [Link]
Gupta, R., Nair, K., Mishra, M., Ibrahim, B., & Bhardwaj, S. (2024). Adoption and impacts of generative artificial intelligence: Theoretical underpinnings and research agenda. International Journal of Information Management Data Insights, 4(1), 100232. [Link]/jjimei.2024.100232
Güngör, H. (2020). Creating value with artificial intelligence: A multi-stakeholder perspective. Journal of Creating Value, 6(1), 72–85.
Gür, Y. E. (2022). Yapay zekâ ve pazarlama ilişkisi [The relationship between artificial intelligence and marketing]. Fırat Üniversitesi Uluslararası İktisadi ve İdari Bilimler Dergisi, 6(2), 131–148. [Link]/tr/pub/fuuiibfdergi/issue/74071/1120189
Haleem, A., & Javaid, M. (2019). Additive manufacturing applications in Industry 4.0: A review. Journal of Industrial Integration and Management, 4(4), 1930001.
Haleem, A., Javaid, M., Qadri, M. A., Singh, R. P., & Suman, R. (2022). Artificial intelligence (AI) applications for marketing: A literature-based study. International Journal of Intelligent Networks, 3, 119–132. [Link]/10.1016/[Link].2022.08.005
Han, R., Lam, H. K. S., Zhan, Y., Wang, Y., Dwivedi, Y. K., & Tan, K. H. (2021). Artificial intelligence in business-to-business marketing: A bibliometric analysis of current research status, development and future directions. Industrial Management and Data Systems, 121(12), 2467–2497. [Link]
Harmeling, C. M., Moffett, J. W., Arnold, M. J., & Carlson, B. D. (2017). Toward a theory of customer engagement marketing. Journal of the Academy of Marketing Science, 45(3), 312–335. [Link]/s11747-016-0509-2
He, M., Li, Z., Liu, C., Shi, D., & Tan, Z. (2020). Deployment of artificial intelligence in real-world practice: Opportunity and challenge. Asia-Pacific Journal of Ophthalmology, 9(4), 299–307. [Link]/APO.0000000000000301
Higgins, I., Matthey, L., Pal, A., Burgess, C. P., Glorot, X., Botvinick, M. M., & Mohamed, S. L. (2017). beta-VAE: Learning basic visual concepts with a constrained variational framework. ICLR (Poster).
Huang, M. H., & Rust, R. T. (2017). Technology-driven service strategy. Journal of the Academy of Marketing Science, 45(6), 906–924. [Link]/s11747-017-0545-6
Huang, M. H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50. [Link]
Jain, P., & Aggarwal, K. (2020). Transforming marketing with artificial intelligence. International Research Journal of Engineering and Technology, 7(7), 3964–3976. [Link]
Jarek, K., & Mazurek, G. (2019). Marketing and artificial intelligence. Central European Business Review, 8(2).
Jatobá, M., Santos, J., Gutierriz, I., Moscon, D., Fernandes, P. O., & Teixeira, J. P. (2019). Evolution of artificial intelligence research in human resources. Procedia Computer Science, 164, 137–142. [Link]/procs.2019.12.165
Javaid, M., Haleem, A., Singh, R. P., & Suman, R. (2022). Artificial intelli-
gence applications for Industry 4.0: A literature-based study. Journal of
Industrial Integration and Management, 7(1), 83–111.
Kaczynski, T. J. (2013). Sanayi toplumu ve geleceği [Industrial society and its future]. Kaos Yayınları.
Kaiyp, K., & Alimanova, M. (2020). Improving indicators of digital marketing using artificial intelligence. Suleyman Demirel University Bulletin: Natural and Technical Sciences, 52(1).
Khrais, L. T. (2020). Role of artificial intelligence in shaping consumer demand
in e-commerce. Future Internet, 12(12), 226.
Kietzmann, J., & Pitt, L. F. (2020). Artificial intelligence and machine learning:
What managers need to know. Business Horizons, 63(2), 131–133.
Kim, K. Y. (2014). Business intelligence and marketing insights in an era of big data: The q-sorting approach. KSII Transactions on Internet and Information Systems, 8(2), 567–582. [Link]
Kitsios, F., & Kamariotou, M. (2021). Artificial intelligence and business strategy towards digital transformation: A research agenda. Sustainability, 13. [Link]
Kumar, T. S. (2020). Data mining-based marketing decision support system
using hybrid machine learning algorithm. Journal of Artificial Intelligence,
2(3), 185–193.
Kumar, V., Rajan, B., Venkatesan, R., & Lecinski, J. (2019). Understanding
the role of artificial intelligence in personalized engagement marketing.
California Management Review, 61(4), 135–155.
Li, R., Cao, Z., Ye, H., & Yue, X. (2021). Application and development trend of artificial intelligence in enterprise marketing. Journal of Physics: Conference Series, 1881, 022032.
Lies, J. (2019). Marketing intelligence and big data: Digital marketing techniques on their way to becoming social engineering techniques in marketing. International Journal of Interactive Multimedia and Artificial Intelligence, 5(5).
Macrotrends Homepage. (2025). [Link]
SBUX/starbucks/revenue
Madanchian, M. (2024). Generative AI for consumer behavior prediction: Techniques and applications. Sustainability, 16, 9963. [Link]/10.3390/su16229963
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, 120, 262–273. [Link]
Martínez de Pisón, F. J., Urraca, R., Quintián, H., & Corchado, E. (Eds.). (2017). Hybrid artificial intelligent systems: Proceedings of the 12th international conference, HAIS 2017, La Rioja, Spain, June 21–23, 2017. Springer.
Martínez-López, F. J., & Casillas, J. (2013). Artificial intelligence-based systems applied in industrial marketing: An historical overview, current and future insights. Industrial Marketing Management, 42(4), 489–495.
Maxwell, A. L., Jeffrey, S. A., & Lévesque, M. (2011). Business angel early stage decision making. Journal of Business Venturing, 26(2), 212–225. [Link]/10.1016/[Link].2009.09.002
Mer, A., & Virdi, A. S. (2022). Artificial intelligence disruption on the brink of revolutionizing HR and marketing functions. In Impact of Artificial Intelligence on Organizational Transformation (pp. 1–19). [Link]/10.1002/9781119710301.ch1
Misra, K., Schwartz, E. M., & Abernethy, J. (2019). Dynamic online pricing with incomplete information using multiarmed bandit experiments. Marketing Science, 38(2), 226–252. [Link]
Moudud-Ul-Huq, S. (2014). The role of artificial intelligence in the development of accounting systems: A review. IUP Journal of Accounting Research & Audit Practices, 13(2), 7–19.
Mustak, M., Salminen, J., Plé, L., & Wirtz, J. (2021). Artificial intelligence in
marketing: Topic modeling, scientometric analysis, and research agenda.
Journal of Business Research, 124, 389–404.
Nalini, M., Radhakrishnan, D. P., Yogi, G., Santhiya, S., & Harivardhini, V.
(2021). Impact of artificial intelligence (AI) on marketing. International
Journal of Aquatic Science, 12(2), 3159–3167.
Nguyen, Q. N., & Sidorova, A. (2018). Understanding user interactions with a
chatbot: A self-determination theory approach. Emergent Research Forum
(ERF), American Conference in Information Systems, 1–5.
Norris, P. (2024). 26 impressive examples of AI in marketing. [Link]/10-examples-of-ai-in-marketing/
Olson, C., & Levy, J. (2018). Transforming marketing with artificial intelligence. Applied Marketing Analytics, 3(4), 291–297.
Overgoor, G., Chica, M., Rand, W., & Weishampel, A. (2019). Letting the computers take over: Using AI to solve marketing problems. California Management Review, 61(4), 156–185. [Link]/10.1177/0008125619859318
Ozcelik, A. B., & Varnali, K. (2019). Effectiveness of online behavioral targeting: A psychological perspective. Electronic Commerce Research and Applications, 33. [Link]
Pangkey, F. M., Furkan, L. M., & Herman, L. E. (2019). Pengaruh artificial intelligence dan digital marketing terhadap minat beli konsumen [The effect of artificial intelligence and digital marketing on consumer purchase intention]. Jurnal Magister Manajemen Unram, 8(3).
Paschen, J., Kietzmann, J., & Kietzmann, T. C. (2019). Artificial intelligence (AI) and its implications for market knowledge in B2B marketing. Journal of Business & Industrial Marketing.
Paschen, J., Paschen, U., Pala, E., & Kietzmann, J. (2021). Artificial intelligence (AI) and value co-creation in B2B sales: Activities, actors and resources. Australasian Marketing Journal, 29(3), 243–251. [Link]/10.1016/[Link].2020.06.004
Pegasusone homepage. (2025). [Link]/real-world-ai-use-case/
Pitt, C. S., Bal, A. S., & Plangger, K. (2020). New approaches to psychographic consumer segmentation: Exploring fine art collectors using artificial intelligence, automated text analysis and correspondence analysis. European Journal of Marketing, 54(2), 305–326. [Link]/EJM-01-2019-0083
Popova, E. A. (2017). Using artificial intelligence in marketing. European Science, 6, 62–63.
Prabowo, S. H. W., Murdiono, A., Hidayat, R., Rahayu, W. P., & Sutrisno,
S. (2019). Digital marketing optimization in artificial intelligence era by
applying consumer behavior algorithm. Asian Journal of Entrepreneurship
and Family Business, 3(1), 41–48.
Prosvetov, A. V. (2019). GAN for recommendation system. Journal of Physics: Conference Series, 1405(1). [Link]/1405/1/012005
Purwanto, P., Kuswandi, K., & Fatmah, F. (2020). Interactive applications with artificial intelligence: The role of trust among digital assistant users. Forsait, 14(2), 64–75. [Link]
Quasim, T., & Chattopadhyay, R. (2015). Artificial intelligence as a business
forecasting and error handling tool. International Journal of Advanced
Computer Technology, 4(2), 1534–1537.
Raiter, O. (2021). Segmentation of bank consumers for artificial intelligence
marketing. International Journal of Contemporary Financial Issues, 1(1),
39–54.
Rekha, A. G., Abdulla, M. S., & Asharaf, S. (2016). Artificial intelligence mar-
keting: An application of a novel lightly trained support vector data desc-
ription. Journal of Information & Optimization Sciences, 37(5), 681–691.
Rodgers, W., & Nguyen, T. (2022). Advertising benefits from ethical artificial
intelligence algorithmic purchase decision pathways. Journal of Business
Ethics, 1(19).
Ahmet Songur | 291

Rouhani, S., Ashrafi, A., Zare Ravasan, A., & Afshari, S. (2016). The impact
model of business intelligence on decision support and organizational
benefits. Journal of Enterprise Information Management, 29(1), 19–50.
[Link]
Rust, R. T., & Huang, M. H. (2014). The service revolution and the transfor-
mation of marketing science. Marketing Science, 33(2), 206–221.
Rutskiy, V., Mousavi, R., Chudopal, N., Amrani, Y. E., Everstova, V., &
Tsarev, R. (2021, October). Artificial intelligence as a disruptive tech-
nology for digital marketing. In Proceedings of the Computational Methods
in Systems and Software (pp. 895–900). Springer, Cham.
Sadriwala, M. F., & Sadriwala, K. F. (2022). Perceived usefulness and ease of
use of artificial intelligence on marketing innovation. International Jour-
nal of Innovation in the Digital Economy (IJIDE), 13(1), 1–10. [Link]
org/10.4018/IJIDE.292010
Sahai, S., & Goel, R. (2021). Impact of artificial intelligence in changing trends
of marketing. Applications of Artificial Intelligence in Business and Finance,
Modern Trends, 221.
Sajid, S., Haleem, A., Bahl, S., Javaid, M., Goyal, T., & Mittal, M. (2021). Data
science applications for predictive maintenance and materials science in
context to Industry 4.0. Materials Today: Proceedings, 45, 4898–4905.
Seranmadevi, R., & Senthil Kumar, A. (2019). Experiencing the AI emergence
in Indian retail – Early adopters approach. Management Science Letters,
9(1), 33–42. [Link]
Sha, S. N., & Rajeswari, M. (2019). Creating a brand value and consumer sat-
isfaction in e-commerce business using artificial intelligence. SSRN Elec-
tronic Journal. [Link]
Shah, D., & Shay, E. (2020). How and why artificial intelligence, mixed re-
ality, and blockchain technologies will change marketing we know
today. In Handbook of Advances in Marketing in an Era of Disrup-
tions: Essays in Honour of Jagdish N. Sheth (pp. 377–390). [Link]
org/10.4135/9789353287733.n32
Shahid, M. Z., & Li, G. (2019). Impact of artificial intelligence in marketing:
A perspective of marketing professionals of Pakistan. Global Journal of
Management And Business Research, 19(2). [Link]
[Link]/[Link]/GJMBR/article/view/2704
Sheth, J. (2019). How and why artificial intelligence, mixed reality, and block-
chain technologies will change marketing as we know today. In Essays in
Honour of Jagdish N. (pp. 377–390).
Shree, Krisha, P. (2024). Beyond boundaries: Examining the coming together
of AI and marketing. International Journal of Scientific Research in Enginee-
ring and Management, 8(1), 1. [Link]
292 | Artificial Intelligence Marketing (AIM): Digital Transformation and Consumer Behaviour

Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine
learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.
Rutskiy, V., Mousavi, R., Chudopal, N., Amrani, Y. E., Everstova, V., &
Tsarev, R. (2021, October). Artificial intelligence as a disruptive tech-
nology for digital marketing. In Proceedings of the Computational Methods
in Systems and Software (pp. 895–900). Springer, Cham.
Sadriwala, M. F., & Sadriwala, K. F. (2022). Perceived usefulness and ease of
use of artificial intelligence on marketing innovation. International Jour-
nal of Innovation in the Digital Economy (IJIDE), 13(1), 1–10. [Link]
org/10.4018/IJIDE.292010
Sahai, S., & Goel, R. (2021). Impact of artificial intelligence in changing trends
of marketing. Applications of Artificial Intelligence in Business and Finance,
Modern Trends, 221.
Sajid, S., Haleem, A., Bahl, S., Javaid, M., Goyal, T., & Mittal, M. (2021). Data
science applications for predictive maintenance and materials science in
context to Industry 4.0. Materials Today: Proceedings, 45, 4898–4905.
Seranmadevi, R., & Senthil Kumar, A. (2019). Experiencing the AI emergence
in Indian retail – Early adopters approach. Management Science Letters,
9(1), 33–42. [Link]
Sha, S. N., & Rajeswari, M. (2019). Creating a brand value and consumer sat-
isfaction in e-commerce business using artificial intelligence. SSRN Elec-
tronic Journal. [Link]
Shah, D., & Shay, E. (2020). How and why artificial intelligence, mixed re-
ality, and blockchain technologies will change marketing we know
today. In Handbook of Advances in Marketing in an Era of Disrup-
tions: Essays in Honour of Jagdish N. Sheth (pp. 377–390). [Link]
org/10.4135/9789353287733.n32
Shahid, M. Z., & Li, G. (2019). Impact of artificial intelligence in marketing:
A perspective of marketing professionals of Pakistan. Global Journal of
Management And Business Research, 19(2). [Link]
[Link]/[Link]/GJMBR/article/view/2704
Sheth, J. (2019). How and why artificial intelligence, mixed reality, and block-
chain technologies will change marketing as we know today. In Essays in
Honour of Jagdish N. (pp. 377–390).
Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine
learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.
Tan, J., Cherkauer, K. A., & Chaubey, I. (2016). Developing a comprehensive
spectral-biogeochemical database of midwestern rivers for water quality
retrieval using remote sensing data: A case study of the Wabash River
and its tributary, Indiana. Remote Sensing, 8(6). [Link]
rs8060517
Ahmet Songur | 293

Tan, T. F., & Ko, C. H. (2016). Application of artificial intelligence to cross-


screen marketing: A case study of AI technology company. In Proceedings
of the 2nd International Conference on Artificial Intelligence and Industrial
Engineering (pp. 517–519).
Tanase, C., & Cosmin, G. (2018). Artificial intelligence: Optimizing the expe-
rience of digital marketing. Romanian Distribution Committee Magazine,
9(1), 24–28. [Link]
Tchelidze, L. (2019). Potential and skill requirements of artificial intelligence in
digital marketing. Calitatea, 20(S3), 73–78.
Thiraviyam, T. (2018). Artificial intelligence marketing. International Journal of
Recent Research Aspects, 19(4), 449–452.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 49, 433-
460. [Link]
Rutskiy, V., Mousavi, R., Chudopal, N., Amrani, Y. E., Everstova, V., &
Tsarev, R. (2021). Artificial intelligence as a disruptive technology for
digital marketing. In Proceedings of the Computational Methods in Systems
and Software (pp. 895–900). Springer.
Vetterli, C., Uebernickel, F., Brenner, W., Petrie, C., & Stermann, D. (2016).
How Deutsche Bank’s IT division used design thinking to achieve cus-
tomer proximity. MIS Quarterly Executive, 15(1), 37–53.
Vishnoi, S. K., Bagga, T., Sharma, A., & Wani, S. N. (2018). Artificial intelli-
gence-enabled marketing solutions: A review. Indian Journal of Economics
& Business, 17(4), 167–177.
Vlačić, B., Corbo, L., Costa e Silva, S., & Dabić, M. (2021). The evolving
role of artificial intelligence in marketing: A review and research agenda.
Journal of Business Research, 128(February 2021), 187–203. [Link]
org/10.1016/[Link].2021.01.055
Vrontis, D., Christofi, M., Pereira, V., Tarba, S., Makrides, A., & Trichina, E.
(2022). Artificial intelligence, robotics, advanced technologies and hu-
man resource management: A systematic review. International Journal of
Human Resource Management, 33(6), 1237–1266.
Wei, R., Garcia, C., El-Sayed, A., Peterson, V., & Mahmood, A. (2020). Varia-
tions in variational autoencoders - A comparative evaluation. IEEE Access,
8, 153651–153670. [Link]
Wikipedia. (2025). Yapay zekâ. [Link]
Wirth, N. (2018). Hello marketing, what can artificial intelligence help you
with? International Journal of Market Research, 60(5), 435–438. https://
[Link]/10.1177/1470785318776841
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., &
Martins, A. (2018). Brave new world: Service robots in the frontline.
Journal of Service Management, 29(5), 907–931.
294 | Artificial Intelligence Marketing (AIM): Digital Transformation and Consumer Behaviour

Wisetsri Worakamol, Ragesh T., Catherene Julie Aarthy, Vibha Thakur, & Di-
gesh Pandey, K. G. (2021). Systematic analysis and future research dire-
ctions in artificial intelligence for marketing. Turkish Journal of Computer
and Mathematics Education (TURCOMAT), 12(11), 43–55. [Link]
org/10.17762/turcomat.v12i11.5825
Yablonsky, S. A. (2019). Multidimensional data-driven artificial intelligence in-
novation. Technology Innovation Management Review, 9(12), 16–28. htt-
ps://[Link]/10.22215/timreview/1288
Yang, Y., Liu, Y., Lv, X., Ai, J., & Li, Y. (2022). Anthropomorphism and cus-
tomers’ willingness to use artificial intelligence service agents. Journal of
Hospitality Marketing and Management, 31(1), 1–23. [Link]
1080/19368623.2021.1926037
Yawalkar, M. V. V. (2019). A study of artificial intelligence and its role in hu-
man resource management. International Journal of Research and Analyti-
cal Reviews (IJRAR), 6(1), 20–24.
Yenduri, G., Ramalingam, M., Selvi, G. C., Supriya, Y., Srivastava, G., Mad-
dikunta, P. K. R., Raj, G. D., Jhaveri, R. H., Prabadevi, B., Wang,
W., Vasilakos, A. V., & Gadekallu, T. R. (2024). GPT (Generative
Pre-Trained Transformer) - A comprehensive review on enabling tech-
nologies, potential applications, emerging challenges, and future direc-
tions. IEEE Access, 12(March), 54608–54649. [Link]
ACCESS.2024.3389497
Yoon, J. H., & Jang, B. (2023). Evolution of deep learning-based sequen-
tial recommender systems: From current trends to new perspecti-
ves. IEEE Access, 11(May), 54265–54279. [Link]
ACCESS.2023.3281981
Zhao, R., & Cai, Y. (2021). Research on online marketing effects based on mul-
ti-model fusion and artificial intelligence algorithms. Journal of Ambient
Intelligence and Humanized Computing, 1(17).
Chapter 16

Negative Effects of Artificial Intelligence on Human Creativity Ability

Sibel Aydoğan1

Abstract
Artificial Intelligence (AI) is increasingly integrated into creativity and
innovation processes in the modern world. However, concerns have been
raised regarding its effects on human creativity. The automated content
generation provided by AI, its guidance in problem-solving processes, and its
facilitation of artistic production may negatively impact individuals’ creative
thinking capacities (Carr, 2020). By generating content through big data
analysis and algorithms, AI may restrict human creativity. Particularly in the
fields of art, writing, and design, the widespread use of AI-based tools may
diminish individuals’ abilities to generate original ideas. Some studies indicate
that individuals may become excessively dependent on AI suggestions, thereby
relegating their own creative processes to a secondary position (Kowalski,
2021). This phenomenon may lead to a decline in people’s creative problem-
solving skills and a reduction in innovative thinking.
Moreover, the tendency of AI-generated content to become homogenized
may result in a decrease in artistic and cultural diversity. AI systems learn
from past data to produce content, which can confine creative processes
within the patterns of the past (Smith & Anderson, 2022). One of the
fundamental elements of creativity, individual and societal originality, may be
compromised due to AI’s repetitive nature.
Finally, considering AI’s impact on problem-solving processes, it is suggested
that individuals’ critical thinking skills may deteriorate over time. The ability
of AI to provide fast and accurate solutions may weaken people’s habits of
inquiry and reduce their capacity to develop innovative solutions (McCarthy,
2023). In this context, AI is emphasized not as a tool that supports creative
processes but as a factor that may constrain them.

1 Assoc. Prof. Dr., Marmara University, Faculty of Business Administration, Department of Business Administration, Marketing Division, saydogan@[Link], ORCID ID: 0000-0002-4870-1901

1. The Concept of Creativity and Human Creativity Ability


Creativity is defined as the capacity of individuals to generate new and
original ideas, solve problems, and develop innovative solutions (Runco &
Jaeger, 2012; Kaufman & Sternberg, 2010). Traditionally, human creativity
has been associated with insight, experience, emotional intelligence, and
conscious problem-solving processes (Sternberg & Lubart, 1995). However,
advancements in artificial intelligence (AI) have reached a level where
human intervention in creative processes may be reduced (McCormack &
d’Inverno, 2012).
Human creativity is shaped by cognitive flexibility, experience, and
sensory inputs (Amabile, 1996). Creativity serves as the foundation of
innovation in various fields, including art, science, engineering, and
business (Csikszentmihalyi, 1996). However, with the increasing influence
of AI, the nature of creativity and human contribution is being questioned.
Particularly, as AI is increasingly utilized in creative production processes,
concerns have arisen regarding how individual creativity will be shaped in
the future (Boden, 2009).
Creativity has become a shared research area among disciplines such
as cognitive sciences, psychology, neuroscience, and education sciences.
Generally, creativity refers to an individual’s capacity to generate innovative
solutions within a specific context. Sternberg and Lubart (1999) consider
creativity as a multidimensional phenomenon, emphasizing that cognitive
processes, personal traits, and environmental factors contribute to this process.
Guilford (1950) defined creativity as “divergent thinking,” highlighting
the importance of individuals’ ability to think outside the norm, generate
diverse ideas, and approach problems from multiple perspectives. Torrance (1966) developed a test that evaluates creativity along four dimensions: fluency (the ability to generate many ideas), flexibility, originality, and elaboration.
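These dimensions can be operationalized quite directly, which helps clarify what such tests actually measure. The following short Python sketch is an illustration only, not the official TTCT procedure (which relies on trained raters and normed scoring tables); the task, the category labels, and the norm sample below are hypothetical.

from collections import Counter

def torrance_style_scores(responses, categories, norm_counts, norm_size):
    # Fluency: how many ideas were produced.
    fluency = len(responses)
    # Flexibility: how many distinct conceptual categories the ideas span.
    flexibility = len({categories.get(r, "other") for r in responses})
    # Originality: ideas that are rare in the norm sample earn more credit.
    originality = sum(1.0 - norm_counts.get(r, 0) / norm_size for r in responses)
    return {"fluency": fluency, "flexibility": flexibility,
            "originality": round(originality, 2)}

# Hypothetical norm data for the task "list unusual uses for a brick".
norms = Counter({"doorstop": 80, "paperweight": 60, "bookend": 40})
answers = ["doorstop", "garden sculpture base", "heat store for a pizza oven"]
kinds = {"doorstop": "weight", "garden sculpture base": "display",
         "heat store for a pizza oven": "thermal"}

print(torrance_style_scores(answers, kinds, norms, norm_size=100))
# Prints: {'fluency': 3, 'flexibility': 3, 'originality': 2.2}

Elaboration, the fourth dimension, is harder to automate in this way because it concerns how much detail an idea develops rather than how many ideas appear.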
The association theory developed by Mednick (1962) posits that creative individuals are better at forming remote associations, which enhances their problem-solving ability; in Mednick's Remote Associates Test, for example, respondents must find the single word that links three cues such as "cottage, Swiss, cake" (answer: cheese). These theories indicate that creativity is not merely an individual trait but is also shaped by environmental and cognitive factors.

1.1. The Cognitive Foundations of Creativity


When examining the cognitive processes underlying creativity, it becomes
evident that creativity is closely related to memory, problem-solving, and
association mechanisms. The “geneplore model” proposed by Finke,
Ward, and Smith (1992) suggests that creative processes are linked to the
restructuring of mental representations.
The creative thinking process is generally associated with two
fundamental thinking styles: divergent thinking and convergent thinking
(Guilford, 1950). Divergent thinking involves generating multiple different
ideas, while convergent thinking refers to the process of refining these ideas
into the most effective solution. Baer (1993) argues that creative individuals
effectively utilize both thinking styles to produce innovative solutions.

1.2. The Neuroscientific Foundations of Creativity


Recent neuroscience studies have demonstrated that creativity is
associated with specific brain regions. A study conducted by Beaty, Benedek,
Silvia, and Schacter (2016) revealed that creativity is linked to the prefrontal
cortex, posterior cingulate cortex, and the default mode network (DMN).
Neuroimaging studies indicate that the prefrontal cortex plays an active
role in creative thinking and enhances cognitive flexibility in problem-solving
processes (Jung et al., 2013). Specifically, the right prefrontal cortex has
been found to be effective in generating metaphors and connecting remote
associations (Abraham, 2013).
Furthermore, neurotransmitter systems are significant biological factors
influencing creativity. For example, higher dopamine levels have been
observed to enhance creative performance (Chermahini & Hommel, 2012).

1.3. Psychological Factors and Personality Traits


Psychological research has shown that creativity is linked to specific
personality traits. According to the Five-Factor Personality Model developed
by Costa and McCrae (1992), individuals with high “openness to experience”
scores tend to be more creative.
Csikszentmihalyi (1996) identified the “flow experience” as a
psychological factor that enhances creativity. This concept refers to a mental
state in which an individual becomes fully immersed in an activity, losing
track of time. Creative individuals enter the flow state more easily and
exhibit high motivation during this process.
Additionally, stress, anxiety, and psychological pressure have been shown
to negatively affect creative thinking. Amabile (1996) argues that external
rewards can suppress the creative process, and intrinsic motivation is a crucial
factor in fostering creativity.
Human creativity is a complex ability shaped by cognitive processes,
neuroscientific mechanisms, psychological factors, and environmental
influences. Academic and scientific research suggests that creativity can be
developed through both individual and environmental factors. Adopting
strategies that encourage creative thinking in education can enhance
individuals’ capacity to generate innovative solutions, contributing to
societal progress.

2. Factors That Negatively Affect Creativity in Artificial Intelligence
Artificial intelligence (AI) refers to technologies developed to assist
human cognitive processes, solve problems, and enhance productivity
(Russell & Norvig, 2020). However, the increasing application of AI in
creative fields has sparked debates on its potential negative impact on human
creativity (Boden, 2004).
While some researchers argue that AI can support creative processes,
others contend that it may weaken human capacity for original thinking and
innovation (Autor, 2023; Brynjolfsson & McAfee, 2017; Kaplan & Haenlein,
2019; Bostrom, 2014). Although this study focuses on the negative effects
of AI on human creativity, it is also important to acknowledge research
suggesting that AI can support creativity rather than harm it (Florida, 2002).
Some studies argue that AI’s ability to handle repetitive and time-consuming
tasks may allow humans to focus more on creative processes (Smith, 2021).
However, despite such optimistic perspectives, a substantial body of academic work argues that AI could have detrimental effects on human creativity (Carr, 2020; Müller, 2021).
AI’s impact on creative processes and its long-term effects on human
innovation capacity are being increasingly examined. The following sections
explore AI’s negative effects on human creativity from different perspectives.
2.1. Encouraging Cognitive Laziness

AI automation can disengage individuals from problem-solving and thinking processes (Carr, 2010; Kahneman, 2011; Sparrow, Liu & Wegner, 2011). People may avoid complex cognitive processes and prefer ready-made solutions over creative thinking (Nickerson, 1999). Furthermore, the easy access to information that AI provides may promote superficial learning instead of deep understanding (Ward, 2007), limiting individuals' analytical and critical thinking abilities.
2.2. Reduction in Individual Originality and Diversity

AI operates by analyzing large datasets and following established patterns, often leading to repetitive and predictable creative outputs (Boden, 2004; Shneiderman, 2007; Miller, 2019). The widespread use of AI in digital content creation may reduce artistic diversity and individuality (McLuhan, 1964). Particularly in literature and art, increasing reliance on AI may diminish originality and diversity (Manovich, 2013).
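This homogenization claim can at least be probed empirically. As a rough, illustrative measure (not one used in the studies cited above), the average pairwise cosine similarity of bag-of-words vectors indicates how interchangeable a set of generated texts is: values near 1 suggest near-duplicate output, values near 0 suggest diverse output. The sample texts below are invented for the example.

import math
from collections import Counter
from itertools import combinations

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(texts):
    # Higher values mean the texts repeat one another more closely.
    bags = [Counter(t.lower().split()) for t in texts]
    pairs = list(combinations(bags, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Invented outputs from a generator prompted three times with the same brief.
samples = [
    "the sun sets over the quiet sea in golden light",
    "the sun sets over the calm sea in golden light",
    "the sun sinks over the quiet sea in golden light",
]
print(round(mean_pairwise_similarity(samples), 2))  # close to 1: homogeneous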

2.3. Creative Dependency in Human-AI Collaboration


AI can be used as an assistant in creative processes. However, this
collaboration may gradually transform into dependency, weakening
individuals’ capacity to generate original content (Smith & Anderson, 2019;
Colton & Wiggins, 2012). In the production of art, music, and written
content, the active role of humans is being replaced by an increasingly guiding
role of AI (Boden, 2010). Particularly in the media and entertainment
industries, the use of artificial intelligence is causing traditional creative
processes to be replaced by algorithms (McLuhan, 1964). This situation
could lead to issues in employment, intellectual property rights, and ethics.

2.4. Ethical and Ownership Issues


The ownership of content produced by AI brings about ethical and
legal concerns. The degree of originality of works created by AI and the
human contribution involved are subjects of debate (Floridi & Sanders,
2004; Gunkel, 2020). Furthermore, AI-generated content may create economic difficulties for artists and writers (Lessig, 2004; Zittrain, 2008). Legal uncertainties persist
regarding the ownership of AI-generated works, and this situation could
adversely affect the creative industries (Samuelson, 2019).

2.5. Threats to Originality and Individuality


The development of AI in fields such as art, music, literature, and design
may standardize creative production, thereby diminishing individuality
and originality (Boden, 2004). For instance, AI-supported software and
algorithms can generate new content based on data-driven predictions;
however, since these contents are often combinations of past data, their level
of originality is limited (Colton & Wiggins, 2012).

2.6. Weakening of Cognitive Processes that Support Human Creativity
The assumption of creative tasks by AI can lead to the deterioration
of individuals’ problem-solving, critical thinking, and innovative idea
generation skills (Carr, 2020). For example, AI models that automatically
produce content may lead individuals to turn to pre-made content rather
than formulating their own ideas (Müller, 2021).

2.7. Commercialization and Homogenization of Creativity


The widespread use of AI can accelerate production processes in art
and design, but it may also lead to the creation of content that conforms
to specific patterns in order to increase its marketability (Elgammal et al.,
2017). This may result in the prominence of repetitive and commercially
viable content, rather than originality, in art and design (Manovich, 2018).

2.8. Decreased Human Involvement and the Passivization of Creativity
The integration of AI into creative processes may lead to the increasing
passivization of human creativity. For example, software that generates
content automatically may reduce individuals’ direct participation in creative
processes, resulting in creative experiences becoming superficial (Boden,
2018).

2.9. Loss of Depth and Meaning in Art and Cultural Production


Content produced by AI is often data-driven and superficial, lacking
human experience and emotions (Chollet, 2019). This may lead to a reduction
in the depth of meaning in artistic production and the mechanization of
cultural values (Guzdial et al., 2022).

3. Examples from Different Fields


To better understand and concretize the negative effects of artificial
intelligence on human creativity, it will be useful to provide examples from
different fields. Below are some of the negative examples categorized by
industry.
Academic and Literary Content Production: AI-based text generation
systems weaken originality by influencing academic and creative writing
(Bender et al., 2021). Systems like ChatGPT can automate the production
of knowledge, making the creative process mechanical (Marcus & Davis,
2019). In particular, the production of academic work using AI could
negatively affect scientific creativity and lead to new debates regarding
research ethics (Floridi, 2021).
The Impact of AI on Advertising and Marketing: AI-supported
advertising and content production have changed creative decision-making
processes in the marketing industry (Davenport & Ronanki, 2018).
Advertisement strategies driven by algorithms have led to a decrease in
original marketing campaigns (Huang & Rust, 2021). This situation may
reduce the role of creative professionals in marketing and advertising and
lead to the widespread use of standardized content (Kaplan & Haenlein, 2019).
Standardization in Literature and Content Production: AI-supported
writing tools such as GPT-4 and Jasper AI can generate novels and poems.
However, these tools follow existing patterns rather than creative thinking,
relying on large datasets. For instance, in a Japanese literary competition
held in 2021, a science fiction novel written by AI was noted for being
“devoid of creativity” (Sugimoto et al., 2021).
Decreased Innovation in Fashion and Design Industries: AI-
supported fashion design platforms, such as Google’s DeepFashion AI,
often produce designs that repeat past trends, thus reducing individuality
(Kim & Park, 2019).
Homogenization in Art and Visual Design: In AI-supported art
production, the originality and human contribution are questioned
(Elgammal et al., 2017). For example, paintings and music produced by AI
change the role of the artist and lead to the mechanization of the creative
process (McCormack et al., 2019). AI-supported art production platforms
(such as DeepDream, DALL·E, and MidJourney) create works by imitating
specific artistic styles. In 2022, when an artwork created by AI won first
place at the Colorado State Fair, artists argued that creativity was under
threat (Vincent, 2022).
Loss of Originality in Music Production: AI systems like Aiva and
Jukebox (OpenAI) can compose music without human intervention.
However, these systems can stifle innovation by generating new songs based
on the analysis of previous compositions (Hertzmann, 2020). The growing
adoption of AI-generated music and visual art increasingly complicates
competition for artists and threatens artistic originality (Boden, 2010).
Use of Artificial Intelligence in Film and Scriptwriting: Production
companies such as Netflix and Warner Bros. are testing AI-supported script
analysis systems. However, these systems may limit creativity by repeating
successful formulas (Shaw, 2020).

Conclusion and Recommendations


The role of AI in creative processes should be addressed in a balanced
way, and policies should be developed to preserve human creative potential.
Educational systems must be restructured to promote critical thinking and
creative problem-solving skills. Furthermore, ethical and legal regulations
should be clarified regarding AI-supported content production (Brynjolfsson
et al., 2018). The use of AI as a supportive tool in creative processes should
be regulated in such a way that it does not hinder human creativity.
The effects of artificial intelligence on human creativity are a subject that
needs to be addressed from both positive and negative perspectives. However,
the existing literature reveals that AI has developed various mechanisms that
threaten human creativity. The following recommendations can be made to
mitigate these negative effects:
• AI should only be integrated into human creative processes as a
supportive tool,
• Emphasis should be placed on critical thinking and problem-solving
skills in creativity education,
• Ethical guidelines should be established to preserve human creativity
in fields such as art, design, and literature.

References
Abraham, A. (2013). The brain and creativity. Behavioral and Brain Sciences, 36(3), 247-249. [Link]
Amabile, T. M. (1996). Creativity in context. Westview Press.
Autor, D. (2023). The work of the future: Building better jobs in an age of intelligent machines. MIT Press.
Baer, J. (1993). Creativity and divergent thinking: A task-specific approach. Lawrence Erlbaum Associates.
Beaty, R. E., Benedek, M., Silvia, P. J., & Schacter, D. L. (2016). Creative cognition and the brain: A latent variable approach. NeuroImage, 128, 135-145. [Link]
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623. [Link]
Boden, M. A. (2004). The creative mind: Myths and mechanisms. Routledge.
Boden, M. A. (2009). The creative mind: Myths and mechanisms (2nd ed.). Routledge.
Boden, M. A. (2010). Creativity and artificial intelligence: The rise of the machines. In R. K. M. Lee & G. M. G. Henson (Eds.), Creativity and AI (pp. 55-78). Springer.
Boden, M. A. (2018). Artificial intelligence and creativity: A cautious approach. Artificial Intelligence, 264, 1-7.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Brynjolfsson, E., & McAfee, A. (2017). Machine, platform, crowd: Harnessing our digital future. W. W. Norton & Company.
Brynjolfsson, E., Hui, X., & Smith, M. D. (2018). Artificial intelligence in the workplace: The impact on labor and productivity. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The economics of artificial intelligence: An agenda. University of Chicago Press.
Carr, N. (2010). The shallows: What the Internet is doing to our brains. W. W. Norton & Company.
Carr, N. (2020). The shallows: What the Internet is doing to our brains. W. W. Norton & Company.
Chermahini, S. A., & Hommel, B. (2012). Dopamine and creativity: Evidence from the dopamine receptor D2 gene. Psychopharmacology, 223(3), 345-352. [Link]
Chollet, F. (2019). Deep learning with Python. Manning Publications.
Colton, S., & Wiggins, G. A. (2012). Computational creativity: The final frontier? Proceedings of the 20th European Conference on Artificial Intelligence, 21(1), 21-26.
Costa, P. T., & McCrae, R. R. (1992). The Revised NEO Personality Inventory (NEO-PI-R). In P. T. Costa & T. A. Widiger (Eds.), Personality disorders and the five-factor model of personality (pp. 179-200). American Psychological Association.
Csikszentmihalyi, M. (1996). Creativity: Flow and the psychology of discovery and invention. HarperCollins.
Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108-116. [Link]
Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative adversarial networks generating "art" by learning about styles and deviating from style norms. arXiv preprint arXiv:1706.07068.
Finke, R. A., Ward, T. B., & Smith, S. M. (1992). Creative cognition: Theory, research, and applications. MIT Press.
Florida, R. (2002). The rise of the creative class. Basic Books.
Floridi, L. (2021). The ethics of artificial intelligence. Oxford University Press.
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379. [Link]
Guilford, J. P. (1950). Creativity. American Psychologist, 5(9), 444-454. [Link]
Gunkel, D. J. (2020). How to survive the digital apocalypse. Routledge.
Guzdial, M., Liao, N., & Riedl, M. O. (2022). AI and storytelling: A new frontier. Artificial Intelligence Journal, 307, 103-123.
Hertzmann, A. (2020). Can computers create art? Arts, 9(3), 87. [Link]
Huang, M.-H., & Rust, R. T. (2021). AI in service: Advances and future directions. Journal of Service Research, 24(1), 3-25. [Link]
Jung, R. E., Mead, B. S., Carrasco, M., & Flores, L. E. (2013). The neural correlates of creative thinking. Neuropsychology, Development, and Cognition, 19(5), 599-611. [Link]
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kaplan, A., & Haenlein, M. (2019). Rethinking AI: How artificial intelligence will revolutionize the future. Business Horizons.
Kaufman, J. C., & Sternberg, R. J. (2010). The Cambridge handbook of creativity. Cambridge University Press.
Kim, J., & Park, J. (2019). AI in the fashion industry: Impacts on creativity and innovation. Fashion and Textile Studies, 7(1), 34-52. [Link]
Kowalski, J. (2021). The impact of AI on creativity: How dependence on artificial intelligence affects the creative process. Journal of Artificial Intelligence and Creativity, 15(3), 45-59. [Link]
Lessig, L. (2004). Free culture: How big media uses technology and the law to lock down culture and control creativity. Penguin Press.
Manovich, L. (2013). Software takes command. Bloomsbury Academic.
Manovich, L. (2018). AI aesthetics. Strelka Press.
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can trust. Pantheon Books.
McCarthy, T. (2023). The impact of AI on problem-solving and innovation: How rapid solutions may hinder critical thinking. Journal of Technology and Innovation, 19(4), 210-225. [Link]
McCormack, J., & d'Inverno, M. (2012). Computers and creativity. Springer.
McCormack, J., Gifford, T., & Hutchings, P. (2019). Autonomy, authenticity, authorship and intention in computer generated art. In Proceedings of the ICCC 2019: 11th International Conference on Computational Creativity (pp. 191-198). [Link]
McLuhan, M. (1964). Understanding media: The extensions of man. MIT Press.
Mednick, S. A. (1962). The associative basis of the creative process. Psychological Review, 69(3), 220-232. [Link]
Miller, A. (2019). Artificial intelligence and creativity: The risks of predictable thinking. Journal of Digital Innovation, 12(4), 245-258. [Link]
Müller, V. C. (2021). Ethics of artificial intelligence and robotics. Stanford Encyclopedia of Philosophy.
Nickerson, R. S. (1999). Enhancing creativity. In M. A. Runco & S. R. Pritzker (Eds.), The encyclopedia of creativity (Vol. 1, pp. 631-635). Academic Press.
Runco, M. A., & Jaeger, G. J. (2012). The standard definition of creativity. Creativity Research Journal, 24(1), 92-96.
Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach. Pearson.
Samuelson, P. (2019). The legal status of AI-generated works and their implications for copyright law. Journal of Intellectual Property, 27(4), 469-485. [Link]
Shaw, D. (2020). Hollywood and AI: The role of machine learning in scriptwriting. Entertainment Research Journal, 25(4), 78-96. [Link]
Shneiderman, B. (2007). Creativity support tools: Accelerating discovery and innovation. Communications of the ACM, 50(12), 20-32.
Smith, A., & Anderson, B. (2019). Artificial intelligence and creativity: Challenges and opportunities. Journal of Technology and Innovation, 18(3), 212-223. [Link]
Smith, A., & Anderson, B. (2022). The homogenization of content in AI-generated creations: The impact on artistic and cultural diversity. Journal of Digital Arts and Culture, 8(2), 112-126. [Link]
Smith, E. (2021). The role of AI in creative industries. Journal of Digital Creativity, 12(3), 45-67.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776-778. [Link]
Sternberg, R. J., & Lubart, T. I. (1995). Defying the crowd: Cultivating creativity in a culture of conformity. Cambridge University Press.
Sternberg, R. J., & Lubart, T. I. (1999). The concept of creativity: Prospects and paradigms. In R. J. Sternberg (Ed.), Handbook of creativity (pp. 3-15). Cambridge University Press.
Sugimoto, C. R., Larivière, V., Ni, C., & Cronin, B. (2021). AI in literature: The rise of machine-generated storytelling. Journal of Writing Studies, 34(2), 145-161.
Torrance, E. P. (1966). The Torrance tests of creative thinking. Personnel Press.
Vincent, J. (2022). AI-generated art won a state fair competition, and artists are furious. The Verge. [Link]
Ward, T. B. (2007). Creative cognition and the role of knowledge. In M. A. Runco & S. R. Pritzker (Eds.), Encyclopedia of creativity (Vol. 2, pp. 385-392). Elsevier.
Zittrain, J. (2008). The future of the Internet and how to stop it. Yale University Press.
