Ethical and legal challenges of AI in marketing: an exploration of solutions

Dinesh Kumar
Mittal School of Business, Faculty of Business and Arts, Lovely Professional University, Phagwara, India, and

Nidhi Suthar
Department of Administration, Pomento IT Services, Hisar, India

Received 17 May 2023; Revised 8 October 2023; Accepted 11 December 2023
Abstract
Purpose – Artificial intelligence (AI) has sparked interest in various areas, including marketing. However,
this exhilaration is being tempered by growing concerns about the moral and legal implications of using AI in
marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic
discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by
investigating AI’s ethical and legal concerns in marketing and suggesting feasible solutions.
Design/methodology/approach – The paper synthesises information from academic articles, industry
reports, case studies and legal documents through a thematic literature review. A qualitative analysis
approach categorises and interprets ethical and legal challenges and proposes potential solutions.
Findings – The findings of this paper raise concerns about ethical and legal challenges related to AI in the marketing area. Ethical concerns related to discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact and privacy, as well as legal issues such as consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights, are discussed in the paper along with their potential solutions.
Research limitations/implications – Notwithstanding the interesting insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the focus of this study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing. Additional possible repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated more deeply in future studies. Although this study offers various answers and best practices for tackling the stated ethical and legal concerns, the viability and efficacy of these solutions may differ depending on the context and industry. Thus, more research and case studies are required to evaluate the applicability and efficacy of these solutions in other circumstances. This research is mostly based on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing. Further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to offer a fuller understanding of the practical difficulties and solutions. Because of the rapid pace of technical progress, AI’s ethical and regulatory ramifications in marketing are continually evolving. Consequently, this work should be a springboard for more research and continuing conversations on this subject.
Funding and/or conflicts of interests/competing interests: The authors declare that there is no conflict of interest associated with the publication of this paper. The authors did not receive any financial support or funding that could potentially create a conflict of interest. The authors also confirm that they have no competing interests, financial or otherwise, that could bias the results or interpretation of the data presented in this paper. The authors declare that they have conducted the research in an ethical and professional manner, and they have not engaged in any practices that could potentially compromise the integrity of the research findings.
Practical implications – This study’s findings have several practical implications for marketing professionals. Emphasising openness and explainability: Marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency. Establishing ethical guidelines: Marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals. Investing in bias detection tools and privacy-enhancing technology: To mitigate risks associated with AI in marketing, marketers should allocate resources to develop and implement bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.
Social implications – This study’s social implications emphasise the need for a comprehensive
approach to address the ethical and legal challenges of AI in marketing. This includes adopting a
responsible innovation framework, promoting ethical leadership, using ethical decision-making
frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers
can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture,
make informed ethical decisions and develop effective solutions. Such practices promote public trust,
ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences
associated with AI in marketing.
Originality/value – To the best of the authors’ knowledge, this paper is among the first to explore
potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by
using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for
academia and industry.
Keywords Artificial intelligence, Marketing, Ethical challenges, Legal implications,
AI in marketing, Solution proposals
Paper type Research paper
1. Introduction
Artificial intelligence (AI) is a subfield of computer science that seeks to develop
computational systems capable of displaying intelligent behaviour. AI is the development of
machines capable of performing tasks that traditionally require human intelligence (Russell
and Norvig, 2010; Poole et al., 1998). Several industries are profoundly affected by AI.
For example, in the field of health care, it plays a crucial role in facilitating diagnosis and
personalised treatment (Topol, 2019). In the financial sector, it is used for risk assessment
and detecting fraudulent activities (Arner et al., 2016; Gerke et al., 2020; Currie and Hawk,
2021).
The advantages of AI are well known, but it also raises numerous ethical and legal
issues, including data privacy, prejudice and discrimination (Jobin et al., 2019; Hagendorff,
2020; Borenstein and Howard, 2021).
As in other areas where it has had a major impact, AI is also transforming marketing.
According to Li and Karahanna (2015), AI offers unprecedented potential for analysing
client behaviour, facilitating more efficient and targeted marketing strategies and enabling
automated human interactions. AI algorithms are capable of efficiently analysing vast
amounts of data. This automated analysis yields useful insights that can be
used to optimise marketing efforts. Moreover, these algorithms can automate routine duties
and engage in real-time client communication, attaining unprecedented personalisation and
operational efficiency.
However, the accelerated adoption of AI in the marketing industry raises significant
ethical and legal concerns. Concerns regarding data privacy, user consent and the possibility
of algorithmic bias are merely a subset of the larger problems at hand. The increasing
reliance of businesses on AI for marketing campaigns necessitates the effective and
appropriate management of ethical and legal issues associated with this technology to ensure its successful application.
This article explores the ethical and legal complexities of integrating AI in marketing.
The paper identifies these challenges and proposes actionable solutions for responsible AI
use in marketing. The goal is to guide practitioners and policymakers in navigating the
ethical and legal landscape of AI-driven marketing.
2. Literature review
The advent of AI has revolutionised marketing, offering many opportunities for businesses
to engage with consumers in more personalised and effective ways (Eriksson et al., 2019). AI
algorithms can analyse consumer behaviour, tailor marketing messages and optimise
marketing campaigns, enhancing the consumer experience and reducing marketing costs
(Davenport and Ronanki, 2018). However, the use of AI in marketing poses ethical and legal challenges.
One of the most pressing ethical concerns in AI marketing is data privacy. AI systems
often rely on extensive data collection and processing, which raises significant privacy and
security issues (Mittelstadt et al., 2016). Transparency is another critical ethical concern. The
algorithms used in AI are often complex and opaque, making it difficult to understand how
decisions are made and who is accountable for them (Mittelstadt et al., 2019). Moreover, AI
systems can perpetuate social biases, leading to discriminatory outcomes (Buolamwini and
Gebru, 2018).
From a legal perspective, AI in marketing faces data privacy, intellectual property and consumer protection challenges. Existing data protection laws, such as the European Union’s General Data Protection Regulation (GDPR), impose limitations on collecting, processing and using personal data (Van Ooijen and Vrabec, 2019). Intellectual
property laws also play a crucial role in regulating AI in marketing (Jain, 2021). Consumer
protection laws, such as the Federal Trade Commission Act in the USA, govern marketing
activities to ensure they do not deceive or mislead consumers (Helveston, 2015).
Various solutions have been proposed to address the ethical and legal challenges posed
by AI in marketing. These include developing ethical guidelines, using privacy-enhancing
technologies and creating transparent and interpretable AI systems (Eriksson et al., 2019). In
addition, there is a call for establishing legal and regulatory frameworks tailored to the
unique challenges AI poses in marketing (European Commission, 2021).
Although there has been substantial research on the ethical and legal dimensions of AI
across various sectors like health care and finance (Gerke et al., 2020; Jobin et al., 2019; Arner
et al., 2016), there is a noticeable gap in focused research within the marketing domain. Most
studies have concentrated on sectors where AI has a more established presence, often
overlooking the unique ethical and legal challenges in marketing, such as consumer
persuasion and data analytics for profit maximisation.
Moreover, the existing literature often proposes generalised solutions that may not
directly apply to the marketing context. Research is needed to explore the ethical and legal
challenges specific to AI in marketing and propose solutions tailored to this domain.
3. Methodology
3.1 Research design
This paper adopts a conceptual research approach to explore the ethical and legal challenges
of using AI in marketing. Given the multifaceted nature of these challenges, a broad focus is
essential for providing a comprehensive overview.
3.2 Data collection procedure
The data collection strategy chiefly relied on secondary sources for this conceptually driven paper. A meticulous search spanning multiple databases like PubMed, Google Scholar and JSTOR was executed. Keywords such as “AI in marketing”, “ethical challenges of AI” and “legal challenges of AI” guided the search. The selection of sources was strictly based on their pertinence to AI’s ethical and legal dimensions, focusing on peer-reviewed academic articles, industry reports from esteemed organisations, pertinent case studies and relevant legal documents. Any source that failed to meet these criteria was omitted from the study.
Regarding the timeframe, the emphasis was on sourcing materials published in the past
five years to maintain the study’s relevance and timeliness. However, this did not preclude
the inclusion of seminal works that offer foundational insights, regardless of their
publication date. To ascertain the reliability of the data, each source underwent a validation
process that considered the publishing outlet’s reputation, the author’s credentials and the
methodological rigour of the study or report.
Data extraction involved carefully culling pertinent information like key findings,
theoretical frameworks and suggested solutions, which were subsequently organised into themes to facilitate analysis. Ethical considerations were straightforward; given that the
paper uses publicly accessible secondary data, no ethical approval was deemed necessary.
Nonetheless, proper citation practices have been adhered to, ensuring that due credit is
given to the original authors.
3.3 Data analysis
A thematic analysis was used to identify recurring themes and patterns within the collected
data. This involved carefully reading and re-reading the selected sources to code relevant
information. The coded data were then grouped into broader themes related to ethical and
legal challenges and proposed solutions. This thematic structure facilitated a more
organised and coherent interpretation of the data.
4. Results and discussion
4.1 Ethical implications of artificial intelligence in marketing, challenges and solutions
AI has the potential to make marketing techniques far more efficient and successful. Yet, the
expanding use of AI in marketing has highlighted ethical problems that must be addressed
to guarantee that this technology is handled responsibly and ethically. This section includes
ethical issues surrounding the use of AI in marketing and their solutions.
4.1.1 Discrimination and bias. An urgent ethical concern when using AI in marketing is
the potential for bias and discrimination. For instance, it has been shown that AI algorithms
used in targeted advertising show high-paying job ads to males more often than to women, perpetuating gender bias (Datta et al., 2014). AI programs are trained on past data, which may already reflect human biases. For instance, AI systems may
unintentionally reinforce racial or gender biases if biased training data were used (Hajian
et al., 2016).
Such skewed results lead to unfair treatment and present legal dangers. Under anti-
discrimination rules, businesses that use prejudiced AI algorithms may be subject to legal
penalties, including monetary fines (Zarsky, 2016). To reduce these risks, companies may
use bias detection approaches (Hajian et al., 2016) and guarantee algorithmic transparency
(Wachter et al., 2017). The creation of diverse expert teams may also aid in the development
of AI systems that are more equitable (Gebru et al., 2021).
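To make the idea of bias detection concrete, the following minimal Python sketch computes per-group exposure rates for a high-paying job advertisement and a disparate impact ratio; the data, field names and the 0.8 threshold (the widely cited “four-fifths rule”) are illustrative assumptions rather than a prescribed implementation.

```python
from collections import defaultdict

def disparate_impact(records, group_key, outcome_key):
    """Return per-group positive-outcome rates and the ratio of the lowest
    rate to the highest; ratios below ~0.8 are a common warning sign."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, min(rates.values()) / max(rates.values())

# Hypothetical ad-serving log for illustration only.
log = [
    {"gender": "male", "shown_high_pay_ad": True},
    {"gender": "male", "shown_high_pay_ad": True},
    {"gender": "male", "shown_high_pay_ad": False},
    {"gender": "female", "shown_high_pay_ad": True},
    {"gender": "female", "shown_high_pay_ad": False},
    {"gender": "female", "shown_high_pay_ad": False},
]

rates, ratio = disparate_impact(log, "gender", "shown_high_pay_ad")
print(rates)                                   # {'male': 0.667, 'female': 0.333}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, below the 0.8 rule of thumb
```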
Adversarial learning is a further strategy for reducing these risks. These techniques teach the AI system to identify and rectify its biases by purposefully including biased occurrences in the training data (Zhao et al., 2017). Combating prejudice takes constant attention; it cannot be done once. AI algorithms must be continuously monitored and updated to guarantee that they do not reinforce pre-existing biases (Danks and London, 2017).
4.1.2 Violations of privacy. Invasion of privacy remains a significant ethical concern in
the application of AI in marketing. The capability of AI algorithms to amass and analyse
extensive sets of personal data, such as browser histories and purchasing behaviours, has
led to heightened concerns about data privacy (Staicu et al., 2016; Wirtz et al., 2023).
A major issue is the lack of consumer awareness regarding how their data is being used.
This lack of transparency can lead to unauthorised data sharing or misuse (Wirtz et al.,
2023). Exploiting sensitive consumer data for targeted marketing can result in detrimental
outcomes such as identity theft or emotional manipulation. Moreover, it can lead to legal
repercussions under privacy laws like the European Union’s GDPR (European Parliament,
2016).
Customers can enhance their privacy in AI applications by adopting a multifaceted
strategy. This includes creating robust passwords, reviewing privacy policies to
comprehend how their data is used (Jin, 2018) and enabling two-factor authentication for an
added layer of security. Furthermore, it is imperative to regularly update software to
incorporate the latest security features and to exercise caution when divulging sensitive
information (Wang et al., 2021). Users are also advised to opt out of unnecessary data
collection, use secure connections and stay abreast of the latest developments in privacy and
AI (Kamarinou et al., 2017). Although these practices can mitigate risks, it is essential to
remember that no system offers 100% reliability, necessitating ongoing vigilance.
To mitigate these challenges, companies can use privacy-enhancing technologies like
differential privacy (Erlingsson et al., 2019). Data collection and usage transparency are also
crucial (OECD, 2019). Companies can take proactive steps to educate consumers on
managing their data. For instance, they can provide easy-to-understand guidelines on what
data types are collected and how to opt out or delete such data (Wirtz et al., 2023).
Engagement with lawmakers to establish clear norms around data acquisition and usage is
also vital for ensuring consumer privacy (Wirtz et al., 2023).
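As a concrete illustration of the differential privacy technique cited above, the short Python sketch below applies the Laplace mechanism to a count query; the data, the epsilon value and the query itself are illustrative assumptions, not a production-grade implementation.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count: a count query has sensitivity 1, so adding
    Laplace noise with scale 1/epsilon yields epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical purchase amounts; smaller epsilon = more privacy, more noise.
purchases = [12.0, 250.0, 31.5, 99.0, 410.0, 5.0]
noisy = dp_count(purchases, lambda amount: amount > 100, epsilon=0.5)
print(f"noisy count of purchases over 100: {noisy:.1f}")  # true answer is 2
```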
4.1.3 Manipulating customer behaviour. Algorithms based on AI are used to analyse
customer behaviour and preferences to create marketing campaigns tailored to specific
customers, which may increase the effectiveness of marketing efforts. However, this may also lead to the manipulation and abuse of vulnerable clients, which presents ethical concerns. This section discusses the ethical challenges, difficulties and solutions related to AI-powered marketing’s manipulation of client behaviour. Customers who are prone to behavioural manipulation may be exploited: individuals may be targeted by AI algorithms based on their psychological profile and preferences, resulting in personalised and persuasive marketing efforts capable of influencing consumers into making purchases or behaving against their best interests. Because some groups may be more susceptible to manipulation than others, this also raises concerns about the likelihood of discrimination (Kosinski et al., 2013).
Balancing the efficacy of AI-powered marketing techniques against the protection of consumer autonomy and well-being is difficult where the manipulation of customer behaviour is concerned. In addition, because the line between persuasion and coercion may be
narrow, differentiating between acceptable and immoral manipulation can be challenging
(Cialdini, 2009). A further worry is ensuring that AI-powered marketing techniques do not
target or abuse vulnerable customers, such as children and people with mental illness.
Businesses may apply ethical standards to AI-powered marketing strategies to address ethical concerns around manipulating consumer behaviour. These principles emphasise transparency, fairness and responsibility in developing and deploying AI algorithms. In addition, firms may incorporate ethical design principles, such as ensuring that AI marketing algorithms correspond with consumers’ best interests and values (Friedman and Nissenbaum, 1996).
The use of privacy-enhancing technologies, such as differential privacy, which adds noise to data to maintain individual privacy while allowing the acquisition of important insights, is one way to avoid manipulating susceptible clients (Erlingsson et al., 2019). In addition, organisations may adopt transparent data collection and use guidelines to ensure that consumers know how their data is collected and used. Participatory design approaches are an additional tool for ensuring that the values and interests of consumers are considered when AI-powered marketing strategies are developed and implemented (Bryson et al., 2017).
Businesses may use a hybrid approach combining AI-powered technology with human
engagement to achieve the best of both worlds.
4.1.4 Job displacement. While AI-driven marketing techniques offer unprecedented
efficiency and responsiveness, they pose ethical challenges related to employment loss.
Automating tasks traditionally performed by humans can lead to job displacement and
exacerbate economic inequality (Brynjolfsson and Mitchell, 2017). For instance, the advent
of AI-powered chatbots and virtual assistants has indeed streamlined customer service but
has also led to job losses in these sectors (Verheyen et al., 2021).
However, it is crucial to note that these AI systems are not flawless. Virtual assistants,
for example, may struggle with understanding accents or dialects, leading to customer
dissatisfaction (Luger and Sellen, 2016). This limitation suggests that AI cannot entirely
replace the human element in customer service, as humans offer the nuance and empathy
that machines lack.
To address the issue of job displacement, businesses can invest in reskilling and
upskilling programs. These initiatives help employees acquire new, relevant skills that align
with the demands of a rapidly evolving labour market (Chui et al., 2018). Such programs can
facilitate internal transitions to new roles or help employees find new career opportunities
externally (Verheyen et al., 2021).
Moreover, companies can adopt a hybrid approach that combines AI’s efficiency with
human workers’ emotional intelligence. This strategy allows businesses to maintain a
human touch in customer service, offering a more balanced and ethical approach to AI
adoption in marketing (Davenport and Ronanki, 2018).
4.1.5 Absence of social interaction. Lack of human connection is one of the ethical
problems associated with using AI in marketing. AI-powered chatbots and virtual
assistants may deliver speedy and efficient customer care, but they lack the personal touch
and empathy human connection offers. This raises issues over the influence on customer
satisfaction and the possibility of consumers feeling undervalued and mistreated.
Using AI-powered chatbots and virtual assistants may harm customer satisfaction
because consumers may feel that their requirements are not being served adequately
without the personal touch and empathy of human connection. Also, clients may feel their problems are not being taken seriously if they are handled by a computer rather than a person.
This absence of human connection may result in a loss of trust and brand loyalty towards
the organisation.
To address the argument that AI-powered marketing lacks human connection,
businesses may adopt a hybrid strategy that blends AI-powered technologies with human
engagement to deliver the best of both worlds. This may entail using AI-powered chatbots
and virtual assistants to handle regular enquiries and basic customer support chores
while ensuring clients can connect with a human agent for more complicated or sensitive
concerns. In addition, businesses may invest in educating their human agents to interact
with AI-powered products and provide a more customised and compassionate client experience.
4.1.6 Openness and explainability. AI algorithms’ lack of openness and explainability is
an additional ethical problem associated with using AI in marketing. Customers may be
unaware that AI is used to target them with advertisements, and they may not comprehend
how the algorithms operate or why they are being targeted (Spiekermann, 2015). This lack
of openness raises issues about consumer autonomy and the possibility of manipulating
customers without their knowledge or agreement.
The opaque nature of AI algorithms makes it difficult for marketers to articulate their
decision-making processes. This lack of explicability might undermine confidence in AI and
deter its adoption. In addition, it may be difficult to discover and remedy flaws or biases in
AI systems without knowing how they function (Zhang et al., 2018).
To address this problem, businesses may create clear data collection and use rules and
ensure customers are fully aware of how their data is used to target them with
advertisements (OECD, 2019). Companies may also use explainable AI approaches, such as
decision trees, to make AI algorithms more visible and comprehensible to customers
(Gunning, 2017). In addition, businesses may include human supervision in AI decision-
making processes to verify that AI algorithms are making ethical and impartial conclusions
(European Commission, 2021).
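As a minimal sketch of the decision-tree style of explainable AI mentioned above, the example below trains a shallow tree on hypothetical customer features and prints its rules in plain text; the features, labels and scikit-learn usage are illustrative assumptions, not the only route to explainability.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical customer features: [age, monthly_visits, past_purchases].
X = [[25, 2, 0], [34, 10, 3], [45, 1, 0], [29, 8, 2], [52, 12, 5], [23, 0, 0]]
y = [0, 1, 0, 1, 1, 0]  # 1 = customer was targeted with a promotion

# A shallow tree keeps the decision logic small enough to explain to a customer.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(model, feature_names=["age", "monthly_visits", "past_purchases"]))
```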
On the regulatory side, the General Data Protection Regulation (GDPR) compels businesses to give
customers clear and straightforward information on the data acquired, how it will be used
and with whom it will be shared (GDPR, 2018). Similarly, the California Consumer Privacy
Act (CCPA) compels organisations to declare what data is being collected, with whom it is
being shared and why it is being gathered (CCPA, 2021).
Furthermore, the European Commission’s High-Level Expert Group on AI (2021)
suggested that AI algorithms be visible, explicable and responsible. In addition, they urge
that businesses implement ethical standards for AI research and deployment, which
highlight the need for openness and responsibility in AI decision-making processes.
4.1.7 Cybersecurity. The application of AI in marketing introduces unique cybersecurity
challenges. Although it is true that all digital platforms are vulnerable to cyberattacks, AI
algorithms have specific vulnerabilities, such as susceptibility to data manipulation, which
can lead to flawed or harmful decisions (Renaud et al., 2023).
The extensive data collection and processing required for AI in marketing expand the
attack surface, making it an attractive target for cybercriminals (Taddeo et al., 2019). The
absence of AI-specific cybersecurity norms further complicates the issue (Aloqaily et al.,
2022).
To mitigate these risks, companies can adopt a multi-layered cybersecurity approach.
This may include encryption, multi-factor authentication and regular vulnerability
assessments (Kshetri, 2021). Best data storage and management practices can further
protect consumer data (Dash et al., 2022). Developing and adopting industry-specific
cybersecurity standards can offer a framework for compliance and consumer data
protection (Raban and Hauptman, 2018).
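To illustrate one layer of such an approach, the sketch below encrypts a consumer record at rest using symmetric encryption from the widely used Python cryptography package; the record and key handling are illustrative assumptions, and a real deployment would keep keys in a secrets manager and combine encryption with the other measures listed above.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice this would come from a secrets manager.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt a (hypothetical) consumer record before writing it to storage.
record = b'{"customer": "C123", "email": "c123@example.com"}'
token = cipher.encrypt(record)

# Only holders of the key can recover the plaintext.
assert cipher.decrypt(token) == record
print("stored ciphertext prefix:", token[:24])
```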
4.1.8 Influence on culture and society. The use of AI in marketing has the potential to
significantly alter society and culture, presenting ethical considerations that must be
addressed. Among other difficulties, marketing methods driven by AI may propagate
prejudices, promote cultural norms and erode faith in institutions. In this part, we will
analyse the ethical considerations associated with the influence of AI in marketing on
society and culture, as well as possible solutions to these difficulties.
The possibility of propagating stereotypes and supporting cultural norms is one of the key ethical issues regarding the effect of AI in marketing on society and culture. AI systems may be taught on skewed data replicating existing social prejudices, leading to marketing strategies that perpetuate these biases. Research discovered, for instance, that a popular job search engine marketed higher-paying positions to males more often than to women, based on examining the job descriptions shown to users (Datta et al., 2014). Such prejudices may exacerbate the marginalisation and exclusion of disadvantaged populations, resulting in detrimental social results. To address this worry, businesses might create ethical standards for AI-powered marketing strategies that promote fairness, diversity and cultural awareness (European Commission’s High-Level Expert Group on AI, 2021).
The possibility of deepfakes and manipulated media material is a further ethical worry about the influence of AI in marketing on society and culture. Deepfakes are manipulated media content that may be difficult to discern from legitimate video, and AI algorithms can be used to create them (Ruths and Pfeffer, 2014). This raises concerns over the potential for
disinformation and propaganda to propagate via AI-powered marketing methods, leading to
serious social damage. To address this worry, businesses should develop ethical standards
for AI-powered marketing strategies that highlight the responsible use of AI and guarantee
that their AI usage is aligned with their values and societal duties.
In addition, using AI in marketing might severely affect society and culture because of
unforeseen outcomes. For instance, AI algorithms may yield unanticipated consequences or
interact with other systems in unanticipated ways, leading to potentially undesirable
repercussions. To address this risk, businesses might implement constant monitoring and
review of AI-powered marketing initiatives to discover and reduce unexpected
repercussions as they occur.
The use of AI in marketing may have larger societal and cultural consequences, such as
the possibility of economic inequality and the replacement of human jobs. Particularly in
areas such as customer service and sales, AI-powered marketing methods might lead to
employment loss and economic inequality (Brynjolfsson and Mitchell, 2017). To solve this
issue, businesses may implement regulations that guarantee the advantages of AI-powered
marketing methods are shared equitably with workers and society at large (Klinova and
Korinek, 2021).
Concerns have been raised over data ownership and management because of AI’s usage
in marketing. AI algorithms need vast data, and third-party data providers often hold this
data (Wedel and Kannan, 2016). This raises worries about firms benefiting from customer data without customers’ knowledge or permission. Moreover, a lack of openness and clarity about data use and ownership may result in customer distrust and anger (OECD, 2019). This part
addresses the ethical implications of data ownership and control in AI-powered marketing
and the accompanying issues and possible solutions.
The potential for firms to abuse and profit from customer data without their permission
or knowledge is one of the key ethical problems associated with data ownership and
management. Using consumer data to generate customised advertising campaigns might
raise privacy issues and the possibility of customers being deceived without their
knowledge. In addition, data ownership by third-party providers might leave customers unaware of who is using their information and for what purpose. This lack of
openness and control may undermine customer confidence in firms and their willingness to
provide personal information (Wedel and Kannan, 2016).
The possibility of data breaches and cyberattacks is an additional difficulty associated
with data ownership and management. Businesses are responsible for gathering and
maintaining customer data to safeguard against illegal access and abuse. Nonetheless, the
rising complexity of cyberattacks and the susceptibility of AI algorithms might put customer data in danger. This may result in financial losses, reputational harm, loss of
customer confidence and legal action against the firm.
Companies may create transparent data collection and use policies to address the ethical issues associated with data ownership and control. Businesses may offer clear and simple information to customers on how and by whom their data is used. In addition, businesses should guarantee that customers have control over their data by allowing them to opt out of data collection and to have it erased upon request. Furthermore, businesses may embrace privacy-enhancing technology, such as differential privacy, which adds noise to data to safeguard individual privacy while allowing for the gleaning of relevant insights (Erlingsson et al., 2019).
Using blockchain technology is an additional possible solution to the problems
associated with data ownership and management. Blockchain technology may offer a
transparent and secure method for managing data ownership and use. Blockchain-based
solutions may enable users to maintain ownership of their data and regulate who gets access
to it (Zheng et al., 2017). This may promote customer openness and trust, creating a greater
readiness to share personal information with businesses. Furthermore, blockchain-based
solutions may offer a safe method for storing and exchanging data, lowering the danger of
data breaches and assaults.
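To make the blockchain idea above more tangible, the following Python toy builds a hash-linked, append-only log of consent records and verifies its integrity; it demonstrates tamper evidence only and is an illustrative assumption, not an actual distributed blockchain deployment.

```python
import hashlib
import json
import time

def add_record(chain, record):
    """Append a consent record linked to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def verify(chain):
    """Recompute every hash; tampering with any earlier entry breaks the links."""
    for i, entry in enumerate(chain):
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if entry["hash"] != expected or entry["prev_hash"] != prev:
            return False
    return True

chain = []
add_record(chain, {"customer": "C123", "consents_to": "email_marketing"})
add_record(chain, {"customer": "C123", "consents_to": None})  # consent withdrawn
print(verify(chain))  # True; mutating any earlier record makes this False
```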
4.1.9 Unintended repercussions. Unintended consequences relate to the unanticipated
and unintentional outcomes of marketing efforts driven by AI. Despite the potential
advantages of AI, it is not always feasible to predict all the results of marketing algorithms.
These unforeseen effects might cause damage to customers and are a major ethical problem
regarding the use of AI in marketing. Forecasting the future is one of the most significant
obstacles associated with unintended effects. The complexity and unpredictability of
AI-powered marketing techniques make it impossible to foresee all possible results. Thus,
businesses may unknowingly damage customers or society as a whole. To address this risk,
businesses might adopt a proactive strategy to detect and minimise the unexpected effects
of AI-powered marketing techniques. One possible approach would be continuously monitoring
and reviewing AI algorithms to discover unanticipated results. This may entail routine testing
and analysis of AI algorithms to verify that they perform as intended and to uncover unforeseen
outcomes. Businesses may also build feedback and reporting processes through which customers can flag unexpected effects. In addition, businesses might establish ethical rules highlighting the need for
openness and responsibility in creating and implementing AI-powered marketing initiatives.
This may involve the creation of clear accountability lines for the development and deployment
of AI algorithms, as well as ways for customers to file complaints or express concerns (Floridi
et al., 2021). Companies may aid in identifying and addressing unintended repercussions of AI-
powered marketing initiatives by providing explicit responsibility and transparency.
4.1.10 Responsibility and obligation. Using AI in marketing presents several
responsibilities and accountability-related ethical problems. The difficulty of attributing
blame when something goes wrong is one of the key problems. Complex and difficult-to-
understand AI algorithms make it tough to determine who is accountable for any resulting
problems (Floridi et al., 2021). Furthermore, using AI in marketing might generate new
ethical concerns that current laws or regulations may not address.
To address these issues, businesses should create ethical standards and best practices
highlighting the need for transparency, responsibility and the appropriate use of AI-
powered marketing methods. For example, the European Commission’s High-Level Expert
Group on AI (2021) has recommended ethical rules emphasising responsibility and openness
for trustworthy AI. These principles indicate that organisations should be honest about their use of AI in marketing, explain how AI algorithms function and the data they use and take responsibility for the results of their AI-powered marketing efforts.
In addition, businesses may develop systems for monitoring and analysing their AI-powered
marketing campaigns to verify that they operate as intended and do not cause unexpected
damage. This may include periodic audits of the algorithms and the data used to train them
and continuing monitoring of their results (Floridi et al., 2021).
Finally, businesses should create a culture of responsibility and accountability by
developing an ethical work environment and providing staff with training and information
on the responsible use of AI. This might include creating clear lines of authority and
accountability for AI-powered marketing initiatives and ensuring workers are informed of
the possible ethical consequences of their jobs (Klinova and Korinek, 2021).
4.1.11 Environmental impact. The environmental impact of AI in marketing is an
often-overlooked concern that warrants attention. AI algorithms require substantial
computational power, which in turn demands energy. This energy consumption contributes
to carbon emissions and climate change (Henderson et al., 2019). However, it is essential to
understand that not all computers and data centres are created equal regarding their
environmental impact. A recent survey of 95 machine learning models found that carbon
emissions vary significantly depending on the energy sources used, the efficiency of the
hardware and the specific tasks the models perform (Luccioni and Hernandez-Garcia, 2023).
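As a back-of-the-envelope illustration of why energy source and hardware efficiency matter, the sketch below estimates emissions for a hypothetical training run as energy consumed times grid carbon intensity; every figure here (GPU count, power draw, runtime, intensities) is an illustrative assumption.

```python
def training_emissions_kg(gpu_count, watts_per_gpu, hours, kg_co2_per_kwh):
    """Estimate CO2 emissions as energy in kWh times grid carbon intensity."""
    energy_kwh = gpu_count * watts_per_gpu * hours / 1000.0
    return energy_kwh * kg_co2_per_kwh

# The same hypothetical job on a coal-heavy grid vs. a low-carbon grid.
job = dict(gpu_count=8, watts_per_gpu=300, hours=72)  # 172.8 kWh in total
print(training_emissions_kg(**job, kg_co2_per_kwh=0.82))  # ~141.7 kg CO2
print(training_emissions_kg(**job, kg_co2_per_kwh=0.05))  # ~8.6 kg CO2
```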
To mitigate the environmental impact, companies can adopt sustainable computing
practices. This could involve using renewable energy sources, optimising computational
resources to minimise energy consumption and using energy-efficient hardware (Taddeo
et al., 2019). Cloud computing offers a more energy-efficient alternative to traditional data
centres. By sharing computational resources, companies can reduce energy consumption
and carbon emissions (Aloqaily et al., 2022).
The concept of Green AI involves developing algorithms that are not only efficient
but also environmentally friendly. This could mean designing algorithms that require
less energy to run or optimising existing hardware to be more energy efficient (Strubell
et al., 2019). Adopting a circular economy approach can further mitigate the
environmental impact. This involves designing easily repairable, recyclable or
biodegradable products, thus reducing waste and optimising resource use (Raban and
Hauptman, 2018).
4.1.12 Use of artificial intelligence for malicious aims. AI in marketing may be exploited for malicious objectives, such as influencing customer behaviour, distributing misinformation and disinformation and exploiting vulnerable people. These behaviours pose ethical problems and need preventative actions. This part analyses the ethical implications of using AI for malicious goals, the issues it creates and possible remedies.
The potential for propaganda and deception is one of the principal ethical problems associated with using AI for malicious objectives. AI algorithms can produce deepfakes, manipulate media material and disseminate false information, which may hurt people and
erode faith in institutions. In the 2016 US presidential election, AI algorithms were used to
disseminate false information and sway public opinion (Ruths and Pfeffer, 2014). To solve
this issue, businesses may develop ethical rules for AI-powered marketing methods,
highlighting the need for openness, accountability and responsible use (European
Commission, 2021).
The possibility of exploiting susceptible humans is a further ethical problem associated
with using AI for malicious reasons. The employment of AI algorithms to target vulnerable
people with harmful or exploitative material, such as gambling or pornography, is possible.
This may negatively affect mental health and well-being and perpetuate detrimental cultural norms and attitudes. To address this worry, businesses might create ethical
standards highlighting the need for justice, diversity and cultural sensitivity in creating and
implementing AI-powered marketing tactics.
The malicious usage of AI also raises concerns about data privacy and security. AI systems need vast quantities of data to function, and this data might be susceptible to hacks and theft. This raises concerns over the possibility of data breaches and abuse of personally identifiable information. To address this risk, businesses may implement stringent cybersecurity measures to safeguard their AI algorithms and the data they gather and use. Furthermore, businesses may establish clear data collection and use rules and guarantee that customers retain control and ownership of their data (EU Commission, 2019).
In addition to responsibility and accountability, using AI for malicious goals raises important ethical issues. Complex and difficult-to-comprehend AI algorithms make it tough to assign blame for a mistake or error (Floridi et al., 2021). In addition, using AI for malicious objectives might raise new ethical quandaries that current laws or regulations may not address. To solve this issue, businesses might develop ethical rules that stress transparency, responsibility and the appropriate use of AI-powered marketing methods.
4.1.13 Emotional effect. Applying AI in marketing can affect customers’ psychological
health. AI systems can study customer behaviour and preferences, design tailored
marketing efforts and influence consumer emotions to optimise revenues. Yet, these
techniques may have detrimental psychological repercussions, including a feeling of
intrusion, discomfort and even addiction.
Lack of customer understanding about the influence of AI on their psychological well-
being is one of the obstacles in resolving this ethical consequence. Customers may be
unaware that AI is being used to control their emotions, and they may be unaware of the
psychological effect of tailored marketing initiatives. Thus, businesses must use AI in
marketing transparently and responsibly.
To solve this difficulty, businesses might implement ethical principles prioritising
openness, justice and responsibility in creating and deploying AI-powered marketing
tactics. For example, the European Commission’s High-Level Expert Group on AI has
issued recommendations emphasising the significance of human supervision,
transparency and responsibility in using AI (European Commission, 2021).
In addition, businesses might take steps to reduce the negative psychological effects of
AI in marketing. For instance, they may let customers opt out of targeted marketing efforts
or restrict the acquisition of their personal information (OECD, 2019). In addition, businesses
may adopt a hybrid strategy that blends AI-powered technologies with human engagement
to give customers a more personalised experience and empathy.
4.2 Legal consequences of artificial intelligence in marketing
Using AI in marketing creates various legal challenges that must be addressed to maintain
compliance with applicable laws and regulations. In this part, we will outline the primary
legal ramifications of using AI in marketing and provide viable remedies.
4.2.1 Protection of intellectual property. Important legal implications of AI in marketing
include intellectual property rights. Although AI can produce fresh and unique information,
it poses problems with the ownership of intellectual property rights. In addition, using AI in
marketing may result in copyright violations if AI algorithms are used to produce content
that is too close to existing copyrighted material. Companies that do not respect intellectual
property rights may face legal action and monetary fines.
Determining who owns the rights to AI-generated material is one of the most difficult aspects of intellectual property rights in AI marketing. Sometimes, the material may belong to the corporation that created the AI algorithm. In others, it may belong to the persons or organisations who supplied the data required to train the algorithm. This may lead to issues about content ownership and pay for its usage.
To solve these issues, businesses may create explicit rules addressing the ownership of AI-generated material and ensure that all parties engaged in creating and using AI
algorithms know them. In addition, businesses might investigate using licence agreements
and other legal instruments to assign and protect intellectual property rights to AI-
generated material.
Avoiding copyright infringement is a further difficulty associated with intellectual
property rights in AI marketing. Because AI algorithms can produce content nearly identical to existing copyrighted material, companies that do not respect intellectual property rights may face legal action and monetary fines. Before content is published or disseminated,
businesses may use AI-powered systems to scan and evaluate it for possible copyright
violations.
Moreover, businesses might investigate using Creative Commons licences, which permit
the use of copyrighted content under specific circumstances. By using these licences,
businesses may avoid infringing on the intellectual property rights of others while still
using AI-generated content in their marketing initiatives.
4.2.2 Privacy and data protection. As AI becomes more popular in marketing, the
collection and use of consumer data will become more vital. Data privacy and protection
regulations control how firms acquire and use customer data and require companies to seek
consumers’ express agreement before using their data. Non-compliance with these
regulations may result in legal and financial penalties.
One of the most challenging aspects of using AI in marketing while adhering to data
privacy rules is ensuring that personal data is acquired and used only for lawful reasons.
This raises worries about the possibility of abuse of personal data, such as selling or
distributing information without users’ agreement (Crawford and Schultz, 2014).
To solve this issue, businesses might implement privacy-enhancing technology, such as
encryption and anonymisation, to safeguard personal information and guarantee that it is
used only for specified reasons. In addition, businesses may design clear data collection and
use policies that educate consumers on how their data is acquired, used and safeguarded.
The European Union’s GDPR, adopted in 2016 and in force since 2018, is a comprehensive data privacy and protection regulation. The GDPR requires enterprises to get customers’ express agreement before using their data and to adopt safeguards against unlawful access, modification or disclosure. Similarly, the CCPA became effective in 2020. It gives California residents the right to know what personal data is being collected about them and the ability to request that such data be erased.
4.2.3 Consumer security. The use of AI in marketing has major legal implications for
consumer protection. With AI algorithms gathering and analysing large volumes of
consumer data, consumers must be safeguarded against any possible damage from using
this data. Consumer protection regulations ensure that corporations using AI do not expose
customers to unfair, fraudulent or abusive marketing activities.
Assuring that customers are properly informed about the use of their data is one of the
most serious obstacles to consumer protection using AI in marketing. Often, consumers lack
a clear knowledge of how businesses acquire and use their data, which may result in
confusion and distrust. In addition, AI algorithms may be used to build customised
marketing efforts that prey on vulnerable clients, such as those with addiction or mental health difficulties.
To solve these issues, businesses may implement transparent data collection and use
policies that notify customers precisely how their data is gathered and used. In addition,
businesses may be obliged to get customers’ express permission before collecting and using
their data. Consumer protection regulations also force corporations to reveal how their AI
algorithms function and how they are used in marketing campaigns.
In addition, businesses may adopt ethical principles, such as those established by the
European Commission’s High-Level Expert Group on AI (2021), highlighting the need for
openness, fairness and responsibility in creating and implementing AI-powered marketing
strategies. These principles also assist in guaranteeing that firms are held responsible for
any damage their AI algorithms create.
4.2.4 Responsibility and liability. The problem of responsibility and accountability is one
of the most significant legal ramifications of using AI in marketing. As AI systems grow
more complicated and autonomous, it might be challenging to define who is liable in the
event of mistakes or damage produced by the AI system. This might lead to legal difficulties
for businesses and people creating and implementing marketing campaigns driven by AI.
The lack of defined legal frameworks and norms for using AI in marketing is one of the
obstacles associated with responsibility and accountability. Current rules and regulations
may not be enough to handle the specific difficulties offered by AI systems, creating
ambiguity and confusion around responsibility and accountability concerns.
In addition, using AI in marketing might generate ethical quandaries that current laws or
regulations may not address. In certain jurisdictions, using AI to target vulnerable or sensitive persons with damaging or exploitative information may be deemed unethical but not illegal.
To address the issue of responsibility and accountability in using AI in marketing, it is
crucial to build clear legal frameworks and rules that meet the particular issues offered by
AI systems. This involves outlining the roles and responsibilities of developers, marketers and end-users in creating and implementing AI-powered marketing initiatives.
In addition, businesses may embrace ethical principles and best practices for using AI in
marketing, such as those established by the European Commission’s High-Level Expert
Group on AI (2021). These principles underline the need for openness, justice and
responsibility in creating and implementing marketing initiatives driven by AI.
In addition, businesses may use risk management techniques, such as insurance plans, to
reduce the possible financial effect of liability concerns emerging from using AI in
marketing. Businesses also need to establish clear and transparent data collection and use policies that tell customers how their data is gathered and used, and to seek their express permission before using their data.
4.2.5 Brand and trademark protection. The protection of trademarks and brands is
another legal issue of AI in marketing. As AI technologies grow more pervasive in
marketing, it becomes simpler for counterfeiters to produce and sell counterfeit goods. This
presents a substantial threat to the reputation and earnings of established businesses.
Furthermore, algorithms powered by AI may be used to produce phony reviews or
endorsements, which can mislead customers and hurt genuine firms.
Detecting and blocking counterfeit items or bogus endorsements is a trademark and
brand protection issue. Businesses may use AI-enabled solutions to scan online
marketplaces and social media platforms for counterfeit goods and phony endorsements
(Daoud et al., 2020). In addition, businesses may use blockchain technology to build a secure
and unchangeable record of the legitimacy of their goods.
Enforcing intellectual property rights across several countries is another trademark and brand protection obstacle. The rules and regulations regulating trademarks and brand protection vary from country to country, making it challenging for businesses to defend their intellectual property rights internationally. To solve this difficulty, businesses may adopt uniform and effective trademark and brand protection strategies in collaboration with international organisations and government authorities.
4.2.6 Conflicts of interest and competition law. Antitrust and competition law are
additional legal consequences of AI in marketing. The employment of AI algorithms in
marketing may lead to the concentration of market power in the hands of a few dominant
businesses, resulting in anti-competitive behaviour and possible antitrust law breaches.
The possibility for dominant corporations to use AI algorithms to acquire an unfair edge over weaker rivals is one of the obstacles posed by AI in marketing. For instance, dominant corporations may use AI to gather and analyse enormous quantities of customer data, giving them a competitive edge in generating tailored marketing strategies. This may create barriers to entry for smaller firms and reduce market competitiveness.
Antitrust and competition authorities might issue guidelines emphasising the need for
openness and accountability in developing and deploying AI-based marketing algorithms to
address these concerns. These recommendations also address the possibility that AI
algorithms promote price-fixing, collusion and other anti-competitive actions (OECD, 2019).
In addition, businesses should create ethical rules for using AI algorithms in marketing
that stress the significance of fair market processes and competition. Companies may pledge
not to use AI algorithms to engage in anti-competitive behaviour or unfairly target rivals
(Klinova and Korinek, 2021).
4.2.7 Agreements and licencing. As AI marketing technologies evolve, the legal
ramifications of contracts and licencing assume greater significance. Contracts and licence
agreements are crucial for controlling the connection between AI providers and clients and
preserving intellectual property rights, such as data ownership and usage. Yet, the unique
qualities of AI marketing technology pose challenges for conventional contract and licencing
arrangements, such as uncertainties over responsibility and accountability for algorithmic
decisions and data privacy concerns.
Because of their complexity and unpredictability, it is difficult to establish complete
contracts and licence agreements for AI marketing technology. Identifying the extent of
usage and ownership of the data used by opaque AI systems is often difficult. In addition, as
AI systems learn and adapt, it becomes more difficult to predict their future behaviour,
which may result in unanticipated events and contractual violations.
Another obstacle is determining AI companies’ and consumers’ responsibility and
accountability for algorithmic decision-making. When AI algorithms result in legal or
ethical difficulties, it may be difficult to ascertain who is ultimately accountable for the
decision. This raises questions about the sufficiency of current legal frameworks to hold
vendors and clients liable for the activities of their AI systems.
Establishing new contractual and licencing arrangements intended for AI marketing
tools is one solution to these problems. For instance, blockchain-based “smart contracts” may automate the execution of contractual agreements and provide transparency over the usage and ownership of data. Incorporating risk-sharing mechanisms into contracts is
another method for addressing liability and accountability issues, such as shared
responsibility for the activities of AI systems.
In addition, the fast growth of AI marketing technology necessitates modernising
regulatory frameworks. Governments and regulatory agencies may play a significant role in
developing legal frameworks that meet the particular difficulties posed by AI marketing
technology. This may involve the creation of explicit norms and laws for the use of AI marketing tools and the establishment of monitoring and enforcement systems.
4.2.8 Absence of legal structures. The rapid adoption of AI in marketing has outpaced the creation of legal frameworks, resulting in a regulatory vacuum that creates problems for both businesses and consumers. This gap in legal frameworks is not exclusive to AI; it has occurred often throughout the history of technology. For instance, it took years to address several legal concerns related to copyright, data privacy and e-commerce that arose with the introduction of the internet in the 1990s (Lessig, 1999). The emergence of social media platforms has highlighted similar concerns regarding free speech, data ownership and false information, which are currently being discussed (Gillespie, 2018).
In the absence of clear legal frameworks, an ambiguous environment results, making it difficult for companies to determine what is lawful and what is not. This lack of clarity increases the likelihood of unethical behaviour, such as exploiting consumer data or unfair business practices. In addition, authorities are ill-prepared to supervise and enforce ethical AI usage in marketing (Taddeo et al., 2019).
Many ideas have been put forward to deal with these problems. One strategy is to create detailed legislation pertaining specifically to AI in marketing. The data protection precedents established by the GDPR in the European Union, for instance, might be extended to AI applications (Kamarinou et al., 2017).
Another approach is for industry to take the initiative in creating ethical principles and best practices. Such guidelines could embed the values of openness, responsibility and justice and be developed in collaboration with various stakeholders, including customers, staff members and regulators (Aloqaily et al., 2022).
5. Implications of the study
5.1 Implications for theory
AI's ethical and regulatory implications in marketing underscore the need for a more complete theoretical framework to guide the development and implementation of AI-powered marketing strategies. There is as yet no consensus on how to resolve the complicated ethical and legal challenges AI brings to marketing.
The responsible innovation framework is one theoretical paradigm that may be relevant here (von Schomberg, 2011). Responsible innovation requires a proactive and comprehensive approach that considers not only the technological and economic components but also the social, ethical and legal dimensions. This approach highlights the need to engage with stakeholders, anticipate and minimise possible risks and uncertainties and ensure that the benefits and hazards of innovation are distributed equitably.
Ethical leadership is another relevant theoretical paradigm (Brown and Treviño, 2006). It stresses the importance of leaders demonstrating ethical conduct, supporting ethical decision-making and fostering a culture of ethical awareness and accountability. This approach might be applied in creating and implementing AI-powered marketing strategies, highlighting the need for ethical leadership at all organisational levels, from senior management to individual practitioners.
Furthermore, the notion of ethical decision-making frameworks (EDMF) may be valuable
in guiding the development and implementation of AI-powered marketing strategies.
EDMFs provide a systematic approach to ethical decision-making by assisting businesses in
identifying and prioritising ethical problems, evaluating possible risks and rewards and
developing methods to minimise ethical dilemmas. In the case of AI in marketing, where the
ethical and legal consequences are complicated and diverse, this strategy might be very
effective.
AI's ethical and legal implications in marketing also underscore the need for multidisciplinary study that draws on computer science, psychology, sociology, law and philosophy, among other disciplines. Multidisciplinary research may aid in addressing the intricate and numerous ethical and legal concerns posed by AI in marketing and contribute to creating more complete and effective solutions.
5.2 Implications for practice
Marketing professionals and businesses must consider AI's ethical and regulatory ramifications in marketing. By understanding the issues and solutions outlined in this paper, marketing professionals may ensure that their use of AI is ethical, legal and consistent with their organisation's values and obligations.
One practical implication is that marketing professionals must emphasise openness and explainability in their use of AI. This means ensuring that customers are fully informed about how their data is gathered and used to target them with advertisements and that they understand how AI algorithms are used in marketing efforts. By emphasising openness and explainability, marketing professionals may gain customers' confidence and avoid the negative consequences of opaque practices.
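As one illustration of what explainability tooling can look like in practice, the sketch below trains a simple ad-response model on synthetic data and uses scikit-learn's permutation importance to produce a plain-language summary of which customer attributes drive the model. The data and feature names are invented for the example; this is a sketch of the approach, not a prescribed implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic ad-response data: three hypothetical customer features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

feature_names = ["site_visits", "basket_size", "email_opens"]  # hypothetical
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    # Permutation importance: how much accuracy drops when the feature
    # is shuffled; a plain-language disclosure could build on this.
    print(f"{name}: shuffling this feature costs about {score:.2f} accuracy")
```

Summaries of this kind can be surfaced to customers or regulators to show, in ordinary language, which signals a targeting model actually relies on.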
Another practical implication is the need for marketing professionals to create ethical guidelines for developing and deploying AI-powered marketing strategies. By adopting such guidelines, marketing professionals may ensure that their use of AI complies with ethical and legal norms and is compatible with their organisation's values.
To reduce the risks of using AI in marketing, marketing professionals should invest in developing and deploying bias-detection tools and privacy-enhancing technologies. These tools and technologies may help identify and address biases in AI algorithms while protecting consumer privacy and still allowing valuable insights to be gleaned from consumer data. By investing in them, marketing professionals can demonstrate their commitment to the proper and ethical use of AI in marketing.
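The following minimal sketch shows the kind of demographic-parity check a bias-detection tool might run over ad-targeting decisions; the data, the group attribute and the 0.1 tolerance are illustrative assumptions, and the appropriate fairness metric and threshold are policy choices for each organisation.

```python
import numpy as np

# Hypothetical binary targeting decisions (1 = shown the offer) with a
# recorded group attribute for auditing purposes.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
parity_gap = abs(rate_a - rate_b)  # demographic-parity difference

print(f"Offer rate, group A: {rate_a:.2f}; group B: {rate_b:.2f}")
if parity_gap > 0.1:  # tolerance is a policy choice, not a standard
    print(f"Warning: parity gap {parity_gap:.2f} exceeds tolerance; review the model")
```

Checks like this can run routinely over campaign logs, flagging disparities for human review before they become the kind of discriminatory outcome discussed earlier in the paper.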
6. Limitations
Notwithstanding the interesting insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the study is confined to a review of the most important ethical and legal issues of AI in marketing; additional possible repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated in more depth in future studies. Second, although this article offers various answers and best practices for tackling the stated ethical and legal concerns, the viability and efficacy of these solutions may differ depending on the context and industry. More research and case studies are therefore required to evaluate their applicability and efficacy in other circumstances. Finally, this research is based mostly on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing.
Further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to understand the practical difficulties and solutions better. Because of the rapid pace of technical progress, AI's ethical and regulatory ramifications in marketing are continually evolving. Consequently, this work is a springboard for further research and continuing conversations on this subject.
7. Future directions
The implications of AI in marketing are complex and multifaceted, and much remains to be explored. Future work should examine the biases and discriminatory outcomes that AI algorithms can produce in marketing and develop effective tools to mitigate these biases. New legal frameworks, ethical guidelines and codes of conduct can help ensure the responsible use of AI in marketing. Innovative consumer privacy and data protection approaches, such as blockchain technology, also deserve exploration. New methods for evaluating the effectiveness of AI-powered marketing, including experiments and randomised controlled trials, must be developed; a simple illustration of such an evaluation follows this section. Balancing the benefits of AI against its potential risks and negative consequences requires hybrid approaches that combine AI with human interaction and oversight. Promoting transparency and explainability in AI algorithms through natural language processing and visualisation tools can make them more accessible to non-experts. AI's wider societal and cultural impacts in marketing must be investigated to avoid perpetuating social inequalities or reinforcing cultural norms. Finally, promoting the responsible and ethical use of AI in marketing requires developing training programs and certification processes for professionals working in this field.
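As a simple illustration of the randomised-controlled-trial idea, the sketch below compares conversion rates between a hypothetical AI-targeted arm and a control arm using a two-proportion z-test; the counts are invented, and a real evaluation would also consider effect size, statistical power and multiple comparisons.

```python
import math

# Hypothetical RCT results: conversions out of impressions per arm.
conv_ai, n_ai = 312, 5000       # AI-targeted campaign arm
conv_ctrl, n_ctrl = 254, 5000   # human-curated control arm

p_ai, p_ctrl = conv_ai / n_ai, conv_ctrl / n_ctrl
p_pool = (conv_ai + conv_ctrl) / (n_ai + n_ctrl)   # pooled rate under H0
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_ctrl))
z = (p_ai - p_ctrl) / se                            # two-proportion z-statistic
p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value

print(f"Conversion: AI {p_ai:.3f} vs control {p_ctrl:.3f}, "
      f"z = {z:.2f}, p = {p_value:.4f}")
```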
8. Conclusion
The paper explores the ethical and legal challenges of integrating AI in marketing. It
scrutinises 13 ethical issues, such as data privacy, consumer security, algorithmic biases
and exploiting vulnerable populations. The paper navigates through eight legal challenges
such as intellectual property rights, data protection regulations like GDPR and CCPA,
consumer protection laws, responsibility and liability, brand and trademark protection,
conflicts of interest and competition law, agreements and licencing and the absence of
specific legal frameworks for AI in marketing. We advocate for a multidisciplinary
approach to tackle these challenges, emphasising the need for openness, transparency and
ethical guidelines in AI-powered marketing strategies. To mitigate these challenges, we also
suggest practical solutions like bias-detection tools, privacy-enhancing technologies and
clear contractual agreements. The paper serves as a practical guide for academics and practitioners, offering a detailed overview of the existing challenges and proposing avenues for future research and practical action.
References
Aloqaily, M., Kanhere, S., Bellavista, P. and Nogueira, M. (2022), “Special issue on cybersecurity
management in the era of AI”, Journal of Network and Systems Management, Vol. 30 No. 3, p. 39.
Arner, D.W., Barberis, J. and Buckley, R.P. (2016), “The evolution of Fintech: a new post-crisis
paradigm?”, Georgetown Journal of International Law, Vol. 47 No. 4, pp. 1271-1319.
Borenstein, J. and Howard, A. (2021), “Emerging challenges in AI and the need for AI ethics education”,
AI and Ethics, Vol. 1 No. 1, pp. 61-65.
Brown, M.E. and Treviño, L.K. (2006), "Ethical leadership: a review and future directions", The Leadership Quarterly, Vol. 17 No. 6, pp. 595-616, doi: 10.1016/[Link].2006.10.004.
Brynjolfsson, E. and Mitchell, T. (2017), “What can machine learning do? Workforce implications”,
Science, Vol. 358 No. 6370, pp. 1530-1534.
Bryson, J., Diamantis, M.E. and Grant, T.D. (2017), "Of, for, and by the people: the legal lacuna of synthetic persons", Artificial Intelligence and Law, Vol. 25 No. 3, pp. 273-282, doi: 10.1007/s00146-016-0667-3.
Buolamwini, J. and Gebru, T. (2018), “Gender shades: intersectional accuracy disparities in commercial
gender classification”, Proceedings of the 1st Conference on Fairness, Accountability and
Transparency, pp. 77-91.
California Consumer Privacy Act of 2021 (2021), available at: [Link]
Chui, M., Manyika, J. and Miremadi, M. (2018), "Where machines could replace humans – and where they can't (yet)", McKinsey Quarterly, Vol. 1, pp. 1-9.
Cialdini, R.B. (2009), Influence: The Psychology of Persuasion, Harper Business, New York, NY.
Crawford, K. and Schultz, J. (2014), “Big data and due process: toward a framework to redress
predictive privacy harms”, Boston College Law Review, Vol. 55 No. 1, pp. 93-128.
Currie, G. and Hawk, K.E. (2021), "Ethical and legal challenges of artificial intelligence in nuclear medicine", Seminars in Nuclear Medicine, Vol. 51 No. 2, pp. 120-125.
Danks, D. and London, A.J. (2017), "Algorithmic bias in autonomous systems", Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI-17), pp. 4691-4697.
Daoud, E., Vu, D., Nguyen, H. and Gaedke, M. (2020), "Improving fake product detection using AI-based technology", In Proceedings of the 18th International Conference on E-Society (ES 2020).
Dash, B., Sharma, P. and Ali, A. (2022), “Federated learning for privacy-preserving: a review of PII data
analysis in Fintech”, International Journal of Software Engineering and Applications, Vol. 13 No. 4.
Datta, A., Tschantz, M. C. and Datta, A. (2014), “Automated experiments on ad privacy settings: a tale
of opacity, choice, and discrimination”, arXiv preprint arXiv:1408.6491.
Davenport, T.H. and Ronanki, R. (2018), “Artificial intelligence for the real world”, Harvard Business
Review, Vol. 96 No. 1, pp. 108-116.
Eriksson, K., Kerem, K. and Nilsson, D. (2019), "Artificial intelligence in marketing: a general overview and future research directions", Journal of Business Research, Vol. 98, pp. 365-380, doi: 10.1016/j.jbusres.2019.01.014.
Erlingsson, Ú., Pihur, V. and Korolova, A. (2019), “RAPPOR: randomized aggregatable privacy-
preserving ordinal response”, ACM Transactions on Privacy and Security, Vol. 22 No. 1, pp. 1-36.
EU Commission (2019), "High-level expert group on artificial intelligence", available at: [Link]/digital-single-market/en/high-level-expert-group-artificial-intelligence
European Commission (2021), White Paper on Artificial Intelligence: A European Approach to Excellence
and Trust, European Commission, Brussels.
European Commission's High-Level Expert Group on Artificial Intelligence (2021), "Ethics guidelines for trustworthy AI", European Commission, available at: [Link]
European Parliament (2016), "Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance)", Publications Office of the EU, available at: [Link]
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., . . . Vayena, E. (2021), “An
ethical framework for a good AI society: opportunities, risks, principles, and recommendations”,
Ethics, Governance, and Policies in Artificial Intelligence, Springer, pp. 19-39.
Friedman, B. and Nissenbaum, H. (1996), “Bias in computer systems”, ACM Transactions on
Information Systems, Vol. 14 No. 3, pp. 330-347, doi: 10.1145/230538.230561.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Daumé III, H. and Crawford, K. (2021), "Datasheets for datasets", Communications of the ACM, Vol. 64 No. 12, pp. 86-92.
General Data Protection Regulation (GDPR) 2016/679 (2018), available at: [Link]
reg/2016/679/oj
Gerke, S., Minssen, T. and Cohen, G. (2020), “Ethical and legal challenges of artificial intelligence-driven
healthcare”, In Artificial Intelligence in Healthcare, Academic Press, Cambridge, pp. 295-336.
Gillespie, T. (2018), Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, Yale University Press, New Haven, CT.
Gunning, D. (2017), "Explainable artificial intelligence (XAI)", Defense Advanced Research Projects Agency (DARPA).
Hagendorff, T. (2020), “The ethics of AI ethics: an evaluation of guidelines”, Minds and Machines,
Vol. 30 No. 1, pp. 99-120.
Hajian, S., Bonchi, F. and Castillo, C. (2016), "Algorithmic bias: from discrimination discovery to fairness-aware data mining", In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2125-2126.
Helveston, M.N. (2015), "Consumer protection in the age of big data", Washington University Law Review, Vol. 93, p. 859.
Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D. and Meger, D. (2019), “Deep reinforcement
learning that matters”, Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 32
No. 1, pp. 3204-3211, doi: 10.1609/aaai.v33i01.33013204.
Jain, A. (2021), "Intellectual property rights in the age of artificial intelligence", International Journal of Law Management and Humanities, Vol. 4 No. 2, p. 1501.
Jin, G.Z. (2018), “Artificial intelligence and consumer privacy”, The Economics of Artificial Intelligence:
An Agenda, University of Chicago Press, Chicago, pp. 439-462.
Jobin, A., Ienca, M. and Vayena, E. (2019), “The global landscape of AI ethics guidelines”, Nature
Machine Intelligence, Vol. 1 No. 9, pp. 389-399.
Kamarinou, D., Millard, C., Singh, J. and Leenes, R. (2017), “Machine learning with personal data”, In
Data Protection and Privacy: The Age of Intelligent Machines, Hart Publishing, Oxford.
Klinova, K. and Korinek, A. (2021), "AI and shared prosperity", In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 645-651.
Kosinski, M., Stillwell, D. and Graepel, T. (2013), “Private traits and attributes are predictable from
digital records of human behaviour”, Proceedings of the National Academy of Sciences, Vol. 110
No. 15, pp. 5802-5805.
Kshetri, N. (2021), “Economics of artificial intelligence in cybersecurity”, IT Professional, Vol. 23 No. 5,
pp. 73-77.
Lessig, L. (1999), Code and Other Laws of Cyberspace, Basic Books, New York, NY.
Li, X. and Karahanna, E. (2015), “Online recommendation systems in a B2C e-commerce context: a
review and future directions”, Journal of the Association for Information Systems, Vol. 16 No. 2,
pp. 72-107.
Luccioni, A.S. and Hernandez-Garcia, A. (2023), “Counting carbon: a survey of factors influencing the
emissions of machine learning”, arXiv preprint arXiv:2302.08476.
Luger, E. and Sellen, A. (2016), “Like having a really bad PA: the Gulf between user expectation and
experience of conversational agents”, ACM Transactions on Computer-Human Interaction
(TOCHI), Vol. 23 No. 1, pp. 1-28.
Mittelstadt, B.D., Allo, P., Taddeo, M., Wachter, S. and Floridi, L. (2016), “The ethics of algorithms:
mapping the debate”, Big Data and Society, Vol. 3 No. 2, p. 2053951716679679.
Mittelstadt, B., Russell, C. and Wachter, S. (2019), “Explaining explanations in AI”, Proceedings of the
Conference on Fairness, Accountability, and Transparency, pp. 279-288.
OECD (2019), Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, OECD
Publishing.
Poole, D., Mackworth, A. and Goebel, R. (1998), Computational Intelligence: A Logical Approach, Oxford
University Press, Oxford.
Raban, Y. and Hauptman, A. (2018), “Foresight of cyber security threat drivers and affecting
technologies”, foresight, Vol. 20 No. 4, pp. 353-363, doi: 10.1108/FS-02-2018-0020.
General Data Protection Regulation (2023), "General data protection regulation (GDPR): official legal text".
Renaud, K., Warkentin, M. and Westerman, G. (2023), "From ChatGPT to HackGPT: meeting the cybersecurity threat of generative AI", MIT Sloan Management Review.
Russell, S.J. and Norvig, P. (2010), Artificial Intelligence: A Modern Approach, Pearson, London.
Ruths, D. and Pfeffer, J. (2014), “Social media for large studies of behavior”, Science, Vol. 346 No. 6213,
pp. 1063-1064, doi: 10.1126/science.346.6213.1063.
von Schomberg, R. (2011), "Prospects for technology assessment in a framework of responsible research and innovation", in Dusseldorp, M. and Beecroft, R. (Eds), Technikfolgen Abschätzen Lehren: Bildungspotenziale Transdisziplinärer Methoden, Springer VS, Berlin, pp. 39-61, doi: 10.1007/978-3-531-93468-6_2.
Spiekermann, S. (2015), Ethical IT Innovation: A Value-Based System Design Approach, CRC Press, Boca Raton, FL.
Staicu, C.-A., Pradel, M. and Livshits, B. (2016), Understanding and Automatically Preventing Injection
Attacks on [Link] (Technical Report TUD-CS-2016-14663), TU Darmstadt, Department of
Computer Science.
Strubell, E., Ganesh, A. and McCallum, A. (2019), “Energy and policy considerations for deep learning
in NLP”, Proceedings of the 57th Annual Meeting of the Association for Computational
Linguistics, pp. 3645-3650, doi: 10.18653/v1/P19-1360.
Taddeo, M., McCutcheon, T. and Floridi, L. (2019), “Trusting artificial intelligence in cybersecurity is a
double-edged sword”, Nature Machine Intelligence, Vol. 1 No. 12, pp. 557-560.
Topol, E. (2019), Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, Basic Books, New York, NY.
Van Ooijen, I. and Vrabec, H.U. (2019), “Does the GDPR enhance consumers’ control over personal
data? An analysis from a behavioural perspective”, Journal of Consumer Policy, Vol. 42 No. 1,
pp. 91-107.
Verheyen, S., Kovarík, O. and Committee on Culture and Education (2021), Report on Artificial Intelligence in Education, Culture and the Audiovisual Sector (2020/2017(INI)).
Wang, S., Chen, Z., Xiao, Y. and Lin, C. (2021), “Consumer privacy protection with the growth of
AI-empowered online shopping based on the evolutionary game model”, Frontiers in Public
Health, Vol. 9, p. 705777.
Wachter, S., Mittelstadt, B. and Russell, C. (2017), "Counterfactual explanations without opening the black box: automated decisions and the GDPR", Harvard Journal of Law and Technology, Vol. 31, p. 841.
Wedel, M. and Kannan, P.K. (2016), “Marketing analytics for data-rich environments”, Journal of
Marketing, Vol. 80 No. 6, pp. 97-121.
Wirtz, J., Patterson, P., Kunz, W., Gruber, T., Lu, V.N., Paluch, S. and Martins, A. (2023), “Corporate
digital responsibility: dealing with ethical, privacy and fairness challenges of AI”, Journal of
Business Research.
Zarsky, T. (2016), “The trouble with algorithmic decisions: an analytic road map to examine efficiency
and fairness in automated and opaque decision making”, Science, Technology, and Human
Values, Vol. 41 No. 1, pp. 118-132.
Zhang, B.H., Lemoine, B. and Mitchell, M. (2018), “Mitigating unwanted biases with adversarial
learning”, In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,
pp. 335-340.
Zhao, Z., Dua, D. and Singh, S. (2017), “Generating natural adversarial examples”, arXiv preprint
arXiv:1710.11342.
Zheng, Z., Xie, S., Dai, H., Chen, X. and Wang, H. (2017), "An overview of blockchain technology: architecture, consensus, and future trends", 2017 IEEE International Congress on Big Data (BigData Congress), IEEE, pp. 557-564.
Further reading
Bolukbasi, T., Chang, K.W., Zou, J.Y., Saligrama, V. and Kalai, A.T. (2016), "Man is to computer
programmer as woman is to homemaker? Debiasing word embeddings”, In Advances in Neural
Information Processing Systems, pp. 4349-4357.
Caliskan, A., Bryson, J.J. and Narayanan, A. (2017), “Semantics derived automatically from language
corpora contain human-like biases”, Science, Vol. 356 No. 6334, pp. 183-186, doi: 10.1126/science.
aal4230.
Cartolovni, A., Tomicic, A. and Mosler, E.L. (2022), “Ethical, legal, and social considerations of AI-
based medical decision-support tools: a scoping review”, International Journal of Medical
Informatics, Vol. 161, p. 104738.
Gordon, J.S. (2021), “AI and law: ethical, legal, and socio-political implications”, AI and Society, Vol. 36
No. 2, pp. 403-404.
Gorwa, R., Binns, R. and Katzenbach, C. (2020), “Algorithmic content moderation: technical and
political challenges in the automation of platform governance”, Big Data and Society, Vol. 7
No. 1, p. 2053951719897945.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J. and Mullainathan, S. (2018), “Human decisions and
machine predictions”, The Quarterly Journal of Economics, Vol. 133 No. 1, pp. 237-293, doi:
10.1093/qje/qjx032.
Liu, Z., Luo, P., Wang, X. and Tang, X. (2015), "Deep learning face attributes in the wild", Proceedings of the IEEE International Conference on Computer Vision, pp. 3730-3738, doi: 10.1109/ICCV.2015.425.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., . . . Stoyanov, V. (2019), “RoBERTa: a robustly
optimised BERT pretraining approach”, arXiv preprint arXiv:1907.11692.
Raji, I.D., Smart, A., White, R.N., Mitchell, M., Gebru, T., Hutchinson, B., . . . Barnes, P. (2020), “Closing
the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing”,
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 33-44.
About the authors
Dr Dinesh Kumar is an accomplished professional with extensive military, academic, business and
charity expertise. He holds a PhD from the esteemed Indian Institute of Technology (IIT) Roorkee. He
has received admission offers for the Fellow Program in Management from the Indian Institute of
Management (IIM) Ranchi, making him the first Indian soldier to secure doctoral admission offers
from both IIT and IIM. He founded [Link] and Mission Dost-E-Jahan. He is a faculty member at
Lovely Professional University’s Mittal School of Business, teaching courses related to
Organizational Behavior and Human Resource Management. Dinesh Kumar is the corresponding
author and can be contacted at: dineshairwarrior@[Link]
Dr Nidhi Suthar holds a PhD from MPUAT, Rajasthan. She has worked for more than a decade in academia. Currently, she is the Chief Executive Officer of Pomento IT Services.