
AI for Marketing

Julien Cloarec
Full Professor of Quantitative Marketing

/juliencloarec
Bio
• Academic Roles and Background
• Full Professor of Quantitative Marketing at iaelyon School of Management, Université
Jean Moulin Lyon 3 (2023-…)
• Previous roles
• Assistant Professor (2021-2022)
• Associate Professor (2022-2023)
• Educational background
• M.Eng. in Computer Science (2015)
• PhD in Management Science (2019)
• HDR in Management Science (2023)
• Leadership and Program Development
• Founder and coordinator of the "AI for Marketing" executive program
• Leads the “Responsible AI in Marketing” consortium with Sugi Research and Dentsu Insights
• Research and Supervision
• Supervises four PhD students focused on AI-related topics
• Published 11 articles, 1 book, 5 book chapters; presented 37 conference papers and
delivered over 50 seminars since 2020
Bio
• Collaborations and Industry Engagement
• Visiting researcher at York University, studying social machine learning (Cloarec &
Giesler, 2024)
• Collaborates with regulatory bodies (e.g., CNIL), professional organizations
(e.g., AFCDP), and international academic institutions (e.g., American
Marketing Association)
• Key Projects and Grants
• Founding member of the "Citizen Trust in AI Innovations" project (2023-2027),
funded by the Dieter Schwarz Foundation (€1.5M)
• Received a Bourgeons grant (€9,500) for research on AI performance and privacy
trade-offs
• Editorial and Thought Leadership
• Guest editor for a special issue of Décisions Marketing on AI
Course Objectives
•Understand the foundational concepts of AI
•Apply practical skills in AI development and usage
•Analyze and interpret AI-generated outputs and model behaviors
•Evaluate the ethical implications, regulatory challenges,
and responsible practices in AI deployment
•Explore emerging trends and future opportunities in AI
Course Structure
•Introduction to AI
•Large Language Models
• Understanding Large Language Models
• Using Large Language Models
• Training and Fine-Tuning Large Language Models
•The Future of AI
Evaluation Criteria
• Individual Grade (60%)
• 50 questions (random order) in 15 minutes
• 3 answers (random order), 1 correct, no penalty, no backtracking, all questions mandatory
• Focus
• Day 1 course content (40 questions)
• Mandatory reading (10 questions)
• AI Ethics certification upload to Drive by November 30th
• Group Grade (40%)
• Written report (20%)
• Evaluate an existing AI solution (SugiScope)
• Oral defense (20%)
• Presentation and defense of testing/benchmarking recommendations
• Evaluation based on clarity, insights, and reasoning
Outside the Classroom
Mandatory Reading

Outside the Classroom
Certification – Ethics of AI
ethics-of-ai.mooc.fi

Outside the Classroom
Meta.Morph.Ose – CPME + iaelyon
Outside the Classroom
Introduction to AI
Foundations of AI
History
Foundations of AI
Basic concepts – AI vs. ML vs. DL

AI: anything that behaves intelligently. Machine learning: AI that can learn on its own. Deep learning: a subset of machine learning that uses neural networks.
Foundations of AI
Human Annotation

Reinforcement learning (from human feedback): a human tells ChatGPT "that's not good, do this, do that," and the model learns from that feedback.
Foundations of AI
Applications in Everyday Life and Industry
Exploring Generative AI
Generative AI Landscape

Exploring Generative AI
Open vs. Closed Models
Learning AI Through Practice
Tools and Platforms – LM Studio

Learning AI Through Practice
Tools and Platforms – Python for AI

Exploring Generative AI
Tools and Platforms – Hugging Face

Learning AI Through Practice
Interactive AI Playgrounds

Learning AI Through Practice
Hands-On Chatbots
AI Regulation
Regulatory Overview – Understanding the AI Act
AI Regulation
Regulatory Overview – Lyon 3
AI Regulation
Case Study – The Air Canada Chatbot
Understanding Large Language Models
Core Concepts of LLMs
Reinforcement Learning with Human Feedback
Core Concepts of LLMs
Tokenization – Breaking Down Text into Tokens

Tokenization: how the machine splits its training data into units (tokens) and tries to make sense of them.
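A minimal sketch of what tokenization looks like in practice, using the Hugging Face transformers library and the GPT-2 tokenizer (an illustrative choice; the slides do not prescribe a specific tool):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Tokenization breaks text into units the model can process."
token_ids = tokenizer.encode(text)                    # the integer IDs the model sees
tokens = tokenizer.convert_ids_to_tokens(token_ids)   # human-readable sub-word pieces

print(tokens)      # e.g. ['Token', 'ization', 'Ġbreaks', ...] ('Ġ' marks a leading space)
print(token_ids)
```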
Core Concepts of LLMs
Context Windows – Understanding Model Memory

"If one examines the words in a book, one at a time as through an opaque mask with a hole in it one word wide, then it is obviously impossible to determine, one at a time, the meaning of the words. [...] But if one lengthens the slit in the opaque mask, until one can see not only the central word in question, but also say N words on either side, then if N is large enough one can unambiguously decide the meaning of the central word."
Warren Weaver (1949)

Context window: the number of tokens the model takes into account to understand the context.
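One practical consequence of a fixed context window is that long conversations must be truncated. A minimal sketch, assuming the GPT-2 tokenizer and its 1,024-token limit purely for illustration:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
CONTEXT_LIMIT = 1024  # GPT-2's context window, used here as an example

def fit_to_window(history: str) -> str:
    """Keep only the most recent CONTEXT_LIMIT tokens of a conversation."""
    ids = tokenizer.encode(history)
    if len(ids) > CONTEXT_LIMIT:
        ids = ids[-CONTEXT_LIMIT:]  # drop the oldest tokens first
    return tokenizer.decode(ids)
```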
Core Concepts of LLMs
Embeddings – How Models Understand Context

Embeddings: how models understand context; everything (documents, images, audio) becomes numbers in a vector space.
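A minimal sketch of text embeddings, assuming the sentence-transformers library (a common but not course-mandated choice): semantically similar sentences land close together in the vector space.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["I love this product.", "This item is great.", "The weather is cold."]
embeddings = model.encode(sentences)  # one dense vector per sentence

print(util.cos_sim(embeddings[0], embeddings[1]))  # high: near-synonymous reviews
print(util.cos_sim(embeddings[0], embeddings[2]))  # low: unrelated topics
```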
LLM Architecture
Transformer Architecture

To generate text, machines use transformer models (introduced by Google in 2017). The left side of the architecture is for understanding the input (the encoder); the right side is for generating the output (the decoder).
LLM Architecture
Transformer Architecture – Encoder
• Example Model
• BERT (Bidirectional Encoder Representations from Transformers)
• Key Features
• Architecture: Uses only the encoder stack.
• Bidirectional Understanding: Reads the input sequence in both directions (left-to-right and right-to-left) to create deep contextual representations of text.
• Pre-training Objectives:
• Masked Language Modeling (MLM): Randomly masks words in the input sequence and trains the model to predict the masked words, enabling deep understanding of word relationships (see the sketch after this list).
• Next Sentence Prediction (NSP): Trains the model to understand sentence relationships, making it effective
for tasks like question answering.
• Applications
• Text Classification: Categorizing text into predefined labels, e.g., spam detection.
• Sentiment Analysis: Identifying sentiment (positive, negative, neutral) in text.
• Named Entity Recognition (NER): Detecting names, dates, locations, and other entities in text.
• Question Answering: Extracting precise answers from a given text passage.
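A minimal sketch of masked language modeling in action, using the Hugging Face pipeline API with bert-base-uncased (illustrative usage, not prescribed by the slides):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT uses context on both sides of [MASK] to rank candidate words.
for pred in fill_mask("The customer left a [MASK] review of the product."):
    print(pred["token_str"], round(pred["score"], 3))
```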
LLM Architecture
Transformer Architecture – Decoder
• Example Models
• GPT-2, GPT-3, GPT-4
• Key Features
• Architecture: Uses only the decoder stack.
• Autoregressive Nature: Generates text one word at a time, using previous words as context, making it suitable for tasks that require sequential generation.
• Training Objective:
• Causal Language Modeling: Trains the model to predict the next word in a sequence based on the preceding context, reinforcing a forward-generation approach (see the sketch after this list).
• Applications
• Text Generation: Creating human-like text, including articles, stories, and dialogues.
• Creative Writing: Generating poetry, scripts, and other creative content.
• Summarization: Condensing long texts into concise summaries, emphasizing key points.
• Chatbots and Conversational AI: Engaging in dynamic conversations and responding
naturally to user inputs.
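A minimal sketch of autoregressive generation with a decoder-only model, again via the Hugging Face pipeline API with GPT-2 (an illustrative, freely available choice):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Our new marketing campaign focuses on",
    max_new_tokens=30,  # generate up to 30 tokens beyond the prompt
    do_sample=True,     # sample instead of always taking the most likely token
)
print(result[0]["generated_text"])
```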
LLM Architecture
Transformer Architecture vs. RNN
Decoding Methods and AI Detection
Greedy Search

Decoding Methods and AI Detection
Beam Search

Decoding Methods and AI Detection
Top-K Sampling

Decoding Methods and AI Detection
Top-P Sampling
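A minimal sketch contrasting the four decoding strategies on GPT-2 (parameter values are illustrative; the slides name the methods without fixing hyperparameters):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The future of AI in marketing is", return_tensors="pt")

greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)           # always the single most likely token
beam = model.generate(**inputs, max_new_tokens=20, num_beams=5)                 # track 5 candidate sequences in parallel
top_k = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50)   # sample among the 50 most likely tokens
top_p = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)  # sample from the smallest set covering 90% probability

for name, out in [("greedy", greedy), ("beam", beam), ("top-k", top_k), ("top-p", top_p)]:
    print(name, "->", tok.decode(out[0], skip_special_tokens=True))
```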
Decoding Methods and AI Detection
Identifying AI-Generated Content

Decoding Methods and AI Detection
Case Study – Identifying ChatGPT-Generated Content
Using Large Language Models
Prompt Engineering Techniques
Zero-Shot Prompts
• Definition
• Zero-shot prompting involves asking an AI model to perform a task without providing
any examples of how the task should be done.
• Key Characteristics
• No examples are given; the model relies on its pre-existing knowledge.
• Ideal for straightforward tasks with clear instructions.
• Advantages
• Quick and easy to implement.
• Useful when no relevant examples are available.
• Allows testing the model's inherent understanding and generalization capabilities.
• Examples (a minimal API sketch follows below)
• "Translate the following sentence into French: 'Hello, how are you?'"
• "Summarize the following paragraph in one sentence."
Prompt Engineering Techniques
Few-Shot Prompts
• Definition
• Few-shot prompting involves giving the model a small number of examples (typically 1-5) to
guide it in performing a task.
• Key Characteristics
• Provides a few examples that demonstrate the desired output.
• Enhances model performance by setting context and expectations.
• Advantages
• Helps clarify the task with concrete examples.
• Improves performance on tasks that require specific formatting or nuanced understanding.
• Effective for tasks that are slightly more complex or ambiguous.
• Example (see the sketch below)
• "Translate the following sentences into Spanish. Example 1: 'Good morning.' -> 'Buenos días.' Example 2: 'Thank you.' -> 'Gracias.' Now translate: 'See you tomorrow.'"
Prompt Engineering Techniques
Chain of Thought
• Definition
• Chain of thought prompting encourages the model to break down complex reasoning or
problem-solving tasks into a step-by-step process.
• Key Characteristics
• The model is guided to think aloud or explain its reasoning.
• Useful for complex, multi-step problems where logical flow is important.
• Advantages
• Increases accuracy on reasoning tasks by making intermediate steps explicit.
• Helps models handle complex arithmetic, logical reasoning, and decision-making tasks.
• Example
• "Solve the following problem step-by-step: Problem: 'A train travels 60 miles per hour for 2
hours. How far does it travel?’ Step 1: Identify the speed of the train. Step 2: Identify the
time traveled. Step 3: Multiply speed by time to find the distance. Answer: The train travels
120 miles."
Prompt Engineering Techniques
Reasoning Models – OpenAI o1 Series Models
Prompt Engineering Techniques
Best Practices for Prompting
Retrieval-Augmented Generation (RAG)
Integrating External Knowledge into LLMs
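A minimal end-to-end RAG sketch, assuming sentence-transformers for retrieval and an OpenAI chat model for generation (the slides present the pattern, not these specific libraries; the documents are invented):

```python
from sentence_transformers import SentenceTransformer, util
from openai import OpenAI

docs = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Premium members get free shipping on all orders.",
    "Support is available by chat from 9am to 6pm CET.",
]

# Retrieve: embed the knowledge base and the question, pick the closest document.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs)
question = "Can I get my money back after three weeks?"
scores = util.cos_sim(embedder.encode(question), doc_vecs)[0]
context = docs[int(scores.argmax())]

# Augment and generate: ground the answer in the retrieved context.
client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```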
Retrieval-Augmented Generation (RAG)
Tools for Advanced Information Retrieval
Synthetic Data Generation
Training, Testing, and Enhancing Models

Synthetic Data Generation
Techniques for Creating Synthetic Data with LLMs
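A minimal sketch of generating synthetic labeled data with an LLM (the prompt, model, and output format are assumptions for illustration):

```python
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate 5 short, realistic customer reviews of a fitness app, each on its own "
    "line in the form: label<TAB>review, where label is 'positive' or 'negative'."
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# Parse the tab-separated lines into (label, text) pairs for training or testing.
rows = [line.split("\t", 1) for line in resp.choices[0].message.content.splitlines() if "\t" in line]
print(rows)
```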
Data Analysis and Interpretation
Analyzing Data

Data Analysis and Interpretation
Analyzing Outputs
Neural Topic Modeling
BERTopic
Neural Topic Modeling
Classical vs. Modern Approaches
•Topic models for uncovering themes
• Powerful unsupervised tools for identifying common themes in text
• Latent Dirichlet Allocation (LDA) (Blei et al., 2003)
• Non-Negative Matrix Factorization (NMF) (Févotte and Idier, 2011)
• Documents represented as a "bag-of-words" and modeled as a mixture of
latent topics
•Limitations of conventional topic models
• Bag-of-words representations disregard semantic relationships
between words
• No consideration of word context within sentences
• Potential for inaccurate document representation
Neural Topic Modeling
Classical vs. Modern Approaches
•Emergence of text embedding techniques
• Rapid rise of text embeddings in NLP to address bag-of-words limitations
• Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al.,
2018) and variations (Lee et al., 2020; Liu et al., 2019; Lan et al., 2019) generating contextual
word and sentence vector representations
• Semantic properties encoded, placing similar texts close in vector space
•Success of neural topic models
• Increasing success of neural networks in improving topic modeling techniques
(Terragni et al., 2021; Cao et al., 2015; Zhao et al., 2021; Larochelle and Lauly, 2012)
• Incorporation of word embeddings into traditional models like LDA (Liu et al.,
2015; Nguyen et al., 2015; Shi et al., 2017; Qiang et al., 2017)
• Recent surge in embedding-based topic modeling techniques (Bianchi et
al., 2020b; Dieng et al., 2020; Thompson and Mimno, 2020)
Neural Topic Modeling
Classical vs. Modern Approaches
•Embedding documents in BERTopic
• Documents embedded to create vector space representations for
semantic comparison
• Assumption: Documents with the same topic are semantically similar
•Sentence-BERT
• BERTopic uses Sentence-BERT (SBERT) for embedding (Reimers and Gurevych, 2019)
• Converts sentences and paragraphs to dense vector representations
using pre-trained language models
• Achieves state-of-the-art performance in sentence embedding tasks (Reimers
and Gurevych, 2020; Thakur et al., 2020)

•Embeddings for clustering, not topic generation
Neural Topic Modeling
Classical vs. Modern Approaches
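A minimal BERTopic sketch following the library's standard quickstart (the 20 Newsgroups corpus is an illustrative stand-in for real marketing data):

```python
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

# BERTopic needs more than a handful of documents to form clusters.
docs = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes")).data[:1000]

topic_model = BERTopic()  # SBERT embeddings + UMAP + HDBSCAN by default
topics, probs = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())  # discovered topics and their sizes
print(topic_model.get_topic(0))             # top words for the largest topic
```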
Multimodal AI Capabilities
How It Works
Multimodal AI Capabilities
Using Embeddings for Enhanced Understanding

Multimodal AI Capabilities
Speech Recognition and Transcription
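A minimal transcription sketch using the open-source Whisper package (an assumed tool choice; "meeting.mp3" is a placeholder file name):

```python
import whisper

model = whisper.load_model("base")        # a small, CPU-friendly checkpoint
result = model.transcribe("meeting.mp3")  # language is detected automatically
print(result["text"])
```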
Multimodal AI Capabilities
Case Study – Meta vs. EU
Training and Fine-Tuning Large Language Models
Fine-Tuning Large Language Models
Customizing LLMs for Specific Tasks
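A compact fine-tuning sketch with the Hugging Face Trainer on GPT-2 (dataset, hyperparameters, and model are illustrative assumptions, not the course's prescribed setup):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Any domain corpus works here; wikitext-2 is a small public placeholder.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
tokenized = raw.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal LM objective
)
trainer.train()
```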
Google
• In the meantime, see the many ways in which journalists can take a Google Search to work safely, in which you can see the benefits of sharing your work and collaborating with the community.
• The world has reached the point of no return, and a new plan for saving the world from global pandemics is still on the wane. Learn more →
• We want to help. Please follow @Google on Twitter and let us know when you have so we can take this to DM. In the meantime, make sure your recovery options are up-to-date for the future

Amazon
• How to get started with #AmazonMHX. Your Amazon Client Account can be updated from any time of the day through the @PrimeVideo page. Learn more about joining the Amazon network. #YouBet can be found on Amazon.com, Amazon.co.uk, Amazon.ca, and Amazon.ca.
• A new report from the Climate Action Center shows that California will need more carbon-neutral and renewable energy, and will be more than a carbon sink. For the first time in California history, all of our energy, our people and our planet are available for consumption. Learn more here:

Facebook
• What a fun way to connect with our friends on holiday break!
• Watch Live: James and Paul discuss how we have created a better Facebook experience and how we can improve it in the coming weeks
• It doesn't help to be on a team, or to be in a group. It can be frustrating when teams are constantly trying to get their bearings, and you never feel connected. But sometimes we feel connected in our communities, sharing moments of hope, determination, and strength.
• It provides info on the apps that help people navigate safely, stay connected and communicate. It was built to help keep people safe. It was the first of its kind in the US and the world, and the results were a huge boost. Find out more in our Help Center
• We recognize your value inclusivity, but we do not tolerate hate speech or abusive comments. If you feel you can use our tools to protect yourself.

Microsoft
• You: "I'm going to give a special presentation to this year's winners of the annual AI Challenge for the US. I'll be using @Budapest and @Xbox as my voices will be heard." Microsoft: "Your presentation has been cancelled."
• Microsoft Canada chief executive Dr. Jeff Sheppard is introducing the new feature in #MSFT. It's the first major update to help address issues like COVID-19, and the need for digital learning. #MSInspire
Fine-Tuning Large Language Models
Fine-Tuning for Enhanced Representations

English Edition (2018-2024)
Marketing Automation with LLMs
Using AI for Search Engine Optimization
Marketing Automation with LLMs
Implementing LLMs in Marketing Strategies
• Supervising a research Master’s thesis in marketing automation focusing
on interconnected AI
• Objective
• Develop an advanced marketing automation system using AI techniques
• Methodology
• Use social media scraping to gather data for analysis and model training
• Neural topic modeling
• Apply unsupervised machine learning algorithms to identify key topics in the collected data
• Fine-tuning LLMs on engaging topics
• Train LLMs using engaging topic data to generate high-quality social media posts.
• Evaluation
• Assess the effectiveness of the generated content through engagement metrics such as likes, shares, and comments (a minimal sketch of this step follows below).
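A minimal sketch of that evaluation step (column names and numbers are invented for illustration): compare the average engagement of AI-generated posts against human-written ones.

```python
import pandas as pd

posts = pd.DataFrame({
    "source":   ["ai", "ai", "human", "human"],
    "likes":    [120, 95, 80, 132],
    "shares":   [30, 12, 18, 25],
    "comments": [14, 9, 11, 20],
})
posts["engagement"] = posts[["likes", "shares", "comments"]].sum(axis=1)

# Does fine-tuned output compete with human-written content?
print(posts.groupby("source")["engagement"].mean())
```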
Ethical Considerations in Training LLMs
Addressing Bias and Fairness in Model Training
Ethical Considerations in Training LLMs
Guidelines for Responsible AI

Ethical Considerations in Training LLMs
Case Study – Impossible?
The Future of AI
Emerging Trends in AI
Will We Run Out of Data?
Emerging Trends in AI
Small Language Models
Emerging Trends in AI
Federated Learning and Privacy-Preserving AI
Emerging Trends in AI
LLM-based Intelligent Agents
Emerging Trends in AI
Explainable AI
Emerging Trends in AI
Case Study – Why AI Models Are Collapsing
AI for Marketing
Julien Cloarec
Full Professor of Quantitative Marketing

/juliencloarec
