Large Language Models and Prompt Engineering
Duration: 40 hours, 5 days
Course Introduction:
Welcome to Large Language Models and Prompt Engineering, a comprehensive course designed to immerse
you in the dynamic and transformative field of LLMs. Over the span of five days, you'll explore the principles and
architecture of LLMs, with a particular focus on models from Cohere, Meta Llama 3, and the Oracle Cloud
Infrastructure (OCI) Generative AI Service. You'll gain hands-on experience with fine-tuning, customizing, and
deploying LLMs for various applications, including text classification, sentiment analysis, and question
answering.
This course is tailored for data scientists, machine learning engineers, software developers, and researchers who
aspire to leverage state-of-the-art LLMs in their NLP projects. Through a combination of lectures, practical
sessions, and real-world case studies, you will not only enhance your technical skills but also delve into ethical
considerations and responsible AI practices. By the end of this course, you'll be equipped with the knowledge
and tools to deploy robust, scalable, and ethical LLM-based applications.
Objectives:
• Understand the principles and architecture of Large Language Models (LLMs)
• Explore the capabilities and limitations of LLMs in natural language understanding and generation tasks
• Gain proficiency in fine-tuning and customizing LLMs for specific NLP tasks
• Explore practical applications of LLMs in text classification, sentiment analysis, question answering, and more
• Learn best practices for deploying LLM-based models in production environments
Prerequisites:
• Programming Skills: Working Knowledge of Python.
• Familiarity with Machine Learning: Understanding of fundamental machine learning
concepts, neural networks, and training models.
• Mathematics and Statistics: Basic understanding of linear algebra, calculus, probability, and
statistics.
• Data Handling Skills: Knowledge of data manipulation, pre-processing, and data visualization
techniques.
Target Audience:
• AI Enthusiasts.
• Software Developers.
• Data Scientists/Engineers.
• Tech Professionals and Innovators.
• Entrepreneurs/Managers.
Next Suggested Course:
• Mastering Generative AI and Prompt Engineering
What you will learn:
Part 1. Building Blocks of Large Language Models and Prompt Engineering
Day 1
1. Introduction to AI and Generative AI
• Artificial Intelligence and Machine Learning
• An Overview of Generative AI
• Mechanics of Generative AI
• Discriminative and Generative Models
Lab 1. Setting Up the Environment
2. Introduction to Large Language Models (LLMs)
• Overview of LLMs and their Significance
• Understanding the Architecture
• Components of LLMs
• Exploring Pre-trained LLMs and their applications
3. Prompt Engineering for Generative AI
• Introduction to Prompt Engineering
• Principles, Techniques and Best Practices
• Zero-Shot, One-Shot, and Few-Shot Prompts
• Tokens, Max Tokens, and Temperature
• Chain-of-Thought Prompting
• Formatting, Summarizing, and Inferring Prompts
• Retrieval-Augmented Generation (RAG)
Lab 2. Getting Started with Prompting
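As a taste of what Lab 2 covers, here is a minimal few-shot classification prompt. It is a sketch only, assuming the openai Python SDK (v1+) and an API key in the environment; the model name is illustrative, and any chat-capable LLM API could be substituted.
```python
# A minimal few-shot sentiment prompt (sketch only).
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The battery lasts all day.\nSentiment: Positive\n\n"
    "Review: The screen cracked within a week.\nSentiment: Negative\n\n"
    "Review: Setup was quick and painless.\nSentiment:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                 # illustrative model name
    messages=[{"role": "user", "content": few_shot_prompt}],
    temperature=0.0,                     # low temperature for a deterministic label
    max_tokens=5,                        # the expected answer is a single word
)
print(response.choices[0].message.content)
```
Keeping temperature at 0 makes the label as repeatable as possible, which is usually what you want for classification tasks.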
Day 2
4. Generative AI Applications
• Text-Based Applications
• Image-Based Applications
• Video Generation
• Audio Applications
• Generative AI Ecosystem
• ChatGPT and OpenAI
• Cohere Command R+
• Google's Bard and Hugging Face
• LaMDA and Llama 3
• Stable Diffusion and DALL-E 3
Lab 3. Generating Images Using Stable Diffusion
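A minimal text-to-image sketch in the spirit of Lab 3, assuming the Hugging Face diffusers library, PyTorch, and a CUDA GPU; the checkpoint name is an illustrative, commonly used example.
```python
# Text-to-image with Stable Diffusion (sketch only).
# Assumes: `pip install diffusers transformers torch` and a CUDA GPU;
# the checkpoint name is an illustrative, commonly used example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")                   # move the pipeline to the GPU

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")             # the pipeline returns PIL images
```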
5. Understanding Large Language Models
• Large Language Models
• Transformers, Sequence Models, RNNs, and Encoder-Decoder Architectures
• Embeddings and Tokenization
Lab 4. Prompt Engineering - Summarizing and Inferring
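To make the tokenization and embedding topics above concrete, a short sketch assuming the transformers and sentence-transformers packages; the model names are illustrative.
```python
# Tokenization and sentence embeddings (sketch only).
# Assumes: `pip install transformers sentence-transformers`; model names are illustrative.
from transformers import AutoTokenizer
from sentence_transformers import SentenceTransformer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
ids = tokenizer.encode("Prompt engineering is fun")
print(ids)                                    # token ids
print(tokenizer.convert_ids_to_tokens(ids))   # the underlying subword pieces

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(["Prompt engineering is fun", "LLMs predict tokens"])
print(vectors.shape)                          # one fixed-size vector per sentence
```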
6. Large Language Models from Cohere
• Introduction to Cohere and its LLMs
• Accessing pre-trained LLMs using the Cohere API
• Understanding Cohere's Offerings
• Fine-tuning LLMs
• Deploying LLMs
Lab 5. Working with Cohere Playground
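A programmatic counterpart to the Playground work in Lab 5, sketched with the cohere Python SDK (v1-style client) and an API key. The client and response shapes have changed across SDK versions, so treat this as indicative rather than definitive.
```python
# Calling a Cohere chat model (sketch only).
# Assumes: `pip install cohere` (v1-style client) and COHERE_API_KEY in the environment;
# check the current SDK docs for the exact client and response shape.
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

response = co.chat(
    model="command-r-plus",
    message="Summarize retrieval-augmented generation in two sentences.",
    temperature=0.3,
)
print(response.text)
```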
7. LLM Fine-Tuning and RAG
• Using Large Language Models
• Fine-Tuning the Model
• Retrieval-Augmented Generation (see the sketch after Lab 6)
• Controlling Hallucinations
Lab 6. Cohere Model Fine-Tuning for Chat
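Relating to the RAG and hallucination-control topics in this section, a library-agnostic sketch of the retrieval step: embed a few documents, pick the one closest to the question, and build a grounded prompt. It assumes sentence-transformers and numpy; the resulting prompt would be sent to any chat model.
```python
# The retrieval step behind RAG (sketch only).
# Assumes: `pip install sentence-transformers numpy`.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
    "Shipping to EU countries takes 3 to 5 business days.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

question = "How long do I have to return an item?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]

best = int(np.argmax(doc_vecs @ q_vec))       # cosine similarity via dot product
prompt = (
    "Answer the question using only the context below. "
    "If the context is insufficient, say so.\n\n"
    f"Context: {docs[best]}\n\nQuestion: {question}\nAnswer:"
)
print(prompt)                                  # pass this grounded prompt to the LLM
```
Grounding the prompt in retrieved text, and instructing the model to admit when the context is insufficient, is the basic lever for controlling hallucinations.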
Day 3
8. Open Source LLM Ecosystem
• The Open-Source LLM Ecosystem
• Deep Dive into Meta Llama 2, Llama 3, and Falcon
• Leveraging Models from Hugging Face
Lab 7. Working with Meta Llama 3
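A sketch of running Meta Llama 3 through Hugging Face transformers, as in Lab 7. It assumes the gated model license has been accepted on the Hub, you are logged in, and a suitable GPU is available.
```python
# Running Meta Llama 3 locally via Hugging Face transformers (sketch only).
# Assumes: model license accepted on the Hub, `huggingface-cli login`,
# `pip install transformers accelerate torch`, and enough GPU memory.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain few-shot prompting in one paragraph."}]
output = generator(messages, max_new_tokens=150)
print(output[0]["generated_text"][-1]["content"])   # the assistant's reply
```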
Part 2. Frameworks and Modules
9. LangChain Ecosystem
• LangChain Ecosystem
• LangChain Concepts
• Using Multiple Chains
• Working with Chains
Lab 8. Getting Started with LangChain
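A minimal LangChain chain (prompt, model, and output parser composed with the LangChain Expression Language), in the spirit of Lab 8. It is a sketch assuming the langchain-core and langchain-openai packages and an OpenAI key; any supported chat-model integration could be swapped in.
```python
# A minimal LangChain chain composed with LCEL (sketch only).
# Assumes: `pip install langchain-core langchain-openai` and an OpenAI API key.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Write a one-sentence product description for: {product}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.7)   # illustrative model name

chain = prompt | llm | StrOutputParser()                 # prompt -> model -> string
print(chain.invoke({"product": "a solar-powered backpack"}))
```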
10. LlamaIndex and Its Usage
• LlamaIndex and Its Usage
Lab 9. Getting Started with LlamaIndex
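A minimal LlamaIndex sketch matching Lab 9: index the files in a local data/ folder and query them. It assumes the llama-index package (v0.10+ module layout) and an API key for the default embedding model and LLM.
```python
# Indexing and querying local documents with LlamaIndex (sketch only).
# Assumes: `pip install llama-index`, files in ./data, and an API key for the
# default embedding model and LLM.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()    # read local files
index = VectorStoreIndex.from_documents(documents)       # embed and index them

query_engine = index.as_query_engine()
response = query_engine.query("What are the key points of these documents?")
print(response)
```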
11. Working with Memory and Agents
• Memory and Agents
• Haystack and Its Usage
Lab 10. Working with Memory and Haystack with Cohere
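A library-agnostic sketch of conversational memory, relevant to Lab 10: keep a rolling window of recent turns and prepend it to each new prompt. The call_llm function is a hypothetical placeholder for any chat-completion call.
```python
# Conversational memory as a rolling window of turns (library-agnostic sketch).
from collections import deque

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real Cohere/OpenAI chat-completion call here.
    return "(model reply)"

history = deque(maxlen=6)                 # keep only the last 6 turns (3 exchanges)

def chat(user_message: str) -> str:
    context = "\n".join(history)
    prompt = f"{context}\nUser: {user_message}\nAssistant:"
    reply = call_llm(prompt)
    history.append(f"User: {user_message}")
    history.append(f"Assistant: {reply}")
    return reply

print(chat("What is prompt engineering?"))
```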
Day 4
12. Vector Databases and Embedding Techniques
• Vector Databases
• Working with Embeddings
• Embedding Models
• Capabilities and Benefits
• Embeddings for Images and Text
Lab 11. Working with Chroma Vector Database
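A minimal Chroma sketch for Lab 11, assuming the chromadb package; Chroma applies its default embedding function unless you supply your own.
```python
# Storing and querying documents in Chroma (sketch only).
# Assumes: `pip install chromadb`; the default embedding function is used here.
import chromadb

client = chromadb.Client()                        # in-memory client
collection = client.create_collection("course_notes")

collection.add(
    documents=[
        "Transformers use self-attention over token embeddings.",
        "RAG grounds model answers in retrieved documents.",
        "Temperature controls the randomness of sampling.",
    ],
    ids=["n1", "n2", "n3"],
)

results = collection.query(query_texts=["How do I reduce hallucinations?"], n_results=1)
print(results["documents"][0])                    # the closest stored document
```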
Part 3. A Developer Day
13. Generative AI - Legal, Privacy, and Security Concerns
• Legal, Privacy, and Security Concerns
• Concerns around Intellectual Property (IP)
• Responsible AI
• Enterprise Best Practices
14. Generative AI for Software Engineering
• Leveraging Generative AI to Improve Quality and Productivity in Software Engineering
• GitHub Copilot
• Prompts for Code Generation
• Prompts for Test Case Generation
• Prompts for Code Translation
Lab 12. Generating Code and Unit Tests with Generative AI
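A sketch of prompting a model to generate unit tests, as practiced in Lab 12. It assumes the openai Python SDK; the model name is illustrative and the function under test is deliberately trivial.
```python
# Prompting a model to write unit tests for existing code (sketch only).
# Assumes: `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

source = '''
def slugify(title: str) -> str:
    return "-".join(title.lower().split())
'''

prompt = (
    "Write pytest unit tests for the following function. "
    "Cover normal input, empty input, and extra whitespace.\n\n" + source
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                           # illustrative model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)
print(response.choices[0].message.content)         # review before adding to the suite
```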
15. Building Applications Using Generative AI
• Role of Developers as Consumers of Generative AI APIs
• Building Applications That Leverage Generative AI Outputs
• Session & Chat History Management Best Practices
• Framework for Output Validation & Continuous Improvement of Prompts (see the sketch after Lab 13)
• Deployment Options and Best Practices
Lab 13. Brainstorming Story Ideas with Cohere and Stable Diffusion
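The output-validation sketch referenced above: ask for JSON, parse it, and retry with an error hint when parsing fails. The call_llm function is a hypothetical placeholder for any chat-completion call.
```python
# A simple output-validation loop for structured responses (sketch only).
import json

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion API call here.
    return '{"title": "Demo", "tags": ["ai", "llm"]}'

def get_structured_output(task: str, max_attempts: int = 3) -> dict:
    prompt = f"{task}\nRespond with valid JSON only."
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)                 # validation step
        except json.JSONDecodeError as err:
            # Feed the parse error back so the next attempt can self-correct.
            prompt = f"{task}\nYour previous reply was not valid JSON ({err}). Respond with valid JSON only."
    raise ValueError("Model did not return valid JSON after retries")

print(get_structured_output("Suggest a title and tags for an article about RAG."))
```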
Part 4. Case Studies and Project
Day 5
16. Building Applications Using Gen AI - Case Studies
• Practical Case Study Using OpenAI
• Build a Q&A Chain Chatbot for HTML/PDF Documents - Leveraging RAG
• Domain-Specific Chatbot - Fine-Tuning of Models
• Automate AI Workflows with AutoGPT & LangChain
• Use DALL-E to Generate Images from Text
Lab 14. Domain-Specific Chatbot - Fine-Tuning of Models
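A sketch of preparing a small chat fine-tuning dataset as JSONL, the kind of input a lab like Lab 14 starts from. Hosted fine-tuning services (including Cohere's chat fine-tuning) expect some variant of this layout; the field and role names here are illustrative, so check the provider's current documentation.
```python
# Preparing a small chat fine-tuning dataset as JSONL (sketch only).
# Field and role names are illustrative; consult the fine-tuning provider's docs.
import json

examples = [
    {"messages": [
        {"role": "User", "content": "What is the claim submission deadline?"},
        {"role": "Chatbot", "content": "Claims must be submitted within 90 days of treatment."},
    ]},
    {"messages": [
        {"role": "User", "content": "Is physiotherapy covered?"},
        {"role": "Chatbot", "content": "Yes, up to 12 sessions per year with a referral."},
    ]},
]

with open("finetune_chat.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")        # one training example per line
```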
17. Document Insights Extraction
• Named Entity Recognition with spaCy
• Article Recommendation and Insight Extraction with Generative AI
Lab 15. Named Entity Recognition with spaCy
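A minimal spaCy NER sketch for Lab 15, assuming spaCy and its small English model are installed.
```python
# Named Entity Recognition with spaCy (sketch only).
# Assumes: `pip install spacy` and `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Oracle announced a new Generative AI service in Austin in 2024.")

for ent in doc.ents:
    print(ent.text, ent.label_)        # e.g. Oracle ORG, Austin GPE, 2024 DATE
```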
18. Solution Architecture for Gen AI
• Solution Guidelines - Well-Architected Principles
• Chat Session Management
• Standard Architectures for Various Use Cases
• Manage Token Limitations (see the sketch after Lab 16)
• Deployment Standards - Cloud vs. On-Premises
• LocalChat
Lab 16. Working with LocalChat
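Finally, the token-budget sketch referenced under "Manage Token Limitations": truncate context to a fixed number of tokens before sending it to a model. It assumes the tiktoken tokenizer; the encoding name and budget are illustrative, and real limits depend on the target model.
```python
# Enforcing a token budget before sending context to a model (sketch only).
# Assumes: `pip install tiktoken`; encoding name and budget are illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fit_to_budget(text: str, max_tokens: int = 512) -> str:
    tokens = enc.encode(text)
    if len(tokens) <= max_tokens:
        return text
    return enc.decode(tokens[:max_tokens])   # truncate to the budget

long_context = "A very long retrieved document... " * 200
print(len(enc.encode(fit_to_budget(long_context))))   # at most 512
```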