AI Walkthrough Series
Exploring AI-LLM:
A Low-Math, Low-Code Approach
Part 1 - 2
27 Sept 2024
Doddy Widanto
Sr. Solution Engineer, F5
Why do I do this?
• Hype is often the enemy of true understanding
• To shorten people’s learning journey toward AI-LLM
• To “reclaim” a clear view of what AI-LLM can and can’t do
2 © 2022
Agenda
• Disclaimer
• Why do we learn AI-LLM?
• Part 1: Uncover AI Myths
• Part 2: Recap - Neural Network
• Part 3: GPT & Embeddings
• Part 4: Attention is all you need (?)
• Part 5: Multi-Layer Perceptron (MLP), the place where the facts live?
• Part 6: Large Language Model (LLM)
• Part 7: Retrieval Augmented Generation (RAG)
• Part 8: GPU powers AI
• Part 9: LLM in practice (demo?)
• Part 10: LLM APIs
• Part 11: NVIDIA AI
• Part 12: GPU as a Service (GPUaaS)
• Part 13: LLM as a Service
• Part 14: Why should we care?

Bonus chapter
• Sentiment analysis (wip)
• LLM monitoring with OTel (wip)
• Are LLMs intelligent? (wip)
Disclaimer
• For the sake of understanding…
• Math will be kept to a minimum
• Code will be kept to a minimum
• Slides/animations are shamelessly snapshotted from other sources!
• Understanding AI & LLMs and applying them practically is the main goal
Why do we learn AI-LLM?
• Trends
• Uncover myths/hypes
• Career?
• Other philosophical reasons…
• <Don’t know why I need this>
→ We can use AI-LLM better if we understand what happens under the hood.
Part 1:
Uncover AI Myths
AI Myths
https://www.gartner.com/smarterwithgartner/5-ai-myths-debunked
https://ai.google/static/documents/exploring-6-myths.pdf
More AI Myths?
We watch too many AI movies…
AI-LLM Taxonomy
• Artificial Intelligence: the idea of a machine that can mimic human intelligence
• Machine Learning: teaching a machine to perform a specific task and produce accurate results by identifying patterns
• Deep Learning: a subset of machine learning that uses artificial neural networks to mimic the learning process of the human brain. Deep learning focuses on learning from large amounts of data in order to predict or classify something
• Generative AI: concentrates on producing new content that mimics real data, based on patterns in existing data
• Large Language Model (LLM): deep neural networks trained to produce or comprehend language using enormous amounts of text data
https://www.researchgate.net/figure/LLMs-within-the-AI-taxonomy-LLMs-exist-as-a-subset-of-deep-learning-models-which-are-a_fig1_378394229
Additional terminologies
• Natural Language Processing (NLP): a broad field of research into methods for creating, interpreting, and comprehending human language, e.g. text classification, language translation, summarization, question answering
• Neural Network: the underlying technology of deep learning; consists of interconnected neurons (nodes) arranged in a layered structure
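As a minimal sketch of that layered structure, here is a hypothetical 2-input network with one hidden layer in plain Python; all weights and biases below are made up for illustration:

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through an activation
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(x):
    # Hidden layer: two neurons, each connected to both inputs
    h1 = neuron(x, [0.5, -0.6], 0.1)
    h2 = neuron(x, [0.3, 0.8], -0.2)
    # Output layer: one neuron connected to both hidden neurons
    return neuron([h1, h2], [1.0, -1.0], 0.0)

print(round(forward([1.0, 0.0]), 4))
```

Real networks differ only in scale: more neurons per layer, more layers, and weights learned from data rather than chosen by hand.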
AI Mind Map
https://medium.com/ml-ai-study-group/ai-mind-map-a70dafcf5a48
AI/ML vs Data Science
https://python.plainenglish.io/ai-data-science-and-
machine-learning-unraveling-the-connection-
41714cb3443b
Part 2:
Recap – Neural Network
Back Propagation
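The slides in this section illustrate back propagation graphically. As a minimal numeric sketch, a single sigmoid neuron can be trained by gradient descent on one made-up example (the input, target, and learning rate below are all hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One neuron: y = sigmoid(w * x + b), trained to output 1.0 for x = 2.0
w, b = 0.0, 0.0
x, target = 2.0, 1.0
lr = 0.5  # learning rate

for _ in range(100):
    y = sigmoid(w * x + b)      # forward pass
    error = y - target          # loss = 0.5 * error**2
    grad = error * y * (1 - y)  # chain rule: dLoss/d(pre-activation)
    w -= lr * grad * x          # propagate the gradient back to each parameter
    b -= lr * grad

print(round(sigmoid(w * x + b), 3))  # close to the target of 1.0
```

In a multi-layer network the same chain rule is applied layer by layer, from the output back toward the input, which is where the name comes from.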
Some types of Neural Networks
1. Feedforward Neural Networks (FNN): Basic neural networks with no cycles; includes Multi-Layer Perceptrons (MLPs).
2. Convolutional Neural Networks (CNN): Specialized for image and spatial data processing using convolutional layers.
3. Recurrent Neural Networks (RNN): Designed for sequential data, with variants like LSTM and GRU for handling long-term dependencies.
4. Radial Basis Function Networks (RBFN): Uses radial basis functions; suitable for function approximation and time-series prediction.
5. Autoencoders: Unsupervised networks for data compression and reconstruction; includes denoising and variational autoencoders.
6. Generative Adversarial Networks (GANs): Comprise generator and discriminator networks; used for generating synthetic data.
7. Self-Organizing Maps (SOM): Used for clustering and visualizing high-dimensional data into lower dimensions.
8. Deep Belief Networks (DBN): Layered networks for unsupervised learning and feature extraction.
9. Modular Neural Networks: Multiple independent networks working together for complex tasks.
10. Transformer Networks: Utilize self-attention for handling sequences; widely used in NLP and sequence tasks.
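Since transformer networks come up again in Part 4, a minimal sketch of the scaled dot-product attention they rely on may help; the query, key, and value vectors below are made up for illustration:

```python
import math

def softmax(xs):
    # Turns raw scores into weights that sum to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    # Score each key against the query, normalize the scores,
    # then blend the values weighted by those scores
    scale = math.sqrt(len(query))
    weights = softmax([dot(query, k) / scale for k in keys])
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Toy example: the query matches the first key most strongly,
# so the output leans toward the first value vector
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, K, V))
```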
Convolutional Neural Network (CNN)
• Type of data: image
• Advantage: high accuracy in image recognition problems
https://www.softwebsolutions.com/resources/difference-between-cnn-rnn-ann.html
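The convolution at the heart of a CNN can be sketched in one dimension (images use the same idea in 2D); the signal and kernel below are made-up toy values:

```python
def convolve1d(signal, kernel):
    # Slide the kernel across the signal; each output value is the
    # weighted sum of the window it covers (no padding, stride 1)
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A difference kernel: responds strongly where neighboring values jump,
# which is the 1D analogue of edge detection in images
signal = [0, 0, 0, 1, 1, 1]
kernel = [-1, 1]
print(convolve1d(signal, kernel))  # → [0, 0, 1, 0, 0]
```

In a real CNN the kernel weights are learned during training rather than fixed by hand, and many kernels run in parallel over 2D image patches.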
Recurrent Neural Network (RNN)
• Type of data: sequence
• Advantage: retains information across the sequence; well suited to time-series prediction
https://www.softwebsolutions.com/resources/difference-between-cnn-rnn-ann.html
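A minimal sketch of the recurrent idea, with made-up weights: the hidden state is updated at every time step, so the final state depends on the whole sequence, not just the last item:

```python
import math

def rnn_step(h, x, w_h, w_x, b):
    # New hidden state mixes the previous state with the new input
    return math.tanh(w_h * h + w_x * x + b)

def run(sequence):
    h = 0.0  # hidden state: carries information across time steps
    for x in sequence:
        h = rnn_step(h, x, w_h=0.5, w_x=1.0, b=0.0)
    return h

# Different histories give different final states, even when
# the most recent inputs are identical
print(round(run([1.0, 0.0, 0.0]), 3))
print(round(run([0.0, 0.0, 0.0]), 3))
```

LSTM and GRU variants add gating to this update so that information can survive across much longer sequences.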
Feedback