
Module 1 Notes-AI

The document outlines a course on Introduction to AI and its Applications at RV Institute of Technology and Management, detailing the definition, history, types, and functionalities of artificial intelligence. It discusses the advantages and disadvantages of AI, as well as the distinctions between human and machine intelligence. Additionally, it covers key concepts such as machine learning, deep learning, and the structure of intelligent agents.


Rashtreeya Sikshana Samithi Trust
RV Institute of Technology and Management®
(Affiliated to VTU, Belagavi)
JP Nagar, Bengaluru – 560076

Department of Computer Science and Engineering

Course Name: Introduction to AI and its Applications


Course Code: 1BAIA103/203
I Semester 2025 Scheme
Prepared By:
1. Prof Padmasree N
Assistant Professor, Department of Computer Science and
Engineering, RVITM, Bengaluru – 560076
Email: [email protected]

2. Prof. Uppin Rashmi


Assistant Professor, Department of Computer Science and
Engineering, RVITM, Bengaluru – 560076
Email: [email protected]

3. Prof. Manjusha Kulkarni


Assistant Professor, Department of Computer Science and
Engineering, RVITM, Bengaluru – 560076
Email: [email protected]

Chapter 1

Introduction to Artificial Intelligence

1.1 What is Artificial Intelligence?

• Definition: Artificial intelligence (AI) is the science and engineering of making intelligent machines,
particularly intelligent computer programs.
• Layman's View: AI means intelligence demonstrated by machines that mimic human actions, learn
from experiences, adjust to new inputs, and perform human-like tasks.
• Researcher's View: AI is a set of algorithms that generates results without explicit instructions, making
machines capable of thinking and acting rationally and humanly.
• Core Tasks: AI applications perform specialized tasks by processing large amounts of data and
recognizing patterns. They can learn from experience, recognize objects, understand/respond to language,
and make decisions to solve real-world problems.
• Goal: To build machines and algorithms capable of performing computational tasks that typically
require human-like brain functions.
• NITI Aayog Definition: AI refers to the ability of machines to perform cognitive tasks like thinking,
perceiving, learning, problem-solving, and decision-making. It has evolved beyond mimicking human
intelligence, enabling intelligent systems to take over tasks, enhance connectivity, and improve
productivity.

1.1.1 How Does AI Work?


• AI systems operate effectively when given large amounts of labelled training data.
• This data is analysed to discover correlations and patterns, which are then used to make predictions
about future states.
• Example: A chatbot learns to converse with humans by being fed examples of text chats.
• AI programming focuses on three cognitive skills:
◦ Learning Processes: Acquiring data and creating rules (algorithms) to turn data into actionable
information.
◦ Reasoning Processes: Choosing the right algorithm for the desired outcome.
◦ Self-Correction Processes: Continuously enhancing algorithms for accurate results.
• Importance: AI allows businesses to gain insights, automate redundant jobs, reduce costs, and increase
revenue.

1.1.2 Advantages and Disadvantages of Artificial Intelligence



• Advantages:
◦ Performs well on tasks using detailed data.
◦ Takes less time to process huge data volumes.
◦ Generates consistent and accurate results.
◦ Can be used 24x7.
◦ Optimises tasks by better utilising resources.
◦ Automates complex processes.
◦ Minimises downtime by predicting maintenance needs.
◦ Enables new, better quality, and faster product production.
• Disadvantages:
◦ Involves more cost.
◦ Requires technical expertise to develop and use AI applications.
◦ Lack of trained professionals.
◦ Incomplete or inaccurate data can lead to disastrous results.
◦ Lacks the capability to generalise tasks.
1.2 History of Artificial Intelligence
• 1943: Warren McCullough and Walter Pitts proposed the first mathematical model for a neural network.
• 1950:
◦ Alan Turing demonstrated the Turing Test.
◦ Marvin Minsky and Dean Edmonds built the first neural network computer.
◦ Claude Shannon published on programming a computer for chess.
◦ Isaac Asimov published "Three Laws of Robotics".
• 1952: Arthur Samuel developed a self-learning program to play checkers.
• 1954: IBM computer translated 60 Russian sentences into English.
• 1956: John McCarthy coined the term ‘artificial intelligence’ and discussions at a conference marked
AI's birth.
• 1958: John McCarthy developed the AI programming language Lisp.
• 1959: Allen Newell, Herbert Simon, and J.C. Shaw developed the General Problem Solver (GPS);
Arthur Samuel coined ‘machine learning’.
• 1963: John McCarthy started the AI Lab at Stanford.
• 1966: Joseph Weizenbaum developed ELIZA, an early natural language processing program.
• 1969: First successful expert system for diagnosing blood infections developed at Stanford.
• 1972: Logic programming language PROLOG was created.
• 1974-1980: The 'First AI Winter' due to DARPA funding cutbacks.
• 1980: DEC developed R1, the first successful commercial expert system.
• 1982: Japan launched the Fifth Generation Computer Systems (FGCS) project.
• 2008: Google introduced speech recognition in its iPhone app.
• 2011: Apple released Siri, an AI-powered virtual assistant.
• 2012: Andrew Ng's Google Brain project enabled a neural network to recognise a cat from millions of
YouTube videos.

• 2014: Google's first self-driving car passed a state driving test; Amazon released Alexa.
• 2015: Baidu’s Minwa supercomputer achieved higher image identification accuracy than humans using a
convolutional neural network.
• 2016: Google DeepMind’s AlphaGo defeated world champion Go player Lee Sedol; Sophia, the first
'robot citizen', was created.
• 2018: Google released NLP engine BERT; Waymo One self-driving service launched.
• 2020: Baidu released its LinearFold AI algorithm for SARS-CoV-2 vaccine development.
1.3 Types of Artificial Intelligence
AI systems can be classified into two main categories and four functional groups:
• Weak AI (Narrow AI):
◦ Designed to perform a specific task.
◦ Examples: Siri, Alexa, weather prediction, stock price optimisation, Google search, image
recognition, self-driving cars, IBM Watson.
◦ Operates within a limited context and is the most successful AI realisation to date.
• Strong AI (Artificial General Intelligence - AGI, or Artificial Super Intelligence - ASI):
◦ Aims to resemble the human brain, utilising cognitive skills and fuzzy logic to perform tasks for
which it was not explicitly trained.
◦ Requires capabilities like visual perception, speech recognition, decision-making, and language
translation.
◦ Currently seen only in sci-fi movies, but it is believed that ASI will surpass human intelligence.
• Four Groups Based on Functionality:
◦ Reactive Machines:
▪ Very basic, with no memory to store past experiences for future actions.
▪ Perceive the world and react to it.
▪ Example: IBM’s Deep Blue (defeated chess grandmaster Kasparov).
▪ Cannot improve with practice.
▪ Limited number of specialized tasks, trustworthy, and reliable.
▪ Google's AlphaGo is also a game-playing reactive machine, but evaluates future moves using
neural networks.
◦ Limited Memory:
▪ Retain data for a short period and use it for a limited time.
▪ Cannot permanently add data to an experience library.
▪ Used in autonomous vehicles (e.g., storing recent speed of nearby cars, distance between cars,
speed limits).
▪ More complex than reactive machines, continuously train models, and improve with feedback.
▪ Major machine learning models applying limited memory AI:
• Reinforcement Learning: Continuous learning via trial-and-error for better predictions.
• Long Short-term Memory (LSTM): Uses past data to predict the next item in a sequence,
prioritising recent information.
• Evolutionary Generative Adversarial Networks (E-GAN): Evolves to explore new ways of
utilising past experiences for new decisions, using simulations and statistics.
◦ Theory of Mind:
▪ Focuses on imitating the human brain by forming representations about the world, including
thoughts, emotions, and memories.
▪ Currently theoretical, but may become reality.
▪ Machines would make decisions considering feelings from self-reflection and determination.
◦ Self-Awareness:
▪ The next step after Theory of Mind, incorporating human-level consciousness.
▪ Systems would understand their own existence and use that information to deduce others' feelings.
▪ Could interpret user's feelings from both explicit communication and manner of communication.
▪ Knowledge of conscious context would enable responses to events.
▪ Does not yet exist.
1.4 Is Artificial Intelligence the Same as Augmented Intelligence and Cognitive Computing?
• Augmented Intelligence:
◦ Weak AI that simply improves products and services.
◦ Example: Automatically highlighting vital information in business.
• Artificial Intelligence (True AI/Strong AI/AGI):
◦ The future AI that would far surpass the human brain's ability.
◦ Currently largely in the realm of science fiction, but technologies like quantum computing could make
it a reality.
◦ In reference to machines, it involves simulating how humans sense, learn, process, and react to
information.
• Cognitive Computing:
◦ Term for products and services that mimic and augment human thought processes.
1.5 Machine Learning and Deep Learning
• Machine Learning (ML):
◦ A branch of computer science that analyses data and identifies patterns to teach a machine to deduce
results and make decisions without human intervention.
◦ ML algorithms learn from experiences rather than explicit instructions.
◦ Automatically learn and improve by analysing datasets and comparing output, repeating the process
until accuracy improves.
◦ Enables machines to make data-driven decisions rather than being explicitly programmed for every
task.
◦ Relationship with AI: ML is an application and a subset of AI; AI is the superset, and ML is a
way to achieve AI.
• Traditional Programming vs. Machine Learning:
◦ Traditional Programming: Manually creating a program with explicit rules/code in procedural
languages (e.g., C, Java, Python) that accepts data and returns output.
◦ Machine Learning: Automated process where algorithms automatically formulate rules from data,
adding embedded analytics (e.g., natural language interfaces, outlier detection, recommendations). Uses
pre-written algorithms that learn how to solve problems themselves.


◦ Capabilities: Different, but ML supplements conventional programming (e.g., ML for predictive
algorithms, traditional for UI design or data visualisation).
• Deep Learning (DL):
◦ An advanced machine learning technique that processes data inputs through multiple layers of
biologically-inspired neural networks.
◦ These hidden layers allow machines to learn "deeply," making connections and weighting inputs for
best results.
◦ Relationship with ML: DL is an advanced ML technique and a subset of ML.
• How AI Works (features):
◦ Autonomous: Makes independent decisions without human intervention, learning through input data
and past experiences.
◦ Predict and Adapt: Understands data patterns for decisions and predictions.
◦ Continuously Learns: Learns from patterns in data.
◦ Reactive: Perceives a problem and acts on perception.
◦ Data Driven: Rise of data-centric AI systems due to cheaper data storage, fast processors, and
sophisticated deep learning algorithms.
◦ Accurate Predictions: Can outperform humans by learning from past experiences; success depends on
correctly labelled large datasets.
◦ Futuristic: Scope continuously expanding.

Chapter 3: Artificially Intelligent Machine


3.1 Defining Intelligence
• Howard Gardner's Categories of Intelligence:
◦ Linguistic intelligence: Ability to speak, recognise, and use phonology, syntax, and semantics
(narrators, orators).
◦ Musical intelligence: Ability to create, communicate with, and understand sounds, pitch, rhythm
(musicians, composers).
◦ Logical-mathematical intelligence: Ability to use and understand complex, abstract ideas
(mathematicians, scientists).
◦ Spatial intelligence: Ability to perceive visual/spatial information, change, and recreate visual images
(map readers, astronauts, physicists).
◦ Bodily-kinesthetic intelligence: Ability to use the body to solve problems or manipulate objects
(players, dancers).
◦ Intrapersonal intelligence: Ability to distinguish one's own feelings, intentions, and motivations
(Gautam Buddha).
◦ Interpersonal intelligence: Ability to recognise and differentiate others' feelings, beliefs, and
intentions (mass communicators, interviewers).
• A machine is artificially intelligent if it exhibits at least one, and at most all, of these intelligences.
3.2 Components of Intelligence
Intelligence is composed of:


• Reasoning: Processes used in making decisions and predictions. Two types: Inductive and
Deductive.
◦ Inductive Reasoning: Uses specific observations to make broad general statements; conclusion is
likely but not certain.
◦ Deductive Reasoning: Uses general premises to reach specific, certain conclusions.
• Learning: Gaining knowledge or skill through studying, practicing, being taught, or experiencing;
the ability to improve awareness.
◦ Auditory learning: By listening.
◦ Episodic learning: Remembering sequences of events.
◦ Motor learning: By precise muscle movement.
◦ Observational learning: By watching and imitating.
◦ Perceptual learning: By recognising stimuli seen before.
◦ Relational learning: Differentiating stimuli based on relational properties.
◦ Spatial learning: Through visual stimuli.
◦ Stimulus-response learning: Performing behaviour when stimulus is received.
• Problem Solving: Finding a desired solution when the path is blocked by hurdles, using
decision-making.
• Perception: Acquiring, interpreting, selecting, and organizing sensory information. Humans use sensory
organs; AI systems use data from sensors.
• Linguistic intelligence: Ability to use, comprehend, speak, and write verbal/written language in
interpersonal communication.
3.3 Differences Between Human and Machine Intelligence
• Perception: Humans perceive by patterns; machines analyse data with respect to rules.
• Information Storage/Recall: Humans use patterns; machines use searching algorithms.
• Handling Missing/Distorted Information: Humans can deduce missing/distorted info; machines lack
this ability with high accuracy.
3.4 Agent and Environment
• An AI system comprises an agent and its environment.
• Agent: Anything that makes decisions and is capable of perceiving its environment (e.g., a person, firm,
machine, software).
• Sensors: Help agents perceive their environment.
• Effectors: Help agents act upon their environment.
• Types of Agents:
◦ Human agent: Sensory organs as sensors, hands/legs/mouth as effectors.
◦ Robotic agent: Cameras/infrared range finders as sensors, motors/actuators as effectors.
◦ Software agent: Bit strings as its percepts and actions.
3.4.1 Key Terminology
• Performance Measure of Agent: Criteria determining agent's success.
• Behaviour of Agent: Action performed after receiving a percept.
• Percept: Perceptual inputs given to an agent at a specific instance.


• Percept Sequence: List of all percepts received by an agent to date.
• Agent Function: A map from the percept sequence to an action.
3.4.2 Rationality
• Rationality: Inculcates responsibility, sensibility, and judgment, empowering the agent to perform
expected actions after perceiving.
• Dependence: Rationality depends on:
◦ Agent’s performance measure.
◦ Agent’s percept sequence so far.
◦ Agent’s prior knowledge about the environment.
◦ Agent’s possible actions.
• A rational agent performs actions to maximise its performance.
• A problem solved by an agent is characterised by Performance, Environment, Actuators, and Sensors
(PEAS).
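As an illustration, a PEAS description can be written down as a simple mapping; the self-driving taxi entries below are illustrative values, not taken from the notes.

```python
# PEAS description for a self-driving taxi (illustrative values, not from the notes).
peas = {
    "Performance": ["safety", "speed", "legal driving", "passenger comfort"],
    "Environment": ["roads", "traffic", "pedestrians", "weather"],
    "Actuators":   ["steering", "accelerator", "brake", "horn"],
    "Sensors":     ["cameras", "GPS", "speedometer", "odometer"],
}
```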
3.4.3 Structure of Intelligent Agents
• Agent = Architecture + Agent Program.
◦ Architecture: The machinery on which an agent works.
◦ Agent Program: Implementation of an agent function.
3.4.4 Types of Agents
• Simple Reflex Agents:
◦ Choose actions based only on the current percept.
◦ Rational only if environment is completely observable.
◦ Work using condition-action rules (If condition then action).
◦ Problems: Limited intelligence, no knowledge of past states, updates required for environment
changes, can get stuck in infinite loops in partially observable environments.
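The condition-action rules above can be sketched as a lookup table; the two-room vacuum world used here is a hypothetical example, not from the notes.

```python
# A simple reflex agent: the action depends only on the current percept.
# The vacuum-world rules below are illustrative (hypothetical states/actions).
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    """Match the current percept against condition-action (if-then) rules."""
    location, status = percept
    return RULES[(location, status)]
```

Note that the agent keeps no state: a percept seen twice always yields the same action, which is why such agents loop in partially observable environments.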
• Model-Based Reflex Agents:
◦ Use a model of the world to choose actions, maintaining an internal state.
◦ Model: Knowledge about "how things happen in the world".
◦ Internal state represents unobserved aspects of the current state based on percept history.
◦ Requires information on how the world evolves and how agent actions affect it.
• Goal-Based Agents:
◦ Choose actions to achieve specific goals.
◦ Offer more flexibility than reflex agents, as knowledge is explicitly modelled and modifiable.
• Utility-Based Agents:
◦ Used when goals conflict or are difficult to achieve.
◦ Choose actions based on a preference (utility) for each state, maximising expected utility or
'happiness'.
• Learning Agent:
◦ Learns from past experiences.
◦ Starts with basic knowledge and adapts automatically through learning.
◦ Four main conceptual components:


▪ Learning element: Responsible for improvements by learning from the environment.
▪ Critic: Provides feedback to the learning element, evaluating performance against a standard.
▪ Performance element: Chooses external action.
▪ Problem generator: Suggests actions for new and informative experiences.
3.4.5 The Nature of Environments
• AI programs can operate in confined or unlimited domains.
• Softbots: Software agents operating in detailed, complex environments, choosing actions in real-time.
• Turing Test Environment: Artificial agents tested on equal ground with real agents to determine
intelligent behaviour.
◦ Turing Test: A human interrogator interacts via typing with a human and a machine in separate
rooms. If the interrogator cannot distinguish the machine from the human, the machine is considered
intelligent.
3.4.6 Types of Environments
Environments can be categorised along several dimensions:
• Discrete/Continuous:
◦ Discrete: Limited, distinct, clearly defined states (e.g., chess game).
◦ Continuous: Infinite or large number of states (e.g., self-driving car).
• Observable/Partially Observable/Unobservable:
◦ Observable: Agent can determine the complete state from percepts at each time point.
◦ Partially Observable: Agent cannot determine complete state (e.g., due to noise, inaccuracy, missing
data, or task framework).
◦ Unobservable: Agent has no sensors.
◦ Fully observable environments do not need to maintain internal state; partially observable ones do.
◦ Example: Classic chess is fully observable; Kriegspiel chess is partially observable.
• Accessible/Inaccessible:
◦ Accessible: Agent's sensory apparatus has access to the complete state of the environment (e.g., an empty
room's temperature).
◦ Inaccessible: Complete and accurate information about the state is not obtainable (e.g., Earth event
information).
• Episodic/Non-episodic (Sequential):
◦ Episodic: Each episode involves perceiving and acting; action quality depends only on that episode;
subsequent episodes are independent. Simpler as agent doesn't need to think ahead.
◦ Non-episodic (Sequential): Agent requires memory to store past actions to determine next best
actions; current decisions affect future decisions.
3.5 Search
• AI agents use search algorithms to achieve tasks, especially in single-player games (e.g., Sudoku,
crossword).
• A search problem consists of:
◦ State space: Set of all possible states an agent can attain.
◦ Start state: Where searching begins.
◦ Goal test: Function to check if current state is the goal state.


◦ Solution: Sequence of actions (plan) transforming start to goal state, realised by search algorithms.
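The four parts listed above can be made concrete on a toy graph; the road map below is invented for illustration.

```python
# The four parts of a search problem on a toy road map (invented for illustration).
GRAPH = {            # state space: every state and the states reachable from it
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "G"],
    "C": ["G"],
    "G": [],
}
START = "S"          # start state: where searching begins

def goal_test(state):
    """Goal test: check whether the current state is the goal state."""
    return state == "G"

# A solution is a sequence of actions from START to a state passing goal_test,
# e.g. the plan S -> B -> G, which a search algorithm must discover.
```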
3.5.1 Types of Search Algorithms
• Categorised as informed and uninformed.
3.5.2 Properties of Search Algorithms
• Completeness: Returns at least one solution if one exists.
• Optimality: Returns the best solution (lowest path cost).
• Time and space complexity: Time taken and maximum storage required.
3.6 Uninformed Search Algorithms (Blind Search)
• Have no additional information about the goal state beyond the problem definition.
• Only know how to traverse or visit nodes.
• Blindly follow techniques regardless of efficiency.
• Information for each algorithm: Problem graph, Strategy, Fringe (data structure for possible states),
Tree, Solution plan.
3.6.1 Depth First Search (DFS)
• Strategy: Explores as far as possible along each branch before backtracking.
• Working: Starts from root, traverses deepest node first.
• Implementation: Uses a stack data structure.
• Advantages: Memory efficient, faster execution, terminates in finite time.
• Disadvantages: Incomplete (may not find solution even if it exists due to limit constraint), not optimal if
multiple solutions exist (may not find the best one), cannot check duplicate nodes.
• Complexity: Time and space complexity depend on path length and depth.
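A minimal DFS sketch using an explicit stack, with a visited set added to avoid revisits (the plain version described above cannot check duplicate nodes); the graph is illustrative.

```python
def dfs(graph, start, goal):
    """Depth-first search with an explicit stack; returns a path or None."""
    stack = [(start, [start])]        # each entry: (node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()      # LIFO: the deepest node is expanded first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Reverse so the left-most neighbour ends up on top of the stack.
        for neighbour in reversed(graph.get(node, [])):
            stack.append((neighbour, path + [neighbour]))
    return None                       # goal not reachable

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
```

Here `dfs(graph, "A", "E")` backtracks out of the exhausted B-D branch before finding E via C.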
3.6.2 Depth-Limited Search Algorithm (DLS)
• Extension of DFS: Introduces a depth limit (ℓ) to prevent infinite loops.
• Working: Nodes at the depth limit are treated as leaf nodes.
• Advantages: Memory efficient, faster execution, terminates in finite time.
• Disadvantages: Incomplete (if solution is beyond limit), not optimal (may not find best solution even if
ℓ > d).
3.6.3 Breadth First Search (BFS)
• Strategy: Traverses the graph layer-wise (breadth-wise).
• Working: Starts from root, explores all neighbour nodes at the present depth before moving to the next
level.
• Implementation: Uses a FIFO queue.
• Advantages: Finds a solution if it exists, finds the optimal solution if step cost is uniform.
• Disadvantages: High time and space complexity.
• Applications: Crawlers in search engines, GPS navigation systems, finding shortest paths in
unweighted graphs.
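A minimal BFS sketch using a FIFO queue, on an invented graph; because it expands shallowest nodes first, the first path found to the goal is the shortest by number of edges.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search with a FIFO queue; finds the shallowest goal."""
    queue = deque([(start, [start])])  # each entry: (node, path taken so far)
    visited = {start}
    while queue:
        node, path = queue.popleft()   # FIFO: shallowest node expanded first
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, path + [neighbour]))
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"], "D": [], "E": ["F"], "F": []}
```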
3.6.4 Uniform Cost Search (UCS)
• Strategy: Finds an optimal solution when step costs are not the same.
• Working: Computes the cumulative cost to expand each node from root to goal.
• Implementation: Uses a priority queue.


• Advantages: Finds optimal solution (by least cost), complete if states are finite and no zero-weight
loops, optimal if no negative cost.
• Disadvantages: May get stuck in infinite loops (considers only cost, not number of steps).
• Equivalence to BFS: UCS is equivalent to BFS if all step costs are the same.
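A UCS sketch using a priority queue keyed on cumulative path cost; the weighted graph is illustrative.

```python
import heapq

def ucs(graph, start, goal):
    """Uniform cost search: always expand the frontier node of lowest path cost."""
    frontier = [(0, start, [start])]   # priority queue ordered by cumulative cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # first pop of goal is the cheapest path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, neighbour, path + [neighbour]))
    return None

# Edges carry costs; the direct S->B edge (5) loses to the S->A->B detour (2).
graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 2)], "G": []}
```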
3.6.5 Iterative Deepening Depth-First Search (IDDFS)
• Combination: Combines DFS (memory efficiency) and BFS (fast search).
• Working: Gradually increases the depth limit for DLS until the goal is found.
• Advantages: Combines benefits of BFS and DFS, complete if branching factor is finite, optimal if path
cost is non-decreasing.
• Applications: Widely used when search space is large and goal node depth is unknown.
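IDDFS can be sketched as repeated depth-limited search with a growing limit; the graph and the `max_depth` cap are assumptions for illustration.

```python
def depth_limited(graph, node, goal, limit, path):
    """DFS restricted to a depth limit; nodes at the limit act as leaf nodes."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbour in graph.get(node, []):
        result = depth_limited(graph, neighbour, goal, limit - 1, path + [neighbour])
        if result is not None:
            return result
    return None

def iddfs(graph, start, goal, max_depth=10):
    """Run depth-limited search with limits 0, 1, 2, ... until the goal is found."""
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit, [start])
        if result is not None:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```

Each iteration re-explores shallow nodes, but like BFS it finds a goal at the smallest depth while using only DFS-sized memory.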
3.6.6 Bidirectional Search
• Strategy: Searches simultaneously from both initial and goal states until the two searches meet.
3.7 Informed Search Algorithms (Heuristic Search)
• Contain information about the distance from the goal, path cost, etc.
• This knowledge helps agents explore less of the search space and reach the goal more efficiently.
• Use a heuristic function that estimates how close the agent is to the goal.
• May not always give the best solution but generally finds a good solution in reasonable time.
3.7.2 Best-First Search Algorithm (Greedy Search)
• Combination: Combines DFS and BFS.
• Strategy: Always selects the path that appears best at that moment.
• Working: Uses the heuristic function to choose the most promising node (closest to the goal).
• Implementation: Uses a priority queue.
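A greedy best-first sketch: the priority queue is ordered by the heuristic value alone; the graph and the heuristic estimates are invented for illustration.

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the node that the heuristic h rates closest to the goal."""
    frontier = [(h[start], start, [start])]   # ordered by heuristic value only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # most promising node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            heapq.heappush(frontier, (h[neighbour], neighbour, path + [neighbour]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
h = {"S": 5, "A": 2, "B": 4, "G": 0}   # assumed estimates of distance to goal
```

From S, node A (h = 2) looks better than B (h = 4), so the search commits to S-A-G; since path cost is ignored, the result is quick but not guaranteed optimal.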

--------------------------------------------------------------------------------

Chapter 4: Knowledge Representation


4.1 Introduction
• To build AI systems with a conscience, knowledge needs to be inculcated in them.
• Knowledge and Intelligence: Knowledge of the real world is crucial for intelligence and creating
intelligent AI agents. Without knowledge, decision-makers cannot sense the environment accurately or
make appropriate decisions.
4.2 Knowledge Representation (KR)
• Focus: How AI agents think and how their thinking contributes to intelligent behaviour.
• Purpose: To represent real-world information in a computer-understandable way so AI can solve
complex problems (e.g., medical diagnosis, natural language communication).
• KR involves modelling intelligent behaviour for an agent by representing beliefs, intentions, and
judgments.
• It goes beyond merely storing data; it enables machines to learn from knowledge and experiences to
behave intelligently.
4.2.1 What Knowledge Needs to be Represented?
To make an AI system truly intelligent, it needs to
incorporate human-like intuition, intentions, prejudices, beliefs, judgments, common sense, and facts:
• Object: Information and facts about relevant objects (e.g., vehicles, roads in a self-driving car).
• Events: Information and facts about actions occurring in the real world (e.g., applying brakes).
• Performance: How actions are performed (behaviour).
• Meta-knowledge: Knowledge about knowledge (what we know).
• Facts: Truths about the real world.
• Knowledge base: Stores a group of technical sentences.
4.2.2 What is Knowledge?
Knowledge is the basic element for logical understanding, gained by
experience. Five types of knowledge:
• Meta knowledge: Knowledge about knowledge.
• Heuristic knowledge: Knowledge about a specific topic, often from experts, based on experience (rule
of thumb).
• Procedural knowledge (imperative knowledge): Information on "how to" achieve something,
including rules, strategies, procedures.
• Declarative knowledge: Information about an object, describing concepts, facts, and attributes
(descriptive knowledge).
• Structural knowledge: Basic knowledge for complex problems, describing relationships between
concepts/objects (e.g., "kind of," "part of").
4.2.3 What is Logic?
• Logic: Main component of knowledge, facilitating drawing conclusions by filtering information.
• In AI, knowledge is represented using logic, which has three main elements:
◦ Syntax: Rules specifying how legal sentences in a language are constructed.
◦ Semantics: Defines the meaning of syntactically correct sentences, relating to the real world.
◦ Logical Inference: Deducing conclusions from facts/problems using inference algorithms.
4.2.4 Cycle of Knowledge Representation in AI
An AI system's intelligent behaviour involves several
components:
• Perception: Retrieves data from the environment using sensors, identifies noise sources, checks damage,
defines responses.
• Learning: Learns from data captured by perception, focusing on self-improvement through knowledge
acquisition, inference, and heuristics.
• Knowledge Representation and Reasoning (KRR): The core component for human-like intelligence.
Defines what an agent needs to know and how automated reasoning procedures make this knowledge
available.
• Planning and Execution: Analyses the output of KRR. Planning selects an initial state, enumerates
preconditions/effects, and sequences actions to achieve goals. Execution performs these actions.
4.2.5 Knowledge Representation Requirements
A good KR system must have:
• Representational accuracy: Represents all required knowledge.
• Inferential adequacy: Manipulates structures to produce new knowledge.
• Inferential efficiency: Directs the inference mechanism to generate appropriate results.
• Acquisitional efficiency: Easily acquires new knowledge automatically.


4.3 Knowledge-Based Agent (KBA)
• Agents that mimic human knowledge, using knowledge and reasoning to act efficiently and make
appropriate decisions.
• Functions: Maintain internal state, deduce reasoning, update knowledge from observations, and take
actions.
• Store knowledge about surroundings as "sentences" (technical facts).
4.3.1 The Architecture of Knowledge-Based Agent
• KBAs contain a knowledge base (KB) and an inference engine (IE).
◦ Knowledge base (KB): Stores real-world facts using sentences in a knowledge representation
language. The learning element regularly updates it.
◦ Inference engine (IE): Infers new knowledge from old sentences and adds it to the KB, applying
logical rules (forward or backward chaining).
• Process: The KBA perceives the environment; the input goes to the IE, which interacts with the KB to
make decisions.
4.3.2 Operations Performed By KBA
• TELL operation: Informs the KB what knowledge it has or needs, and what action was
selected/performed.
• ASK operation: Queries the KB for what action to perform.
• PERFORM operation: Executes the selected action.
4.3.3 A Generic Knowledge-Based Agent
• Accepts environmental percepts as input, returns an action as output.
• Maintains a knowledge base (KB) with background knowledge and a time counter.
• Steps:
◦ TELLS the KB what it perceives (using MAKE-PERCEPT-SENTENCE).
◦ ASKS the KB what action to take (using MAKE-ACTION-QUERY).
◦ TELLS the KB about the chosen action (using MAKE-ACTION-SENTENCE).
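The TELL/ASK loop can be sketched as below; the tuple-based KB and the obstacle rule are stand-ins for a real representation language and inference engine, invented for illustration.

```python
# A generic knowledge-based agent: TELL the percept, ASK for an action,
# TELL the action taken. The set-of-tuples KB and the single obstacle rule
# are illustrative stand-ins for real sentences and inference.

class KnowledgeBasedAgent:
    def __init__(self):
        self.kb = set()   # background knowledge, stored as sentences (tuples)
        self.t = 0        # time counter

    def tell(self, sentence):
        self.kb.add(sentence)

    def ask(self):
        # Stand-in inference: act on the percept sentence for the current time.
        if ("percept", "obstacle", self.t) in self.kb:
            return "turn"
        return "move_forward"

    def agent_program(self, percept):
        self.tell(("percept", percept, self.t))   # MAKE-PERCEPT-SENTENCE
        action = self.ask()                       # MAKE-ACTION-QUERY
        self.tell(("action", action, self.t))     # MAKE-ACTION-SENTENCE
        self.t += 1
        return action
```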
4.3.4 Various Levels of Knowledge-Based Agent
• Knowledge Level: Specifies what the agent knows and its goals, used to fix agent behaviour (e.g.,
finding the optimum path from A to B).
• Logical Level: Understands knowledge representation by encoding sentences into different logics (e.g.,
deducing the logic for the path).
• Implementation Level: Physical representation of logic and knowledge; the agent performs actions based
on knowledge from higher levels (e.g., actually moving from A to B).
4.3.5 Approaches to Designing a Knowledge-Based Agent
• Declarative Approach: Initializes with an empty KB and gradually adds sentences (facts) until it is
knowledgeable.
• Procedural Approach: Directly encodes desired behaviour into the agent as program code (e.g., in
LISP, Prolog).
• Hybrid Approach: Combines both, compiling declarative knowledge into efficient procedural code.
4.4 Types of Knowledge
Knowledge can be expressed in a KR system in different ways:
• Simple Relational Knowledge:


◦ Facts about objects stored systematically using relations (tables) in databases.
◦ Shortcoming: Little opportunity for inference.
• Inheritable Knowledge:
◦ Stores data using a hierarchy of classes (generalised to specialised).
◦ Shows the relation between instance and class (IS-A relation).
• Inferential Knowledge:
◦ Represents knowledge in formal logic, used to derive more facts accurately.
◦ Example: "Diya is a student. All students are bright." → Student(Diya) and ∀x Student(x) →
Bright(x)
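The example can be run as a tiny forward-chaining loop; the tuple encoding of facts and rules is an illustrative assumption, not the notes' notation.

```python
# Deriving Bright(Diya) from Student(Diya) and the rule ∀x Student(x) → Bright(x).
facts = {("Student", "Diya")}
rules = [("Student", "Bright")]   # each rule: if P(x) holds, then Q(x) holds

# Forward chaining: apply every rule to every matching fact until nothing new
# can be added to the set of facts.
changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for predicate, arg in list(facts):
            if predicate == premise and (conclusion, arg) not in facts:
                facts.add((conclusion, arg))
                changed = True
```

After the loop, `("Bright", "Diya")` has been inferred, the mechanical counterpart of the deduction written above.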
