
21CS502 ARTIFICIAL INTELLIGENCE

UNIT I
ARTIFICIAL INTELLIGENCE AND INTELLIGENT AGENTS
Introduction to AI – Foundations of Artificial Intelligence – Intelligent Agents – Agents and
Environments – Concept of rationality – Nature of environments – Structure of agents – Problem
solving agents – Example Problems – Search Algorithms – Uninformed Search Strategies

INTRODUCTION TO AI

According to John McCarthy (the father of Artificial Intelligence), Artificial Intelligence is the
science and engineering of making intelligent machines, especially intelligent computer programs.

Definition

 “Artificial Intelligence is a way of making a computer, a computer-controlled robot, or
software think intelligently, in a manner similar to how intelligent humans think.”
 Artificial Intelligence is defined as the study and design of intelligent agents, where an
intelligent agent is a system that perceives the environment and takes actions.
 AI is accomplished by studying how the human brain thinks, learns, decides, and works while
trying to solve a problem, and then using the outcomes of this study as a basis for
developing intelligent software and systems.

Goals of AI

 The major objective of AI is the development of computer functions associated with
human intelligence, such as reasoning, learning, and problem solving. The main goals are:

 To Create Expert Systems − Systems which exhibit intelligent behavior, learn,
demonstrate, explain, and advise their users.
 To Implement Human Intelligence in Machines − Creating systems that understand, think,
learn, and behave like humans.

Four Categories of Definitions of Artificial Intelligence

1. Acting humanly: The Turing Test approach


 The Turing Test is a simple method of determining whether or not a computer is
capable of acting like a human being.
 A computer passes the test if a human interrogator, after posing some questions,
cannot determine whether the written responses come from a person or
from a computer.

 To pass the Turing Test, the computer would need to possess the following
capabilities:
o Natural Language Processing
o Knowledge Representation
o Automated Reasoning
o Machine Learning
o Natural Language Processing
o Knowledge Representation
o Automated Reasoning
o Machine Learning

 The Total Turing Test is another step ahead of the Turing Test where a video signal
is also included so that the interrogator can test the perceptual abilities of the
interacting computer in addition to the verbal behaviors. The Total Turing Test
requires the following additional skills
o Computer Vision
o Robotics
2. Thinking humanly: The cognitive modeling approach
 Thinking humanly means trying to understand and model how the human mind works.
The three ways are:
o Through introspection — trying to catch our own thoughts as they go by;
o Through psychological experiments — observing a person in action;
o Through brain imaging — observing the brain in action.
Example: The General Problem Solver (GPS) was built to model human thinking and to check
whether it can solve problems like a person by following the same reasoning steps as a human.
The intent of the program is not just to solve the problem correctly but to go
through the same series of steps that a human brain would take to solve it.

3. Thinking rationally: The laws of thought approach

 Thinking rationally is the ability to reason and draw sensible conclusions from
facts, logic, and data.
 It can be defined as thought based on facts rather than emotions.
 Example : Syllogisms

“All TVs use energy; Energy always generates heat; therefore, all TVs generate heat.”

These arguments initiated the field called logic. Notations were developed for statements about
all kinds of objects and for the relations between them.

Logical notation:

All TVs use energy;

energy always generates heat;

therefore, all TVs generate heat.

Mathematical predicate calculus notation:

Let Px be the statement “x uses energy” and Qx be the statement “x generates heat.”
Then “everything that uses energy generates heat” is written:

∀x[Px → Qx]

4.Acting rationally: The rational agent approach

 A traditional computer program blindly executes the code that we write. It neither acts on
its own nor adapts based on the outcome.
 The so-called agent program that we refer to here is expected to do more than a traditional
computer program. It is expected to create and pursue goals, change state, and operate
autonomously.
 An agent is a computer program that has the following attributes:

o Autonomous control
o Perceive environment
o Adapt to change
o Act and pursue goals

The rational agent approach to AI has a couple of advantages over other approaches:

1. Correct inference is one possible way to achieve rationality, but it is not always
required for rationality.
2. It is a more manageable scientific approach to define rationality than approaches based
on human behavior or human thought.

….………………………………………………………………………………………………………………………..
FOUNDATIONS OF ARTIFICIAL INTELLIGENCE

Advantages of Artificial Intelligence
 High accuracy with fewer errors

 High-Speed

 High reliability

 Useful for risky areas

 Digital Assistant

 Useful as a public utility

Disadvantages of Artificial Intelligence


o High Cost

o Can't think out of the box

o No feelings and emotions

o Increase dependency on machines

o No Original Creativity

History of AI

 The Gestation of Artificial Intelligence (1943 – 1955)

 Artificial neurons and Computing Machinery and Intelligence were proposed.

 Alan Turing proposed a test to check a machine’s ability to exhibit intelligent behavior
equivalent to that of a human.

 The birth of Artificial Intelligence (1956)

 John McCarthy coined the word Artificial Intelligence at the Dartmouth Conference

 Early enthusiasm, Great expectations (1952 - 1969)

 Researchers emphasized developing algorithms which can solve mathematical problems.


 General Problem Solver (GPS) was created in 1957 by Allen Newell and Herbert Simon to
build a Universal Problem Solver Machine.

 A dose of reality (1966–1973)

 Wabot-1, the first humanoid robot, was built in Japan.

 Knowledge based systems: The key to power (1969-1979)

 Feigenbaum and others at Stanford began the Heuristic Programming Project (HPP) to
investigate the extent to which the new methodology of expert systems could be applied to
other areas of human expertise.
 Expert systems were programmed that emulate the decision-making ability of a human
expert.

 A boom of AI (1980-1987)

 Expert systems became commercially successful and were widely deployed in industry.

 The emergence of intelligent agents (1993-2011)

 In the year 1997, IBM Deep Blue became the first computer program to beat a world chess
champion.
 In the year 2002, AI entered the home in the form of a robotic vacuum cleaner.
 In the year 2006, AI entered the business world. Companies like Facebook, Twitter and Amazon
also started using AI.

Future AI

Artificial Intelligence will continue to act as a technological innovator in the future. Specialized AI
applications will become increasingly common. Some of the AI applications are:

 Robotic Vehicles
 Driverless Cars
 Autonomous Planning and Scheduling
 Speech Recognition
 Games
 Spam Fighting
 Logistics Planning
 Robotics etc.,

Programming without AI vs Programming with AI


Programming without AI | Programming with AI
Can answer only the specific questions it is meant to solve. | Can answer the generic questions it is meant to solve.
Modification in the program leads to a change in its structure. | AI programs can absorb new modifications by putting highly independent pieces of information together.
Modification is not quick and easy. | Modification is quick and easy.
 Weak AI vs. Strong AI
Weak AI | Strong AI
Created to perform only a specific task; not used for general purposes. | Possesses general-purpose intelligence that can demonstrate human abilities.
Simpler to create. | Difficult to create.
Does not have a mind of its own. | Can exhibit strong human cognitive abilities.
Alexa, Siri, Cortana, etc. are examples of Weak AI. | Strong AI is a hypothetical concept; it does not yet exist in its true form.

INTELLIGENT AGENTS
 AGENTS and ENVIRONMENTS

What is an Agent?

An agent is anything that can be viewed as

 Perceiving its environment through sensors and


 Acting upon that environment through actuators/effectors.

An Agent can be one of the following:

Human Agent: Sensors - eyes, ears; Actuators - hands, legs
Robotic Agent: Sensors - cameras, infrared range finders; Actuators - various motors
Software Agent: Sensors - keystrokes; Actuators - displaying output on screen
AI TERMINOLOGY

1) PERCEPT

The term percept refers to the agent’s perceptual inputs at any given instant.

Examples:

1. A human agent perceives “birds flying in the sky” through the eyes and takes a
snap (photograph) of them.
2. A robotic agent perceives the “temperature of a boiler” through sensors and takes
a control action.

2) PERCEPT SEQUENCE

The term percept sequence refers to the complete history of everything the agent has
ever perceived.

Structure of an AI Agent

Agent = Architecture + Agent Program

 Architecture – Machinery that an AI agent executes on


 Agent Program – Implementation of Agent function
 Agent Function – Maps Percept sequence to an action

Example

A vacuum cleaner world with two locations

Agent Function for Vacuum cleaner

Percepts: Location, Contents

Actions: Left, Right, Suck

Agent Program for Vacuum cleaner
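The agent program for the two-location vacuum world can be sketched as follows. This is a minimal sketch; the location names "A" and "B" and the exact percept format are assumptions, not fixed by the notes:

```python
# Agent function for the vacuum world: maps a percept (location, contents)
# to one of the actions Left, Right, Suck.
# Location names "A" and "B" are illustrative assumptions.
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # condition: dirty square -> action: Suck
        return "Suck"
    elif location == "A":        # clean at A -> move Right
        return "Right"
    else:                        # clean at B -> move Left
        return "Left"
```

Calling the function on each percept in turn reproduces the tabulated agent function, one row per possible percept.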

 THE CONCEPT OF RATIONALITY

A rational agent is one that does the right thing. It is the ability of a system for being reasonable,
sensible and having good sense of judgment.

Rationality depends on FOUR parameters:

 The performance measure that defines the criterion of success.


 The agent’s prior knowledge of the environment.
 The actions that the agent can perform.
 The agent’s percept sequence.
OMNISCIENCE, LEARNING AND AUTONOMY
An omniscient agent knows the actual outcome of its actions and can act accordingly; this is
impossible in reality. A rational agent should instead learn from what it perceives, and it is
autonomous to the extent that its behavior is determined by its own experience rather than only
by the prior knowledge built in by its designer.

NATURE OF ENVIRONMENTS

 An environment in artificial intelligence is the surrounding of the agent. The agent takes input
from the environment through sensors and delivers the output to the environment through
actuators.

 PEAS DESCRIPTION:

 To design a rational agent, we must first specify the task environment, described by PEAS:
o P - Performance measure
o E – Environment
o A – Actuators
o S – Sensors
 The PEAS description for an automated taxi driver is as follows:

Agent: Automated Taxi Driver
Performance: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, pedestrians, other traffic, etc.
Actuators: Steering, brake, horn, accelerator
Sensors: GPS, cameras, speedometer, engine sensors

 TYPES OF TASK ENVIRONMENT / PROPERTIES OF TASK ENVIRONMENT

1. Fully Observable vs. Partially Observable

 Fully observable : An agent can sense or access the complete environment


o Example : Chess
 Partially Observable : An agent can sense or access only a part of the environment
o Example : Cards
2. Deterministic vs. Stochastic
 Deterministic: Agent’s current state completely determines the next state of the agent.
o Example : Tic Tac Toe
 Stochastic: The state of the agent is random in nature and cannot be completely
determined by the agent.
o Example : Ludo
3. Episodic vs. Sequential
 Episodic : Agent’s action is divided into atomic incidents i.e. No dependency between
current and previous incidents
o Example: Part Picking Robot
 Sequential: Agent’s previous actions affect all future decisions.
o Example: Chess
4. Single Agent vs. Multi Agent
 Single Agent: The environment involves only one agent
o Example: Maze
 Multi Agent: The environment involves multiple agents
o Example: Cricket
5. Static vs. Dynamic
 Static : The environment does not change while the agent is acting
o Example: Crossword Puzzle
 Dynamic : The environment keeps on changing while the agent is acting.
o Example: Taxi Driving

6. Discrete vs. Continuous
 Discrete : The environment consists of a finite number of percepts and actions
o Example : Chess
 Continuous : The environment in which the actions performed cannot be counted, i.e., it
is not discrete.
o Example: Self Driving Cars
7. Known vs. Unknown
 Known : The results of all actions are known
o Example: Cards
 Unknown: The agent makes decisions and acts by learning how the environment works
o Example: Video Games

TYPICAL INTELLIGENT AGENTS / TYPES OF INTELLIGENT AGENTS/


STRUCTURE OF AGENTS

The Structure of Intelligent Agents


 Agent’s structure can be viewed as

Agent = Architecture + Agent Program

 Architecture = the machinery that an agent executes on.


 Agent Program = an implementation of an agent function.

Types of Intelligent Agents

1. SIMPLE REFLEX AGENTS

 They choose actions only based on the current percept.

 They are rational only if a correct decision can be made on the basis of the current
percept alone.
 Their environment must be completely observable.
 Condition-Action Rule − It is a rule that maps a state (condition) to an action

2. MODEL BASED REFLEX AGENTS

 They use a model of the world to choose their actions. They maintain an internal
state.
o Model − knowledge about “how the things happen in the world”.
o Internal State − It is a representation of unobserved aspects of current
state depending on percept history.
 Updating the state requires the information about −
o How the world evolves.
o How the agent’s actions affect the world.
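A minimal sketch of this structure, assuming a caller-supplied model function (update_state) and a rule function; all names here are mine, not from the notes:

```python
# Model-based reflex agent sketch: an internal state is updated from the
# percept history (and the last action) using a model of the world, and
# the condition-action rules are applied to that state.
class ModelBasedAgent:
    def __init__(self, rules, update_state):
        self.rules = rules                # state -> action
        self.update_state = update_state  # model: how the world evolves
        self.state = {}
        self.last_action = None

    def __call__(self, percept):
        # Fold the new percept and the last action into the internal state.
        self.state = self.update_state(self.state, self.last_action, percept)
        self.last_action = self.rules(self.state)
        return self.last_action

# Hypothetical usage: a trivial model that just remembers the latest percept.
update = lambda state, action, percept: {"last": percept}
agent = ModelBasedAgent(lambda s: "Suck" if s["last"] == "Dirty" else "Move", update)
```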

3. GOAL BASED AGENTS
 They choose their actions in order to achieve goals.
 Goal-based approach is more flexible than reflex agent since the knowledge
supporting a decision is explicitly modeled, thereby allowing for modifications.
 Goal − It is the description of desirable situations.

4. UTILITY BASED AGENTS


 They choose actions based on a preference (utility) for each state.
 Goals are inadequate when:
o There are conflicting goals, of which only a few can be achieved.
o Goals have some uncertainty of being achieved, and you need to weigh the likelihood
of success against the importance of each goal.

5. LEARNING AGENT
 A learning agent in AI is the type of agent that can learn from its past experiences,
or has learning capabilities.
 It starts to act with basic knowledge and then is able to act and adapt automatically
through learning.
 A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning
from the environment
2. Critic: The learning element takes feedback from the critic which describes
how well the agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action
4. Problem generator: This component is responsible for suggesting actions that
will lead to new and informative experiences.
 Hence, learning agents are able to learn, analyze their performance, and look for new
ways to improve that performance.

PROBLEM-SOLVING AGENTS

 The problem-solving agent works by precisely defining problems and their possible solutions.
 A problem-solving agent is a goal-driven agent and focuses on satisfying the goal.

Steps performed by a Problem-Solving Agent

Goal Formulation: It is the first and simplest step in problem-solving. It organizes the
steps/sequence required to formulate one goal out of multiple goals as well as actions to achieve
that goal. Goal formulation is based on the current situation and the agent’s performance measure.

Problem Formulation: It is the most important step of problem-solving which decides what actions
should be taken to achieve the formulated goal. There are following five components involved in
problem formulation:

Initial State: It is the starting state or initial step of the agent towards its goal.

Actions: It is the description of the possible actions available to the agent.

Transition Model: It describes what each action does.

Goal Test: It determines if the given state is a goal state.

Path cost: It assigns a numeric cost to each path. The problem-solving agent selects a cost
function that reflects its performance measure. Remember, an optimal solution has the lowest
path cost among all the solutions.

State Space: The initial state, actions, and transition model together define the state space of the
problem implicitly. The state space of a problem is the set of all states which can be reached
from the initial state by any sequence of actions. The state space forms a directed map or graph
where the nodes are states, the links between nodes are actions, and a path is a sequence of
states connected by a sequence of actions.

Search: It identifies the best possible sequence of actions to reach the goal state from the
current state. It takes a problem as input and returns a solution as its output.

Solution: The action sequence returned by the search; an optimal solution is the one with the
lowest path cost.

Execution: It executes the solution found by the search algorithm to reach the goal state from
the current state.

PROBLEM SOLVING APPROACH TO TYPICAL AI PROBLEMS
 Problems are issues that arise in any system. A solution is needed to solve each particular
problem.

Steps to solve a problem using Artificial Intelligence:

The process of solving a problem consists of five steps. They are:

1. Defining the Problem:


 The problem must be defined precisely, including the possible initial and final
situations that should result in an acceptable solution.
2. Analysing the Problem:
 Analysing the problem and its requirements must be done, as a few features can have an
immense impact on the resulting solution.
3. Identification of Solutions:
 This phase generates a reasonable number of solutions to the given problem within a
particular range.
4. Choosing a Solution:
 From all the identified solutions, the best solution is chosen based on the results
produced by the respective solutions.
5. Implementation:
 After choosing the best solution, its implementation is done.

Two types of Problem Approaches:

1. Toy Problems
 It is a concise and exact description of the problem which is used by the
researchers to compare the performance of algorithms.
 Examples : 8 puzzle problem, N queens problem, Tic Tac Toe and Towers of
Hanoi
2. Real World Problems
 These are real-world problems which require solutions.
 Unlike a toy problem, a real-world problem has no single agreed-upon
description, but we can give a general formulation of the problem.
 Example: Route finding problem, Travelling Salesman Problem

EXAMPLE PROBLEMS

8 PUZZLE PROBLEM

The 8-puzzle is played on a 3-by-3 grid with

 8 square blocks labeled 1 through 8

 One blank square.

Our objective is to convert the current (start) state into the goal state by sliding digits into the
blank space.

The problem formulation is as follows:

States: It describes the location of each numbered tile and the blank tile.

Initial State: We can start from any state as the initial state.

Actions: Here, the actions of the blank space are defined: Left, Right, Up, or Down.

Transition Model: It returns the resulting state as per the given state and actions.

Goal test: It identifies whether we have reached the correct goal-state.

Path cost: The path cost is the number of steps in the path where the cost of each step is 1.
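The formulation above can be made concrete. A sketch of the transition model and goal test, assuming states are 9-tuples read row by row with 0 standing for the blank (a representation chosen here for illustration):

```python
# 8-puzzle transition model: yield (action, next_state) pairs obtained by
# moving the blank (0) Up, Down, Left, or Right, when the move stays on the grid.
def successors(state):
    i = state.index(0)                      # position of the blank
    row, col = divmod(i, 3)
    moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
    for action, delta in moves.items():
        if action == "Up" and row == 0:
            continue
        if action == "Down" and row == 2:
            continue
        if action == "Left" and col == 0:
            continue
        if action == "Right" and col == 2:
            continue
        j = i + delta
        nxt = list(state)
        nxt[i], nxt[j] = nxt[j], nxt[i]     # slide the tile into the blank
        yield action, tuple(nxt)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def goal_test(state):
    return state == GOAL

# Example: a start state one slide away from the goal.
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)
moves = dict(successors(start))
```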

Solution:

N QUEENS PROBLEM

 N - Queens problem is to place n - queens in such a manner on an n x n chessboard that no queens


attack each other by being in the same row, column or diagonal.

Initial State: No queens on the board

States: Any arrangement of n queens on the board

Actions: Place ‘n’ queens on n x n chessboard so that no two queens attack each other. Two Queens are
said to attack each other if they are placed in same row/column/diagonal.

Goal Test: Identify if all n queens are placed on the board such that no queens attack each other

Path Cost: Each step costs 1, so that path cost is the number of steps in the path to reach the goal

Solution:

 If n = 1, the problem has a trivial solution.
 If n = 2 or n = 3, no solution exists.
 Now, we will consider the 4 queens problem.
 If n=4, the chessboard has 4 rows and 4 columns.

4x4 chessboard

 Since we have to place 4 queens q1, q2, q3 and q4 on the chessboard such that no two
queens attack each other, each queen must be placed on a different row,
i.e., we put queen "i" on row "i."
 The backtracking method is used to generate the solution. If the rule is violated and there
is no way to place the next queen, we backtrack and explore alternate possibilities.

State Space Tree for 4 Queens Problem:

 The state space tree represents all the possible states of the problem.

SOLUTION FOR 4 QUEENS PROBLEM USING
BACKTRACKING
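The backtracking procedure described above can be sketched as follows; the function and variable names are mine. Queen i is placed on row i, and the search runs over column choices, undoing a placement whenever no safe column remains:

```python
# Backtracking solver for the n-queens problem.
# Returns one solution as a list of column indices (queen on row i sits
# in column cols[i]), or None if no solution exists.
def solve_n_queens(n):
    cols = []

    def safe(col):
        row = len(cols)
        for r, c in enumerate(cols):
            # Conflict if same column or same diagonal as a placed queen.
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if safe(col):
                cols.append(col)
                if place(row + 1):
                    return True
                cols.pop()          # backtrack: undo and try the next column
        return False

    return cols if place(0) else None
```

Running it confirms the cases above: n = 1 is trivially solvable, n = 2 and n = 3 have no solution, and n = 4 yields a valid placement.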

WATER JUG PROBLEM
Problem:
Given two jugs, a 4-gallon one and a 3-gallon one, a pump which has unlimited water
which you can use to fill the jug, and the ground on which water may be poured.
Neither jug has any measuring markings on it.
How can you get exactly 2 gallons of water in the 4-gallon jug?

Solution:
State Representation and Initial State : We will represent a state of the problem as a tuple
(x, y) where x represents the amount of water in the 4-gallon jug and y represents the
amount of water in the 3-gallon jug. Note 0 ≤ x ≤ 4, and 0 ≤ y ≤ 3.
 Our initial state: (0,0)
 Goal state: (2,y) where 0 ≤ y ≤ 3.
 Operators :We must design a set of operators that will take us from one state to
another:
1. Empty a jug
2. Fill a jug
3. Pour some water from one jug to another
Production Rules:

Rule | State  | Condition         | Next State       | Description of the action taken
1    | (x, y) | x < 4             | (4, y)           | Fill the 4-gallon jug completely
2    | (x, y) | y < 3             | (x, 3)           | Fill the 3-gallon jug completely
3    | (x, y) | x > 0             | (x - d, y)       | Pour some water out of the 4-gallon jug
4    | (x, y) | y > 0             | (x, y - d)       | Pour some water out of the 3-gallon jug
5    | (x, y) | x > 0             | (0, y)           | Empty the 4-gallon jug
6    | (x, y) | y > 0             | (x, 0)           | Empty the 3-gallon jug
7    | (x, y) | x + y ≥ 4, y > 0  | (4, y - (4 - x)) | Pour water from the 3-gallon jug until the 4-gallon jug is full
8    | (x, y) | x + y ≥ 3, x > 0  | (x - (3 - y), 3) | Pour water from the 4-gallon jug until the 3-gallon jug is full
9    | (x, y) | x + y ≤ 4, y > 0  | (x + y, 0)       | Pour all the water from the 3-gallon jug into the 4-gallon jug
10   | (x, y) | x + y ≤ 3, x > 0  | (0, x + y)       | Pour all the water from the 4-gallon jug into the 3-gallon jug

One Possible Solution for the Water Jug Problem

Step | 4-gal jug (x) | 3-gal jug (y) | Rule Applied
0    | 0 | 0 | Initial state
1    | 0 | 3 | Fill the 3-gallon jug (Rule 2)
2    | 3 | 0 | Pour all water from the 3-gallon jug into the 4-gallon jug (Rule 9)
3    | 3 | 3 | Fill the 3-gallon jug (Rule 2)
4    | 4 | 2 | Pour water from the 3-gallon jug until the 4-gallon jug is full (Rule 7)
5    | 0 | 2 | Empty the 4-gallon jug (Rule 5)
6    | 2 | 0 | Pour all water from the 3-gallon jug into the 4-gallon jug (Rule 9)

On reaching the 6th step, the goal state (2, 0) is reached.
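The production-rule search above can also be run mechanically. Below is a sketch that explores the (x, y) state space with breadth-first search; the helper names are mine, and the goal is taken as any state with x = 2:

```python
from collections import deque

# BFS over the water-jug state space (x = 4-gallon jug, y = 3-gallon jug).
# Returns the shortest sequence of states from (0, 0) to a state with
# exactly goal_x gallons in the 4-gallon jug.
def water_jug(goal_x=2):
    start = (0, 0)
    parents = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if x == goal_x:
            path, state = [], (x, y)
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        # Successors: fill either jug, empty either jug, or pour between jugs.
        pour_to_4 = min(x + y, 4)
        pour_to_3 = min(x + y, 3)
        for nxt in [(4, y), (x, 3), (0, y), (x, 0),
                    (pour_to_4, x + y - pour_to_4),
                    (x + y - pour_to_3, pour_to_3)]:
            if nxt not in parents:
                parents[nxt] = (x, y)
                queue.append(nxt)
    return None
```

Because BFS finds a shortest path, the returned sequence has six steps, matching the length of the hand solution in the table above.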

VACUUM CLEANER PROBLEM

Problem Formulation

• States: The state is determined by both the agent location and the dirt locations. The
agent is in one of two locations, each of which might or might not contain dirt. Thus, there
are 2 × 2² = 8 possible world states.
• Initial state: Any state can be designated as the initial state.
• Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck.
•Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.

SEARCH ALGORITHMS
Search: Search is the process of examining states to find a path from the initial state/root
state to the goal state. A search problem has three main factors:

1. Search Space: Search space represents a set of possible solutions, which a system
may have.

2. Start State: It is a state from where agent begins the search.


3. Goal test: It is a function which observes the current state and returns whether the
goal state is achieved or not.

Difference Between Uninformed Search and Informed Search


Uninformed Search | Informed Search
Search without information | Search with information (a heuristic)
No knowledge is used | Uses knowledge to find steps to the solution
Time consuming | Quick solution
High space and time complexity | Less space and time complexity
Examples: Breadth First Search, Depth First Search, Depth Limited Search, Iterative Deepening Depth First Search, Uniform Cost Search, Bidirectional Search | Examples: A* Search, Best First Search, Hill Climbing Algorithm

Measuring Problem Solving Performance:

• Completeness: Whether the algorithm is guaranteed to find a solution when one exists.

• Optimality: Whether the strategy finds the optimal solution.

• Time complexity: How long it takes to find a solution.

• Space complexity: How much memory is needed to perform the search.

UNINFORMED SEARCH STRATEGIES

 This is also called blind search.


 The term means that the strategies have no additional information about states beyond that
provided in the problem definition. All they can do is generate successors and distinguish a
goal state from a non-goal state.
 All search strategies are distinguished by the order in which nodes are expanded

(1) Breadth-first search


 Breadth-first search is a simple strategy in which the root node is expanded first, then all
the successors of the root node are expanded next, then their successors, and so on.
 In general, all the nodes are expanded at a given depth in the search tree before any nodes
at the next level are expanded.
 Implemented using Queue.

Order of Traversal:

 BFS traverses as follows

Example:

Which solution would BFS find to move from node S to node G if run on the graph below?

Solution. The equivalent search tree for the above graph is as follows. As BFS traverses the tree
“shallowest node first”, it would always pick the shallower branch until it reaches the solution (or
it runs out of nodes, and goes to the next branch). The optimized traversal is shown in blue arrows

Performance Evaluation of BFS:

1. Completeness: Yes. BFS is complete (it finds a solution whenever one exists, provided
the branching factor is finite).
2. Optimality: Yields an optimal solution only when all step costs are equal.
3. Time Complexity: O(b^(d+1)), b – branching factor, d – depth of the shallowest goal node
4. Space Complexity: O(b^(d+1)), b – branching factor, d – depth of the shallowest goal node

Breadth First Search - Pseudocode :

BreadthFirstSearch(graph, start):
    queue = Queue()                      // Create an empty queue
    visited = Set()                      // Create an empty set to track visited nodes
    queue.enqueue(start)                 // Enqueue the starting node
    visited.add(start)                   // Mark the starting node as visited
    while queue is not empty:
        current = queue.dequeue()        // Dequeue a node from the queue
        for neighbor in graph[current]:
            if neighbor not in visited:
                queue.enqueue(neighbor)  // Enqueue unvisited neighbors
                visited.add(neighbor)    // Mark neighbor as visited
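The pseudocode above only marks nodes visited; a runnable sketch that also records parents, so the path from start to goal can be reconstructed, might look like this (the example graph and node names are hypothetical):

```python
from collections import deque

# BFS that returns the shortest path (fewest edges) from start to goal,
# or None if the goal is unreachable.
def bfs_path(graph, start, goal):
    parents = {start: None}              # doubles as the visited set
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if current == goal:
            path = []
            while current is not None:   # walk parent links back to start
                path.append(current)
                current = parents[current]
            return path[::-1]
        for neighbor in graph[current]:
            if neighbor not in parents:
                parents[neighbor] = current
                queue.append(neighbor)
    return None

# A hypothetical graph in the spirit of the S-to-G examples above.
graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": [], "G": []}
```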

(2) Depth First Search

 Depth-first search always expands the deepest node in the current frontier of the search
tree.
 The search proceeds immediately to the deepest level of the search tree, where the nodes
have no successors.
 As those nodes are expanded, they are dropped from the frontier, so the search backs
up to the next deepest node that still has unexplored successors.
 Implemented using Stack.

Order of Traversal:

DFS traverses as follows

Example:

Which solution would DFS find to move from node S to node G if run on the graph below?

Solution:

The equivalent search tree for the above graph is as follows. As DFS traverses the tree “deepest
node first”, it would always pick the deeper branch until it reaches the solution (or it runs out of
nodes, and goes to the next branch). The traversal is shown in blue arrows.


Performance Evaluation of DFS:

1. Completeness: Complete only in finite state spaces; it can fail to terminate in
infinite-depth spaces.
2. Optimality: Not optimal; it returns the first solution found, not necessarily the cheapest.
3. Time Complexity: O(b^m), b – branching factor, m – maximum depth of the search tree
4. Space Complexity: O(bm), b – branching factor, m – maximum depth of the search tree

Depth First Search - Pseudocode:

DepthFirstSearch(graph, start):
    stack = Stack()                      // Create an empty stack
    visited = Set()                      // Create an empty set to track visited nodes
    stack.push(start)                    // Push the starting node onto the stack
    while stack is not empty:
        current = stack.pop()            // Pop a node from the stack
        if current not in visited:
            visited.add(current)         // Mark the current node as visited
            for neighbor in graph[current]:
                if neighbor not in visited:
                    stack.push(neighbor) // Push unvisited neighbors onto the stack

(3) Depth Limited Search

 Similar to DFS, but with a predetermined depth limit. It solves the drawback of
infinite paths in DFS.
 In this technique, the node at the depth limit is treated as if it has no successor
nodes.
 When the goal is reached, the algorithm terminates.
 Sometimes DLS may be incomplete because the goal node may not occur within the given
depth limit.

Example:

Initial Node –S, Goal Node – J, Depth Limit – Level 2

Path : S -> A -> C -> D ->B ->I -> J

Performance Evaluation of DLS:

1. Completeness: The solution may or may not be found within the limit, so DLS is not complete.
2. Optimality: Not optimal.
3. Time Complexity: O(b^l), b – branching factor, l – given depth limit
4. Space Complexity: O(bl), b – branching factor, l – given depth limit

Depth Limited Search - Pseudocode:

DepthLimitedSearch(graph, start, depth_limit):
    stack = Stack()                        // Create an empty stack
    visited = Set()                        // Create an empty set to track visited nodes
    stack.push((start, 0))                 // Push the starting node together with its depth
    while stack is not empty:
        (current, depth) = stack.pop()     // Pop a node and its depth
        if current not in visited:
            visited.add(current)           // Mark the current node as visited
            if depth < depth_limit:        // Nodes at the depth limit get no successors
                for neighbor in graph[current]:
                    if neighbor not in visited:
                        stack.push((neighbor, depth + 1))  // Push unvisited neighbors one level deeper

(4) Iterative Deepening Depth First Search

 It is a combination of the DFS and BFS algorithms.

 It performs DFS iteratively.
 DFS is run repeatedly with an increasing depth limit until the goal is found.
 Advantage : fast and requires little memory

Example:

Iteration 1:

Iteration 2:

Iteration 3:

Performance Evaluation of Iterative Deepening DFS:

1. Completeness: Yes, IDDFS is complete (when the branching factor is finite).
2. Optimality: Optimal when all step costs are equal, since the shallowest goal node is
found first.
3. Time Complexity: O(b^d), b – branching factor, d – depth of the shallowest goal node
4. Space Complexity: O(bd), b – branching factor, d – depth of the shallowest goal node

Iterative Deepening Depth First Search - Pseudocode:

IterativeDeepeningDFS(graph, start, max_depth):
    for depth = 0 to max_depth:
        result = DepthLimitedSearch(graph, start, depth)
        if result ≠ cutoff then
            return result
    return "Goal not found within max_depth"
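A runnable sketch of the same idea, assuming a recursive depth-limited search that distinguishes a cutoff (the limit truncated the search) from genuine failure; the graph and node names are hypothetical:

```python
# Iterative deepening: run depth-limited search with limits 0, 1, 2, ...
# A "cutoff" result means the limit was hit, so a deeper pass may still succeed.
CUTOFF = "cutoff"

def depth_limited(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in graph[node]:
        result = depth_limited(graph, child, goal, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return [node] + result       # prepend this node to the found path
    return CUTOFF if cutoff_occurred else None

def iddfs(graph, start, goal, max_depth=20):
    for limit in range(max_depth + 1):
        result = depth_limited(graph, start, goal, limit)
        if result != CUTOFF:
            return result                # a path, or None if provably absent
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"],
         "C": [], "D": [], "E": ["G"], "G": []}
```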

(5) Uniform Cost Search

 Used to find a path to the goal node with the lowest cumulative cost.

 A priority queue is used for implementation.
 The node with the minimum path cost gets the highest priority.

Example:

Which solution would UCS find to move from node S to node G if run on the graph below?

Solution:

The equivalent search tree for the above graph is as follows. Cost of each node is the cumulative
cost of reaching that node from the root. Based on UCS strategy, the path with least cumulative
cost is chosen.

Path: S -> A -> B -> G

Performance Evaluation of Uniform Cost Search:

1. Completeness: Yes, uniform cost search is complete, provided every edge cost is at least some small positive constant ε
2. Optimal : Yields the optimal solution, as nodes are expanded in order of minimum cumulative cost
3. Time Complexity: O(b^(C*/ε))
4. Space Complexity: O(b^(C*/ε)), where b – branching factor, C* – cost of the optimal solution, ε – least cost
of an edge.

Uniform Cost Search - Pseudocode:

UniformCostSearch(graph, start, goal):

priority_queue = PriorityQueue() // Create a priority queue based on path cost

visited = Set() // Create an empty set to track visited nodes

priority_queue.enqueue((start, 0)) // Enqueue the starting node with a cost of 0

while priority_queue is not empty:

current, path_cost = priority_queue.dequeue() // Dequeue the node with the lowest path cost

if current is the goal:

return "Goal found"

if current not in visited:

visited.add(current) // Mark the current node as visited

for neighbor, cost in graph[current]:

if neighbor not in visited:

priority_queue.enqueue((neighbor, path_cost + cost)) // Enqueue neighbors with


updated cost

return "Goal not found"
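A runnable sketch of UCS using Python's `heapq` as the priority queue. The example graph is hypothetical, chosen so that the cheap detour S → A → B → G beats the expensive direct edge S → G:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (cost, path) of the cheapest path, or None if unreachable.
    `graph` maps each node to a list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]          # min-heap keyed on path cost
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                 # first removal of goal is optimal
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier,
                               (cost + edge_cost, neighbor, path + [neighbor]))
    return None

graph = {'S': [('A', 1), ('G', 12)],
         'A': [('B', 3)],
         'B': [('G', 3)],
         'G': []}
print(uniform_cost_search(graph, 'S', 'G'))   # -> (7, ['S', 'A', 'B', 'G'])
```

The goal test is applied when a node is removed from the queue, not when it is generated; testing at generation could return the expensive direct edge before the cheaper detour is discovered.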

(6) Bidirectional Search

 The idea behind bidirectional search is to run two simultaneous searches


o Forward Search : starts from the initial state to the goal
o Backward Search : starts from goal state to initial state
 The two searches stop when their frontiers intersect.

Example: (illustrative figure not reproduced here)
Performance Evaluation of Bidirectional Search:

1. Completeness: Yes, bidirectional search is complete (when BFS is used in both directions)
2. Optimal : Yes, bidirectional search provides an optimal solution (with uniform step costs and BFS in both directions)
3. Time Complexity: O(b^(d/2)), b – branching factor, d – depth of the goal
4. Space Complexity: O(b^(d/2))

Bidirectional Search - Pseudocode:

BidirectionalSearch(graph, start, goal):

    forward_visited = Set()              // Visited nodes of the forward search
    backward_visited = Set()             // Visited nodes of the backward search
    forward_queue = Queue()              // Frontier of the forward search
    backward_queue = Queue()             // Frontier of the backward search
    forward_queue.enqueue(start)         // Forward search starts from the initial state
    backward_queue.enqueue(goal)         // Backward search starts from the goal state

    while forward_queue is not empty and backward_queue is not empty:

        forward_current = forward_queue.dequeue()
        if forward_current in backward_visited:
            return "Meeting point found"           // The two frontiers intersect
        forward_visited.add(forward_current)
        for neighbor in graph[forward_current]:    // Expand the forward frontier
            if neighbor not in forward_visited:
                forward_queue.enqueue(neighbor)

        backward_current = backward_queue.dequeue()
        if backward_current in forward_visited:
            return "Meeting point found"           // The two frontiers intersect
        backward_visited.add(backward_current)
        for neighbor in graph[backward_current]:   // Expand the backward frontier
            if neighbor not in backward_visited:
                backward_queue.enqueue(neighbor)

    return "No meeting point found"
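A runnable sketch of the two-frontier idea in Python, assuming an undirected graph given as an adjacency dict (node names are illustrative):

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Return the node where the two BFS frontiers meet, or None."""
    if start == goal:
        return start
    fwd_visited, bwd_visited = {start}, {goal}
    fwd_queue, bwd_queue = deque([start]), deque([goal])

    def expand(queue, visited, other_visited):
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor in other_visited:
                return neighbor               # frontiers intersect here
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
        return None

    while fwd_queue and bwd_queue:
        meet = expand(fwd_queue, fwd_visited, bwd_visited)  # forward step
        if meet is not None:
            return meet
        meet = expand(bwd_queue, bwd_visited, fwd_visited)  # backward step
        if meet is not None:
            return meet
    return None

graph = {'S': ['A'], 'A': ['S', 'B'], 'B': ['A', 'G'], 'G': ['B']}
print(bidirectional_search(graph, 'S', 'G'))   # -> 'B'
```

Searching backward like this assumes edges can be followed in reverse; for a directed graph, the backward search would need a reversed adjacency dict.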

PART - A

(1) Define Artificial Intelligence (AI).


 Artificial Intelligence is the study of how to make computers do things at which at the moment,
people are better.

 “Artificial Intelligence is a way of making a computer, a computer-controlled robot, or


software to think intelligently, in the similar manner the intelligent humans think.”

 Artificial Intelligence is defined as the study and design of intelligent agents, where an
intelligent agent is a system that perceives the environment and takes actions.

(2) Differentiate between Intelligence and Artificial Intelligence.

INTELLIGENCE vs ARTIFICIAL INTELLIGENCE

 Intelligence is a natural process; Artificial Intelligence is programmed by humans.
 Intelligence is hereditary; Artificial Intelligence is not.
 Intelligence requires knowledge; Artificial Intelligence requires a knowledge base and
electricity to generate output.
 No single human is an expert, though we may get better solutions from other humans; expert
systems aggregate many persons' experience and ideas.

(3) Is AI a science, or is it engineering? Or neither or both? Explain.


 Artificial Intelligence is most certainly a science, but it would be nothing without engineering.
Computer scientists need somewhere to place their programs, such as computers, servers,
robots, cars, etc.

 Without engineers, scientists would have no outlet to test their Artificial Intelligence on.
Science and engineering go hand in hand; each benefits the other. While the engineers
build the machines, the scientists write the code for their AI.

(4) Explain why problem formulation must follow goal formulation.


 Goal formulation is used to steer the agent in the right direction, thus ignoring any redundant
actions. Problem formulation must follow this because it is based on goal formulation.

 It is the process of deciding what actions and states to consider given a certain goal. A goal
may be set in stone, but how you achieve it can vary.

 Usually, the most optimal way is chosen.

(5) Define Artificial Intelligence in terms of rational acting.
 A field of study that seeks to explain and emulate intelligent behaviors in terms of
computational processes.

 The branch of computer science that is concerned with the automation of


intelligent behavior

(6) Define Artificial Intelligence in terms of rational thinking.


 The study of mental faculties through the use of computational models.

 The study of the computations that make it possible to perceive, reason and act

(7) What is meant by Turing test?


 A Turing Test is a method of inquiry in artificial intelligence (AI) for determining whether or
not a computer is capable of thinking like a human being. The test is named after Alan Turing.

 To conduct this test we need two people and one machine.

 One person will be the interrogator (i.e., the questioner), who asks questions to the other
person and to the machine.

 All three are placed in separate rooms.

 The interrogator knows them only as A and B, and has to identify which is the person and which is the machine.

 The goal of the machine is to make Interrogator believe that it is the person’s answer.

 If the machine succeeds in fooling the interrogator, the machine is said to act like a human.
Programming a computer to pass the Turing test is very difficult.

(8) What are the capabilities, computer should possess to pass Turing test?
 Natural language processing -to enable it to communicate successfully in English

 Knowledge representation - to store what it knows or hears

 Automated reasoning - to use the stored information to answer questions and to draw new conclusions

 Machine Learning - to adapt to new circumstances and to detect and extrapolate patterns.

(9) What is meant by Total Turing Test ?


 The test which includes video signals, so that the interrogator can test the perceptual
abilities of the machine, is termed the Total Turing Test.

(10) What are the capabilities computers needs to pass total Turing test?
 Computer vision - to perceive objects.

 Robotics - to manipulate objects and move about.

(11) Define an agent.


 An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.

(12) What is an agent function? Differentiate an agent function and an agent program.
 An agent’s behavior is described by the agent function that maps any given percept sequence
to an action.

 AGENT FUNCTION - An abstract mathematical description

 AGENT PROGRAM - A concrete implementation, running on the agent Architecture.

(13) What are the factors that a rational agent should depend on at any given time?
 The performance measure that defines degree of success.

 Everything that the agent has perceived so far. We will call this complete perceptual history
the percept sequence.

 The actions that the agent can perform.

(14) Define an Omniscient agent.


 An omniscient agent knows the actual outcome of its action and can act accordingly; but
omniscience is impossible in reality.

(15) Give the structure of agent in an environment


 Agent interacts with environment through sensors and actuators.

 An Agent is anything that can be viewed as perceiving (i.e.) understanding its environment
through sensors and acting upon that environment through actuators.

(16) Define Ideal Rational Agent
 For each possible percept sequence, an ideal rational agent should do whatever action is
expected to maximize its performance measure on the basis of the evidence provided by the
percept sequence & whatever built in knowledge that the agent has.

(17) Why are condition-action rules important in the design of an agent?


 Rules are used to represent relationships. Rule-based knowledge representation employs IF
condition (premise/antecedent) THEN action (goal/consequent) statements.

 For example: IF the heating element glows AND the bread is always dark THEN the toaster
thermostat is broken

 When the problem situation matches the IF part of a rule, the action specified by the THEN
part of the rule is performed.
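The toaster example above can be sketched as a tiny rule interpreter in Python. The predicates and rule set here are hypothetical, chosen only to mirror the IF-THEN example:

```python
# Condition-action rules: each rule pairs an IF predicate over the state
# with a THEN action (here, a diagnosis string).
rules = [
    (lambda s: s["element_glows"] and s["bread_dark"],
     "toaster thermostat is broken"),
    (lambda s: not s["element_glows"],
     "heating element is faulty"),
]

def diagnose(state):
    """Fire the first rule whose IF part matches the state."""
    for condition, action in rules:
        if condition(state):
            return action
    return "no rule matched"

print(diagnose({"element_glows": True, "bread_dark": True}))
# -> toaster thermostat is broken
```

This mirrors the match-then-act cycle described above: when the situation matches a rule's IF part, the corresponding THEN action is performed.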

(18) List down the characteristics of intelligent agent.


 Internal characteristics are

 Learning/reasoning

 Reactivity

 Autonomy

 Goal-oriented

 External characteristics are

 Communication

 Cooperation

 Mobility

 Character

(19) What is the use of online search agents in unknown environment?


 Online search agents operate by interleaving computation and action: first it takes an action,
and then it observes the environment and computes the next action.

 Online search is a good idea in dynamic or semi dynamic domains and stochastic domains.
Online search is a necessary idea for an exploration problem, where the states and actions are
unknown to the agent.

(20) Define operationalization.


 The process of creating a formal description of a problem, using knowledge about the
problem, so as to create a program for solving it, is called operationalization.

(21) List the major components in problem formulation in AI.
The four components are:

1. Initial state.

2. State Space

3. Goal Test and

4. Path Cost

(22) What can AI do today?


 Autonomous planning and scheduling

 Game playing

 Autonomous control

 Diagnosis, logistics planning, robotics

 Virtual assistants, etc.

(23) What is a task environment? How it is specified?


 Task environments are essentially the "problems" to which rational agents are the "solutions".

 A Task environment is specified using PEAS (Performance, Environment, Actuators, and


Sensors) description

 Performance measure – evaluates the behaviour of the agent in an environment.

 For example, for a tutoring agent: Environment – a set of students and a testing agency;
Actuators – display of exercises, suggestions, and corrections; Sensors – keyboard entry.

(24) List the properties of task environments.


 Fully observable vs. partially observable.

 Deterministic vs. stochastic.

 Episodic vs sequential

 Static vs dynamic.

 Discrete vs. continuous.

 Single agent vs. multiagent.

 Known vs. Unknown

(25) What are the different kinds of agent programs?


 Simple reflex agents

 Model-based reflex agents

 Goal-based agents

 Utility-based agents

 Learning agent

(26) What are utility based agents?
 Goals alone are not really enough to generate high-quality behavior in most environments.

 For example, there are many action sequences that will get the taxi to its destination (thereby
achieving the goal) but some are quicker, safer, more reliable, or cheaper than others.

 A utility function maps a state (or a sequence of states) onto a real number, which describes
the associated degree of happiness.
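As a toy sketch of the taxi example (the routes and the utility weights are hypothetical), a utility-based agent picks the action whose resulting state scores highest:

```python
# Each action leads to a state; utility trades off time against risk.
routes = {"highway": {"time": 20, "risk": 0.2},
          "backroad": {"time": 35, "risk": 0.05}}

def result(action):
    """Transition model: the state reached by taking `action`."""
    return routes[action]

def utility(state):
    """Map a state to a real number: prefer fast and safe routes."""
    return -state["time"] - 50 * state["risk"]

def best_action(actions, result, utility):
    """Pick the action whose resulting state maximizes utility."""
    return max(actions, key=lambda a: utility(result(a)))

print(best_action(routes.keys(), result, utility))   # -> highway
```

With these weights the highway scores -30 against the backroad's -37.5, so the agent takes the highway; raising the risk penalty would flip the choice, which is exactly the trade-off a bare goal test cannot express.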

(27) What are the three classes of problem?


 Ignorable, in which solution steps can be ignored.

 Recoverable, in which solution steps can be undone.

 Irrecoverable, in which solution steps cannot be undone.

(28) How can the performance of an agent be improved?


 Performance of any agent can be improved by learning. Learning is a process that improves
the knowledge of an artificially intelligent program by making observations about its
environment.

(29) What are the steps involved to solve a problem in AI?


 Define the problem (Define a state space- Identify Initial State-Specify goal states-Specify set
of rules.)

 Analyze the problem

 Identify the possible solutions

 Choose the best solution

 Implement the best solution

(30) List some of the uninformed search techniques.


 Breadth-first Search

 Depth-first Search

 Depth Limited Search

 Iterative Deepening Depth First Search

 Uniform Cost Search

 Bidirectional Search
