AI PROGRAMMING LANGUAGES
A number of programming languages exist that are used to build AI systems. General programming languages like C++ and Java are often used because these are the languages with which most computer scientists have experience.
There also exist two programming languages that have
features that make them particularly useful for
programming AI projects: PROLOG and LISP.
PROLOG:
PROLOG (Programming in Logic) is a language designed to enable programmers to build a database of facts and rules, and then to have the system answer questions by a process of logical deduction using the facts and rules in the database (a rough sketch of this idea follows the example below).
Facts entered into a PROLOG database might look like this:
tasty(cheese).
made_from(cheese, milk).
contains(milk,calcium).
These facts can be expressed as the following English
statements:
Cheese is tasty.
Cheese is made from milk.
Milk contains calcium.
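PROLOG performs this kind of deduction automatically. Purely as an illustration of the idea (hypothetical names, not PROLOG itself and not any standard library), a minimal Python sketch might store the same facts and answer a question by chaining two of them:
# Minimal sketch: the facts above stored as Python tuples, with one
# hand-written rule standing in for PROLOG's built-in logical deduction.

FACTS = {
    ("tasty", "cheese"),
    ("made_from", "cheese", "milk"),
    ("contains", "milk", "calcium"),
}

def holds(*query):
    # A query succeeds if it matches a stored fact exactly.
    return tuple(query) in FACTS

def contains_nutrient(food, nutrient):
    # Rule: food contains nutrient if it is made from a source
    # that contains the nutrient.
    for fact in FACTS:
        if fact[0] == "made_from" and fact[1] == food:
            if holds("contains", fact[2], nutrient):
                return True
    return False

print(holds("tasty", "cheese"))                # True
print(contains_nutrient("cheese", "calcium"))  # True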
LISP:
LISP (LISt Processing) is a language that more closely resembles imperative programming languages such as C++ and Pascal than PROLOG does. As its name suggests, LISP is built around the handling of lists of data. A list in LISP is contained within parentheses, such as:
(A B C)
This is a list of three items.
A program in LISP can be treated as data. This introduces
the possibility of writing self-modifying programs in LISP.
LISP is a far more complex language syntactically than PROLOG.
AGENT AND ENVIRONMENT
An AI system is composed of an agent and its
environment. The agents act in their environment.
An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.
-> Human agent: It has sensory organs such as eyes, ears, nose, tongue and skin, which act as sensors, and other organs such as hands, legs and mouth, which act as effectors.
-> Robotic agent: It has cameras and infrared range finders for sensors, and various motors and actuators for effectors.
-> Software agent: It has encoded bit strings as its programs and actions.
Intelligent Agent:
An intelligent agent is an AI system, in hardware and/or software, with some degree of autonomy and the capacity to make decisions and take actions.
Intelligent agents are more advanced than conventional
agents.
Agent Terminology:
-> Performance measure of agent: It is the criterion that determines how successful an agent is.
-> Behavior of agent: It is the action that an agent performs after any given sequence of percepts.
-> Percept: It is the agent's perceptual input at a given instant.
-> Percept sequence: It is the history of all that the agent has perceived to date.
-> Agent function: It is a map from the percept sequence to an action (see the sketch below).
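As a rough illustration of the last point (hypothetical percepts and actions, not a standard API), an agent function can be pictured in Python as a mapping from the percept sequence to an action:
# Minimal sketch: an agent function maps the percept sequence seen so far
# to an action. Percept and action names here are made up.

def agent_function(percept_sequence):
    # This particular agent only looks at the most recent percept,
    # but the whole history is available to it.
    if percept_sequence[-1] == "dirty":
        return "suck"
    return "move"

percepts = []
for p in ["clean", "dirty", "clean"]:
    percepts.append(p)                 # the percept sequence grows
    print(agent_function(percepts))    # and is mapped to an action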
Rationality:
It is nothing but the status of being reasonable, sensible and having a good sense of judgement.
It is concerned with the expected actions and results, depending upon what the agent has perceived.
Performing actions with the aim of obtaining useful information is an important part of rationality.
Ideal Rational Agent:
An ideal rational agent is one that is capable of performing the expected actions to maximize its performance measure, on the basis of:
-> Its percept sequence
-> Its built-in knowledge base
Rationality of an agent depends on the following four
factors:
-> The performance measure.
-> The agent's percept sequence till now.
-> The agent's prior knowledge about the environment.
-> The actions that the agent can carry out.
A rational agent always performs the right action, where the right action means the action that causes the agent to be most successful for the given percept sequence.
The problem the agent solves is characterized by its Performance measure, Environment, Actuators and Sensors (PEAS). For example, a robotic vacuum cleaner could be described by its Performance measure (amount of dirt cleaned), Environment (the rooms to be cleaned), Actuators (wheels, brushes and a suction mechanism) and Sensors (dirt and location sensors).
TYPES OF AGENTS
Agents can be grouped into four classes based on their
degree of perceived intelligence and capability:
-> Simple Reflex Agents
-> Model-Based Reflex Agents
-> Goal-Based Agents
-> Utility-Based Agents
Simple reflex agents:
These agents ignore the rest of the percept history and act only on the basis of the current percept. The agent function is based on condition-action rules. A condition-action rule is a rule that maps a state (i.e. a condition) to an action. If the condition is true, then the action is taken; otherwise it is not. This agent function only succeeds when the environment is fully observable.
For simple reflex agents operating in partially observable
environments, infinite loops are often unavoidable.
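A minimal sketch of a simple reflex agent, assuming a made-up vacuum-style environment with percepts "dirty" and "clean" (illustrative only, not a standard implementation):
# Sketch: a simple reflex agent ignores the percept history and applies a
# condition-action rule to the current percept alone.

RULES = {
    "dirty": "suck",   # condition -> action
    "clean": "move",
}

def simple_reflex_agent(current_percept):
    # No state, no history: just look up the current percept.
    return RULES.get(current_percept, "do_nothing")

print(simple_reflex_agent("dirty"))  # suck
print(simple_reflex_agent("clean"))  # move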
Problems:
-> Very limited intelligence.
-> No knowledge of the non-perceptual parts of the state.
-> The set of condition-action rules is usually too big to generate and store.
-> If any change occurs in the environment, the collection of rules needs to be updated.
Model-based reflex agents:
It works by finding a rule whose condition matches the current situation. This agent can handle partially observable environments by using a model of the world. The agent has to keep track of an internal state, which is adjusted by each percept and depends on the percept history. The current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen. Updating the state requires information about the following (a sketch of such an agent follows this list):
-> How the world evolves independently of the agent, and
-> How the agent's actions affect the world.
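A minimal sketch of a model-based reflex agent (made-up percepts and a toy model of the world, an illustration rather than a standard implementation), showing the internal state being updated from each percept before a rule is applied:
# Sketch: the agent keeps an internal state that each percept updates,
# so it can act sensibly even when the current percept is incomplete.

class ModelBasedReflexAgent:
    def __init__(self):
        # Internal model of the part of the world the agent cannot see.
        self.state = {"location": "A", "dirty_rooms": set()}

    def update_state(self, percept):
        # How the percept changes the agent's picture of the world.
        location, status = percept
        self.state["location"] = location
        if status == "dirty":
            self.state["dirty_rooms"].add(location)
        else:
            self.state["dirty_rooms"].discard(location)

    def act(self, percept):
        self.update_state(percept)
        if self.state["location"] in self.state["dirty_rooms"]:
            return "suck"
        return "move"

agent = ModelBasedReflexAgent()
print(agent.act(("A", "dirty")))   # suck
print(agent.act(("B", "clean")))   # move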
Goal-based agents:
These kinds of agents take decisions based on how far they currently are from their goal. Every action they take is intended to reduce the distance to the goal. This gives the agent a way to choose among multiple possibilities, selecting the one that reaches a goal state. The knowledge that supports its decisions is represented explicitly and can be modified, which makes these agents more flexible. They usually require search and planning. The goal-based agent's behavior can easily be changed.
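A minimal sketch of the idea, using a made-up one-dimensional world (illustrative only, not a general planner): the agent simulates each possible action and picks the one that brings it closest to the goal:
# Sketch: a goal-based agent compares the predicted outcome of each
# action and chooses the one that reduces its distance to the goal.

GOAL = 7
ACTIONS = {"left": -1, "right": +1, "stay": 0}

def goal_based_agent(position):
    # Choose the action whose predicted result is closest to the goal.
    return min(ACTIONS, key=lambda a: abs(position + ACTIONS[a] - GOAL))

position = 3
while position != GOAL:
    action = goal_based_agent(position)
    position += ACTIONS[action]
    print(action, "->", position)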
Utility-based agents:
Agents that are developed with their end uses as building blocks are called utility-based agents. When there are multiple possible alternatives, utility-based agents are used to decide which one is best. They choose actions based on a preference (utility) for each state. Sometimes achieving the desired goal is not enough.
We may look for a quicker, safer and cheaper trip to reach a destination. Agent happiness should be taken into consideration, and utility describes how happy the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility.
A utility function maps a state onto a real number which describes the associated degree of happiness.
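A minimal sketch of expected-utility maximization (the outcomes, probabilities and utility values are invented purely for illustration):
# Sketch: each action leads to possible outcome states with given
# probabilities; the agent picks the action with the highest expected utility.

# action -> list of (probability, utility of the resulting state)
OUTCOMES = {
    "fast_route": [(0.7, 10.0), (0.3, -5.0)],  # quicker but risky
    "safe_route": [(1.0, 6.0)],                # slower but certain
}

def expected_utility(action):
    return sum(p * u for p, u in OUTCOMES[action])

def utility_based_agent():
    # Choose the action that maximizes expected utility.
    return max(OUTCOMES, key=expected_utility)

for action in OUTCOMES:
    print(action, expected_utility(action))
print("chosen:", utility_based_agent())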