UNIT1

The document provides an overview of Artificial Intelligence (AI), covering its definition, foundations, history, and current state of the art. It discusses key concepts such as intelligent agents, machine learning, and ethical considerations, as well as significant milestones in AI development from its inception to the present. Additionally, it highlights advancements in generative AI, large language models, and applications across various fields, while addressing challenges and future outlooks for AI technology.

UNIT-I Introduction: What is AI, Foundations of AI, History of AI, The State of Art.

Intelligent Agents: Agents and Environments, Good Behaviour: The Concept of Rationality,
The Nature of Environments, The Structure of Agents.

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think and learn like humans, enabling them to perform tasks that typically
require human cognitive functions. This includes capabilities like perception, reasoning,
learning, problem-solving, and even creativity. AI systems learn and improve through
exposure to data, identifying patterns and relationships to enhance their performance.

Or, more simply:

AI is the ability of a computer system to perform tasks that normally require human
intelligence, such as learning, reasoning, problem-solving, understanding language, and
recognizing images or patterns.

Foundations of AI

The foundations of Artificial Intelligence (AI) are rooted in several key areas: data, machine
learning, neural networks, and deep learning. These concepts enable AI systems to learn from
data, identify patterns, and make predictions or decisions. AI's development also draws from
philosophical inquiries about the mind and reasoning, as well as insights from cognitive
science.
Here's a more detailed look at the foundational elements:

• Data: AI systems rely on vast amounts of data to learn and extract meaningful patterns. This data is used to train algorithms and build predictive models.

• Machine Learning (ML): ML is a core component of AI, allowing systems to learn from data without explicit programming. It enables tasks like classification, regression, clustering, and pattern recognition.

• Neural Networks: Inspired by the human brain, neural networks are mathematical models that process data through interconnected nodes (neurons). They are fundamental to many AI applications, particularly in areas like image and text recognition.

• Deep Learning: Deep learning utilizes artificial neural networks with multiple layers (deep architectures) to analyze complex datasets. It has achieved significant breakthroughs in areas like image and speech recognition.

• Knowledge Representation: This involves organizing information so that computers can understand, store, and retrieve it. It is crucial for enabling AI systems to effectively utilize and reason with data.

• Ethics: As AI systems become more powerful, it is crucial to consider the ethical implications of their use. This includes issues like bias, fairness, and transparency.

• Natural Language Processing (NLP): NLP focuses on enabling computers to understand and process human language. It is a critical area for building conversational AI and other applications that interact with humans through language.

• Computer Vision: This field enables computers to "see" and interpret images and videos. It has applications in areas like facial recognition, object detection, and autonomous vehicles.

• Foundation Models: These are large-scale pre-trained models that can be adapted to various tasks, making them highly versatile and valuable in different AI applications.
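A minimal sketch of the machine-learning and neural-network ideas above, assuming nothing beyond plain Python: a Rosenblatt-style perceptron (an early neural network) that learns the logical AND function from example data rather than from explicit rules. The learning rate and epoch count are arbitrary choices for illustration.

```python
# Illustrative example (not from the unit): a perceptron learning AND.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire (1) if the weighted sum exceeds 0
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # learned AND outputs
```

A single perceptron can only learn linearly separable patterns (it cannot learn XOR), which is one reason multi-layer "deep" networks matter.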

History of AI:

The history of Artificial Intelligence (AI) is marked by periods of rapid advancement and
periods of limited progress, often referred to as "AI winters". Early concepts of thinking
machines date back centuries, but the field as a scientific discipline emerged in the mid-20th
century. Key milestones include the development of the Turing Test, the coining of the term
"artificial intelligence", and the creation of early AI systems using symbolic reasoning.

1. Early Ideas and Foundations (Pre-1950s)

• Ancient Times: Stories and myths (e.g., the Greek myth of Talos, Chinese automata) hinted at artificial beings with intelligence.
• Philosophical Roots: Philosophers like Aristotle and Descartes explored logic, reasoning, and the mind, laying conceptual groundwork for AI.
• 1943 – Neural Model: Warren McCulloch and Walter Pitts proposed a computational model of artificial neurons, forming the base for neural networks.

2. Birth of AI as a Field (1950–1956)

• 1950 – Turing Test: Alan Turing's paper "Computing Machinery and Intelligence" proposed a method (the Turing Test) to evaluate machine intelligence.
• 1956 – Dartmouth Conference: The Dartmouth Workshop, led by John McCarthy, formally established AI as a scientific field. McCarthy coined the term "Artificial Intelligence."
3. Early AI & the First AI Winter (1950s–1970s)

• Symbolic Reasoning & Logic-Based AI: Programs like the Logic Theorist and General Problem Solver attempted to mimic human logic using rules and symbols.
• Limitations: Lack of processing power and unrealistic expectations led to disappointing progress.
• AI Winter: Disillusionment caused funding cuts and slowed research between the late 1970s and mid-1980s.

4. Resurgence & Expert Systems (1980s–1990s)

• Expert Systems: AI focused on domain-specific knowledge. Example: MYCIN (for medical diagnosis). These systems used IF–THEN rules to make decisions like human experts.
• Renewed Funding & Applications: Governments and industries began investing again due to practical applications in healthcare, business, and defense.

5. Modern AI and the Deep Learning Revolution (2000s–Present)

• Machine Learning (ML): Shifted from rule-based systems to data-driven learning. Systems began to learn patterns from data.
• 2012 – Deep Learning Breakthrough: The use of GPUs and large datasets (e.g., ImageNet) enabled deep neural networks to achieve high accuracy in tasks like image recognition (AlexNet).
• Natural Language Processing (NLP): Tools like Google Translate and Siri became popular.

6. The AI Boom and Generative AI Era (2020s)

• Generative AI: Models like GPT, DALL·E, and ChatGPT can create text, images, and code, with applications in education, design, research, and more.
• Ethical Concerns: Rise in debates around bias, misinformation, job displacement, and responsible AI development.

Key Figures in AI History

Name | Contribution
Alan Turing | Proposed the Turing Test; foundational thinker in theoretical computation.
John McCarthy | Coined the term Artificial Intelligence; organized the Dartmouth Conference.
Frank Rosenblatt | Created the Perceptron, an early neural network model.

The State of the Art:

The "state of the art" in artificial intelligence (AI) refers to the most advanced and current
level of development in the field, encompassing the latest techniques, technologies, and
knowledge. In AI, this often means the best-performing models and algorithms for specific
tasks, pushing the boundaries of what AI can achieve. Recent advancements have been
particularly notable in areas like deep learning, natural language processing, and computer
vision, with applications ranging from self-driving cars to medical diagnoses.

🧠 1. Leading AI Technologies (2025)

1.1. Generative AI

• Definition: AI models that generate new content (text, images, code, audio, video).
• Popular Models:
  o GPT-4o / GPT-4.5: Advanced language models for reasoning, coding, and dialogue.
  o DALL·E 3: Generates realistic and artistic images from text prompts.
  o Sora (by OpenAI): Generates short videos from text prompts.
• Applications: Content creation, virtual assistants, marketing, game design, filmmaking.

1.2. Large Language Models (LLMs)

• Capabilities: Text generation, translation, summarization, question-answering, reasoning.
• Trends:
  o Multimodal models: Handle text, image, audio, and video together.
  o Open-source LLMs: LLaMA, Mistral, Falcon gaining traction.
  o Context windows: Models now handle >100,000 tokens of context (long documents).

1.3. Computer Vision

• Tasks: Image recognition, object detection, facial recognition, scene understanding.
• State-of-the-art models: Vision Transformers (ViTs), CLIP (OpenAI), SAM (Segment Anything Model).
• Applications: Surveillance, medical imaging, autonomous driving, AR/VR.

1.4. Robotics & Embodied AI

• Trends:
  o AI controlling physical robots for complex tasks.
  o Boston Dynamics, Tesla Optimus: Human-like robots with learning abilities.
• Embodied AI: Combines perception, movement, and decision-making in physical environments.

1.5. Autonomous Systems

• Self-driving Cars: Waymo, Tesla, Cruise making progress in urban navigation.
• Drones & Delivery Bots: Used for logistics, agriculture, military.

📊 2. Key Research Trends


Area | Advances & Focus
Multimodal AI | Text + Image + Audio + Video models (e.g., GPT-4o)
AI Reasoning | Improved logical thinking and step-by-step problem solving
Reinforcement Learning | Used in robotics, games (e.g., AlphaGo, AlphaStar)
AI Safety & Ethics | Fairness, bias detection, transparency, alignment
Federated Learning | Training AI models without centralizing user data (privacy-first AI)

3. Real-World Applications

• Healthcare: AI-assisted diagnosis, drug discovery, robotic surgery.
• Education: AI tutors, personalized learning platforms.
• Finance: Fraud detection, algorithmic trading, risk analysis.
• Law: Legal research, contract analysis, AI-assisted judgments.
• Creative Arts: Music composition, film scripts, digital art, video generation.
4. Challenges and Ethical Issues

• Bias & Fairness: Ensuring models are free from discrimination.
• Misinformation: Deepfakes and fake news generation.
• Job Displacement: Automation replacing routine or cognitive jobs.
• Security Risks: Misuse of powerful AI tools (e.g., for cybercrime or propaganda).
• AI Alignment: Ensuring AI systems act in line with human values and intentions.

5. The Future Outlook

• AI Agents with autonomy, memory, and decision-making capabilities.
• AI-human collaboration in creative work, science, and policy.
• Regulations and governance to control and guide responsible AI use.
• Human-like AI with emotions, adaptive learning, and social intelligence.

Intelligent Agents:
An AI system is composed of an agent and its environment. The agents act in their
environment. The environment may contain other agents.

What are Agent and Environment?


An agent is anything that can perceive its environment through sensors and acts upon that
environment through effectors.

A human agent has sensory organs such as the eyes, ears, nose, tongue, and skin, which parallel the sensors, and other organs such as the hands, legs, and mouth, which act as effectors.

A robotic agent has cameras and infrared range finders for sensors, and various motors and actuators for effectors.

A software agent has encoded bit strings as its percepts and actions.
Agent Terminology
Performance Measure of Agent − The criterion that determines how successful an agent is.
Behavior of Agent − The action that the agent performs after any given sequence of percepts.
Percept − The agent's perceptual input at a given instant.
Percept Sequence − The complete history of everything the agent has perceived to date.
Agent Function − A map from the percept sequence to an action.
Rationality
Rationality is the state of being reasonable, sensible, and having good judgment.
Rationality is concerned with expected actions and results, depending on what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
What is an Ideal Rational Agent?
An ideal rational agent is one that is capable of taking the expected actions to maximize its performance measure, on the basis of:
Its percept sequence
Its built-in knowledge base
The rationality of an agent depends on the following:
The performance measure, which determines the degree of success.
The agent's percept sequence so far.
The agent's prior knowledge about the environment.
The actions that the agent can carry out.
A rational agent always performs the right action, where the right action is the one that makes the agent most successful, given the percept sequence. The problem an agent solves is characterized by its Performance measure, Environment, Actuators, and Sensors (PEAS).
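A PEAS characterization can be made concrete. Below is a sketch for an automated taxi, a classic textbook example; the specific lists are illustrative, not exhaustive.

```python
# PEAS description sketch for an automated taxi (illustrative values only).
peas_taxi = {
    "Performance": ["safety", "legal driving", "speed", "passenger comfort"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer"],
}
for part, items in peas_taxi.items():
    print(f"{part}: {', '.join(items)}")
```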
The Structure of Intelligent Agents
An agent's structure can be viewed as:

Agent = Architecture + Agent Program


Architecture = the machinery that an agent executes on.
Agent Program = an implementation of an agent function.
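The agent-program idea can be sketched in code. Below, a hypothetical table-driven agent makes the agent function explicit as a lookup table from percept sequences to actions; the percepts and table entries are invented for illustration, and a real table would be infeasibly large.

```python
# Sketch: the agent program implements the agent function
# (percept sequence -> action) via an explicit lookup table.
class TableDrivenAgent:
    def __init__(self, table):
        self.percepts = []   # percept sequence observed so far
        self.table = table   # maps percept-sequence tuples to actions

    def program(self, percept):
        self.percepts.append(percept)
        # Look up the whole sequence; default to doing nothing.
        return self.table.get(tuple(self.percepts), "NoOp")

# Invented entries for a tiny vacuum-world-style example.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = TableDrivenAgent(table)
print(agent.program(("A", "Dirty")))   # Suck
print(agent.program(("A", "Clean")))   # Right
```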
Simple Reflex Agents
They choose actions based only on the current percept.
They are rational only if a correct decision can be made on the basis of the current percept alone, i.e., only if the environment is fully observable.
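A simple reflex agent is easy to sketch for the two-square vacuum world, a standard textbook example. The names below are illustrative; the key point is that the action depends only on the current percept, with no memory of past percepts.

```python
# Simple reflex agent for the two-square vacuum world (squares A and B).
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":   # condition-action rule: dirt -> Suck
        return "Suck"
    elif location == "A":   # clean at A -> move Right
        return "Right"
    else:                   # clean at B -> move Left
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # Left
```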

Model-Based Reflex Agents

They use a model of the world to choose their actions, and they maintain an internal state.

Model − knowledge about how things happen in the world.

Internal State − a representation of the unobserved aspects of the current state, based on the percept history.

Updating the state requires information about:

How the world evolves.
How the agent's actions affect the world.
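The model-based idea can be sketched using the same vacuum world; the state representation below is an assumption for illustration. Because the agent remembers what it has seen, it can stop once its model says both squares are clean, something a simple reflex agent cannot do.

```python
# Model-based reflex agent: keeps an internal state updated from percepts.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}  # internal world model

    def program(self, percept):
        location, status = percept
        self.state[location] = status   # update state from the current percept
        if status == "Dirty":
            return "Suck"
        # Use the model: if both squares are known Clean, stop acting.
        if all(s == "Clean" for s in self.state.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.program(("A", "Dirty")))  # Suck
print(agent.program(("A", "Clean")))  # Right (B is still unknown)
```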

Goal-Based Agents

They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modification.

Goal − a description of desirable situations.

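A goal-based agent can be sketched as choosing whichever action moves it toward an explicitly represented goal. The grid world and greedy movement rule below are invented for illustration.

```python
# Goal-based step: pick the move that reduces the distance to the goal.
def goal_based_step(position, goal):
    x, y = position
    gx, gy = goal
    if gx > x:
        return "Right"
    if gx < x:
        return "Left"
    if gy > y:
        return "Up"
    if gy < y:
        return "Down"
    return "Stop"  # goal reached

print(goal_based_step((0, 0), (2, 1)))  # Right
print(goal_based_step((2, 1), (2, 1)))  # Stop
```

Because the goal is explicit data rather than a hard-wired rule, changing the agent's behaviour only requires changing the goal, which is the flexibility the text describes.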
Utility-Based Agents
They choose actions based on a preference (utility) for each state.

Goals alone are inadequate when:

There are conflicting goals, only some of which can be achieved.

Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of each goal.
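Weighing likelihood of success against importance is an expected-utility calculation. The actions, probabilities, and utilities below are made up for illustration.

```python
# Utility-based choice: pick the action with the highest expected utility.
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs for one action
    return sum(p * u for p, u in outcomes)

actions = {
    "take_highway": [(0.8, 10), (0.2, -5)],  # fast, small risk of a jam
    "take_backroad": [(1.0, 6)],             # slower but certain
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take_highway (expected utility 7.0 vs 6.0)
```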