
Unit I: Artificial Intelligence (AI) and Its Subfields

1. Introduction to Artificial Intelligence

• Artificial Intelligence (AI) is the branch of computer science that aims to create machines capable of performing tasks that normally require human intelligence.

• It combines concepts from computer science, mathematics, psychology, neuroscience, linguistics, and philosophy.

• The goal of AI is to build systems that can (a short agent-loop sketch in Python follows this list):

o Perceive (sense the environment)

o Reason (make decisions)

o Learn (improve from experience)

o Act (perform actions to achieve goals)
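This perceive-reason-learn-act cycle can be pictured as a simple agent loop. Below is a minimal sketch assuming a toy thermostat-style environment; every name, number, and update rule in it is an illustrative assumption, not from the text.

# A toy perceive-reason-learn-act loop (illustrative thermostat agent).
import random

TARGET = 21.0          # desired room temperature in °C (assumed)
drift_estimate = 0.0   # the agent's "learned" model of the room

for step in range(5):
    temp = 18 + 6 * random.random()                       # Perceive: sense the environment
    action = "heat" if temp < TARGET else "cool"          # Reason: decide what to do
    error = temp - TARGET
    drift_estimate = 0.9 * drift_estimate + 0.1 * error   # Learn: update from experience
    print(f"step {step}: temp={temp:.1f}°C -> {action}")  # Act: carry out the decision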

Key Characteristics of AI:

• Problem-solving ability

• Adaptability and learning

• Decision-making under uncertainty

• Natural communication with humans (speech, text, gestures)

• Automation of intelligent behavior

2. History of AI

Early Foundations:

• Ancient Times: Philosophers like Aristotle studied reasoning and logic.

• 1940s–50s: Development of digital computers made AI possible.

Important Milestones:

• 1950 – Alan Turing introduced the concept of machine intelligence through the Turing Test.

• 1956 – The Dartmouth Conference (John McCarthy, Marvin Minsky, Allen Newell, Herbert Simon) officially coined the term Artificial Intelligence.

• 1960s–70s: Rise of symbolic AI and expert systems.

• 1980s: Knowledge-based systems became popular (used in medicine, engineering).

• 1997: IBM's Deep Blue defeated world chess champion Garry Kasparov.

• 2000s: Growth of data and improved algorithms led to machine learning breakthroughs.

• 2010s–Present: Rise of deep learning and neural networks, with applications in self-driving cars, speech assistants (Siri, Alexa), healthcare AI, and robotics.

3. Turing Test

• Proposed by: Alan Turing in 1950 in his paper "Computing Machinery and Intelligence".

• Purpose: To test whether a machine can exhibit human-like intelligence.

Concept

• Turing posed the question "Can machines think?" and proposed the Imitation Game (now called the Turing Test).

• In the test (a toy sketch follows this list):

1. A human evaluator interacts with two hidden entities through text (no voice or physical presence).

2. One is a human and the other is a machine (AI).

3. If the evaluator cannot reliably distinguish which is human and which is machine, the machine is said to have shown intelligent behavior equivalent to a human's.
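A toy illustration of this setup in Python; the labels, prompt, and canned replies are all invented for illustration — a real test would put a live human and a conversational AI behind the labels.

# Toy imitation-game harness: the evaluator sees only labels "A" and "B".
import random

def human_responder(prompt):
    return "I remember the smell of rain on the school playground."

def machine_responder(prompt):
    # A machine that merely mimics a human-sounding reply.
    return "I remember the smell of rain on the school playground."

responders = [human_responder, machine_responder]
random.shuffle(responders)                 # hide which label is which
labels = dict(zip("AB", responders))

for label, responder in labels.items():
    print(label, "->", responder("Tell me about your childhood."))
# The evaluator must now guess which of A/B is the machine; if guesses are
# no better than chance over many rounds, the machine "passes" the test.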

Significance

• First formal test of machine intelligence.

• Simple and practical way to evaluate AI systems.

Limitations

• Passing the test doesn't mean the machine truly "understands" or "thinks"; it may just mimic human-like responses.

• Focuses only on linguistic intelligence, ignoring other aspects (perception, creativity, reasoning).

• Modern AI (like chatbots) can sometimes fool humans, but that does not equal general intelligence.

AI Concepts, Applications & Challenges

4. Artificial General Intelligence (AGI)

• Artificial General Intelligence (AGI) refers to AI systems that can perform any intellectual task that a human can do.

• Unlike Narrow AI (designed for specific tasks like translation, speech recognition, or chess), AGI would have general cognitive abilities.

Characteristics of AGI:

• Learning across domains (can apply knowledge from one area to another)

• Reasoning and problem-solving in unfamiliar situations

• Understanding context like humans

• Self-awareness and autonomy

Current Status:

• As of today, AI is still in the stage of Narrow AI.

• AGI remains a theoretical goal; research is ongoing.

5. Industry Applications of AI

AI has become a core technology across multiple industries:

a) Healthcare

• Disease diagnosis (e.g., detecting cancer in scans)

• Drug discovery

• Personalized medicine

• Robotic surgeries

b) Finance

• Fraud detection

• Algorithmic trading

• Risk assessment and credit scoring

• Customer support chatbots

c) Retail & E-Commerce

• Personalized recommendations (Amazon, Flipkart)

• Inventory management

• Virtual shopping assistants

• Demand forecasting

d) Transportation

• Self-driving cars (Tesla, Waymo)

• Traffic management systems

• Route optimization (Google Maps, Uber)

e) Manufacturing

• Predictive maintenance

• Quality control using computer vision

• Supply chain optimization

• Smart robotics in production lines

f) Education

• Intelligent tutoring systems

• Personalized learning platforms

• AI-driven grading and feedback

g) Entertainment

• Recommendation engines (Netflix, YouTube, Spotify)

• Video game AI for realistic interactions

• Content creation (music, art, writing)

h) Agriculture

• Precision farming (crop monitoring, pest detection)

• Smart irrigation systems

• Yield prediction

6. Challenges of AI

a) Technical Challenges

• Data dependency: AI requires large, high-quality datasets.

• Bias and fairness: AI can inherit human or dataset biases.

• Explainability: Many AI models (especially deep learning) act as "black boxes."

• Generalization: Hard for AI to adapt outside trained scenarios.

b) Ethical & Social Challenges

• Job displacement due to automation.

• Privacy concerns (data misuse, surveillance).

• Ethical decision-making (e.g., autonomous cars in accidents).

• Accountability: Who is responsible if AI makes a mistake?

c) Research & Development Challenges

• Building AGI is still unsolved.

• High computational costs and energy consumption.

• Need for interdisciplinary collaboration (AI + psychology, law, ethics).

d) Legal & Policy Challenges

• Lack of global AI regulation and standards.

• Intellectual property issues in AI-generated content.

• National security and AI in warfare (autonomous weapons).

7. Knowledge Engineering

Definition: Knowledge Engineering is the process of designing, building, and maintaining systems that use structured knowledge to solve complex problems. It involves acquiring, representing, and applying knowledge in a machine-readable format.

Key Concepts:

• Knowledge Representation:

o Structuring knowledge in a way that computers can process (e.g., rules, ontologies, semantic networks).

o Common methods (a minimal rule-based sketch in Python follows this list):

- Rule-based systems: If-then rules (e.g., IF symptom = fever THEN diagnose = flu).

- Ontologies: Hierarchical structures defining relationships between concepts.

- Frames: Data structures for representing stereotypical situations.

- Semantic Networks: Graphs representing relationships between objects.
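As a concrete picture of the rule-based method, here is a minimal forward-chaining sketch in Python; the rules and symptom names are toy assumptions in the spirit of the fever/flu example above, not a real diagnostic system.

# Minimal forward-chaining rule engine (toy medical rules, illustrative only).
RULES = [
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "runny_nose"}, "cold"),
    ({"fever", "rash"}, "measles"),
]

def diagnose(symptoms):
    """Fire every rule whose IF-part is fully contained in the observed symptoms."""
    observed = set(symptoms)
    return [conclusion for conditions, conclusion in RULES if conditions <= observed]

print(diagnose(["fever", "cough", "headache"]))   # -> ['flu']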

Applications:

 Medical diagnosis systems.

 Decision support systems in finance and engineering.

 Knowledge management in organizations.

8. Machine Learning

Definition: Machine Learning (ML) is a subfield of AI that enables systems to learn from data
and improve their performance over time without being explicitly programmed.

Types of Machine Learning (a short sketch contrasting the first three follows the list):

1. Supervised Learning:

o Uses labeled data (input-output pairs) to train models.

o Algorithms: Linear regression, logistic regression, support vector machines (SVM), neural networks.

o Applications: Image classification, spam detection, stock price prediction.

2. Unsupervised Learning:

o Works with unlabeled data to find patterns or structures.

o Algorithms: K-means clustering, principal component analysis (PCA), autoencoders.

o Applications: Customer segmentation, anomaly detection.

3. Reinforcement Learning:

o Learns by interacting with an environment, receiving rewards or penalties.

o Algorithms: Q-learning, Deep Q-Networks (DQN).

o Applications: Game playing (e.g., AlphaGo), robotics, resource management.

4. Semi-Supervised Learning:

o Combines labeled and unlabeled data for training.

o Useful when labeled data is scarce.
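To make the contrast concrete, here is a minimal sketch of the first three paradigms in Python, assuming scikit-learn and NumPy are installed; all data values and hyperparameters are toy choices for illustration.

# Supervised vs. unsupervised vs. reinforcement learning on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# 1. Supervised: labeled (input, output) pairs guide the fit.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([0, 0, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.5], [3.5]]))       # expected: [0 1]

# 2. Unsupervised: structure is found without any labels.
pts = np.array([[0, 0], [0, 1], [10, 10], [10, 11]])
print(KMeans(n_clusters=2, n_init=10).fit(pts).labels_)   # two clusters emerge

# 3. Reinforcement: tabular Q-learning on a 5-state corridor where the
#    only reward is +1 for reaching the rightmost state.
n_states, n_actions = 5, 2               # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3
rng = np.random.default_rng(0)
for _ in range(200):                     # episodes
    s = 0
    while s != n_states - 1:
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
print(Q[:-1].argmax(axis=1))             # expected policy: [1 1 1 1] = always go right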

Deep Learning:

• A subset of ML using neural networks with multiple layers.

• Excels in handling large, complex datasets (e.g., image and speech recognition).

Key Concepts:

• Training and Testing: Splitting data into training (to build the model) and testing (to evaluate performance).

• Features: Measurable properties of data used for learning (e.g., pixel values in images).

• Overfitting and Underfitting: Overfitting occurs when a model learns noise in the training data; underfitting occurs when it fails to capture patterns.

• Evaluation Metrics: Accuracy, precision, recall, F1-score, mean squared error (a workflow sketch follows).
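A minimal sketch of this train/test workflow in Python, assuming scikit-learn and its bundled iris dataset; the dataset choice, split ratio, and model are illustrative assumptions.

# Train/test split and common classification metrics with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = load_iris(return_X_y=True)
# Hold out 25% of the data so evaluation uses examples never seen in training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro"))
print("recall   :", recall_score(y_test, y_pred, average="macro"))
print("F1-score :", f1_score(y_test, y_pred, average="macro"))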

Applications:

• Predictive analytics (e.g., sales forecasting).

• Recommendation systems (e.g., Netflix, Amazon).

• Autonomous driving (e.g., lane detection, obstacle avoidance).

9. Computer Vision (CV)

1. Introduction

• Computer Vision is a subfield of Artificial Intelligence (AI) and Computer Science that enables machines to see, interpret, and understand visual information (images, videos, real-world scenes).

• The goal is to make computers extract meaningful information from visual data, just like humans do with their eyes and brain.

Example: A self-driving car using cameras to detect pedestrians, traffic signs, and other vehicles.

2. How Computer Vision Works

Computer Vision involves several steps (a minimal pipeline sketch in Python follows the list):

1. Image Acquisition – Capturing images or video (through cameras, sensors, scanners).

2. Preprocessing – Cleaning and enhancing the image (removing noise, resizing, adjusting brightness).

3. Feature Extraction – Identifying important features (edges, shapes, colours, textures).

4. Object Detection/Recognition – Identifying and labelling objects in the image.

5. Decision Making – Using AI/ML models to perform tasks (classify, track, predict, etc.).
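A minimal sketch of steps 1–4 using OpenCV, assuming the cv2 package is installed; "scene.jpg" is a placeholder filename (the code assumes the file exists).

# Steps 1-4 of the pipeline with OpenCV; step 5 would act on the results.
import cv2

img = cv2.imread("scene.jpg")                    # 1. Image acquisition (from disk here)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # 2. Preprocessing: drop colour...
blur = cv2.GaussianBlur(gray, (5, 5), 0)         #    ...and smooth away sensor noise
edges = cv2.Canny(blur, 100, 200)                # 3. Feature extraction: edge map

# 4. Object detection with a pre-trained Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"edge pixels: {(edges > 0).sum()}, faces found: {len(faces)}")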

3. Techniques Used in Computer Vision

• Image Processing – Filters, edge detection, segmentation.

• Machine Learning – Training models to recognize patterns.

• Deep Learning (Convolutional Neural Networks – CNNs) – Modern CV systems use CNNs to achieve high accuracy in tasks like face recognition and medical image analysis (a minimal model sketch follows).
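As a concrete example, here is a minimal CNN definition in Keras, assuming TensorFlow is installed; the 28x28 grayscale input shape and layer sizes are illustrative choices (e.g., for handwritten digits), not from the text.

# A small CNN for 10-class image classification (illustrative architecture).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),            # 28x28 grayscale images
    layers.Conv2D(32, 3, activation="relu"),    # convolutions learn local features
    layers.MaxPooling2D(),                      # pooling keeps the strongest responses
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),     # one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()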

4. Applications of Computer Vision

1. Healthcare – Detecting tumors, analyzing X-rays/MRIs, surgery assistance.

2. Security & Surveillance – Face recognition, activity monitoring.

3. Autonomous Vehicles – Detecting lanes, traffic lights, pedestrians.

4. Retail & E-Commerce – Visual search (e.g., upload a product image to find similar
ones).

5. Agriculture – Monitoring crops, identifying diseases in plants.

6. Manufacturing – Quality inspection on assembly lines.

7. Social Media – Auto-tagging faces (e.g., Facebook, Instagram).

5. Challenges in Computer Vision

• Variations in lighting, angle, and environment make recognition difficult.

• Occlusion (objects partially hidden).

• Data requirements – Needs large labelled datasets for training.

• Computation cost – Deep learning models need powerful hardware.

• Bias and fairness – CV systems can be biased if trained on limited data.
