AI LAB Contents

This document provides an overview of Python syntax, covering variables, data types, operators, control structures, functions, and modules. It then presents implementations of K-Means clustering, uninformed and informed searching algorithms, Naive Bayes classification, simple and multiple linear regression, and inference through Prolog. Each section contains code examples to illustrate the concepts discussed.

PYTHON SYNTAX AND OVERVIEW

# Variables and Data Types

- Variables: In Python, you don't need to declare variables before using them. You can assign a value to a
variable using the assignment operator (=).

x = 5        # integer variable
y = "Hello"  # string variable

- Data Types: Python has several built-in data types, including the following (a short example follows the list):

- Integers (int): whole numbers, e.g., 1, 2, 3, etc.

- Floats (float): decimal numbers, e.g., 3.14, -0.5, etc.

- Strings (str): sequences of characters, e.g., "hello", 'hello', etc. Strings can be enclosed in single
quotes or double quotes.

- Boolean (bool): the truth values True and False

- List (list): ordered collections of items, e.g., [1, 2, 3], ["a", "b", "c"], etc.

- Tuple (tuple): ordered, immutable collections of items, e.g., (1, 2, 3), ("a", "b", "c"), etc.
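
A minimal sketch showing each of these types in use (the variable names here are purely illustrative):

age = 30                      # int
pi_approx = 3.14              # float
greeting = 'hello'            # str (single or double quotes)
is_valid = True               # bool
scores = [1, 2, 3]            # list: ordered and mutable
point = (1.0, 2.0)            # tuple: ordered and immutable

print(type(age))              # <class 'int'>
print(type(scores))           # <class 'list'>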

# Operators

Python groups its most common operators as follows; a combined example follows the lists.

- Arithmetic Operators:

- Addition: a + b

- Subtraction: a - b

- Multiplication: a * b

- Division: a / b

- Modulus: a % b

- Exponentiation: a ** b

- Comparison Operators:

- Equal: a == b
- Not Equal: a != b

- Greater Than: a > b

- Less Than: a < b

- Greater Than or Equal To: a >= b

- Less Than or Equal To: a <= b

- Logical Operators:

- And: a and b

- Or: a or b

- Not: not a

- Assignment Operators:

- Assign: a = b

- Add and Assign: a += b

- Subtract and Assign: a -= b

- Multiply and Assign: a *= b

- Divide and Assign: a /= b

- Modulus and Assign: a %= b
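
A short combined sketch exercising the operators above (the values are arbitrary):

a, b = 7, 3

print(a + b, a - b, a * b)     # 10 4 21
print(a / b)                   # 2.3333333333333335 (true division)
print(a % b, a ** b)           # 1 343
print(a == b, a != b, a >= b)  # False True True
print(a > 0 and b > 0)         # True
print(not a > 0)               # False

a += b                         # a is now 10
print(a)                       # 10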

# Control Structures

- If-Else Statements:

x = 5

if x > 10:
    print("x is greater than 10")
elif x == 5:
    print("x is equal to 5")
else:
    print("x is less than 10")

- For Loops:

fruits = ["apple", "banana", "cherry"]

for fruit in fruits:
    print(fruit)

- While Loops:

i = 0

while i < 5:
    print(i)
    i += 1

# Functions

- Defining a Function:

def greet(name):
    print("Hello, " + name + "!")

greet("John")

- Function Arguments:

def add(x, y):
    return x + y

result = add(3, 5)
print(result)

# Modules

- Importing a Module:

import math

print(math.sqrt(16))  # 4.0
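
Two other common import forms, sketched briefly (the numpy alias assumes numpy is installed; it is used throughout this lab):

from math import pi, sqrt   # import specific names into the current namespace
import numpy as np          # import a module under a shorter alias

print(pi, sqrt(2))
print(np.zeros(3))          # [0. 0. 0.]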

K-Means Clustering

K-Means partitions a dataset into k clusters by repeating two steps until the centroids stop moving: assign each point to its nearest centroid, then move each centroid to the mean of the points assigned to it. An implementation in Python:


import numpy as np
import matplotlib.pyplot as plt

class KMeans:
    def __init__(self, k, max_iters=1000):
        self.k = k
        self.max_iters = max_iters
        self.centroids = None
        self.labels = None

    def _init_centroids(self, X):
        # Pick k distinct data points as the initial centroids
        np.random.seed(0)
        indices = np.random.choice(X.shape[0], self.k, replace=False)
        self.centroids = X[indices, :]

    def _assign_labels(self, X):
        # Distance from every centroid to every point, shape (k, n_samples)
        distances = np.sqrt(((X - self.centroids[:, np.newaxis]) ** 2).sum(axis=2))
        self.labels = np.argmin(distances, axis=0)

    def _update_centroids(self, X):
        # Move each centroid to the mean of the points assigned to it
        for i in range(self.k):
            points = X[self.labels == i]
            if points.shape[0] > 0:
                self.centroids[i] = np.mean(points, axis=0)

    def fit(self, X):
        self._init_centroids(X)
        for _ in range(self.max_iters):
            previous_centroids = self.centroids.copy()  # copy, since updates happen in place
            self._assign_labels(X)
            self._update_centroids(X)
            if np.all(previous_centroids == self.centroids):
                break  # converged: no centroid moved

    def predict(self, X):
        distances = np.sqrt(((X - self.centroids[:, np.newaxis]) ** 2).sum(axis=2))
        return np.argmin(distances, axis=0)

# Example usage
np.random.seed(0)
X = np.random.rand(100, 2)

kmeans = KMeans(k=3)
kmeans.fit(X)
labels = kmeans.labels
centroids = kmeans.centroids

plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.scatter(centroids[:, 0], centroids[:, 1], marker='*', c='red')
plt.show()

Uninformed Searching Algorithms

Uninformed searching algorithms, also known as blind search algorithms, use no information about the problem beyond its definition; they explore the state space systematically, with no estimate of how close a node is to the goal.
Breadth-First Search (BFS)

BFS is a traversal that starts at a selected node (the source) and explores the graph level by level, visiting every node at the current depth before moving on to the next.

from collections import deque

def bfs(graph, root):
    visited = set()
    queue = deque([root])
    visited.add(root)
    while queue:
        vertex = queue.popleft()
        print(vertex, end=" ")
        for neighbour in graph[vertex]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)

# Example usage
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

bfs(graph, 'A')  # A B C D E F

Depth-First Search (DFS)

DFS is a traversing algorithm where you start at a selected node (source node) and explore as far as
possible along each branch before backtracking.

def dfs(graph, root, visited=None):
    if visited is None:
        visited = set()
    visited.add(root)
    print(root, end=" ")
    for neighbour in graph[root]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)

# Example usage
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

dfs(graph, 'A')  # A B D E F C
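
For deep graphs the recursive version can exceed Python's recursion limit. An equivalent iterative sketch with an explicit stack (the function name dfs_iterative is our own):

def dfs_iterative(graph, root):
    visited = set()
    stack = [root]
    while stack:
        vertex = stack.pop()
        if vertex not in visited:
            visited.add(vertex)
            print(vertex, end=" ")
            # Push neighbours in reverse so they are expanded in listed order
            stack.extend(reversed(graph[vertex]))

dfs_iterative(graph, 'A')  # A B D E F C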

Informed Searching Algorithms

Informed searching algorithms use problem-specific knowledge, in the form of a heuristic function that estimates the remaining distance to the goal, to guide the search. Implementations in Python:

# Greedy Search

Greedy search always expands the node with the lowest heuristic value, ignoring the cost already paid to reach it. It is fast but not guaranteed to find the cheapest path.

import heapq

def greedy_search(graph, root, goal, heuristic):
    # Priority queue ordered by heuristic value alone; edge costs are ignored
    queue = [(heuristic[root], root, [])]
    visited = set()
    while queue:
        (cost, node, path) = heapq.heappop(queue)
        if node not in visited:
            visited.add(node)
            path = path + [node]
            if node == goal:
                return path
            for neighbour, edge_cost in graph[node].items():
                if neighbour not in visited:
                    heapq.heappush(queue, (heuristic[neighbour], neighbour, path))
    return None

# Example usage
graph = {
    'A': {'B': 2, 'C': 3},
    'B': {'A': 2, 'D': 1, 'E': 1},
    'C': {'A': 3, 'F': 5},
    'D': {'B': 1},
    'E': {'B': 1, 'F': 1},
    'F': {'C': 5, 'E': 1}
}

heuristic = {
    'A': 6,
    'B': 3,
    'C': 4,
    'D': 1,
    'E': 1,
    'F': 0
}

path = greedy_search(graph, 'A', 'F', heuristic)
print("Path:", path)  # Path: ['A', 'B', 'E', 'F']

# A* Search

A* search expands the node with the lowest estimated total cost f(n) = g(n) + h(n), where g(n) is the cost paid so far and h(n) is the heuristic estimate to the goal. With an admissible heuristic it finds an optimal path.

import heapq

def a_star_search(graph, root, goal, heuristic):
    # Queue entries carry f = g + h: cost paid so far plus heuristic estimate
    queue = [(heuristic[root], root, [])]
    visited = set()
    while queue:
        (cost, node, path) = heapq.heappop(queue)
        if node not in visited:
            visited.add(node)
            path = path + [node]
            if node == goal:
                return path
            for neighbour, edge_cost in graph[node].items():
                if neighbour not in visited:
                    # g(neighbour) = (cost - h(node)) + edge_cost; add h(neighbour) to get f
                    total_cost = cost + edge_cost - heuristic[node] + heuristic[neighbour]
                    heapq.heappush(queue, (total_cost, neighbour, path))
    return None

# Example usage
graph = {
    'A': {'B': 2, 'C': 3},
    'B': {'A': 2, 'D': 1, 'E': 1},
    'C': {'A': 3, 'F': 5},
    'D': {'B': 1},
    'E': {'B': 1, 'F': 1},
    'F': {'C': 5, 'E': 1}
}

heuristic = {
    'A': 6,
    'B': 3,
    'C': 4,
    'D': 1,
    'E': 1,
    'F': 0
}

path = a_star_search(graph, 'A', 'F', heuristic)
print("Path:", path)  # Path: ['A', 'B', 'E', 'F']

# Naive Bayes Classification Algorithm


Overview

The Naive Bayes algorithm is a simple probabilistic classifier based on Bayes' theorem. It makes the "naive" assumption that the features are conditionally independent of each other given the class.
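
Concretely, for a feature vector x = (x1, ..., xn), the classifier chooses the class y that maximizes P(y) * P(x1 | y) * ... * P(xn | y). The implementation below models each P(xi | y) as a Gaussian and works with logarithms of these terms to avoid numerical underflow.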

Implementation

import numpy as np

class NaiveBayes:
    def __init__(self):
        self.classes = None
        self.mean = None
        self.var = None
        self.priors = None

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.classes = np.unique(y)
        n_classes = len(self.classes)

        # One Gaussian (mean, variance) per class and feature, plus class priors
        self.mean = np.zeros((n_classes, n_features))
        self.var = np.zeros((n_classes, n_features))
        self.priors = np.zeros(n_classes)

        for idx, c in enumerate(self.classes):
            X_c = X[y == c]
            self.mean[idx, :] = X_c.mean(axis=0)
            self.var[idx, :] = X_c.var(axis=0)
            self.priors[idx] = X_c.shape[0] / float(n_samples)

    def predict(self, X):
        y_pred = [self._predict(x) for x in X]
        return np.array(y_pred)

    def _predict(self, x):
        # Log-posterior for each class: log prior + sum of log-likelihoods
        posteriors = []
        for idx, _ in enumerate(self.classes):
            prior = np.log(self.priors[idx])
            likelihood = np.sum(np.log(self._pdf(idx, x)))
            posteriors.append(prior + likelihood)
        return self.classes[np.argmax(posteriors)]

    def _pdf(self, class_idx, x):
        # Gaussian probability density of x for every feature of the given class
        mean = self.mean[class_idx]
        var = self.var[class_idx]
        numerator = np.exp(-((x - mean) ** 2) / (2 * var))
        denominator = np.sqrt(2 * np.pi * var)
        return numerator / denominator

# Example usage
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Split dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train Naive Bayes classifier
nb = NaiveBayes()
nb.fit(X_train, y_train)

# Make predictions on the test set and evaluate
y_pred = nb.predict(X_test)
accuracy = np.sum(y_pred == y_test) / len(y_test)
print("Accuracy:", accuracy)

Implementations of simple and multiple linear regression in Python:

# Simple Linear Regression

Simple linear regression fits a line y = m*x + b to one-dimensional data by least squares: the slope is m = sum((X - X_mean) * (y - y_mean)) / sum((X - X_mean)**2), and the intercept is b = y_mean - m * X_mean. The fit method below computes exactly these two quantities.

import numpy as np

class SimpleLinearRegression:
    def __init__(self):
        self.coefficient = None   # slope m
        self.intercept = None     # intercept b

    def fit(self, X, y):
        X_mean = np.mean(X)
        y_mean = np.mean(y)
        # Least-squares slope: covariance of X and y over variance of X
        numerator = np.sum((X - X_mean) * (y - y_mean))
        denominator = np.sum((X - X_mean) ** 2)
        self.coefficient = numerator / denominator
        self.intercept = y_mean - self.coefficient * X_mean

    def predict(self, X):
        return self.coefficient * X + self.intercept

# Example usage
np.random.seed(0)
X = np.random.rand(100)
y = 3 * X + 2 + np.random.rand(100) / 1.5

slr = SimpleLinearRegression()
slr.fit(X, y)
predicted_y = slr.predict(X)
print("Coefficient:", slr.coefficient)
print("Intercept:", slr.intercept)

# Multiple Linear Regression

With several features, the least-squares fit has a closed-form solution via the normal equation: theta = inverse(X_b^T X_b) * X_b^T * y, where X_b is the feature matrix augmented with a column of ones so that the first entry of theta is the intercept.

import numpy as np

class MultipleLinearRegression:
    def __init__(self):
        self.coefficients = None
        self.intercept = None

    def fit(self, X, y):
        # Add a column of ones for the intercept term
        X_b = np.c_[np.ones((X.shape[0], 1)), X]
        # Normal equation: theta = (X_b^T X_b)^(-1) X_b^T y
        theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
        self.intercept = theta_best[0]
        self.coefficients = theta_best[1:]

    def predict(self, X):
        return self.intercept + X.dot(self.coefficients)

# Example usage
np.random.seed(0)
X = np.random.rand(100, 3)
y = 3 * X[:, 0] + 2 * X[:, 1] + 1 * X[:, 2] + np.random.rand(100)

mlr = MultipleLinearRegression()
mlr.fit(X, y)
predicted_y = mlr.predict(X)
print("Coefficients:", mlr.coefficients)
print("Intercept:", mlr.intercept)

Inference through Prolog

Prolog represents knowledge as facts and rules, and answers queries by trying to prove them from that knowledge base. Implementing inference through Prolog:


# Step 1: Define Facts

Facts are statements that are assumed to be true. In Prolog, a fact is a clause terminated by a period (.). For example:

man(socrates).

man(plato).

# Step 2: Define Rules

Rules are used to make inferences from facts. In Prolog, a rule is written with the :- operator, read as "if". For example:

mortal(X) :- man(X).

This rule states that if X is a man, then X is mortal.

# Step 3: Query the Database

Once you have defined your facts and rules, you can query the database to make inferences. Queries are entered at the interactive prompt, which Prolog displays as ?-. For example:

?- mortal(socrates).

Prolog responds with true if the goal can be proved from the facts and rules in the database; here it succeeds via the rule mortal(X) :- man(X).

# Step 4: Use Variables

Variables represent unknown values. In Prolog, a variable is any name that begins with an uppercase letter (or an underscore). For example:

?- mortal(X).

Prolog will enumerate the bindings of X that make the query true; with the facts and rule above, it answers X = socrates and then X = plato.

# Example Use Case

Suppose we want to implement a simple expert system that determines whether a person is eligible for a loan based on their credit score and income. We can define facts giving the minimum score and income for each category, and rules that compare a person's numbers against them:

credit_score(excellent, 750).
credit_score(good, 700).
credit_score(fair, 650).

income(high, 100000).
income(medium, 50000).
income(low, 20000).

% Eligible with an excellent score and a high income,
% or with a good score and at least a medium income.
eligible_for_loan(Score, Income) :-
    credit_score(excellent, MinScore), Score >= MinScore,
    income(high, MinIncome), Income >= MinIncome.
eligible_for_loan(Score, Income) :-
    credit_score(good, MinScore), Score >= MinScore,
    income(medium, MinIncome), Income >= MinIncome.

We can then query the database to determine whether a person with a credit score of 750 and an income of 100000 is eligible for a loan:

?- eligible_for_loan(750, 100000).

Prolog will respond with true if the person is eligible based on the facts and rules in the database; here the first rule succeeds.
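
Since the rest of this lab uses Python, the same inference can also be driven from Python through a Prolog bridge. A minimal sketch, assuming the pyswip package and SWI-Prolog are installed:

from pyswip import Prolog

prolog = Prolog()
prolog.assertz("man(socrates)")
prolog.assertz("man(plato)")
prolog.assertz("mortal(X) :- man(X)")

# pyswip yields each solution as a dict of variable bindings
for solution in prolog.query("mortal(X)"):
    print(solution["X"])  # socrates, then plato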
