CS3491 Lab Manual

This lab manual collects implementations of search algorithms, machine learning models, and neural networks, including breadth-first search, depth-first search, A* search, Gaussian and multinomial Naive Bayes, Bayesian networks, regression, decision trees, random forests, SVMs, ensembling, clustering, and deep learning models. Each exercise gives a program with sample code and output demonstrating the algorithm, using Python libraries such as scikit-learn, pgmpy, and Keras.

Uploaded by

lyricglimplse
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
24 views21 pages

CS3491 Lab Manual

The document outlines various implementations of search algorithms, machine learning models, and neural networks, including breadth-first search, depth-first search, A* search, Gaussian and Multinomial Naive Bayes, decision trees, random forests, SVM, and deep learning models. Each section provides a programmatic approach with sample code and outputs demonstrating the functionality of the algorithms. The document serves as a comprehensive guide for implementing these algorithms using Python libraries such as scikit-learn and Keras.

Uploaded by

lyricglimplse
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 21

Ex. No: 1A Implementation of uninformed search algorithms
(BREADTH FIRST SEARCH)

PROGRAM:
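The program listing for this exercise is not reproduced in the source; the following is a minimal sketch of breadth-first search over an adjacency-list graph. The example graph is an illustrative assumption, not the original lab program.

# Minimal BFS sketch; the graph below is an illustrative assumption
from collections import deque

graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()          # FIFO queue gives level-by-level expansion
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

print("BFS traversal:", bfs(graph, 'A'))

Because the queue is first-in first-out, every node is first reached along a path with the fewest edges from the start node.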
OUTPUT
Ex. No: 1B Uninformed search strategies
(DEPTH FIRST SEARCH)

PROGRAM
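The program listing for this exercise is likewise not reproduced; below is a minimal sketch of recursive depth-first search, again assuming an illustrative adjacency-list graph rather than the original lab program.

# Minimal DFS sketch; the graph below is an illustrative assumption
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

def dfs(graph, node, visited=None, order=None):
    if visited is None:
        visited, order = set(), []
    visited.add(node)
    order.append(node)
    for neighbour in graph[node]:       # go as deep as possible before backtracking
        if neighbour not in visited:
            dfs(graph, neighbour, visited, order)
    return order

print("DFS traversal:", dfs(graph, 'A'))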

OUTPUT
Ex. No: 2A Informed search strategies
(A* SEARCH)
PROGRAM
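The A* listing is also missing from the source text; the sketch below shows A* search on a small weighted graph using a priority queue ordered by f(n) = g(n) + h(n). The graph edges and the heuristic values are illustrative assumptions, not the original lab program.

# Minimal A* sketch; graph weights and heuristic values are illustrative assumptions
from queue import PriorityQueue

graph = {
    'A': [('B', 1), ('C', 4)],
    'B': [('C', 2), ('D', 5)],
    'C': [('D', 1)],
    'D': []
}
heuristic = {'A': 4, 'B': 3, 'C': 1, 'D': 0}

def a_star(start, goal):
    frontier = PriorityQueue()
    frontier.put((heuristic[start], start))
    g_cost = {start: 0}
    parent = {start: None}
    while not frontier.empty():
        _, current = frontier.get()
        if current == goal:
            # Reconstruct the path by walking back through the parents
            path = []
            while current is not None:
                path.append(current)
                current = parent[current]
            return list(reversed(path))
        for neighbour, cost in graph[current]:
            new_cost = g_cost[current] + cost
            if neighbour not in g_cost or new_cost < g_cost[neighbour]:
                g_cost[neighbour] = new_cost
                parent[neighbour] = current
                frontier.put((new_cost + heuristic[neighbour], neighbour))
    return None

print("A* path:", a_star('A', 'D'))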
OUTPUT
Ex. No: 2B Informed search algorithms (Memory-bounded A*)

Program :

from queue import PriorityQueue
import sys

def memory_bounded_a_star(start_node, goal_node, max_memory):
    frontier = PriorityQueue()
    frontier.put((0, start_node))
    explored = set()
    total_cost = {start_node: 0}
    while not frontier.empty():
        # Check if the memory limit has been reached
        if sys.getsizeof(explored) > max_memory:
            return None
        _, current_node = frontier.get()
        if current_node == goal_node:
            # Reconstruct the path by following parent links back to the start
            path = []
            while current_node != start_node:
                path.append(current_node)
                current_node = current_node.parent
            path.append(start_node)
            path.reverse()
            return path
        explored.add(current_node)
        for child_node, cost in current_node.children():
            if child_node in explored:
                continue
            new_cost = total_cost[current_node] + cost
            if child_node not in total_cost or new_cost < total_cost[child_node]:
                total_cost[child_node] = new_cost
                priority = new_cost + child_node.heuristic(goal_node)
                frontier.put((priority, child_node))
    return None

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.cost = 1

    def __eq__(self, other):
        return self.state == other.state

    def __hash__(self):
        return hash(self.state)

    def __lt__(self, other):
        # Needed so PriorityQueue can break ties between entries with equal priority
        return self.state < other.state

    def heuristic(self, goal):
        # Simple heuristic for demonstration purposes
        return abs(self.state - goal.state)

    def children(self):
        # Generate all possible children of a given node
        children = []
        for action in [-1, 1]:
            child_state = self.state + action
            child_node = Node(child_state, self)
            children.append((child_node, child_node.cost))
        return children

# Example usage
start_node = Node(1)
goal_node = Node(10)
path = memory_bounded_a_star(start_node, goal_node, max_memory=1000000)
if path is None:
    print("Memory limit exceeded.")
else:
    print("Path:", [node.state for node in path])

OUTPUT :

Path: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]


Ex. No: 3A Implement naive Bayes models
(Gaussian Naive Bayes)

Program :
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Load iris dataset
data = load_iris()
X, y = data.data, data.target
# Split dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train Gaussian Naive Bayes model
gnb = GaussianNB()
gnb.fit(X_train, y_train)
# Predict labels for test set
y_pred = gnb.predict(X_test)
# Calculate accuracy of predictions
accuracy = accuracy_score(y_test, y_pred)
# Print results
print("Accuracy:", accuracy)

Output :

Accuracy: 1.0
Ex. No: 3B Implement naive Bayes models
(Multinomial Naive Bayes)

Program :
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
# Sample training data
train_data = ["this is a positive example", "this is a negative example",
              "another negative example", "yet another negative example"]
train_labels = ["positive", "negative", "negative", "negative"]
# Sample test data
test_data = ["this is a test", "this example is negative"]
# Create a CountVectorizer to convert the text data into numerical features
vectorizer = CountVectorizer()
# Fit the vectorizer to the training data and transform the data
train_features = vectorizer.fit_transform(train_data)
# Create a Multinomial Naive Bayes classifier and train it on the training data
clf = MultinomialNB()
clf.fit(train_features, train_labels)
# Transform the test data using the same vectorizer
test_features = vectorizer.transform(test_data)
# Use the trained classifier to predict the class labels for the test data
predicted_labels = clf.predict(test_features)
# Print the predicted class labels for the test data
print(predicted_labels)

Output :

['negative' 'negative']
Ex. No: 4 Implement Bayesian Networks

Program:
import numpy as np

# Each node is a tuple: (name, parent names, CPT mapping parent values to P(node = True))
node1 = ("Node1", [], {(): 0.5})
node2 = ("Node2", ["Node1"], {(True,): 0.7, (False,): 0.2})
node3 = ("Node3", ["Node1"], {(True,): 0.1, (False,): 0.8})

# Compute the joint probability of a sampled assignment over a set of nodes
def compute_joint_prob(nodes, node_val):
    joint_prob = 1.0
    for node in nodes:
        name, parents, cpt = node
        parents_val_tuple = tuple(node_val[parent] for parent in parents)
        p_true = cpt[parents_val_tuple]
        # Multiply in P(node = sampled value | parent values)
        joint_prob *= p_true if node_val[name] else 1 - p_true
    return joint_prob

# Compute the probability that a node is True given its parents' sampled values
def compute_cond_prob(node, node_val):
    name, parents, cpt = node
    parents_val_tuple = tuple(node_val[parent] for parent in parents)
    return cpt[parents_val_tuple]

# Sample a value for a node given its parents' sampled values
def sample_node(node, node_val):
    prob = compute_cond_prob(node, node_val)
    return np.random.binomial(1, prob)

# Sample from the network in topological order
node_val = {}
for node in [node1, node2, node3]:
    sample = sample_node(node, node_val)
    node_val[node[0]] = sample

# Compute the joint probability of the sampled assignment
joint_prob = compute_joint_prob([node1, node2, node3], node_val)
print("The joint probability is", joint_prob)

Output
The joint probability is 0.034999999999999996
Ex. No: 5 Build Regression Models

Program :
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Load data
data = pd.read_csv('data.csv')
# Split data into features and target
X = data.drop('target', axis=1)
y = data['target']
# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train linear regression model
reg = LinearRegression()
reg.fit(X_train, y_train)
# Evaluate model
train_pred = reg.predict(X_train)
test_pred = reg.predict(X_test)
print('Train MSE:', mean_squared_error(y_train, train_pred))
print('Test MSE:', mean_squared_error(y_test, test_pred))

Output :

Train MSE: 0.019218


Test MSE: 0.022715
Ex. No: 6A Build decision trees

Program :
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train the model
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)
# Make predictions on the test set
y_pred = model.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print('Decision Tree Accuracy:', accuracy)

Output :

Decision Tree Accuracy: 1.0


Ex. No: 6B Build random forests

Program :
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
# Load the dataset
iris = load_iris()
X = iris.data
y = iris.target
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train the model
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
# Make predictions on the test set
y_pred = model.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, y_pred)
print('Random Forest Accuracy:', accuracy)

Output :

Random Forest Accuracy: 1.0


Ex. No: 7 Build SVM models

Program:

# Importing required libraries


import numpy as np
from sklearn import svm
# Data points
X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0]])
# Labels
y = np.array([1, 1, 2, 2])
# Building a Support Vector Machine (SVM)using linear kernel
model = svm.SVC(kernel='linear')
# Training the model
model.fit(X, y)
# Prediction on the unseen data
print(model.predict([[2, 2]]))

Output
[1]
Ex. No: 8 Implement ensembling techniques

Program :
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = load_breast_cancer()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train 10 random forests, each on a different random half of the training data
models = []
for i in range(10):
    X_bag, _, y_bag, _ = train_test_split(X_train, y_train, test_size=0.5)
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_bag, y_bag)
    y_pred = model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    print(f"Model {i+1}: {acc}")
    models.append(model)

# Average the individual predictions and round to get the ensemble (majority-vote) prediction
y_preds = []
for model in models:
    y_pred = model.predict(X_test)
    y_preds.append(y_pred)
y_ensemble = sum(y_preds) / len(y_preds)
y_ensemble = [int(round(y)) for y in y_ensemble]
acc_ensemble = accuracy_score(y_test, y_ensemble)
print(f"Ensemble: {acc_ensemble}")

Output :
Model 1: 0.9649122807017544
Model 2: 0.9473684210526315
Model 3: 0.956140350877193
Model 4: 0.9649122807017544
Model 5: 0.956140350877193
Model 6: 0.9649122807017544
Model 7: 0.956140350877193
Model 8: 0.956140350877193
Model 9: 0.956140350877193
Model 10: 0.9736842105263158
Ensemble: 0.956140350877193
Ex. No: 9A Implement clustering algorithms
(Hierarchical clustering)

Program:

import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt
# Generate sample data
X = np.array([[1,2], [1,4], [1,0], [4,2], [4,4], [4,0]])
# Perform hierarchical clustering
Z = linkage(X, 'ward')
# Plot dendrogram
plt.figure(figsize=(10, 5))
dendrogram(Z)
plt.show()

Output :

(The program displays a dendrogram of the six sample points.)
Ex. No: 9B Implement clustering algorithms
(Density-based clustering)

Program:

from sklearn.cluster import DBSCAN


from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
# Generate some sample data
X, y = make_blobs(n_samples=1000, centers=3, random_state=42)
# Perform density-based clustering using the DBSCAN algorithm
db = DBSCAN(eps=0.5, min_samples=5).fit(X)
# Extract the labels and number of clusters
labels = db.labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
# Plot the clustered data
plt.scatter(X[:,0], X[:,1], c=labels)
plt.title(f"DBSCAN clustering - {n_clusters} clusters")
plt.show()

Output :

(The program displays a scatter plot of the generated points coloured by DBSCAN cluster label, with the number of clusters in the title.)
Ex. No: 10 Implement EM for Bayesian networks

Program :

from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination
import numpy as np
import pandas as pd

# Define the Bayesian network structure: C (hidden) -> F (observed)
model = BayesianModel([('C', 'F')])

# Define the initial parameters of the network
cpd_C = TabularCPD('C', 2, [[0.6], [0.4]])
cpd_F = TabularCPD('F', 2, [[0.8, 0.3], [0.2, 0.7]], evidence=['C'], evidence_card=[2])
model.add_cpds(cpd_C, cpd_F)

# Generate some synthetic observations of F (C is treated as hidden)
data = pd.DataFrame({'F': np.random.choice([0, 1], size=100, p=[0.8, 0.2])})

# EM algorithm: alternately infer the hidden variable C and re-estimate its prior
for i in range(10):
    # E-step: expected counts of C given each observed value of F
    infer = VariableElimination(model)
    expected_counts = np.zeros(2)
    for f in data['F']:
        posterior = infer.query(['C'], evidence={'F': int(f)})
        expected_counts += posterior.values
    # M-step: update P(C) from the expected counts
    p_C = expected_counts / expected_counts.sum()
    cpd_C = TabularCPD('C', 2, [[p_C[0]], [p_C[1]]])
    model.remove_cpds(model.get_cpds('C'))
    model.add_cpds(cpd_C)

# Print the learned parameters
print(cpd_C)
print(cpd_F)
OUTPUT :
╒═════╤═══════╕
│ C_0 │ 0.686 │
├─────┼───────┤
│ C_1 │ 0.314 │
╘═════╧═══════╛
╒═════╤═════╤═════╕
│ C │ C_0 │ C_1 │
├─────┼─────┼─────┤
│ F_0 │ 0.8 │ 0.3 │
├─────┼─────┼─────┤
│ F_1 │ 0.2 │ 0.7 │
╘═════╧═════╧═════╛
Ex. No: 11 Build simple NN models

Program

#import libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier

#create a sample dataset


x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

#create and train the model


model = MLPClassifier(hidden_layer_sizes=(2), activation='relu', solver='lbfgs')
model.fit(x, y)

#make predictions
predictions = model.predict([[2, 2], [2, 3]])
print(predictions)

#visualize the results


plt.scatter(x[:,0], x[:,1], c=y)
plt.xlabel('x1')
plt.ylabel('x2')
plt.title('Neural Network Model')
plt.show()

Output:

(The predicted labels for the two test points are printed, followed by a scatter plot of the four training points coloured by class.)
Ex. No: 12 Build deep learning NN models

Program :
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.utils import to_categorical
# Load MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Reshape the input data to a 1D array
X_train = X_train.reshape(X_train.shape[0], 784)
X_test = X_test.reshape(X_test.shape[0], 784)
# Convert data type to float32 and normalize the input data to values between 0 and 1
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# Convert the target variable to categorical (one-hot encoding)
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
# Define the model architecture
model = Sequential()
model.add(Dense(512, input_shape=(784,), activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train, batch_size=128, epochs=10, verbose=1,
          validation_data=(X_test, y_test))
# Evaluate the model on the test data
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Output:
Test loss: 0.067
Test accuracy: 0.978
