Gnanamani College of Technology
A.K.Samuthiram, Pachal (PO), Namakkal – 637 018
Department of Mechanical Engineering
(Accredited by NBA & NAAC with ‘A’ Grade)
INSTITUTE
VISION
Emerging as a technical institution of high standard and excellence to produce quality Engineers,
Researchers, Administrators and Entrepreneurs with ethical and moral values who contribute to the
sustainable development of society.
MISSION
We facilitate our students
To have in-depth domain knowledge with analytical and practical skills in cutting edge
technologies by imparting quality technical education.
To become industry-ready, multi-skilled professionals who transfer technology to industries and rural
areas by creating interest among students in Research and Development and Entrepreneurship.
DEPARTMENT
VISION
To produce competent Mechanical Engineers capable of working in an interdisciplinary
environment and contributing to the benefit of society through innovation, leadership and
entrepreneurship.
MISSION
Imparting the highest quality education through state-of-the-art facilities to build students’
professional practice and make them globally competitive Mechanical Engineers by enhancing
their knowledge.
Fostering professional and ethical values and training the students to build leadership and
entrepreneurship qualities for their career development.
Undertaking research and developmental activities to provide service for the sustainable
development of the society.
PROGRAM EDUCATIONAL OBJECTIVES (PEOs)
Graduates of Mechanical Engineering will
PEO 1: Apply their mechanical and allied knowledge to address technical and societal problems
with creativity and ethical values.
PEO 2: Design and analyze mechanical systems with strong fundamentals and work in
synchronization with industries and research organizations as team members on
multidisciplinary projects.
PEO 3: Seek out positions of leadership actively within their profession and their community
through lifelong learning.
PROGRAM OUTCOMES (POs)
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals and
an engineering specialization to the solution of complex engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the
public health and safety, and the cultural, societal, and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information
to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities with an
understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports
and design documentation, make effective presentations, and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the engineering
and management principles and apply these to one’s own work, as a member and leader in a team, to
manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.
PROGRAM SPECIFIC OUTCOMES (PSOs)
Graduates of the program will be able to
PSO-1: Apply principles of basic sciences, machine design, manufacturing, thermal engineering and
management to identify, formulate and solve real time problems and societal issues for the
sustainable development.
PSO-2: Develop their abilities to qualify for Employment, Higher studies and Research in Mechanical
Engineering.
OCS351 Artificial Intelligence and Machine Learning Fundamentals Laboratory
L T P C : 2 0 2 3
SYLLABUS
OBJECTIVES
1. Understand the importance, principles, and search methods of AI.
2. Provide knowledge on predicate logic and Prolog.
3. Introduce machine learning fundamentals.
4. Study supervised learning algorithms.
5. Study unsupervised learning algorithms.
LIST OF EXPERIMENTS
1. Implement breadth first search
2. Implement depth first search
3. Analysis of breadth first and depth first search in terms of time and space
4. Implement and compare Greedy and A* algorithms.
5. Implement the non-parametric locally weighted regression algorithm in order to fit data
points. Select appropriate data set for your experiment and draw graphs.
6. Write a program to demonstrate the working of the decision tree based algorithm.
7. Build an artificial neural network by implementing the back propagation algorithm and
test the same using appropriate data sets.
8. Write a program to implement the naive Bayesian classifier.
9. Implementing neural network using self-organizing maps
10. Implementing k-Means algorithm to cluster a set of data.
11. Implementing hierarchical clustering algorithm.
OUTCOMES
At the end of the course, the students will be able to
1. Understand the foundations of AI and the structure of Intelligent Agents
2. Use appropriate search algorithms for any AI problem
3. Understand machine learning methods
4. Solve problems using supervised learning
5. Solve problems using unsupervised learning
CONTENTS
S. No.   Name of the Experiment                                                Remarks
1 Implement breadth first search
2 Implement depth first search
3 Analysis of breadth first and depth first search in terms of time
and space
4 Implement and compare Greedy and A* algorithms.
5 Implement the non-parametric locally weighted regression
algorithm in order to fit data points. Select appropriate data set
for your experiment and draw graphs.
6 Write a program to demonstrate the working of the decision tree
based algorithm.
7 Build an artificial neural network by implementing the back
propagation algorithm and test the same using appropriate data
sets.
8 Write a program to implement the naive Bayesian classifier.
9 Implementing neural network using self-organizing maps
10 Implementing k-Means algorithm to cluster a set of data.
11 Implementing hierarchical clustering algorithm.
Ex no: 1
Implement breadth first search
Aim
To create a program to implement the breadth first search.
Algorithm
Step 1: Initially the queue and the visited list are empty.
Step 2: Push the start node into the queue and mark it as visited.
Step 3: Remove the node at the front of the queue and print it.
Step 4: Visit all unvisited neighbours of that node, mark them visited and push them
into the queue.
Step 5: Repeat Steps 3 and 4 until the queue is empty.
Source code
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = []   # nodes already visited
queue = []     # FIFO queue of nodes waiting to be explored

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)          # dequeue the node at the front
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Breadth-First Search")
bfs(visited, graph, '5')
Output
Breadth-First Search
5 3 7 2 4 8
Result
Hence the above breadth first search program has been executed successfully.
****************************************************************
Ex no: 2
Implement depth first search
Aim
To create a program to implement the depth first search.
Algorithm
Step1: Start by putting any one of the graph's vertices on top of a stack.
Step2: Take the top item of the stack and add it to the visited list.
Step3: Create a list of that vertex's adjacent nodes. Add the ones which aren't in the
visited list to the top of the stack.
Step4: Keep repeating steps 2 and 3 until the stack is empty.
Source code
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    print(start)
    # The set difference is evaluated once per call, so a neighbour that becomes
    # visited during a deeper recursion can still be re-entered (and re-printed),
    # as seen in the output below.
    for next in graph[start] - visited:
        dfs(graph, next, visited)
    return visited

graph = {'0': set(['1', '2']),
         '1': set(['0', '3', '4']),
         '2': set(['0']),
         '3': set(['1']),
         '4': set(['2', '3'])}

dfs(graph, '0')
Output
0
1
4
3
2
3
2
Result
Hence the above depth first search Program has been executed successfully.
****************************************************************
Ex no: 3 Analysis of breadth first and depth first search in terms of time
and space
Aim
To analyse breadth first and depth first search in terms of time and space.
Algorithm
Breadth-First Search (BFS)
Time Complexity:
BFS explores each node and edge in the graph exactly once.
For a graph with V vertices and E edges, each vertex is enqueued and dequeued
once, and each edge is examined once.
Therefore, the time complexity of BFS is O(V+E).
Space Complexity:
The space complexity of BFS is primarily determined by the queue used to store the
nodes to be explored.
In the worst case, the queue can contain up to O(V) nodes if the graph is very wide.
Additionally, the space required to store the visited set is O(V).
Hence, the total space complexity of BFS is O(V).
Depth-First Search (DFS)
Time Complexity:
DFS also explores each node and edge in the graph exactly once.
For a graph with V vertices and E edges, each vertex is visited once and each
edge is examined once.
Therefore, the time complexity of DFS is O(V+E).
Space Complexity:
The space complexity of DFS depends on the depth of the recursion stack (for the
recursive implementation) or the size of the stack (for the iterative implementation).
In the worst case, the depth of the recursion stack can be equal to the number of
vertices V (e.g., if the graph is a single long path).
Additionally, the space required to store the visited set is O(V).
Hence, the total space complexity of DFS is O(V).
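These bounds can also be checked empirically. The following is a minimal sketch (not part of
the original manual) that times the bfs and dfs functions defined in the Source code section
below with time.perf_counter and reports peak memory with tracemalloc:

# Hedged sketch: empirically measure running time and peak memory of a search
# function; assumes bfs(graph, start) and dfs(graph, start) as defined below.
import time
import tracemalloc

def measure(func, *args):
    tracemalloc.start()
    start = time.perf_counter()
    func(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()   # peak bytes allocated during the call
    tracemalloc.stop()
    return elapsed, peak

# Example usage (after defining graph, bfs and dfs as in the Source code):
# print("BFS (seconds, peak bytes):", measure(bfs, graph, 'A'))
# print("DFS (seconds, peak bytes):", measure(dfs, graph, 'A'))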
Source code
BFS Implementation
from collections import deque

def bfs(graph, start):
    visited = set()
    queue = deque([start])
    result = []
    while queue:
        node = queue.popleft()
        if node not in visited:
            result.append(node)
            visited.add(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    queue.append(neighbor)
    return result

graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}
print(bfs(graph, 'A'))
DFS Implementation (Recursive)
def dfs(graph, start, visited=None, result=None):
    if visited is None:
        visited = set()
    if result is None:
        result = []
    visited.add(start)
    result.append(start)
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited, result)
    return result

print(dfs(graph, 'A'))  # Output: ['A', 'B', 'D', 'E', 'F', 'C']
Output
['A', 'B', 'C', 'D', 'E', 'F']
['A', 'B', 'D', 'E', 'F', 'C']
Result
Hence the above depth Analysis of breadth first and depth first search in terms of time
and space Python Program has been executed successfully.
****************************************************************
Ex no: 4
Implement and compare Greedy and A* algorithms.
Aim
To implement and compare Greedy and A* algorithms using Python programming.
Algorithm
Step 1: Define the problem: clearly state the problem to be solved and the objective
to be optimized.
Step 2: Identify the greedy choice: determine the locally optimal choice at each step
based on the current state.
Step 3: Make the greedy choice: select the greedy choice and update the current state.
Step 4: Repeat: continue making greedy choices until a solution is reached.
Step 5 (A*): Maintain a priority queue of frontier nodes ordered by f(n) = g(n) + h(n),
where g(n) is the cost so far and h(n) is a heuristic estimate of the remaining cost to
the goal.
Step 6 (A*): Repeatedly expand the node with the lowest f(n), recording the cheapest
known cost and parent of each neighbour.
Step 7 (A*): Stop when the goal is reached and reconstruct the path from the recorded
parents.
Source code
Greedy Algorithm (coin change)
def greedy_coin_change(amount, denominations):
    denominations.sort(reverse=True)      # try the largest denominations first
    coins_used = []
    for coin in denominations:
        while amount >= coin:
            coins_used.append(coin)
            amount -= coin
    return coins_used

amount_to_change = 63
coin_denominations = [1, 2, 5, 10, 20, 50]
result = greedy_coin_change(amount_to_change, coin_denominations)
print("Coins Used:", result)
print("Total Coins:", len(result))
A* Search
import heapq

# Note: this snippet assumes a graph object providing neighbors(node) and
# get_cost(a, b), together with heuristic(node, goal) and
# reconstruct_path(came_from, start, goal) helpers; a minimal version of these
# is sketched after this block.
def a_star_search(graph, start, goal):
    frontier = []
    heapq.heappush(frontier, (0, start))
    came_from = {start: None}
    cost_so_far = {start: 0}
    while frontier:
        current = heapq.heappop(frontier)[1]
        if current == goal:
            break
        for neighbor in graph.neighbors(current):
            new_cost = cost_so_far[current] + graph.get_cost(current, neighbor)
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                priority = new_cost + heuristic(neighbor, goal)
                heapq.heappush(frontier, (priority, neighbor))
                came_from[neighbor] = current
    return reconstruct_path(came_from, start, goal)

print("A* Search:", a_star_search(graph, 'A', 'F'))
Output
Coins Used: [50, 10, 2, 1]
Total Coins: 4
A* Search: ['A', 'B', 'E', 'F']
Result
Hence the above Greedy and A* algorithms program has been executed successfully.
****************************************************************
Ex no: 5
Implement the non-parametric locally weighted regression algorithm in order to fit
data points. Select appropriate data set for your experiment and draw graphs.
Aim
To implement the non-parametric locally weighted regression algorithm in order to fit data
points, selecting an appropriate data set and drawing graphs using Python programming.
Algorithm
Step1: Read the Given data Sample to X and the curve (linear or non linear) to Y
Step2: Set the value for Smoothening parameter or Free parameter say τ
Step3: Set the bias /Point of interest set x0 which is a subset of X
Step4: Determine the weight matrix using: W(j, j) = exp( -(x0 - x(j))^2 / (2τ^2) ) for each
training point x(j), giving a diagonal weight matrix W.
Step5: Determine the value of model term parameter β using: β = (X^T W X)^-1 (X^T W y)
Step6: Prediction = x0*β.
Source code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

def kernel(point, xmat, k):
    m, n = np.shape(xmat)
    weights = np.mat(np.eye(m))
    for j in range(m):
        diff = point - X[j]
        weights[j, j] = np.exp(diff * diff.T / (-2.0 * k**2))
    return weights

def localWeight(point, xmat, ymat, k):
    wei = kernel(point, xmat, k)
    W = (X.T * (wei * X)).I * (X.T * (wei * ymat.T))
    return W

def localWeightRegression(xmat, ymat, k):
    m, n = np.shape(xmat)
    ypred = np.zeros(m)
    for i in range(m):
        ypred[i] = xmat[i] * localWeight(xmat[i], xmat, ymat, k)
    return ypred

# Load the tips dataset (total_bill versus tip)
data = pd.read_csv('10-dataset.csv')
bill = np.array(data.total_bill)
tip = np.array(data.tip)
mbill = np.mat(bill)
mtip = np.mat(tip)
m = np.shape(mbill)[1]
one = np.mat(np.ones(m))
X = np.hstack((one.T, mbill.T))        # add a column of ones (bias term)
ypred = localWeightRegression(X, mtip, 0.5)
SortIndex = X[:, 1].argsort(0)
xsort = X[SortIndex][:, 0]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(bill, tip, color='green')
ax.plot(xsort[:, 1], ypred[SortIndex], color='red', linewidth=5)
plt.xlabel('Total bill')
plt.ylabel('Tip')
plt.show()
Output
(Scatter plot of total bill versus tip with the fitted locally weighted regression curve drawn in red)
Result
Hence the above non-parametric locally weighted regression program to fit data points
has been executed successfully.
****************************************************************
Ex no: 6 Write a program to demonstrate the working of the decision tree
based algorithm.
Aim
To write the AI & ML program to demonstrate the working of the decision tree based
algorithm.
Algorithm
Step1: Select the best attribute using Attribute Selection Measures (ASM) to split the
records.
Step2: Make that attribute a decision node and break the dataset into smaller subsets.
Step3: Start tree building by repeating this process recursively for each child until one
of the following conditions is met:
All the tuples belong to the same attribute value.
There are no more remaining attributes.
There are no more instances.
Step4: Load the dataset.
Step5: Converting the data to a pandas data frame.
Step6: Creating a separate column for the target variable of iris dataset.
Step7: Replacing the categories of target variable with the actual names of the species.
Step8: Separating the independent and dependent variables of the dataset.
Step9: Splitting the dataset into training and testing datasets
Step10: Importing the Decision Tree classifier class from sklearn
Step11: Creating an instance of the classifier class
Step12: Fitting the training dataset to the model
Step13: Plotting the Decision Tree
Step14: Plotting heatmap
Step15: Stop the process.
Source code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
import seaborn as sns
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn import tree
iris = load_iris()
data = pd.DataFrame(data = iris.data, columns = iris.feature_names)
data['Species'] = iris.target
target = np.unique(iris.target)
target_n = np.unique(iris.target_names)
target_dict = dict(zip(target, target_n))
data['Species'] = data['Species'].replace(target_dict)
x = data.drop(columns = "Species")
y = data["Species"]
names_features = x.columns
target_labels = y.unique()
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 93)
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(max_depth = 3, random_state = 93)
dtc.fit(x_train, y_train)
plt.figure(figsize = (30, 10), facecolor = 'b')
Tree = tree.plot_tree(dtc, feature_names = names_features, class_names = target_labels,
rounded = True, filled = True, fontsize = 14)
plt.show()
y_pred = dtc.predict(x_test)
confusion_matrix = metrics.confusion_matrix(y_test, y_pred)
matrix = pd.DataFrame(confusion_matrix)
plt.figure(figsize = (10, 7))
axis = plt.axes()
sns.set(font_scale = 1.3)
sns.heatmap(matrix, annot = True, fmt = "g", ax = axis, cmap = "magma")
axis.set_title('Confusion Matrix')
axis.set_xlabel("Predicted Values", fontsize = 10)
axis.set_xticklabels(list(target_labels))
axis.set_ylabel("True Labels", fontsize = 10)
axis.set_yticklabels(list(target_labels), rotation = 0)
plt.show()
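As a small optional addition (not part of the original listing), the overall test accuracy of
the fitted tree can be printed alongside the confusion matrix:

# Optional addition: report the test-set accuracy of the decision tree.
print("Test accuracy:", metrics.accuracy_score(y_test, y_pred))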
Output
(Decision tree plot of the fitted classifier and confusion-matrix heatmap for the iris test set)
Result
Hence the decision tree based algorithm has been executed successfully.
****************************************************************
Ex no: 7
Build an artificial neural network by implementing the back propagation algorithm
and test the same using appropriate data sets.
Aim
To write the AI & ML program to Build an artificial neural network by implementing
the back propagation algorithm and test the same using appropriate data sets.
Algorithm
Step1: Create a feed-forward network with ni inputs, nhidden hidden units, and nout
output units.
Step2: Initialize all network weights to small random numbers
Step3: Until the termination condition is met, Do
Step4: For each training example (x, t) in the training set, Do
Propagate the input forward through the network:
Input the instance x to the network and compute the output o(u) of every unit u in
the network.
Propagate the errors backward through the network:
Training Examples:

Example   Sleep   Study   Expected % in Exams
1         2       9       92
2         1       5       86
3         3       6       89

Normalize the input:

Example   Sleep               Study                Expected % in Exams
1         2/3 = 0.66666667    9/9 = 1              0.92
2         1/3 = 0.33333333    5/9 = 0.55555556     0.86
3         3/3 = 1             6/9 = 0.66666667     0.89
Source code
import numpy as np

X = np.array(([2, 9], [1, 5], [3, 6]), dtype=float)
y = np.array(([92], [86], [89]), dtype=float)
X = X / np.amax(X, axis=0)      # normalise each input column by its maximum
y = y / 100                     # normalise expected percentage to the range 0-1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def derivatives_sigmoid(x):
    return x * (1 - x)

epoch = 5000                    # number of training iterations
lr = 0.1                        # learning rate
inputlayer_neurons = 2
hiddenlayer_neurons = 3
output_neurons = 1

# weight and bias initialisation
wh = np.random.uniform(size=(inputlayer_neurons, hiddenlayer_neurons))
bh = np.random.uniform(size=(1, hiddenlayer_neurons))
wout = np.random.uniform(size=(hiddenlayer_neurons, output_neurons))
bout = np.random.uniform(size=(1, output_neurons))

for i in range(epoch):
    # forward propagation
    hinp1 = np.dot(X, wh)
    hinp = hinp1 + bh
    hlayer_act = sigmoid(hinp)
    outinp1 = np.dot(hlayer_act, wout)
    outinp = outinp1 + bout
    output = sigmoid(outinp)
    # backpropagation of error
    EO = y - output
    outgrad = derivatives_sigmoid(output)
    d_output = EO * outgrad
    EH = d_output.dot(wout.T)
    hiddengrad = derivatives_sigmoid(hlayer_act)
    d_hiddenlayer = EH * hiddengrad
    # weight updates
    wout += hlayer_act.T.dot(d_output) * lr
    wh += X.T.dot(d_hiddenlayer) * lr

print("Input: \n" + str(X))
print("Actual Output: \n" + str(y))
print("Predicted Output: \n", output)
Output
Input:
[[0.66666667 1. ]
[0.33333333 0.55555556]
[1. 0.66666667]]
Actual Output:
[[0.92]
[0.86]
[0.89]]
Predicted Output: [[0.89726759]
[0.87196896]
[0.9000671]]
Result
Hence the program to build an artificial neural network using the back propagation
algorithm, tested with an appropriate data set, has been executed successfully.
****************************************************************
Ex no: 8
Write a program to implement the naïve Bayesian classifier.
Aim
To write the AI & ML program to implement the naïve Bayesian classifier
Algorithm
Step 1: Import basic libraries.
Step 2: Importing the dataset.
Step 3: Data preprocessing.
Step 4: Splitting the dataset into the Training set and Test set
Step 5: Training the model.
Step 6: Testing and evaluation of the model.
Step 7: Visualizing the test set result.
Source code
Data Pre-processing
Importing the libraries
import numpy as nm
import matplotlib.pyplot as mtp
import pandas as pd

dataset = pd.read_csv('user_data.csv')
x = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values

from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state = 0)

from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

Fitting the classifier and predicting the test set
# Train the naive Bayes classifier (GaussianNB); the classifier object is
# required by the plotting code that follows.
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)

Visualising the training set results
from matplotlib.colors import ListedColormap
x_set, y_set = x_train, y_train
X1, X2 = nm.meshgrid(nm.arange(start = x_set[:, 0].min() - 1, stop = x_set[:, 0].max() + 1, step = 0.01),
                     nm.arange(start = x_set[:, 1].min() - 1, stop = x_set[:, 1].max() + 1, step = 0.01))
mtp.contourf(X1, X2, classifier.predict(nm.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha = 0.75, cmap = ListedColormap(('purple', 'green')))
mtp.xlim(X1.min(), X1.max())
mtp.ylim(X2.min(), X2.max())
for i, j in enumerate(nm.unique(y_set)):
    mtp.scatter(x_set[y_set == j, 0], x_set[y_set == j, 1],
                c = ListedColormap(('purple', 'green'))(i), label = j)
mtp.title('Naive Bayes (Training set)')
mtp.xlabel('Age')
mtp.ylabel('Estimated Salary')
mtp.legend()
mtp.show()
Output
We have loaded the dataset into the program using "dataset = pd.read_csv('user_data.csv')".
The loaded dataset is divided into training and test sets, and the feature variables are
then scaled.
The output shows the result for the prediction vector y_pred compared with the real vector y_test.
Result
Hence the naïve Bayesian classifier program, covering data pre-processing and the test
results, has been executed successfully.
****************************************************************
Ex no: 9
Implementing neural network using self-organizing maps.
Aim
To implement a neural network using self-organizing maps.
Algorithm
Step 1: Initialize the weights wij random value may be assumed. Initialize the
learning rate α.
Step 2: Calculate squared Euclidean distance.
D(j) = Σ (wij – xi)^2 where i=1 to n and j=1 to m
Step 3: Find index J, when D(j) is minimum that will be considered as winning index.
Step 4: For each unit j within a specific neighborhood of J and for all i, calculate the new
weight.
wij(new) = wij(old) + α[xi – wij(old)]
Step 5: Update the learning rate using: α(t+1) = 0.5 * α(t)
Step 6: Test the Stopping Condition.
Source code
import math

class SOM:
    # Compute the winning cluster unit: the one with the smaller squared
    # Euclidean distance to the sample.
    def winner(self, weights, sample):
        D0 = 0
        D1 = 0
        for i in range(len(sample)):
            D0 = D0 + math.pow((sample[i] - weights[0][i]), 2)
            D1 = D1 + math.pow((sample[i] - weights[1][i]), 2)
        if D0 < D1:
            return 0
        else:
            return 1

    # Move the winning unit's weights towards the sample.
    def update(self, weights, sample, J, alpha):
        for i in range(len(weights[0])):
            weights[J][i] = weights[J][i] + alpha * (sample[i] - weights[J][i])
        return weights

def main():
    T = [[1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]
    m, n = len(T), len(T[0])
    weights = [[0.2, 0.6, 0.5, 0.9], [0.8, 0.4, 0.7, 0.3]]
    ob = SOM()
    epochs = 3
    alpha = 0.5          # learning rate (kept constant in this simple version)
    for i in range(epochs):
        for j in range(m):
            sample = T[j]
            J = ob.winner(weights, sample)
            weights = ob.update(weights, sample, J, alpha)
    s = [0, 0, 0, 1]
    J = ob.winner(weights, s)
    print("Test Sample s belongs to Cluster : ", J)
    print("Trained weights : ", weights)

if __name__ == "__main__":
    main()
Output
Test Sample s belongs to Cluster: 0
Trained weights: [[0.6000000000000001, 0.8, 0.5, 0.9],
[0.3333984375, 0.0666015625, 0.7, 0.3]]
Result
Hence the neural network using self-organizing maps program has been executed
successfully.
****************************************************************
Ex no: 10
Implementing K-Means algorithm to cluster a set of data.
Aim
To write the K-Means algorithm to cluster a set of data.
Algorithm
Step 1: Select the value of K to decide the number of clusters (n_clusters) to be
formed.
Step 2: Select random K points that will act as cluster centroids (cluster_centers).
Step 3: Assign each data point, based on their distance from the randomly selected
points (Centroid), to the nearest/closest centroid, which will form the predefined
clusters.
Step 4: Place a new centroid of each cluster.
Step 5: Repeat step no.3, which reassigns each datapoint to the new closest centroid
of each cluster.
Step 6: If any reassignment occurs, then go to step 4; else, go to step 7.
Step 7: Executing the Program.
Source code
(Scatter plot)
import matplotlib.pyplot as plt
x = [4, 5, 10, 4, 3, 11, 14 , 6, 10, 12]
y = [21, 19, 24, 17, 16, 25, 24, 22, 21, 21]
plt.scatter(x, y)
plt.show()
(Elbow method)
from sklearn.cluster import KMeans
data = list(zip(x, y))
inertias = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i)
    kmeans.fit(data)
    inertias.append(kmeans.inertia_)
plt.plot(range(1,11), inertias, marker='o')
plt.title('Elbow method')
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()
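The listing above stops at the elbow plot. A short continuation (not in the original listing)
that actually clusters the points with the number of clusters suggested by the elbow, assumed
here to be 2, and colours the points by cluster is sketched below:

# Hedged continuation: fit K-Means with the chosen number of clusters
# (assumed K = 2 from the elbow plot) and plot the cluster assignments.
kmeans = KMeans(n_clusters=2)
kmeans.fit(data)
plt.scatter(x, y, c=kmeans.labels_)
plt.title('K-Means clustering (K = 2)')
plt.show()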
Output
(i) Scatter plot of the data points   (ii) Elbow method plot of inertia versus number of clusters
Result
Hence the above K-Means algorithm to cluster a set of data has been executed
successfully.
****************************************************************
Ex no: 11
Implementing hierarchical clustering algorithm.
Aim
To implement the hierarchical clustering algorithm.
Algorithm
Step 1: Load and Prepare the Data: Load a dataset and preprocess it if necessary.
Step 2: Compute the Distance Matrix: Calculate the pairwise distances between the data points.
Step 3: Perform Hierarchical Clustering: Use a clustering algorithm to form a
hierarchy of clusters.
Step 4: Visualize the Dendrogram: Plot a dendrogram to visualize the clustering
process.
Source code
import numpy as np
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
import matplotlib.pyplot as plt
iris = load_iris()
X = iris.data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
distance_matrix = pdist(X_scaled, metric='euclidean')
distance_matrix = squareform(distance_matrix)
Z = linkage(X_scaled, method='ward')
clusters = fcluster(Z, t=3, criterion='maxclust')
plt.figure(figsize=(10, 7))
dendrogram(Z, labels=iris.target_names[iris.target])
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Sample Index')
plt.ylabel('Distance')
plt.show()
print(f'Cluster assignments: {clusters}')
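For comparison, a comparable three-cluster assignment can also be obtained with scikit-learn's
AgglomerativeClustering; a hedged equivalent sketch using the same Ward linkage is given below
(cluster label numbering may differ from fcluster's 1-based labels):

# Hedged sketch: equivalent agglomerative clustering with scikit-learn,
# using Ward linkage and three clusters on the scaled iris data.
from sklearn.cluster import AgglomerativeClustering
agg = AgglomerativeClustering(n_clusters=3, linkage='ward')
labels = agg.fit_predict(X_scaled)
print(f'scikit-learn cluster labels: {labels}')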
Output
(Hierarchical clustering dendrogram of the iris samples and the printed cluster assignments)
Result
Hence the hierarchical clustering algorithm program has been executed successfully.
****************************************************************