LAB MANUAL
Regulation 2021
Year/Semester::II/IV
PREPARED BY::
D. RAJ THILAK, AP/CSE
NAME : ………………………………………
REG.NO: ………………………………………
YEAR : ....................................................
SEM : ………………………………………
S.NO DATE EXPERIMENT NAME SIGN
Ex No::1 Implementation of Uninformed Search Algorithms (BFS, DFS)
Date::
Aim::
To implement uninformed search algorithms (BFS, DFS) in Python.
BFS Algorithm::
Step 1. Start
Step 2. Put any one of the graph's vertices at the back of the queue.
Step 3. Take the front item of the queue and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add those which are not in the visited list to the rear of the queue.
Step 5. Continue steps 3 and 4 until the queue is empty.
Step 6. Stop
Program::
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = []   # List for visited nodes.
queue = []     # Initialize a queue

def bfs(visited, graph, node):   # function for BFS
    visited.append(node)
    queue.append(node)
    while queue:                 # Creating loop to visit each node
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

# Driver Code
print("Following is the Breadth-First Search")
bfs(visited, graph, '5')   # function calling
Output::
Following is the Breadth-First Search
5 3 7 2 4 8
DFS Algorithm::
Step 1. Start
Step 2. Put any one of the graph's vertices on top of the stack.
Step 3. Take the top item of the stack and add it to the visited list.
Step 4. Create a list of that vertex's adjacent nodes. Add the ones which are not in the visited list to the top of the stack.
Step 5. Repeat steps 3 and 4 until the stack is empty.
Step 6. Stop
Program::
# Using a Python dictionary to act as an adjacency list
graph = {
    '5': ['3', '7'],
    '3': ['2', '4'],
    '7': ['8'],
    '2': [],
    '4': ['8'],
    '8': []
}

visited = set()  # Set to keep track of visited nodes of graph.

def dfs(visited, graph, node):  # function for DFS
    if node not in visited:
        print(node, end=" ")
        visited.add(node)
        for neighbour in graph[node]:
            dfs(visited, graph, neighbour)

# Driver Code
print("Following is the Depth-First Search")
dfs(visited, graph, '5')
Output::
Following is the Depth-First Search
5 3 2 4 8 7
Result::
Thus the Python program to implement Breadth First Search (BFS) and Depth First Search (DFS)
was developed successfully.
Ex.No::2 Implementation of Informed Search Algorithms(A*, memory-bounded A*)
Date::
Aim::
To write a Python program implementing informed search algorithms (A*, memory-bounded A*).
Algorithm of A* search::
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Repeat the following steps until the queue is empty:
3.1: Pop the node with the lowest cost + heuristic from the queue.
3.2: If the current node is the goal, return the path to the goal.
3.3: If the current node has already been visited, skip it.
3.4: Mark the current node as visited.
3.5: Expand the current node and add its neighbors to the queue.
Step 4: If the queue is empty and the goal has not been found, return None (no path found).
Step 5: Stop
Program::
def aStarAlgo(start_node, stop_node):
    open_set = set(start_node)
    closed_set = set()
    g = {}                 # store distance from the starting node
    parents = {}           # parents contains an adjacency map of all nodes
    g[start_node] = 0
    parents[start_node] = start_node

    while len(open_set) > 0:
        n = None
        # node with the lowest f() = g() + h() is chosen
        for v in open_set:
            if n == None or g[v] + heuristic(v) < g[n] + heuristic(n):
                n = v
        if n == stop_node or Graph_nodes[n] == None:
            pass
        else:
            for (m, weight) in get_neighbors(n):
                # nodes 'm' not in first and last set are added to first,
                # and n is set as their parent
                if m not in open_set and m not in closed_set:
                    open_set.add(m)
                    parents[m] = n
                    g[m] = g[n] + weight
                # for each node m, compare its distance from start i.e. g(m)
                # to the distance from start through n
                else:
                    if g[m] > g[n] + weight:
                        # update g(m)
                        g[m] = g[n] + weight
                        # change parent of m to n
                        parents[m] = n
                        # if m is in the closed set, move it back to the open set
                        if m in closed_set:
                            closed_set.remove(m)
                            open_set.add(m)
        if n == None:
            print('Path does not exist!')
            return None
        # if the current node is the stop_node
        # then we begin reconstructing the path from it to the start_node
        if n == stop_node:
            path = []
            while parents[n] != n:
                path.append(n)
                n = parents[n]
            path.append(start_node)
            path.reverse()
            print('Path found: {}'.format(path))
            return path
        # remove n from the open set and add it to the closed set,
        # because all of its neighbours were inspected
        open_set.remove(n)
        closed_set.add(n)
    print('Path does not exist!')
    return None

# define function to return neighbours and their distances from the passed node
def get_neighbors(v):
    if v in Graph_nodes:
        return Graph_nodes[v]
    else:
        return None

# for simplicity we'll consider heuristic distances given,
# and this function returns the heuristic distance for all nodes
def heuristic(n):
    H_dist = { 'A': 11, 'B': 6, 'C': 5, 'D': 7, 'E': 3, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'J': 0 }
    return H_dist[n]

# Describe the graph (reconstructed to match the recorded output below)
Graph_nodes = {
    'A': [('B', 6), ('F', 3)],
    'B': [('C', 3), ('D', 2)],
    'C': [('D', 1), ('E', 5)],
    'D': [('C', 1), ('E', 8)],
    'E': [('I', 5), ('J', 5)],
    'F': [('G', 1), ('H', 7)],
    'G': [('I', 3)],
    'H': [('I', 2)],
    'I': [('E', 5), ('J', 3)],
}

aStarAlgo('A', 'J')
Output::
Path found: ['A', 'F', 'G', 'I', 'J']
Algorithm of memory bounded A*::
Step 1: Create a priority queue and push the starting node onto the queue.
Step 2: Create a set to store the visited nodes.
Step 3: Set a counter to keep track of the number of nodes expanded.
Step 4: Repeat the following steps until the queue is empty or the node counter exceeds the max_nodes:
4.1: Pop the node with the lowest cost + heuristic from the queue.
4.2: If the current node is the goal, return the path to the goal.
4.3: If the current node has already been visited, skip it.
4.4: Mark the current node as visited and increment the node counter.
4.5 : Expand the current node and add its neighbors to the queue.
Step 5: If the queue is empty and the goal has not been found, return None (no path found).
Step 6: Stop
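A minimal Python sketch of the node-limited A* described in the steps above; the graph, heuristic values, and max_nodes limit are illustrative assumptions, not part of the recorded program:

import heapq

def memory_bounded_astar(graph, h, start, goal, max_nodes=1000):
    queue = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    visited = set()
    expanded = 0
    while queue and expanded < max_nodes:
        f, g, node, path = heapq.heappop(queue)   # node with lowest f = g + h
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        expanded += 1                             # count of expanded nodes
        for neighbour, w in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (g + w + h[neighbour], g + w,
                                       neighbour, path + [neighbour]))
    return None   # queue exhausted or node limit reached

graph = {'A': [('B', 6), ('F', 3)], 'B': [('C', 3), ('D', 2)],
         'C': [('D', 1), ('E', 5)], 'D': [('E', 8)],
         'E': [('J', 5)], 'F': [('G', 1), ('H', 7)],
         'G': [('I', 3)], 'H': [('I', 2)], 'I': [('E', 5), ('J', 3)]}
h = {'A': 11, 'B': 6, 'C': 5, 'D': 7, 'E': 3, 'F': 6, 'G': 5,
     'H': 3, 'I': 1, 'J': 0}
print(memory_bounded_astar(graph, h, 'A', 'J', max_nodes=50))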
Program::
function SMA-star(problem): path
    queue: set of nodes, ordered by f-cost;
begin
    queue.insert(problem.root-node);
    while True do begin
        if queue.empty() then return failure;  // there is no solution that fits in the given memory
        node := queue.begin();                 // min-f-cost node
        if problem.is-goal(node) then return success;
        s := next-successor(node)
        if !problem.is-goal(s) && depth(s) == max_depth then
            f(s) := inf;
            // there is no memory left to go past s, so the entire path is useless
        else
            f(s) := max(f(node), g(s) + h(s));
            // f-value of the successor is the maximum of
            //   the f-value of the parent and
            //   heuristic of the successor + path length to the successor
        endif
        if no more successors then
            update f-cost of node and those of its ancestors if needed
        if node.successors ⊆ queue then queue.remove(node);
        // all children have already been added to the queue via a shorter way
        if memory is full then begin
            badNode := shallowest node with highest f-cost;
            for parent in badNode.parents do begin
                parent.successors.remove(badNode);
                if needed then queue.insert(parent);
            endfor
        endif
        queue.insert(s);
    endwhile
end
Result::
Thus the Python program implementing informed search algorithms (A*, memory-bounded A*) was developed and the output was verified successfully.
Ex No::3 Implement Naive Bayes models
Date::
Aim::
To implement Naive Bayes models.
Algorithm::
Step 1. Load the libraries: import the required libraries such as pandas, numpy, and sklearn.
Step 2. Load the data into a pandas dataframe.
Step 3. Clean and preprocess the data as necessary. For example, you can handle missing values,
convert categorical variables into numerical variables, and normalize the data.
Step 4. Split the data into training and test sets using the train_test_split function from scikit-learn.
Step 5. Train the Gaussian Naive Bayes model using the training data.
Step 6. Evaluate the performance of the model using the test data and the accuracy_score function from
scikit-learn.
Step 7. Finally, you can use the trained model to make predictions on new data.
Program::
#Import scikit-learn dataset library
from sklearn import datasets

#Load dataset
wine = datasets.load_wine()

# print the names of the 13 features
print("Features: ", wine.feature_names)

# print the label types of wine (class_0, class_1, class_2)
print("Labels: ", wine.target_names)

# print data (feature) shape
wine.data.shape

# print the wine data features (top 5 records)
print(wine.data[0:5])

# print the wine labels (0: class_0, 1: class_1, 2: class_2)
print(wine.target)

# Import train_test_split function
from sklearn.model_selection import train_test_split

# Split dataset into training set and test set
X_train, X_test, y_train, y_test = train_test_split(wine.data, wine.target,
                                                    test_size=0.3, random_state=109)
# 70% training and 30% test

#Import Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB

#Create a Gaussian Classifier
gnb = GaussianNB()

#Train the model using the training sets
gnb.fit(X_train, y_train)

#Predict the response for test dataset
y_pred = gnb.predict(X_test)

# Evaluating model
#Import scikit-learn metrics module for accuracy calculation
from sklearn import metrics

# Model Accuracy
print("Accuracy:", metrics.accuracy_score(y_test, y_pred))
Output:
Display the labels in the dataset:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]
Model Accuracy:
Accuracy: 0.907407407407407
Result::
Thus the Python program for implementing Naïve Bayes model was developed and the output
was verified successfully.
Ex No:4 Implement Bayesian Networks
Date::
Aim::
To implement Bayesian networks.
Algorithm::
Step 1. Start by importing the required classes from pgmpy (BayesianNetwork, TabularCPD).
Step 2. Define the network structure for the Monty Hall problem, with edges from the guest's choice and the prize door to the host's choice.
Step 3. Define the discrete probability distribution (CPD) for the guest's initial choice of door.
Step 4. Define the discrete probability distribution (CPD) for the prize door.
Step 5. Define the conditional probability table for the door that Monty (the host) picks, based on the guest's choice and the prize door.
Step 6. Associate the CPDs with the network structure using add_cpds.
Step 7. Validate the structure and the CPDs using check_model.
Step 8. Stop
Program::
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
import networkx as nx
import pylab as plt
# Defining the Bayesian network structure
model = BayesianNetwork([('Guest', 'Host'), ('Price', 'Host')])

# Defining the CPDs:
cpd_guest = TabularCPD('Guest', 3, [[0.33], [0.33], [0.33]])
cpd_price = TabularCPD('Price', 3, [[0.33], [0.33], [0.33]])
cpd_host = TabularCPD('Host', 3, [[0, 0, 0, 0, 0.5, 1, 0, 1, 0.5],
[0.5, 0, 1, 0, 0, 0, 1, 0, 0.5],
[0.5, 1, 0, 1, 0.5, 0, 0, 0, 0]],
evidence=['Guest', 'Price'], evidence_card=[3, 3])
# Associating the CPDs with the network structure.
model.add_cpds(cpd_guest, cpd_price, cpd_host)
model.check_model()
Output:
True
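As a possible extension (not in the recorded program), the beliefs mentioned in the algorithm could be computed with pgmpy's VariableElimination; the door numbering (0/1/2) is illustrative:

from pgmpy.inference import VariableElimination

infer = VariableElimination(model)
# Guest picks door 0 and the host opens door 2: where is the prize likely to be?
posterior = infer.query(['Price'], evidence={'Guest': 0, 'Host': 2})
print(posterior)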
Result::
Thus, the Python program for implementing Bayesian Networks was successfully developed and
the output was verified.
Ex No::5 Build Regression Models
Date::
Aim::
To build regression models.
Algorithm::
Step 1. Import necessary libraries: numpy, pandas, matplotlib.pyplot, LinearRegression,
mean_squared_error, and r2_score.
Step 2. Create a numpy array for waist and weight values and store them in separate variables.
Step 3. Create a pandas DataFrame with waist and weight columns using the numpy arrays.
Step 4. Extract input (X) and output (y) variables from the DataFrame.
Step 5. Create an instance of LinearRegression model.
Step 6. Fit the LinearRegression model to the input and output variables.
Step 7. Create a new DataFrame with a single value of waist.
Step 8. Use the predict() method of the LinearRegression model to predict the weight for the new waist
value.
Step 9. Calculate the mean squared error and R-squared values using mean_squared_error() and
r2_score() functions respectively.
Step 10. Plot the actual and predicted values using matplotlib.pyplot.scatter() and
matplotlib.pyplot.plot() functions.
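A minimal sketch of the steps above; the waist/weight values are illustrative assumptions:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

# Illustrative waist (cm) and weight (kg) values
waist = np.array([70, 75, 80, 85, 90, 95, 100])
weight = np.array([55, 60, 66, 71, 77, 82, 88])
df = pd.DataFrame({"waist": waist, "weight": weight})

X = df[["waist"]]   # input variable
y = df["weight"]    # output variable

model = LinearRegression()
model.fit(X, y)

# Predict the weight for a new waist value
new_waist = pd.DataFrame({"waist": [87]})
print("Predicted weight:", model.predict(new_waist)[0])

# Evaluate the fit
y_pred = model.predict(X)
print("MSE:", mean_squared_error(y, y_pred))
print("R-squared:", r2_score(y, y_pred))

# Plot actual and predicted values
plt.scatter(X, y)
plt.plot(X, y_pred, color="red")
plt.xlabel("waist (cm)")
plt.ylabel("weight (kg)")
plt.show()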
Program::
import pandas as pd
import statsmodels.api as sm

# Load the dataset; the file name and columns are assumptions reconstructed
# from the Age/Pregnancies analysis shown in the output below
df = pd.read_csv("diabetes.csv")
x2 = df[["Age", "Pregnancies"]]
y = df["Outcome"]

x2 = sm.add_constant(x2)       # add an intercept term
model3 = sm.OLS(y, x2).fit()   # fit linear regression model
print(model3.summary())
Output::
a. Correlation Matrix (figure)
c. Bivariate Analysis of Age-Pregnancies features (figure)
Result::
Thus the Python program to build a simple linear Regression model was developed successfully.
Ex No::6 Build Decision Trees and Random Forests
Date::
Aim::
To build decision trees and random forests.
Algorithm::
Step 1. Import necessary libraries: numpy, matplotlib, seaborn, pandas, train_test_split, LabelEncoder,
DecisionTreeClassifier, plot_tree, and RandomForestClassifier.
Step 2. Read the data from 'flowers.csv' into a pandas DataFrame.
Step 3. Extract the features into an array X, and the target variable into an array y.
Step 4. Encode the target variable using the LabelEncoder.
Step 5. Split the data into training and testing sets using train_test_split function.
Step 6. Create a DecisionTreeClassifier object, fit the model to the training data, and visualize the
decision tree using plot_tree.
Step 7. Create a RandomForestClassifier object with 100 estimators, fit the model to the training data,
and visualize the random forest by displaying 6 trees.
Step 8. Print the accuracy of the decision tree and random forest models using the score method on the
test data.
Program::
import pandas
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier df = pandas.read_csv("data.csv") print("Input:")
print(df.head(5))
d = {'UK':0,'USA':1,'N':2}
df['Nationality'] = df['Nationality'].map(d) d = {'YES':1, 'NO':0}
df['Go'] = df['Go'].map(d) print("Transformed Data:") print(df.head(5))
features = ['Age','Experience','Rank','Nationality'] X = df[features]
y = df['Go']
dtree = DecisionTreeClassifier() dtree = dtree.fit(X,y) print(dtree.predict([[40,10,6,1]])) print("[1]means
'Go'")
print("[0]means 'NO'")
DATA SET : (data.csv)
Age Experience Rank Nationality Go
36 10 9 UK NO
42 12 4 USA NO
23 4 6 N NO
52 4 4 USA NO
43 21 8 USA YES
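The algorithm above also mentions visualizing the decision tree; a hedged sketch using sklearn's plot_tree (not part of the recorded program), assuming the dtree and features defined above and the 0 = NO, 1 = GO label mapping:

import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

plt.figure(figsize=(12, 8))
plot_tree(dtree, feature_names=features, class_names=['NO', 'GO'], filled=True)
plt.show()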
Output:
Random Forests::
# Pandas is used for data manipulation
import pandas as pd

# Read in data and display first 5 rows
features = pd.read_csv('temps.csv')
features.head(5)

import numpy as np
# Labels are the values we want to predict
labels = np.array(features['actual'])
# Remove the labels from the features; axis 1 refers to the columns
features = features.drop('actual', axis=1)
# Saving feature names for later use
feature_list = list(features.columns)
# Convert to numpy array
features = np.array(features)

# Using Scikit-learn to split data into training and testing sets
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(
    features, labels, test_size=0.25, random_state=42)

# Import the model we are using
from sklearn.ensemble import RandomForestRegressor
# Limit depth of tree to 3 levels
rf_small = RandomForestRegressor(n_estimators=10, max_depth=3)
# Train the model on training data
rf_small.fit(train_features, train_labels)
Output::
RandomForestRegressor(max_depth=3, n_estimators=10)
Result::
Thus the Python program to build decision tree and random forest was developed successfully.
Ex No::7 Build SVM Models
Date::
Aim::
To write a Python program to build SVM models.
Algorithm::
Step 1. Import the necessary libraries (matplotlib.pyplot, numpy, and svm from sklearn).
Step 2. Define the features (X) and labels (y) for the fruit dataset.
Step 3. Create an SVM classifier with a linear kernel using svm.SVC(kernel='linear').
Step 4. Train the classifier on the fruit data using clf.fit(X, y).
Step 5. Plot the fruits and decision boundary using plt.scatter(X[:, 0], X[:, 1], c=colors), where colors is a list of colors assigned to each fruit based on its label.
Step 6. Create a meshgrid to evaluate the decision function using np.meshgrid(np.linspace(xlim[0], xlim[1], 100), np.linspace(ylim[0], ylim[1], 100)).
Step 7. Use the decision function to create a contour plot of the decision boundary and margins using ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1], alpha=0.5, linestyles=['--', '-', '--']). (A sketch of these steps is given below; the recorded program, which uses a CSV dataset, follows it.)
Step 8. Show the plot using plt.show().
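A minimal sketch of these steps; the fruit measurements and labels are illustrative assumptions, not part of the recorded program:

import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm

# Features: [mass (g), width (cm)]; labels: 0 = apple, 1 = orange
X = np.array([[150, 7.0], [160, 7.2], [170, 7.4], [180, 7.6],
              [120, 6.0], [125, 6.1], [130, 6.3], [135, 6.4]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = svm.SVC(kernel='linear')
clf.fit(X, y)

# Plot the fruits, coloured by label
colors = ['red' if label == 0 else 'orange' for label in y]
fig, ax = plt.subplots()
ax.scatter(X[:, 0], X[:, 1], c=colors)

# Meshgrid over the axis limits to evaluate the decision function
xlim = ax.get_xlim()
ylim = ax.get_ylim()
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 100),
                     np.linspace(ylim[0], ylim[1], 100))
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

# Decision boundary (level 0) and margins (levels -1, +1)
ax.contour(xx, yy, Z, colors='k', levels=[-1, 0, 1],
           alpha=0.5, linestyles=['--', '-', '--'])
plt.show()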
Program::
import pandas
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

data = pandas.read_csv("vector.csv")
print("Input: ")
print(data.head(10))

# Split the data into training and test sets
# (reconstructed step: the original listing omitted the split and classifier creation)
training_set, test_set = train_test_split(data, test_size=0.25, random_state=1)
x_train = training_set.iloc[:, 0:2].values
y_train = training_set.iloc[:, 2].values
x_test = test_set.iloc[:, 0:2].values
y_test = test_set.iloc[:, 2].values

# Train the SVM classifier and predict on the test set
classifier = SVC(kernel='linear')
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)

cm = confusion_matrix(y_test, y_pred)
accuracy = float(cm.diagonal().sum()) / len(y_test)
print("\nAccuracy of SVM for the given dataset: ", accuracy)
Output:
Result::
Thus, the Python program to build an SVM model was developed, and the output was successfully
verified.
Ex No::8 Implement Ensembling Techniques
Date::
Aim::
To implement ensembling techniques.
Algorithm::
❖ Averaging method:
▪ It is mainly used for regression problems.
▪ The method consists of building multiple models independently and returning the
average of the prediction of all the models.
▪ In general, the combined output is better than an individual output because
variance is reduced.
▪ In the below example, three regression models (linear regression, xgboost, and
random forest) are trained and their predictions are averaged.
▪ The final prediction output is pred_final.
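An illustrative sketch of the averaging method described above; the synthetic dataset and model settings are assumptions, and xgboost must be installed separately:

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

X, y = make_regression(n_samples=200, n_features=4, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train the three models independently
model1 = LinearRegression().fit(X_train, y_train)
model2 = XGBRegressor().fit(X_train, y_train)
model3 = RandomForestRegressor(random_state=42).fit(X_train, y_train)

# Average their predictions to get the combined output
pred_final = (model1.predict(X_test) + model2.predict(X_test)
              + model3.predict(X_test)) / 3
print(pred_final[:5])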
Program::
#Implement VotingClassifier
#Importing necessary libraries:
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score

#Creating dataset:
X, y = make_moons(n_samples=500, noise=0.30)
X_train, X_test, y_train, y_test = train_test_split(X, y)

# Build the voting ensemble from three independent classifiers
# (reconstructed step: the original listing omitted the classifier creation)
log_clf = LogisticRegression()
rnd_clf = RandomForestClassifier()
svm_clf = SVC()
voting_clf = VotingClassifier(
    estimators=[('lr', log_clf), ('rf', rnd_clf), ('svc', svm_clf)])
for clf in (log_clf, rnd_clf, svm_clf, voting_clf):
    clf.fit(X_train, y_train)
    print(clf.__class__.__name__, accuracy_score(y_test, clf.predict(X_test)))

#Implement BaggingClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bagging_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=500)
bagging_clf.fit(X_train, y_train)

#Implement AdaBoostClassifier
from sklearn.ensemble import AdaBoostClassifier
adaboost_clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=1), n_estimators=200,
    algorithm="SAMME.R", learning_rate=0.5, random_state=42)
Output:
#For VotingClassifier
LogisticRegression 0.848
RandomForestClassifier 0.88
SVC 0.896
VotingClassifier 0.896
#prediction using test data
y_pred = bagging_clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
Result::
Thus, the Python program for implementing Ensembling Techniques was successfully developed and
the output was verified.
Ex No::9 Implement Clustering Algorithms
Date::
Aim::
To implement clustering algorithms.
Algorithm::
1. Start the program.
2. Many clustering algorithms have a runtime that increases as the square of the number of examples n, denoted O(n^2) in complexity notation.
3. O(n^2) algorithms are not practical when the number of examples runs into the millions.
4. This experiment focuses on the k-means algorithm, which has a complexity of O(n), meaning that the algorithm scales linearly with n.
5. Stop the program.
Program::
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

data = {'x': [25, 34, 22, 27, 33, 33, 31, 22, 35, 34, 67, 54, 57, 43, 50,
              57, 59, 52, 65, 47, 49, 48, 35, 33, 44, 45, 38, 43, 51, 46],
        'y': [79, 51, 53, 78, 59, 74, 73, 57, 69, 75, 51, 32, 40, 47, 53,
              36, 35, 58, 59, 50, 25, 20, 14, 12, 20, 5, 29, 27, 8, 7]}
df = pd.DataFrame(data)

# Fit k-means with three clusters and plot the clusters and centroids
kmeans = KMeans(n_clusters=3).fit(df)
centroids = kmeans.cluster_centers_
plt.scatter(df['x'], df['y'], c=kmeans.labels_.astype(float), s=50, alpha=0.5)
plt.scatter(centroids[:, 0], centroids[:, 1], c='red', s=50)
plt.show()
Output::
Result::
Thus, the Python program for implementing Clustering Algorithms was successfully developed and the output was verified.
Ex No:10 Implement EM for Bayesian Networks
Date::
Aim::
To implement EM for Bayesian networks.
Algorithm::
1. Start with a Bayesian network that has some variables that are not directly observed. For example,
suppose we have a Bayesian network with variables A, B, C, and D, where A and D are observed and B
and C are latent.
2. Initialize the parameters of the network. This includes the conditional probabilities for each variable
given its parents, as well as the prior probabilities for the root variables.
3. E-step: Compute the expected sufficient statistics for the latent variables. This involves computing
the posterior probability distribution over the latent variables given the observed data and the current
parameter estimates. This can be done using the forward-backward algorithm or the belief propagation
algorithm.
4. M-step: Update the parameter estimates using the expected sufficient statistics computed in step 3.
This involves maximizing the likelihood of the data with respect to the parameters of the network, given
the expected sufficient statistics.
5. Repeat steps 3-4 until convergence. Convergence can be measured by monitoring the change in the
log-likelihood of the data, or by monitoring the change in the parameter estimates.
Program::
import matplotlib.pyplot as plt
from sklearn import datasets
import sklearn.metrics as sm
import pandas as pd
import numpy as np
%matplotlib inline

# import some data to play with
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=['Sepal_Length', 'Sepal_Width',
                                     'Petal_Length', 'Petal_Width'])
y = pd.DataFrame(iris.target, columns=['Targets'])
colormap = np.array(['red', 'lime', 'black'])

# GMM
from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
scaler.fit(X)
xsa = scaler.transform(X)
xs = pd.DataFrame(xsa, columns=X.columns)
xs.sample(5)

from sklearn.mixture import GaussianMixture
gmm = GaussianMixture(n_components=3)
gmm.fit(xs)
y_cluster_gmm = gmm.predict(xs)
y_cluster_gmm

plt.subplot(1, 2, 1)
plt.scatter(X.Petal_Length, X.Petal_Width, c=colormap[y_cluster_gmm], s=40)
plt.title('GMM Classification')

# Accuracy
sm.accuracy_score(y, y_cluster_gmm)
# Confusion Matrix
sm.confusion_matrix(y, y_cluster_gmm)
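As a possible extension (not in the recorded program), the parameters estimated by the EM procedure inside GaussianMixture can be inspected directly:

print(gmm.weights_)    # mixing coefficients (updated in the M-step)
print(gmm.means_)      # component means (updated in the M-step)
print(gmm.converged_)  # True once the change in log-likelihood falls below tol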
Output::
Result::
Thus the Python program to Implement EM for Bayesian Networks was developed successfully.
Ex No::11 Build simple NN models
Date::
Aim::
To write a Python program to build simple NN models.
Algorithm::
1. Define the input and output data.
2. Choose the number of layers and neurons in each layer. This depends on the problem you are trying to solve.
3. Define the activation function for each layer. Common choices are ReLU, sigmoid, and tanh.
4. Initialize the weights and biases for each neuron in the network. This can be done randomly or using
a pre-trained model.
5. Define the loss function and optimizer to be used during training. The loss function measures how
well the model is doing, while the optimizer updates the weights and biases to minimize the loss.
6. Train the model on the input data using the defined loss function and optimizer. This involves
forward propagation to compute the output of the model, and backpropagation to compute the gradients
of the loss with respect to the weights and biases. The optimizer then updates the weights and biases
based on the gradients.
7. Evaluate the performance of the model on new data using metrics such as accuracy, precision, recall,
and F1 score.
Program::
# Import python libraries required in this example:
from keras.models import Sequential
from keras.layers import Dense, Activation
import numpy as np
# Use numpy arrays to store inputs (x) and outputs (y):
x = np.array([[0,0], [0,1], [1,0], [1,1]])
y = np.array([[0], [1], [1], [0]])
# Define the network model and its arguments.
# Set the number of neurons/nodes for each layer:
model = Sequential()
model.add(Dense(2,input_shape=(2,)))
model.add(Activation('sigmoid'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
# Compile the model and calculate its accuracy:
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
# Print a summary of the Keras model:
model.summary()
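The recorded program stops at the summary; a hedged continuation that actually trains the XOR model might look like this (the epoch count is an assumption, and convergence is not guaranteed with such a small network):

# Hypothetical continuation: train on the XOR data and inspect predictions
model.fit(x, y, epochs=1000, verbose=0)  # epoch count chosen arbitrarily
print(model.predict(x))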
Output:
Model: "sequential"
Result::
Thus the Python program to build simple NN Models was developed successfully.
Ex No::12 Build Deep Learning NN Models
Date::
Aim::
To write a Python program to build deep learning NN models.
Algorithm::
1. Import the necessary libraries, such as numpy and keras.
2. Load or generate your dataset. This can be done using numpy or any other data manipulation library.
3. Preprocess your data by performing any necessary normalization, scaling, or other transformations.
4. Define your neural network architecture using the Keras Sequential API. Add layers to the
model using the add() method, specifying the number of units, activation function, and input
dimensions for each layer.
5. Compile your model using the compile() method. Specify the loss function, optimizer, and
any evaluation metrics you want to use.
6. Train your model using the fit() method. Specify the training data, validation data, batch size,
and number of epochs.
7. Evaluate your model using the evaluate() method. This will give you the loss and accuracy
metrics on the test set.
8. Use your trained model to make predictions on new data using the predict() method.
Program::
import tensorflow as tf
from tensorflow import keras

# Load the dataset (despite the variable name, this is the MNIST digits dataset)
fashiondata = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = fashiondata.load_data()
x_test.shape
x_train.shape

# Scale pixel values to the range [0, 1]
x_train, x_test = x_train / 255, x_test / 255

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
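As a possible extension (not in the recorded program), the trained model can classify a single test image:

import numpy as np
# Hypothetical continuation: predict the class of the first test image
probs = model.predict(x_test[:1])
print(np.argmax(probs, axis=1))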
Output::
(10000, 28, 28)
(60000, 28, 28)
Epoch 1/5
1875/1875 [==============================] - 7s 3ms/step - loss: 0.0672 - accuracy: 0.9793
Epoch 2/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0578 - accuracy: 0.9811
Epoch 3/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0528 - accuracy: 0.9825
Epoch 4/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0500 - accuracy: 0.9833
Epoch 5/5
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0453 - accuracy: 0.9844
313/313 [==============================] - 1s 3ms/step - loss: 0.0697 - accuracy: 0.9797
[0.06965507566928864, 0.9797000288963318]
Result::
Thus the Python program to build deep learning NN models was developed successfully.