Sir C R Reddy College of Engineering Department of CSE - AI & DS
EXPERIMENT-2
Aim: For a given set of training data examples stored in a .CSV file, implement and
demonstrate the Candidate Elimination algorithm to output a description of the set of
all hypotheses consistent with the training examples.
Program:
import numpy as np
import pandas as pd

data = pd.read_csv('E:/[Link]')
concepts = np.array(data.iloc[:, 0:-1])
target = np.array(data.iloc[:, -1])

def learn(concepts, target):
    specific_h = concepts[0].copy()
    print("Initialization of specific_h and general_h:")
    print(specific_h)
    general_h = [["?" for i in range(len(specific_h))] for i in range(len(specific_h))]
    print(general_h)
    for i, h in enumerate(concepts):
        if target[i] == "Yes":
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    specific_h[x] = "?"
                    general_h[x][x] = "?"
        if target[i] == "No":
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = "?"
        print("Steps of Candidate Elimination Algorithm:", i + 1)
        print("Specific_h ", i + 1, "\n")
        print(specific_h)
        print("General_h ", i + 1, "\n")
        print(general_h)
    # Remove the fully general hypotheses, which constrain nothing
    indices = [i for i, val in enumerate(general_h) if val == ["?", "?", "?", "?", "?", "?"]]
    for i in indices:
        general_h.remove(["?", "?", "?", "?", "?", "?"])
    return specific_h, general_h

s_final, g_final = learn(concepts, target)
print("Final Specific_h:", s_final, sep="\n")
print("Final General_h:", g_final, sep="\n")

Machine Learning Using Python Lab
Result:
Initialization of specific_h and general_h:
['Sunny' 'Warm' 'Normal' 'Strong' 'Warm' 'Same']
[['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?']]
…
Steps of Candidate Elimination Algorithm: 4
Specific_h 4
['Sunny' 'Warm' '?' 'Strong' '?' '?']
General_h 4
[['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?'], ['?', '?', '?', '?', '?', '?']]
Final Specific_h:
['Sunny' 'Warm' '?' 'Strong' '?' '?']
Final General_h:
[['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?']]
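Because the program above reads from a machine-specific CSV path, a self-contained run can be sketched with the classic EnjoySport training examples embedded inline (the data set is assumed here; it matches the output shown in the Result section):

```python
import numpy as np

# EnjoySport training examples (assumed); the last column is the target.
rows = [
    ['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same',   'Yes'],
    ['Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same',   'Yes'],
    ['Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change', 'No'],
    ['Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change', 'Yes'],
]
concepts = np.array([r[:-1] for r in rows])
target = np.array([r[-1] for r in rows])

def learn(concepts, target):
    specific_h = concepts[0].copy()
    general_h = [['?' for _ in specific_h] for _ in specific_h]
    for i, h in enumerate(concepts):
        if target[i] == 'Yes':
            # Generalize specific_h just enough to cover the positive example
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    specific_h[x] = '?'
                    general_h[x][x] = '?'
        else:
            # Specialize general_h just enough to exclude the negative example
            for x in range(len(specific_h)):
                if h[x] != specific_h[x]:
                    general_h[x][x] = specific_h[x]
                else:
                    general_h[x][x] = '?'
    # Drop the fully general rows, which constrain nothing
    general_h = [g for g in general_h if g != ['?'] * len(specific_h)]
    return specific_h, general_h

s_final, g_final = learn(concepts, target)
print('Final Specific_h:', s_final)
print('Final General_h:', g_final)
```

On these four examples the run converges to the same boundaries as in the Result above: the specific boundary ['Sunny' 'Warm' '?' 'Strong' '?' '?'] and a general boundary keeping only the Sunny and Warm constraints.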
EXPERIMENT-§
Aim: Write a program to implement the k-Nearest Neighbor algorithm to classify the Iris data set. Print both correct and wrong predictions.
Program:
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn import metrics

names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']
# Read dataset to pandas dataframe
dataset = pd.read_csv("E:/GP/[Link]", names=names)
X = dataset.iloc[:, :-1]
y = dataset.iloc[:, -1]
print(dataset.head())

Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.10)
classifier = KNeighborsClassifier(n_neighbors=5).fit(Xtrain, ytrain)
ypred = classifier.predict(Xtest)

i = 0
print("\n")
print('%-25s %-25s %-25s' % ('Original Label', 'Predicted Label', 'Correct/Wrong'))
for label in ytest:
    print('%-25s %-25s' % (label, ypred[i]), end="")
    if label == ypred[i]:
        print('%-25s' % ('Correct'))
    else:
        print('%-25s' % ('Wrong'))
    i = i + 1

print("\nConfusion Matrix:\n", metrics.confusion_matrix(ytest, ypred))
print("\nClassification Report:\n", metrics.classification_report(ytest, ypred))
print('Accuracy of the classifier is %0.2f' % metrics.accuracy_score(ytest, ypred))
Result:
   sepal-length  sepal-width  petal-length  petal-width
0           5.1          3.5           1.4          0.2
1           4.9          3.0           1.4          0.2
2           4.7          3.2           1.3          0.2
3           4.6          3.1           1.5          0.2
4           5.0          3.6           1.4          0.2

Original Label            Predicted Label           Correct/Wrong
Iris-setosa               Iris-setosa               Correct
Iris-versicolor           Iris-versicolor           Correct
Iris-virginica            Iris-virginica            Correct
Iris-versicolor           Iris-versicolor           Correct
Iris-versicolor           Iris-versicolor           Correct
Iris-setosa               Iris-setosa               Correct
Iris-setosa               Iris-setosa               Correct
Iris-virginica            Iris-virginica            Correct
…

Confusion Matrix:
…
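Since the program above also depends on a local CSV path, the same classifier can be exercised with scikit-learn's bundled copy of the Iris data (load_iris is a standard sklearn loader; the fixed random_state here is an assumption added for repeatability):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics

# Bundled Iris data: 150 samples, 4 features, 3 classes (no CSV needed)
X, y = load_iris(return_X_y=True)

# Hold out 10% for testing, as in the lab program; seeded for repeatability
Xtrain, Xtest, ytrain, ytest = train_test_split(
    X, y, test_size=0.10, random_state=0)

classifier = KNeighborsClassifier(n_neighbors=5).fit(Xtrain, ytrain)
ypred = classifier.predict(Xtest)

# Print each prediction alongside whether it was correct or wrong
for actual, predicted in zip(ytest, ypred):
    verdict = 'Correct' if actual == predicted else 'Wrong'
    print('%-15s %-15s %s' % (actual, predicted, verdict))

accuracy = metrics.accuracy_score(ytest, ypred)
print('Accuracy of the classifier is %0.2f' % accuracy)
```

With a 10% split this evaluates 15 test samples; k-NN with k=5 typically classifies nearly all of them correctly, matching the mostly-Correct table in the Result above.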