CHEATSHEET
Machine Learning Algorithms
(Python and R Codes)
Supervised Learning
- Decision Tree  - Random Forest
- kNN  - Logistic Regression
Unsupervised Learning
- Apriori algorithm  - k-means
- Hierarchical Clustering
Reinforcement Learning
- Markov Decision Process
- Q Learning
Linear Regression

Python Code
#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import linear_model
#Load Train and Test datasets
#Identify feature and response variable(s) and
#values must be numeric and numpy arrays
x_train = input_variables_values_training_datasets
y_train = target_variables_values_training_datasets
x_test = input_variables_values_test_datasets
#Create linear regression object
linear = linear_model.LinearRegression()
#Train the model using the training sets and
#check score
linear.fit(x_train, y_train)
linear.score(x_train, y_train)
#Equation coefficient and Intercept
print('Coefficient: \n', linear.coef_)
print('Intercept: \n', linear.intercept_)
#Predict Output
predicted = linear.predict(x_test)
R Code
#Load Train and Test datasets
#Identify feature and response variable(s) and
#values must be numeric
x_train <- input_variables_values_training_datasets
y_train <- target_variables_values_training_datasets
x_test <- input_variables_values_test_datasets
x <- cbind(x_train, y_train)
#Train the model using the training sets and
#check score
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict Output
predicted <- predict(linear, x_test)
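The snippets above use placeholder variable names. A minimal end-to-end sketch in Python (not part of the original cheatsheet; the synthetic data is an illustrative assumption):

import numpy as np
from sklearn import linear_model

# Tiny synthetic dataset: 4 training samples, 1 feature (values assumed)
x_train = np.array([[1.0], [2.0], [3.0], [4.0]])
y_train = np.array([2.1, 4.0, 6.2, 7.9])
x_test = np.array([[5.0]])

linear = linear_model.LinearRegression()
linear.fit(x_train, y_train)
print(linear.score(x_train, y_train))    # R^2 on the training set
print(linear.coef_, linear.intercept_)   # fitted slope and intercept
print(linear.predict(x_test))            # prediction for the test point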
Logistic Regression

Python Code
#Import Library
from sklearn.linear_model import LogisticRegression
#Assumed you have, X (predictor) and Y (target)
#for training data set and x_test(predictor)
#of test_dataset
#Create logistic regression object
model = LogisticRegression()
#Train the model using the training sets
#and check score
model.fit(X, y)
model.score(X, y)
#Equation coefficient and Intercept
print('Coefficient: \n', model.coef_)
print('Intercept: \n', model.intercept_)
#Predict Output
predicted = model.predict(x_test)
R Code
x <- cbind(x_train, y_train)
#Train the model using the training sets and check score
logistic <- glm(y_train ~ ., data = x, family = "binomial")
summary(logistic)
#Predict Output
predicted <- predict(logistic, x_test)
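For reference, a runnable Python sketch with made-up binary-class data (the values are assumptions, not from the cheatsheet); predict_proba shows the class probabilities behind the hard labels:

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])  # predictor
y = np.array([0, 0, 0, 1, 1, 1])                          # binary target
x_test = np.array([[2.0], [3.8]])

model = LogisticRegression()
model.fit(X, y)
print(model.score(X, y))            # mean accuracy on training data
print(model.predict(x_test))        # hard class labels
print(model.predict_proba(x_test))  # probability per class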
Decision Tree

Python Code
#Import Library
#Import other necessary libraries like pandas, numpy...
from sklearn import tree
#Assumed you have, X (predictor) and Y (target) for
#training data set and x_test(predictor) of
#test_dataset
#Create tree object
model = tree.DecisionTreeClassifier(criterion='gini')
#for classification, here you can change the
#algorithm as gini or entropy (information gain),
#by default it is gini
#model = tree.DecisionTreeRegressor() for
#regression
#Train the model using the training sets and check
#score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted = model.predict(x_test)
R Code
#Import Library
library(rpart)
x <- cbind(x_train, y_train)
#grow tree
fit <- rpart(y_train ~ ., data = x, method = "class")
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)
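To see the criterion option in action, a small Python sketch (synthetic data assumed) fits the same data with gini and with entropy:

import numpy as np
from sklearn import tree

X = np.array([[1, 0], [2, 1], [3, 0], [6, 1], [7, 0], [8, 1]])
y = np.array([0, 0, 0, 1, 1, 1])

# Compare both split criteria on identical data
for criterion in ('gini', 'entropy'):
    model = tree.DecisionTreeClassifier(criterion=criterion)
    model.fit(X, y)
    print(criterion, model.score(X, y), model.predict([[5, 0]]))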
SVM (Support Vector Machine)

Python Code
#Import Library
from sklearn import svm
#Assumed you have, X (predictor) and Y (target) for
#training data set and x_test(predictor) of test_dataset
#Create SVM classification object
model = svm.SVC()
#there are various options associated
#with it, this is simple for classification.
#Train the model using the training sets and check
#score
model.fit(X, y)
model.score(X, y)
#Predict Output
predicted = model.predict(x_test)
R Code
#Import Library
library(e1071)
x <- cbind(x_train, y_train)
#Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)
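A runnable Python sketch (data and kernel choice are illustrative assumptions); svm.SVC defaults to an RBF kernel, shown explicitly here:

import numpy as np
from sklearn import svm

X = np.array([[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [5, 4]])
y = np.array([0, 0, 0, 1, 1, 1])

model = svm.SVC(kernel='rbf', C=1.0)  # common knobs: kernel, C (regularization)
model.fit(X, y)
print(model.score(X, y))
print(model.predict([[4.5, 4.5]]))    # point near the class-1 cluster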
Naive Bayes

Python Code
#Import Library
from sklearn.naive_bayes import GaussianNB
#Assumed you have, X (predictor) and Y (target) for
#training data set and x_test(predictor) of test_dataset
#Create Naive Bayes classification object
model = GaussianNB()
#there are other distributions for multinomial classes
#like Bernoulli Naive Bayes
#Train the model using the training sets and check
#score
model.fit(X, y)
#Predict Output
predicted = model.predict(x_test)
R Code
#Import Library
library(e1071)
x <- cbind(x_train, y_train)
#Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)
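A runnable Python sketch (synthetic data assumed) for GaussianNB, plus the BernoulliNB variant the comment mentions, which expects binary features:

import numpy as np
from sklearn.naive_bayes import GaussianNB, BernoulliNB

X = np.array([[1.0, 2.0], [1.2, 1.9], [3.0, 4.1], [3.2, 4.0]])
y = np.array([0, 0, 1, 1])
model = GaussianNB()
model.fit(X, y)
print(model.predict([[1.1, 2.0]]))  # point near the class-0 cluster

# BernoulliNB models binary/boolean features instead of Gaussians
Xb = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])
yb = np.array([0, 0, 1, 1])
print(BernoulliNB().fit(Xb, yb).predict([[1, 0]]))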
Dimensionality Reduction Algorithms

Python Code
#Import Library
from sklearn import decomposition
#Assumed you have training and test data set as train and
#test
#Create PCA object
pca = decomposition.PCA(n_components=k)
#default value of k = min(n_sample, n_features)
#For Factor analysis
fa = decomposition.FactorAnalysis()
#Reduced the dimension of training dataset using PCA
train_reduced = pca.fit_transform(train)
#Reduced the dimension of test dataset
test_reduced = pca.transform(test)
R Code
#Import Library
library(stats)
pca <- princomp(train, cor = TRUE)
train_reduced <- predict(pca, train)
test_reduced <- predict(pca, test)
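A runnable Python sketch with random data (the shapes are assumptions); note that the PCA projection is fit on the training set only and then reused for the test set:

import numpy as np
from sklearn import decomposition

rng = np.random.RandomState(0)
train = rng.rand(20, 5)                   # 20 samples, 5 features
test = rng.rand(5, 5)

pca = decomposition.PCA(n_components=2)   # keep the 2 leading components
train_reduced = pca.fit_transform(train)  # fit on train only
test_reduced = pca.transform(test)        # reuse the same projection
print(train_reduced.shape, test_reduced.shape)  # (20, 2) (5, 2)
print(pca.explained_variance_ratio_)            # variance kept per component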
Gradient Boosting & AdaBoost

Python Code
#Import Library
from sklearn.ensemble import GradientBoostingClassifier
#Assumed you have, X (predictor) and Y (target) for
#training data set and x_test(predictor) of test_dataset
#Create Gradient Boosting Classifier object
model = GradientBoostingClassifier(n_estimators=100, \
        learning_rate=1.0, max_depth=1, random_state=0)
#Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted = model.predict(x_test)
R Code
#Import Library
library(caret)
x <- cbind(x_train, y_train)
#Fitting model
fitControl <- trainControl(method = "repeatedcv",
                           number = 4, repeats = 4)
fit <- train(y ~ ., data = x, method = "gbm",
             trControl = fitControl, verbose = FALSE)
#Predict Output
predicted <- predict(fit, x_test, type = "prob")[,2]
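A self-contained Python sketch using the same hyperparameters as above on synthetic data (the data-generating rule is an assumption for illustration):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # synthetic binary target

model = GradientBoostingClassifier(n_estimators=100,
                                   learning_rate=1.0,
                                   max_depth=1,
                                   random_state=0)
model.fit(X, y)
print(model.score(X, y))     # training accuracy
print(model.predict(X[:5]))  # predictions for the first five rows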
kNN (k-Nearest Neighbors)

Python Code
#Import Library
from sklearn.neighbors import KNeighborsClassifier
#Assumed you have, X (predictor) and Y (target) for
#training data set and x_test(predictor) of test_dataset
#Create KNeighbors classifier object
model = KNeighborsClassifier(n_neighbors=6)
#default value for n_neighbors is 5
#Train the model using the training sets and check score
model.fit(X, y)
#Predict Output
predicted = model.predict(x_test)
R Code
#Import Library
library(knn)
x <- cbind(x_train, y_train)
#Fitting model
fit <- knn(y_train ~ ., data = x, k = 5)
summary(fit)
#Predict Output
predicted <- predict(fit, x_test)
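A runnable Python sketch (the synthetic data and k=3 are illustrative assumptions); an odd n_neighbors avoids ties in binary problems:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)
print(model.predict([[1, 1], [5.5, 5.5]]))  # one point near each cluster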