
Nonlinear Statistical Models Notes

The document provides an overview of various classifiers and estimation methods in machine learning, including Decision Trees, Naive Bayes, Perceptron, Multilayer Perceptron, Maximum Likelihood Estimation, Parametric Density Estimation, and K Nearest Neighbors. Each method is briefly explained with an example to illustrate its application. The focus is on the theoretical aspects and mathematical foundations of these models.

Uploaded by Avinash S

Nonlinear and Statistical Models - Theory Notes

Decision Tree Classifier

A decision tree splits data using feature tests. Nodes represent decisions; leaves represent outcomes.

Example: If Outlook=Sunny and Humidity=High => Play=No.
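The weather rule above can be sketched as a tiny hand-built tree; the feature names, split order, and the default branch for other outlooks are illustrative assumptions, not a tree fitted to data.

```python
# A minimal hand-built decision tree for the weather example above.
# The split order and the default "Yes" branch are assumptions.

def play_tennis(outlook: str, humidity: str) -> str:
    """Decide Play=Yes/No by testing one feature at each internal node."""
    if outlook == "Sunny":          # root node test
        if humidity == "High":      # second node test
            return "No"             # leaf
        return "Yes"                # leaf
    return "Yes"                    # leaf for other outlooks (assumed)

print(play_tennis("Sunny", "High"))  # -> No
```

In a learned tree the same nested-if structure would be chosen automatically, typically by greedily picking the split that most reduces impurity.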

Naive Bayes Classifier

Applies Bayes' theorem under the assumption that features are conditionally independent given the class.

P(C|X) = P(X|C)P(C)/P(X).

Example: For the word 'free' in spam detection, P(Spam|free) ∝ P(free|Spam)·P(Spam) = 0.8 × 0.3 = 0.24.
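The spam calculation can be carried one step further to a proper posterior. The values 0.8 and 0.3 come from the example; P(free|Ham) and P(Ham) below are assumed purely for illustration.

```python
# Spam example: P(Spam|free) is proportional to P(free|Spam) * P(Spam).
p_free_given_spam, p_spam = 0.8, 0.3
p_free_given_ham, p_ham = 0.1, 0.7   # assumed values for the Ham class

spam_score = p_free_given_spam * p_spam   # 0.24, as in the notes
ham_score = p_free_given_ham * p_ham      # 0.07

# Normalizing by P(free) = spam_score + ham_score gives the posterior.
p_spam_given_free = spam_score / (spam_score + ham_score)
print(round(spam_score, 2), round(p_spam_given_free, 3))  # 0.24 0.774
```

Note that for classification the normalizer P(free) can be skipped, since it is the same for both classes; only the relative scores matter.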

Perceptron

A binary linear classifier. y = sign(w^T x + b).

Example: x=[2,3], w=[0.5,1], b=-1 => w^T x + b = 0.5·2 + 1·3 - 1 = 3, so y = sign(3) = +1.
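The decision rule above is a one-liner; here is a minimal sketch using the example's numbers (the convention sign(0) = +1 is an assumption).

```python
# Perceptron decision rule y = sign(w^T x + b).
def perceptron(x, w, b):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else -1   # assume sign(0) = +1

x, w, b = [2, 3], [0.5, 1], -1
print(perceptron(x, w, b))  # 0.5*2 + 1*3 - 1 = 3 -> 1
```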

Multilayer Perceptron (MLP)

A neural network with one or more hidden layers and nonlinear activations; stacking such layers lets it learn complex nonlinear functions.

Example: Input -> hidden = sigmoid(W1·x + b1) -> Output layer.
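A forward pass through one hidden layer can be sketched as follows; the weight matrices, biases, and layer sizes are arbitrary assumptions chosen only to make the computation concrete.

```python
import math

# One-hidden-layer forward pass: hidden = sigmoid(W1 x + b1),
# output = W2 hidden + b2. All weights below are assumed values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mlp_forward(x, W1, b1, W2, b2):
    # Each hidden unit applies a nonlinearity to its weighted input.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # Linear output layer (a single output unit here).
    return sum(w * h for w, h in zip(W2, hidden)) + b2

W1 = [[0.5, -0.2], [0.3, 0.8]]   # 2 hidden units, 2 inputs (assumed)
b1 = [0.0, 0.1]
W2 = [1.0, -1.0]                 # single output unit (assumed)
b2 = 0.2
print(mlp_forward([1.0, 2.0], W1, b1, W2, b2))
```

Training would adjust W1, b1, W2, b2 by backpropagating a loss gradient; the sketch covers only inference.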

Maximum Likelihood Estimation (MLE)

Estimates parameters by maximizing the likelihood of data.

Example: For a Gaussian, μ_MLE = (1/n) Σ x_i.
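For a Gaussian, the MLE formulas are just the sample mean and the (biased, divide-by-n) sample variance; the data below are assumed for illustration.

```python
# Gaussian MLE: mu_hat = (1/n) * sum(x_i),
#               var_hat = (1/n) * sum((x_i - mu_hat)^2).
data = [4.0, 6.0, 5.0, 7.0, 3.0]   # sample data (assumed)
n = len(data)
mu_mle = sum(data) / n
# Note the MLE divides by n, not n-1 (it is the biased estimator).
var_mle = sum((x - mu_mle) ** 2 for x in data) / n
print(mu_mle, var_mle)  # 5.0 2.0
```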

Parametric Density Estimation

Assumes the data follow a known distributional form (e.g., Gaussian) and estimates its parameters from the sample.

Example: Estimate μ and σ² to compute p(x) = N(x | μ, σ²).
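Once μ and σ² are estimated, the density at any point follows from the Gaussian formula; the parameter values below are assumed to have been fitted already.

```python
import math

# Evaluate the fitted Gaussian density N(x | mu, sigma^2).
def gaussian_pdf(x, mu, sigma2):
    norm = math.sqrt(2 * math.pi * sigma2)
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / norm

mu, sigma2 = 5.0, 2.0     # parameters assumed estimated from data
print(gaussian_pdf(5.0, mu, sigma2))  # density is highest at the mean
```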

K Nearest Neighbors (KNN)

Non-parametric. Predicts the class (majority vote, for classification) or value (average, for regression) of the K nearest data points.

Example: K=5, majority of neighbors = Setosa => Predict Setosa.
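The K=5 majority vote can be sketched directly; the toy measurements and labels below are assumed (loosely modeled on iris-style data), not a real dataset.

```python
from collections import Counter

# K nearest neighbors by Euclidean distance, majority vote.
def knn_predict(query, points, labels, k=5):
    # Sort indices by squared distance to the query (no sqrt needed).
    order = sorted(range(len(points)),
                   key=lambda i: sum((q - p) ** 2
                                     for q, p in zip(query, points[i])))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

points = [(5.1, 3.5), (4.9, 3.0), (5.0, 3.4),
          (6.7, 3.1), (6.3, 2.9), (5.2, 3.6)]   # assumed toy data
labels = ["Setosa", "Setosa", "Setosa",
          "Versicolor", "Versicolor", "Setosa"]
print(knn_predict((5.0, 3.5), points, labels, k=5))  # -> Setosa
```

Because there is no training step, all cost is paid at prediction time; a naive implementation like this scans every stored point per query.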
