
BITT-I : Lecture 9/10/11

(October 13, 15 & 20, 2008)

Classification and Prediction

Main Reference:
Data Mining: Concepts and Techniques (Second Edition), Elsevier, 2006
(Chapter 6; slides available on the Internet)

Additional Reference:
Fundamentals of Artificial Neural Networks, Prentice-Hall India (MIT, 1995) (Chapters 3 & 5)
Data Mining:
Concepts and
Techniques
— Chapter 6 —

Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
©2006 Jiawei Han and Micheline Kamber, All rights reserved



Chapter 6. Classification and Prediction

 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian classification
 Rule-based classification
 Classification by back propagation
 Support Vector Machines (SVM)
 Associative classification
 Lazy learners (or learning from your neighbors)
 Other classification methods
 Prediction
 Accuracy and error measures
 Ensemble methods
 Model selection
 Summary


Classification vs. Prediction
 Classification
 predicts categorical class labels (discrete or

nominal)
 classifies data (constructs a model) based on

the training set and the values (class labels) in


a classifying attribute and uses it in classifying
new data
 Prediction
 models continuous-valued functions, i.e.,

predicts unknown or missing values


 Typical applications
 Credit approval

 Target marketing

 Medical diagnosis

May 15,2025 4
Classification—A Two-Step Process

 Model construction: describing a set of predetermined classes
   Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
   The set of tuples used for model construction is the training set
   The model is represented as classification rules, decision trees, or mathematical formulae
 Model usage: for classifying future or unknown objects
   Estimate the accuracy of the model
     The known label of each test sample is compared with the classified result from the model
     Accuracy rate is the percentage of test set samples that are correctly classified by the model
     The test set is independent of the training set; otherwise over-fitting will occur
   If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
Process (1): Model Construction

Training data are fed to a classification algorithm, which produces the classifier (model).

NAME    RANK            YEARS  TENURED
Mike    Assistant Prof  3      no
Mary    Assistant Prof  7      yes
Bill    Professor       2      yes
Jim     Associate Prof  7      yes
Dave    Assistant Prof  6      no
Anne    Associate Prof  3      no

Learned rule: IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’
Process (2): Using the Model in Prediction

The classifier is validated on testing data, then applied to unseen data, e.g., (Jeff, Professor, 4) → Tenured?

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes
Fig 6.1: Another example


Supervised Learning vs.
Unsupervised Learning

 Supervised learning (classification)


 Supervision: The training data (observations,
measurements, etc.) are accompanied by
labels indicating the class of the observations
 New data is classified based on the training
set
 Unsupervised learning (clustering)
   The class labels of training data are unknown
   Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
Chapter 6. Classification and Prediction


Issues: Data Preparation

 Data cleaning
 Preprocess data in order to reduce noise and
handle missing values
 Relevance analysis (feature selection)
 Remove the irrelevant or redundant attributes
 Data transformation
 Generalize and/or normalize data



Issues: Evaluating Classification
Methods
 Accuracy
 classifier accuracy: predicting class label

 predictor accuracy: guessing value of predicted

attributes
 Speed
 time to construct the model (training time)

 time to use the model (classification/prediction

time)
 Robustness: handling noise and missing values
 Scalability: efficiency in disk-resident databases
 Interpretability
 understanding and insight provided by the

model
 Other measures: e.g., goodness of rules, such as decision tree size or compactness of classification rules
Chapter 6. Classification and Prediction


Decision Tree Induction: Training Dataset

This follows an example of Quinlan’s ID3 (Playing Tennis).

age    income  student  credit_rating  buys_computer
<=30   high    no       fair           no
<=30   high    no       excellent      no
31…40  high    no       fair           yes
>40    medium  no       fair           yes
>40    low     yes      fair           yes
>40    low     yes      excellent      no
31…40  low     yes      excellent      yes
<=30   medium  no       fair           no
<=30   low     yes      fair           yes
>40    medium  yes      fair           yes
<=30   medium  yes      excellent      yes
31…40  medium  no       excellent      yes
31…40  high    yes      fair           yes
>40    medium  no       excellent      no


Output: A Decision Tree for “buys_computer”

age?
├── <=30   → student?
│            ├── no  → no
│            └── yes → yes
├── 31..40 → yes
└── >40    → credit rating?
             ├── excellent → no
             └── fair      → yes


Algorithm for Decision Tree Induction

 Basic algorithm (a greedy algorithm)
   Tree is constructed in a top-down recursive divide-and-conquer manner
   At start, all the training examples are at the root
   Attributes are categorical (if continuous-valued, they are discretized in advance)
   Examples are partitioned recursively based on selected attributes
   Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
 Conditions for stopping partitioning
   All samples for a given node belong to the same class
   There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf
   There are no samples left
Fig 6.4: Different possibilities for partitioning tuples based on the splitting criterion


Attribute Selection Measure: Information Gain (ID3/C4.5)

 Select the attribute with the highest information gain
 Let pi be the probability that an arbitrary tuple in D belongs to class Ci, estimated by |Ci,D|/|D|
 Expected information (entropy) needed to classify a tuple in D:

   Info(D) = − Σ_{i=1..m} pi log2(pi)

 Information needed (after using A to split D into v partitions) to classify D:

   InfoA(D) = Σ_{j=1..v} (|Dj|/|D|) × Info(Dj)

 Information gained by branching on attribute A:

   Gain(A) = Info(D) − InfoA(D)
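To make these formulas concrete, here is a short Python sketch (not part of the original slides) that computes Info(D) and Gain(A) for the buys_computer training data used in this chapter:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Info(D) = -sum(p_i * log2(p_i)) over the class distribution."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr_index):
    """Gain(A) = Info(D) - sum(|Dj|/|D| * Info(Dj)) over the partitions of A."""
    total = entropy(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return total - remainder

# The 14-tuple buys_computer data: (age, income, student, credit_rating)
rows = [("<=30","high","no","fair"), ("<=30","high","no","excellent"),
        ("31..40","high","no","fair"), (">40","medium","no","fair"),
        (">40","low","yes","fair"), (">40","low","yes","excellent"),
        ("31..40","low","yes","excellent"), ("<=30","medium","no","fair"),
        ("<=30","low","yes","fair"), (">40","medium","yes","fair"),
        ("<=30","medium","yes","excellent"), ("31..40","medium","no","excellent"),
        ("31..40","high","yes","fair"), (">40","medium","no","excellent")]
labels = ["no","no","yes","yes","yes","no","yes","no","yes","yes","yes","yes","yes","no"]

for i, name in enumerate(["age", "income", "student", "credit_rating"]):
    print(name, round(info_gain(rows, labels, i), 3))
# age 0.246, income 0.029, student 0.151, credit_rating 0.048 — matching the next slide
```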
Attribute Selection: Information Gain

 Class P: buys_computer = “yes” (9 tuples); Class N: buys_computer = “no” (5 tuples)

   Info(D) = I(9,5) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940

 For attribute age (training data as on the previous slide):

   age     pi  ni  I(pi, ni)
   <=30    2   3   0.971
   31…40   4   0   0
   >40     3   2   0.971

   Infoage(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694

   (5/14) I(2,3) means “age <=30” has 5 out of 14 samples, with 2 yes’es and 3 no’s.

   Hence Gain(age) = Info(D) − Infoage(D) = 0.246

 Similarly:
   Gain(income) = 0.029
   Gain(student) = 0.151
   Gain(credit_rating) = 0.048
Fig 6.5: Attribute age has the highest information gain


Computing Information-Gain for Continuous-Valued Attributes

 Let attribute A be a continuous-valued attribute
 Must determine the best split point for A
   Sort the values of A in increasing order
   Typically, the midpoint between each pair of adjacent values is considered as a possible split point
     (ai + ai+1)/2 is the midpoint between the values of ai and ai+1
   The point with the minimum expected information requirement for A is selected as the split-point for A
 Split: D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples satisfying A > split-point
Gain Ratio for Attribute Selection (C4.5)

 The information gain measure is biased towards attributes with a large number of values
 C4.5 (a successor of ID3) uses gain ratio to overcome the problem (a normalization of information gain):

   SplitInfoA(D) = − Σ_{j=1..v} (|Dj|/|D|) log2(|Dj|/|D|)

   GainRatio(A) = Gain(A) / SplitInfoA(D)

 Ex. For income:
   SplitInfoincome(D) = −(4/14) log2(4/14) − (6/14) log2(6/14) − (4/14) log2(4/14) = 1.557
   gain_ratio(income) = 0.029/1.557 = 0.019
 The attribute with the maximum gain ratio is selected as the splitting attribute
Gini Index (CART, IBM IntelligentMiner)

 If a data set D contains examples from n classes, the gini index gini(D) is defined as

   gini(D) = 1 − Σ_{j=1..n} pj²

   where pj is the relative frequency of class j in D
 If a data set D is split on A into two subsets D1 and D2, the gini index giniA(D) is defined as

   giniA(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2)

 Reduction in impurity:

   Δgini(A) = gini(D) − giniA(D)

 The attribute providing the smallest ginisplit(D) (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)

Gini Index: Example

 Ex. D has 9 tuples in buys_computer = “yes” and 5 in “no”:

   gini(D) = 1 − (9/14)² − (5/14)² = 0.459

 Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2: {high}:

   giniincome∈{low,medium}(D) = (10/14) gini(D1) + (4/14) gini(D2) = 0.443

   The other binary splits give 0.458 ({low, high} vs. {medium}) and 0.450 ({medium, high} vs. {low}), so {low, medium} is the best since it is the lowest
 All attributes are assumed continuous-valued
 May need other tools, e.g., clustering, to get the possible split values
 Can be modified for categorical attributes
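A short Python sketch of the same computation (illustrative code, not from the slides), reproducing gini(D) = 0.459 and the candidate binary splits on income:

```python
def gini(labels):
    """gini(D) = 1 - sum(p_j^2) over the class distribution."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def gini_binary_split(values, labels, subset):
    """gini_A(D) for the binary split: value in `subset` vs. not."""
    left = [y for v, y in zip(values, labels) if v in subset]
    right = [y for v, y in zip(values, labels) if v not in subset]
    n = len(labels)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# income column and class labels from the buys_computer data
income = ["high","high","high","medium","low","low","low","medium",
          "low","medium","medium","medium","high","medium"]
labels = ["no","no","yes","yes","yes","no","yes","no","yes","yes","yes","yes","yes","no"]

print(round(gini(labels), 3))  # 0.459
# enumerate candidate binary splits of the categorical attribute
for subset in [{"low","medium"}, {"low","high"}, {"medium","high"}]:
    print(sorted(subset), round(gini_binary_split(income, labels, subset), 3))
# {low, medium}: 0.443, {low, high}: 0.458, {medium, high}: 0.450
```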
Comparing Attribute Selection Measures

 The three measures, in general, return good results, but:
 Information gain:
   biased towards multivalued attributes
 Gain ratio:
   tends to prefer unbalanced splits in which one partition is much smaller than the others
 Gini index:
   biased to multivalued attributes
   has difficulty when the # of classes is large
   tends to favor tests that result in equal-sized partitions and purity in both partitions
Other Attribute Selection Measures

 CHAID: a popular decision tree algorithm; measure based on the χ² test for independence
 C-SEP: performs better than info. gain and gini index in certain cases
 G-statistic: has a close approximation to the χ² distribution
 MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred):
   The best tree is the one that requires the fewest # of bits to both (1) encode the tree, and (2) encode the exceptions to the tree
 Multivariate splits (partition based on multiple variable combinations)
   CART: finds multivariate splits based on a linear combination of attributes
Overfitting and Tree Pruning

 Overfitting: an induced tree may overfit the training data
   Too many branches, some of which may reflect anomalies due to noise or outliers
   Poor accuracy for unseen samples
 Two approaches to avoid overfitting
   Prepruning: halt tree construction early—do not split a node if this would result in the goodness measure falling below a threshold
     Difficult to choose an appropriate threshold
   Postpruning: remove branches from a “fully grown” tree—get a sequence of progressively pruned trees
     Use a set of data different from the training data to decide which is the “best pruned tree”
Fig 6.6: Tree pruning


Fig 6.7: (a) Repetition, (b) Replication

Enhancements to Basic Decision Tree
Induction
 Allow for continuous-valued attributes
 Dynamically define new discrete-valued
attributes that partition the continuous attribute
value into a discrete set of intervals
 Handle missing attribute values
 Assign the most common value of the attribute
 Assign probability to each of the possible values
 Attribute construction
 Create new attributes based on existing ones
that are sparsely represented
 This reduces fragmentation, repetition, and
replication
Classification in Large Databases

 Classification—a classical problem extensively studied by statisticians and machine learning researchers
 Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed
 Why decision tree induction in data mining?
   relatively faster learning speed (than other classification methods)
   convertible to simple and easy-to-understand classification rules
   can use SQL queries for accessing databases
   comparable classification accuracy with other methods
Scalable Decision Tree Induction Methods

 SLIQ (EDBT’96 — Mehta et al.)
   Builds an index for each attribute; only the class list and the current attribute list reside in memory
 SPRINT (VLDB’96 — J. Shafer et al.)
   Constructs an attribute list data structure
 PUBLIC (VLDB’98 — Rastogi & Shim)
   Integrates tree splitting and tree pruning: stop growing the tree earlier
 RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
   Builds an AVC-list (attribute, value, class label)
 BOAT (PODS’99 — Gehrke, Ganti, Ramakrishnan & Loh)
   Uses bootstrapping to create several small samples
Table 6.2: Tuple data for the class buys_computer


Fig 6.9: Attribute list data structure in SPRINT for the tuple data of Table 6.2


Scalability Framework for
RainForest

 Separates the scalability aspects from the criteria that


determine the quality of the tree
 Builds an AVC-list: AVC (Attribute, Value, Class_label)
 AVC-set (of an attribute X )
 Projection of training dataset onto the attribute X and
class label where counts of individual class label are
aggregated
 AVC-group (of a node n )
 Set of AVC-sets of all predictor attributes at the node n



RainForest: Training Set and Its AVC Sets

Training examples: the 14-tuple buys_computer data (age, income, student, credit_rating → buys_computer).

AVC-set on age:
  age     buys=yes  buys=no
  <=30    3         2
  31..40  4         0
  >40     3         2

AVC-set on income:
  income  buys=yes  buys=no
  high    2         2
  medium  4         2
  low     3         1

AVC-set on student:
  student  buys=yes  buys=no
  yes      6         1
  no       3         4

AVC-set on credit_rating:
  credit_rating  buys=yes  buys=no
  fair           6         2
  excellent      3         3
Data Cube-Based Decision-Tree
Induction
 Integration of generalization with decision-tree
induction (Kamber et al.’97)
 Classification at primitive concept levels
 E.g., precise temperature, humidity, outlook, etc.
 Low-level concepts, scattered classes, bushy
classification-trees
 Semantic interpretation problems
 Cube-based multi-level classification
 Relevance analysis at multi-levels
 Information-gain analysis with dimension + level
BOAT
Bootstrapped Optimistic Algorithm for Tree
Construction

 Use a statistical technique called bootstrapping to create


several smaller samples (subsets), each fits in memory
 Each subset is used to create a tree, resulting in several
trees
 These trees are examined and used to construct a new
tree T’
 It turns out that T’ is very close to the tree that would
be generated using the whole data set together
 Adv: requires only two scans of DB, an incremental alg.



Presentation of Classification Results

Visualization of a Decision Tree in SGI/MineSet 3.0

Interactive Visual Mining by Perception-Based Classification (PBC)


Chapter 6. Classification and Prediction


Bayesian Classification:
Why?
 A statistical classifier: performs probabilistic
prediction, i.e., predicts class membership
probabilities
 Foundation: Based on Bayes’ Theorem.
 Performance: A simple Bayesian classifier, naïve
Bayesian classifier, has comparable performance
with decision tree and selected neural network
classifiers
 Incremental: Each training example can
incrementally increase/decrease the probability
that a hypothesis is correct — prior knowledge can
be combined with observed data
 Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured
Bayesian Theorem: Basics
 Let X be a data sample (“evidence”): class label is
unknown
 Let H be a hypothesis that X belongs to class C
 Classification is to determine P(H|X), the probability
that the hypothesis holds given the observed data
sample X
 P(H) (prior probability), the initial probability
 E.g., X will buy computer, regardless of age,
income, …
 P(X): probability that sample data is observed
 P(X|H) (posterior probability of X conditioned on H): the probability of observing the sample X, given that the hypothesis H holds
Bayesian Theorem

 Given training data X, the posteriori probability of a hypothesis H, P(H|X), follows the Bayes theorem:

   P(H|X) = P(X|H) P(H) / P(X)

 Informally, this can be written as
   posteriori = likelihood × prior / evidence
 Predicts X belongs to Ci iff the probability P(Ci|X) is the highest among all the P(Ck|X) for all the k classes
 Practical difficulty: requires initial knowledge of many probabilities, and significant computational cost
Towards Naïve Bayesian Classifier

 Let D be a training set of tuples and their associated class labels; each tuple is represented by an n-D attribute vector X = (x1, x2, …, xn)
 Suppose there are m classes C1, C2, …, Cm
 Classification is to derive the maximum posteriori, i.e., the maximal P(Ci|X)
 This can be derived from Bayes’ theorem:

   P(Ci|X) = P(X|Ci) P(Ci) / P(X)

 Since P(X) is constant for all classes, only

   P(Ci|X) ∝ P(X|Ci) P(Ci)

   needs to be maximized
Derivation of Naïve Bayes Classifier

 A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes):

   P(X|Ci) = Π_{k=1..n} P(xk|Ci) = P(x1|Ci) × P(x2|Ci) × … × P(xn|Ci)

 This greatly reduces the computation cost: only count the class distribution
 If Ak is categorical, P(xk|Ci) is the # of tuples in Ci having value xk for Ak, divided by |Ci,D| (# of tuples of Ci in D)
 If Ak is continuous-valued, P(xk|Ci) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ:

   g(x, μ, σ) = (1 / (√(2π) σ)) e^(−(x−μ)² / (2σ²))

   and P(xk|Ci) = g(xk, μCi, σCi)
Naïve Bayesian Classifier: Training Dataset

Classes: C1: buys_computer = ‘yes’; C2: buys_computer = ‘no’

Data sample to classify:
X = (age <=30, income = medium, student = yes, credit_rating = fair)

(Training tuples: the 14-tuple buys_computer data shown earlier.)
Naïve Bayesian Classifier: An Example

 P(Ci): P(buys_computer = “yes”) = 9/14 = 0.643
        P(buys_computer = “no”) = 5/14 = 0.357
 Compute P(X|Ci) for each class:
   P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222
   P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6
   P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
   P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
   P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
   P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
   P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
   P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4
 X = (age <=30, income = medium, student = yes, credit_rating = fair)
   P(X|buys_computer = “yes”) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
   P(X|buys_computer = “no”) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
   P(X|buys_computer = “yes”) × P(buys_computer = “yes”) = 0.028
   P(X|buys_computer = “no”) × P(buys_computer = “no”) = 0.007
 Therefore, X belongs to class (“buys_computer = yes”)
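The whole worked example fits in a few lines of illustrative Python, reusing `rows` and `labels` from the information-gain sketch earlier:

```python
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Estimate P(Ci) and the counts behind P(xk|Ci) from categorical tuples."""
    priors = {c: n / len(labels) for c, n in Counter(labels).items()}
    cond = defaultdict(Counter)  # (attr_index, class) -> Counter of values
    for row, c in zip(rows, labels):
        for k, v in enumerate(row):
            cond[(k, c)][v] += 1
    return priors, cond, Counter(labels)

def posterior_scores(x, priors, cond, class_counts):
    """P(Ci) * prod_k P(xk|Ci); the class with the highest score wins."""
    scores = {}
    for c, prior in priors.items():
        p = prior
        for k, v in enumerate(x):
            p *= cond[(k, c)][v] / class_counts[c]
        scores[c] = p
    return scores

priors, cond, class_counts = train_naive_bayes(rows, labels)
x = ("<=30", "medium", "yes", "fair")
print(posterior_scores(x, priors, cond, class_counts))
# {'no': 0.0069, 'yes': 0.0282} -> predict buys_computer = yes, as on the slide
```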
Avoiding the 0-Probability Problem

 Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero:

   P(X|Ci) = Π_{k=1..n} P(xk|Ci)

 Ex. Suppose a dataset with 1000 tuples: income = low (0), income = medium (990), and income = high (10)
 Use Laplacian correction (or Laplacian estimator)
   Adding 1 to each case:
     Prob(income = low) = 1/1003
     Prob(income = medium) = 991/1003
     Prob(income = high) = 11/1003
   The “corrected” probability estimates are close to their “uncorrected” counterparts
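This is a one-line amendment to the conditional-probability estimate in the sketch above (the function and its parameters are hypothetical):

```python
def smoothed_cond_prob(cond, class_counts, k, c, v, n_values):
    """Laplacian correction: add 1 to each count and n_values to the denominator."""
    return (cond[(k, c)][v] + 1) / (class_counts[c] + n_values)

# e.g., with 0/990/10 out of 1000 tuples and 3 income values:
# (0+1)/(1000+3), (990+1)/(1000+3), (10+1)/(1000+3) -> 1/1003, 991/1003, 11/1003
```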



Naïve Bayesian Classifier: Comments

 Advantages
   Easy to implement
   Good results obtained in most of the cases
 Disadvantages
   Assumption: class conditional independence, therefore loss of accuracy
   Practically, dependencies exist among variables
     E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
     Dependencies among these cannot be modeled by the Naïve Bayesian Classifier
 How to deal with these dependencies? Bayesian Belief Networks
Bayesian Belief Networks

 A Bayesian belief network allows a subset of the variables to be conditionally independent
 A graphical model of causal relationships
   Represents dependency among the variables
   Gives a specification of the joint probability distribution
 Nodes: random variables; links: dependency
 Example (figure): X and Y are the parents of Z, and Y is the parent of P; there is no dependency between Z and P; the graph has no loops or cycles
Bayesian Belief Network: An Example

Network (figure): nodes FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, Dyspnea, with FamilyHistory and Smoker as parents of LungCancer.

The conditional probability table (CPT) for the variable LungCancer:

        (FH, S)  (FH, ~S)  (~FH, S)  (~FH, ~S)
  LC    0.8      0.5       0.7       0.1
  ~LC   0.2      0.5       0.3       0.9

 The CPT shows the conditional probability for each possible combination of its parents
 Derivation of the probability of a particular combination of values of X, from the CPT:

   P(x1, …, xn) = Π_{i=1..n} P(xi | Parents(Yi))
Training Bayesian Networks
 Several scenarios:
 Given both the network structure and all

variables observable: learn only the CPTs


 Network structure known, some hidden

variables: gradient descent (greedy hill-


climbing) method, analogous to neural network
learning
 Network structure unknown, all variables

observable: search through the model space to


reconstruct network topology
 Unknown structure, all hidden variables: No

good algorithms known for this purpose


 Ref.: D. Heckerman, Bayesian networks for data mining
Chapter 6. Classification and Prediction


Using IF-THEN Rules for Classification
 Represent the knowledge in the form of IF-THEN rules
R: IF age = youth AND student = yes THEN buys_computer = yes
 Rule antecedent/precondition vs. rule consequent
 Assessment of a rule: coverage and accuracy
 ncovers = # of tuples covered by R
 ncorrect = # of tuples correctly classified by R
coverage(R) = ncovers /|D| /* D: training data set */
accuracy(R) = ncorrect / ncovers
 If more than one rule is triggered, need conflict resolution
 Size ordering: assign the highest priority to the triggering rules
that has the “toughest” requirement (i.e., with the most attribute
test)
 Class-based ordering: decreasing order of prevalence or
misclassification cost per class
 Rule-based ordering (decision list): rules are organized into one long priority list, according to some measure of rule quality or by experts
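As a sketch, coverage(R) and accuracy(R) can be computed directly from their definitions; the rule encoding below is hypothetical (a predicate plus a predicted class), reusing `rows` and `labels` from the earlier sketches:

```python
def rule_coverage_accuracy(rule, rows, labels):
    """coverage(R) = n_covers/|D|; accuracy(R) = n_correct/n_covers."""
    condition, predicted = rule
    covered = [(row, y) for row, y in zip(rows, labels) if condition(row)]
    if not covered:
        return 0.0, 0.0
    correct = sum(1 for _, y in covered if y == predicted)
    return len(covered) / len(rows), correct / len(covered)

# R: IF age = "<=30" AND student = "yes" THEN buys_computer = "yes"
rule = (lambda row: row[0] == "<=30" and row[2] == "yes", "yes")
print(rule_coverage_accuracy(rule, rows, labels))  # (2/14, 2/2) on the 14-tuple data
```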
Rule Extraction from a Decision Tree

 Rules are easier to understand than large trees
 One rule is created for each path from the root to a leaf
 Each attribute-value pair along a path forms a conjunction: the leaf holds the class prediction
 Rules are mutually exclusive and exhaustive
 Example: rule extraction from our buys_computer decision tree:

   IF age = young AND student = no THEN buys_computer = no
   IF age = young AND student = yes THEN buys_computer = yes
   IF age = mid-age THEN buys_computer = yes
   IF age = old AND credit_rating = excellent THEN buys_computer = no
   IF age = old AND credit_rating = fair THEN buys_computer = yes
Rule Extraction from the Training
Data

 Sequential covering algorithm: Extracts rules directly from


training data
 Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER
 Rules are learned sequentially, each for a given class Ci will
cover many tuples of Ci but none (or few) of the tuples of other
classes
 Steps:
 Rules are learned one at a time
 Each time a rule is learned, the tuples covered by the rules
are removed
 The process repeats on the remaining tuples until a termination condition holds, e.g., when there are no more training examples or when the quality of a rule returned is below a user-specified threshold
Fig 6.12



Fig 6.13: A general-to-specific search through rule space

Fig 6.14: Choosing between two rules based on accuracy
(Here, rule R2 has higher accuracy than R1, but much less coverage.)

How to Learn-One-Rule?

 Start with the most general rule possible: condition = empty
 Add new attribute tests by adopting a greedy depth-first strategy
   Pick the one that most improves the rule quality
 Rule-quality measures: consider both coverage and accuracy
   Foil-gain (in FOIL & RIPPER): assesses the information gained by extending the condition:

     FOIL_Gain = pos′ × (log2(pos′/(pos′+neg′)) − log2(pos/(pos+neg)))

   It favors rules that have high accuracy and cover many positive tuples
 Rule pruning based on an independent set of test tuples:

     FOIL_Prune(R) = (pos − neg) / (pos + neg)

   pos/neg are the # of positive/negative tuples covered by R
Chapter 6. Classification and Prediction


Classification: A Mathematical Mapping

 Classification: predicts categorical class labels
   E.g., personal homepage classification
     xi = (x1, x2, x3, …), yi = +1 or –1
     x1: # of occurrences of the word “homepage”
     x2: # of occurrences of the word “welcome”
 Mathematically
   x ∈ X = ℝⁿ, y ∈ Y = {+1, –1}
   We want a function f: X → Y


Linear Classification

 Binary classification problem
 The data above the red line belongs to class ‘x’; the data below the red line belongs to class ‘o’
 Examples: SVM, Perceptron, Probabilistic Classifiers

(Figure: a 2-D scatter of ‘x’ and ‘o’ points separated by a line.)


Perceptron

 Notation: vectors x, w; scalars x, y, w components
 Input: {(x1, y1), …}
 Output: a classification function f(x) with
   f(xi) > 0 for yi = +1
   f(xi) < 0 for yi = –1
 Decision boundary: f(x) → w·x + b = 0, i.e., w1x1 + w2x2 + b = 0 in 2-D
 Update w only on misclassification
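A minimal sketch of the classic perceptron update rule, in Python (illustrative, not from the slides):

```python
def train_perceptron(samples, lr=1.0, epochs=100):
    """Update w and b only on misclassification, i.e., when y * (w.x + b) <= 0."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in samples:  # y is +1 or -1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:  # converged on linearly separable data
            break
    return w, b

# toy linearly separable data: y = +1 above the line x1 + x2 = 1
samples = [((0.0, 0.0), -1), ((1.0, 1.0), +1), ((0.2, 0.3), -1), ((0.9, 0.8), +1)]
w, b = train_perceptron(samples)
print(w, b)
```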


Classification by
Backpropagation

 Backpropagation: A neural network learning


algorithm
 Started by psychologists and neurobiologists to
develop and test computational analogues of
neurons
 A neural network: A set of connected input/output
units where each connection has a weight
associated with it
 During the learning phase, the network learns
by adjusting the weights so as to be able to
predict the correct class label of the input tuples
Neural Network as a Classifier
 Weakness
 Long training time
 Require a number of parameters typically best determined
empirically, e.g., the network topology or ``structure."
 Poor interpretability: Difficult to interpret the symbolic
meaning behind the learned weights and of ``hidden
units" in the network
 Strength
 High tolerance to noisy data
 Ability to classify untrained patterns
 Well-suited for continuous-valued inputs and outputs
 Successful on a wide array of real-world data
 Algorithms are inherently parallel
 Techniques have recently been developed for the
extraction of rules from trained neural networks
Multi-Layer Feed-Forward Backprop Network

 Material covered from Hassoun’s book (uploaded separately; the following material is from the textbook)
   Single-unit learning (Chapter 3)
   Multi-layer feed-forward network learning (error backpropagation) (Chapter 5)
 This is a general setting
 Match this algorithm with the one given in the textbook, for example:
   number of hidden layers
   activation function
   error function (batch learning vs. incremental learning)
   learning rate
   bias node or bias weight
A Neuron (= a Perceptron)

An n-dimensional input vector x is combined with a weight vector w and a bias −μk, passed through the weighted sum Σ and an activation function f, producing the output y. For example:

   y = sign(Σ_{i=0..n} wi xi − μk)

 The n-dimensional input vector x is mapped into variable y by means of the scalar product and a nonlinear function mapping
A Multi-Layer Feed-Forward Neural Network

Layers (bottom to top): input vector X → input layer → hidden layer (weights wij) → output layer → output vector.

   Ij = Σ_i wij Oi + θj                      (net input to unit j)
   Oj = 1 / (1 + e^(−Ij))                    (sigmoid output of unit j)
   Errj = Oj (1 − Oj)(Tj − Oj)               (error at an output unit)
   Errj = Oj (1 − Oj) Σ_k Errk wjk           (error at a hidden unit)
   wij = wij + (l) Errj Oi                   (weight update, learning rate l)
   θj = θj + (l) Errj                        (bias update)
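The equations above translate almost line-for-line into code. Here is a sketch of one incremental backpropagation step for a single-hidden-layer network (all names are illustrative; this follows the slide’s update rules, not any particular library):

```python
import math

def backprop_step(x, target, w_hidden, w_out, theta_h, theta_o, lr=0.9):
    """One incremental backprop step: forward pass, Err terms, then updates."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    # forward pass: O_j = sigmoid(I_j), I_j = sum_i w_ij O_i + theta_j
    o_h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + th)
           for ws, th in zip(w_hidden, theta_h)]
    o = sigmoid(sum(w * oh for w, oh in zip(w_out, o_h)) + theta_o)
    # backward pass: Err_j for the output unit, then for each hidden unit
    err_o = o * (1 - o) * (target - o)
    err_h = [oh * (1 - oh) * err_o * w for oh, w in zip(o_h, w_out)]
    # updates: w_ij += l * Err_j * O_i ; theta_j += l * Err_j
    w_out = [w + lr * err_o * oh for w, oh in zip(w_out, o_h)]
    theta_o += lr * err_o
    w_hidden = [[w + lr * eh * xi for w, xi in zip(ws, x)]
                for ws, eh in zip(w_hidden, err_h)]
    theta_h = [th + lr * eh for th, eh in zip(theta_h, err_h)]
    return w_hidden, w_out, theta_h, theta_o, o

# one step on a 2-input, 2-hidden-unit net with illustrative initial values
state = backprop_step([1.0, 0.0], 1.0,
                      [[0.2, -0.3], [0.4, 0.1]], [-0.3, 0.2], [-0.4, 0.2], 0.1)
```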
How A Multi-Layer Neural Network
Works?
 The inputs to the network correspond to the attributes
measured for each training tuple
 Inputs are fed simultaneously into the units making up the
input layer
 They are then weighted and fed simultaneously to a
hidden layer
 The number of hidden layers is arbitrary, although usually
only one
 The weighted outputs of the last hidden layer are input to
units making up the output layer, which emits the
network's prediction
 The network is feed-forward in that none of the weights cycles back to an input unit or to an output unit of a previous layer
Defining a Network Topology
 First decide the network topology: # of units in
the input layer, # of hidden layers (if > 1), # of
units in each hidden layer, and # of units in the
output layer
 Normalizing the input values for each attribute
measured in the training tuples to [0.0—1.0]
 One input unit per domain value, each initialized to
0
 Output, if for classification and more than two
classes, one output unit per class is used
 Once a network has been trained and its accuracy is unacceptable, repeat the training process with a different network topology or a different set of initial weights
Backpropagation
 Iteratively process a set of training tuples & compare the
network's prediction with the actual known target value
 For each training tuple, the weights are modified to minimize
the mean squared error between the network's prediction
and the actual target value
 Modifications are made in the “backwards” direction: from
the output layer, through each hidden layer down to the first
hidden layer, hence “backpropagation”
 Steps
 Initialize weights (to small random #s) and biases in the

network
 Propagate the inputs forward (by applying activation

function)
 Backpropagate the error (by updating weights and biases)



Fig 6.16



Backpropagation and
Interpretability
 Efficiency of backpropagation: each epoch (one iteration through the training set) takes O(|D| × w) time, with |D| tuples and w weights, but the # of epochs can be exponential in n, the number of inputs, in the worst case
 Rule extraction from networks: network pruning
 Simplify the network structure by removing weighted links
that have the least effect on the trained network
 Then perform link, unit, or activation value clustering
 The set of input and activation values are studied to derive
rules describing the relationship between the input and
hidden unit layers
 Sensitivity analysis: assess the impact that a given input
variable has on a network output. The knowledge gained
from this analysis can be represented in rules



Chapter 6. Classification and Prediction


SVM—Support Vector Machines
 A new classification method for both linear and
nonlinear data
 It uses a nonlinear mapping to transform the
original training data into a higher dimension
 With the new dimension, it searches for the linear
optimal separating hyperplane (i.e., “decision
boundary”)
 With an appropriate nonlinear mapping to a
sufficiently high dimension, data from two classes
can always be separated by a hyperplane
 SVM finds this hyperplane using support vectors (“essential” training tuples) and margins (defined by the support vectors)
SVM—History and Applications
 Vapnik and colleagues (1992)—groundwork from
Vapnik & Chervonenkis’ statistical learning theory
in 1960s
 Features: training can be slow but accuracy is high
owing to their ability to model complex nonlinear
decision boundaries (margin maximization)
 Used both for classification and prediction
 Applications:
 handwritten digit recognition, object
recognition, speaker identification,
benchmarking time-series prediction tests
SVM—General Philosophy

(Figure: two separating hyperplanes, one with a small margin and one with a large margin; the support vectors are the tuples that lie on the margin boundaries.)


SVM—Margins and Support
Vectors



SVM—When Data Is Linearly Separable

Let data D be (X1, y1), …, (X|D|, y|D|), where Xi is the set of training tuples associated with the class labels yi.
There are infinitely many lines (hyperplanes) separating the two classes, but we want to find the best one (the one that minimizes classification error on unseen data).
SVM searches for the hyperplane with the largest margin, i.e., the maximum marginal hyperplane (MMH).
SVM—Linearly Separable
 A separating hyperplane can be written as
W●X+b=0
where W={w1, w2, …, wn} is a weight vector and b a scalar
(bias)
 For 2-D it can be written as
w 0 + w 1 x1 + w 2 x2 = 0
 The hyperplane defining the sides of the margin:
H 1: w 0 + w 1 x1 + w 2 x2 ≥ 1 for yi = +1, and
H2: w0 + w1 x1 + w2 x2 ≤ – 1 for yi = –1
 Any training tuples that fall on hyperplanes H1 or H2 (i.e., the
sides defining the margin) are support vectors
 This becomes a constrained (convex) quadratic optimization problem: quadratic objective function and linear constraints → Quadratic Programming (QP) → Lagrangian multipliers
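Once W and b have been found by the QP solver, classifying a new tuple is just a matter of which side of the hyperplane it falls on. A minimal sketch (W and b assumed already trained; values below are hypothetical):

```python
def svm_predict(w, b, x):
    """Classify by the sign of w.x + b; tuples with |w.x + b| = 1 lie on
    the margin hyperplanes H1/H2 (the support vectors)."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return +1 if score >= 0 else -1

# hypothetical trained 2-D hyperplane w1*x1 + w2*x2 + b = 0 with w=(1,1), b=-1
print(svm_predict([1.0, 1.0], -1.0, [2.0, 2.0]))  # +1: above the hyperplane
print(svm_predict([1.0, 1.0], -1.0, [0.0, 0.0]))  # -1: below
```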
Why Is SVM Effective on High Dimensional
Data?

 The complexity of trained classifier is characterized by the #


of support vectors rather than the dimensionality of the data
 The support vectors are the essential or critical training
examples —they lie closest to the decision boundary (MMH)
 If all other training examples are removed and the training is
repeated, the same separating hyperplane would be found
 The number of support vectors found can be used to
compute an (upper) bound on the expected error rate of the
SVM classifier, which is independent of the data
dimensionality
 Thus, an SVM with a small number of support vectors can have good generalization, even when the dimensionality of the data is high
SVM—Linearly Inseparable

(Figure: data plotted on axes A1 and A2 that no straight line can separate.)

 Transform the original input data into a higher dimensional space
 Search for a linear separating hyperplane in the new space
SVM vs. Neural Network

 SVM
   Relatively new concept
   Deterministic algorithm
   Nice generalization properties
   Hard to learn – learned in batch mode
 Neural Network
   Relatively old concept
   Nondeterministic algorithm (both incremental and batch learning; local minima)
   Generalizes well (over-fitting with long training)
SVM Related Links

 SVM Website
 http://www.kernel-machines.org/
 Representative implementations
 LIBSVM: an efficient implementation of SVM, multi-class
classifications, nu-SVM, one-class SVM, including also
various interfaces with java, python, etc.
 SVM-light: simpler but performance is not better than
LIBSVM, support only binary classification and only C
language
 SVM-torch: another recent implementation, also written in C
SVM—Introduction
Literature
 “Statistical Learning Theory” by Vapnik: extremely hard to
understand, containing many errors too.
 C. J. C. Burges.
A Tutorial on Support Vector Machines for Pattern Recognition.
Knowledge Discovery and Data Mining, 2(2), 1998.
(available on the internet)
 Better than Vapnik’s book, but still too hard for an introduction, and the examples are not intuitive
 The book “An Introduction to Support Vector Machines” by N. Cristianini and J. Shawe-Taylor
   Also hard for an introduction, but the explanation of Mercer’s theorem is better than in the above references
 The neural network book by Haykin
   Contains one nice chapter on SVM introduction
Chapter 6. Classification and Prediction


Associative Classification

 Associative classification
   Association rules are generated and analyzed for use in classification
   Search for strong associations between frequent patterns (conjunctions of attribute-value pairs) and class labels
 Classification: based on evaluating a set of rules of the form
   p1 ∧ p2 ∧ … ∧ pl → “Aclass = C” (conf., sup.), for example,
   age <= 30 ∧ credit = fair → buys_computer = yes [support = 20%, confidence = 93%]
 Why effective?
   It explores highly confident associations among multiple attributes (decision-tree induction considers only one attribute at a time)
   Associative classification has been found to be more accurate than some traditional classification methods, such as C4.5


Associative Classification Methods

 CBA (Classification By Association)
   Mine association rules of the form: Cond-set (a set of attribute-value pairs) → class label
   Build classifier: organize rules according to decreasing precedence based on confidence and then support
   Multiple passes for longer rules (similar to the Apriori algorithm)
 CMAR (Classification based on Multiple Association Rules)
   A variant of the FP-growth algorithm (scans the dataset twice)
   Uses an enhanced FP-tree that maintains the distribution of class labels among tuples satisfying each frequent itemset
   Prunes rules with shorter antecedent and less confidence
Associative Classification May Achieve High Accuracy and Efficiency (Cong et al., SIGMOD’05)

Chapter 6. Classification and Prediction


Lazy Learning vs. Eager Learning

 Lazy learning (e.g., instance-based learning): simply stores the training data (or does only minor processing) and waits until it is given a test tuple
 Eager learning (the above-discussed methods): given a training set, constructs a classification model before receiving new (e.g., test) data to classify
 Lazy: less time in training but more time in predicting
 Eager: more time in training and less time in predicting
 Accuracy
   Lazy methods effectively use a richer hypothesis space, since they use many local functions to form an implicit global approximation to the target function
   Eager methods must commit to a single hypothesis that covers the entire instance space
Lazy Learner: Instance-Based Methods

 Instance-based learning:
   Store training examples and delay the processing (“lazy evaluation”) until a new instance must be classified
 Two examples
   k-nearest neighbor approach
     Instances represented as points in a Euclidean space (consider real- and discrete-valued attributes – normalized; also missing values; decide on k)
   Case-based reasoning
     Uses symbolic representations and knowledge-based inference (stores cases – heterogeneous data)
     Used for complex problems, e.g., treatment of a patient – find similar cases and synthesize a solution
The k-Nearest Neighbor Algorithm

 All instances correspond to points in the n-D space
 The nearest neighbors are defined in terms of Euclidean distance, dist(X1, X2)
 The target function could be discrete- or real-valued
 For discrete-valued functions, k-NN returns the most common value among the k training examples nearest to xq
 Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples (figure: the query point xq among ‘+’ and ‘–’ examples)
Discussion on the k-NN Algorithm

 k-NN for real-valued prediction for a given unknown tuple
   Returns the mean value of the k nearest neighbors
 Distance-weighted nearest neighbor algorithm
   Weight the contribution of each of the k neighbors according to their distance to the query xq:

     w ≡ 1 / d(xq, xi)²

   Give greater weight to closer neighbors
 Robust to noisy data by averaging the k nearest neighbors
 Curse of dimensionality: distance between neighbors can be dominated by irrelevant attributes
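A sketch of distance-weighted k-NN classification in Python (illustrative; the toy data are hypothetical):

```python
import math
from collections import Counter

def knn_classify(train, xq, k=3):
    """Distance-weighted k-NN over points in n-D Euclidean space:
    each of the k nearest neighbors votes with weight 1/d^2."""
    dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    neighbors = sorted(train, key=lambda t: dist(t[0], xq))[:k]
    votes = Counter()
    for x, y in neighbors:
        d = dist(x, xq)
        votes[y] += 1.0 / (d * d) if d > 0 else float("inf")
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "-"), ((0.2, 0.1), "-"), ((1.0, 1.0), "+"), ((0.9, 1.2), "+")]
print(knn_classify(train, (0.8, 0.9), k=3))  # "+"
```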
Case-Based Reasoning (CBR)
 CBR: Uses a database of problem solutions to solve new
problems
 Store symbolic description (tuples or cases)—not points in a
Euclidean space
 Applications: Customer-service (product-related diagnosis),
legal ruling
 Methodology
 Instances represented by rich symbolic descriptions (e.g.,
function graphs)
 Search for similar cases, multiple retrieved cases may be
combined
 Tight coupling between case retrieval, knowledge-based
reasoning, and problem solving
 Challenges
   Find a good similarity metric
   Indexing based on syntactic similarity measures and, on failure, backtracking and adapting to additional cases
Chapter 6. Classification and Prediction


Genetic Algorithms (GA)
 Genetic Algorithm: based on an analogy to biological evolution
 An initial population is created consisting of randomly
generated rules
 Each rule is represented by a string of bits
 E.g., if A1 and ¬A2 then C2 can be encoded as 100
 If an attribute has k > 2 values, k bits can be used
 Based on the notion of survival of the fittest, a new
population is formed to consist of the fittest rules and their
offsprings
 The fitness of a rule is represented by its classification
accuracy on a set of training examples
 Offsprings are generated by crossover and mutation
 The process continues until a population P evolves when each
rule in P satisfies a prespecified threshold
Rough Set Approach
 Rough sets are used to approximately or “roughly”
define equivalent classes
 A rough set for a given class C is approximated by two sets:
a lower approximation (certain to be in C) and an upper
approximation (cannot be described as not belonging to C)
 Finding the minimal subsets (reducts) of attributes for
feature reduction is NP-hard but a discernibility matrix
(which stores the differences between attribute values for
each pair of data tuples) is used to reduce the computation
intensity



Fuzzy Set Approaches

 Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as in a fuzzy membership graph)
 Attribute values are converted to fuzzy values
   e.g., income is mapped into the discrete categories {low, medium, high} with fuzzy values calculated
 For a given new sample, more than one fuzzy value may apply
 Each applicable rule contributes a vote for membership in the categories
 Typically, the truth values for each predicted category are summed, and these sums are combined
Fuzzy Reasoning: MATLAB
Demo
 >> ruleview mam21



Chapter 6. Classification and Prediction


What Is Prediction?
 (Numerical) prediction is similar to classification
 construct a model

 use model to predict continuous or ordered value for a

given input
 Prediction is different from classification
   Classification refers to predicting a categorical class label
   Prediction models continuous-valued functions
 Major method for prediction: regression
   model the relationship between one or more independent or predictor variables and a dependent or response variable
 Regression analysis
   Linear and multiple regression
   Non-linear regression
   Other regression methods: generalized linear model (a nonlinear model can be converted to a linear model using a link function), Poisson regression, log-linear models, regression trees
Linear Regression

 Linear regression: involves a response variable y and a single predictor variable x:

   y = w0 + w1 x

   where w0 (y-intercept) and w1 (slope) are regression coefficients
 Method of least squares: estimates the best-fitting straight line:

   w1 = Σ_{i=1..|D|} (xi − x̄)(yi − ȳ) / Σ_{i=1..|D|} (xi − x̄)²
   w0 = ȳ − w1 x̄

 Multiple linear regression: involves more than one predictor variable
   Training data is of the form (X1, y1), (X2, y2), …, (X|D|, y|D|)
   Ex. For 2-D data, we may have: y = w0 + w1 x1 + w2 x2
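The closed-form least-squares estimates translate directly into code. A sketch, using illustrative years-of-experience vs. salary data:

```python
def least_squares_fit(xs, ys):
    """Simple linear regression in closed form:
    w1 = sum((x-xbar)(y-ybar)) / sum((x-xbar)^2); w0 = ybar - w1*xbar."""
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    w1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
         sum((x - xbar) ** 2 for x in xs)
    w0 = ybar - w1 * xbar
    return w0, w1

# years of experience vs. salary (in $1000s)
xs = [3, 8, 9, 13, 3, 6, 11, 21, 1, 16]
ys = [30, 57, 64, 72, 36, 43, 59, 90, 20, 83]
w0, w1 = least_squares_fit(xs, ys)
print(round(w0, 1), round(w1, 1))  # ~23.2 and ~3.5: y = 23.2 + 3.5 x
```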
Fig 6.26: Scatter diagram

Nonlinear Regression

 Some nonlinear models can be modeled by a polynomial function
 A polynomial regression model can be transformed into a linear regression model. For example,
   y = w0 + w1 x + w2 x² + w3 x³
   is convertible to linear form with the new variables x2 = x², x3 = x³:
   y = w0 + w1 x + w2 x2 + w3 x3
 Other functions, such as the power function, can also be transformed to a linear model
 Some models are intractably nonlinear (e.g., a sum of exponential terms)
   It is still possible to obtain least-squares estimates through extensive calculation on more complex formulae
Predictive Modeling in Multidimensional
Databases
 Predictive modeling: Predict data values or
construct generalized linear models based on the
database data
 One can only predict value ranges or category
distributions
 Method outline:
 Minimal generalization
 Attribute relevance analysis
 Generalized linear model construction
 Prediction
 Determine the major factors which influence the
prediction
 Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.
Prediction: Numerical Data

Prediction: Categorical Data


Chapter 6. Classification and Prediction


Fig 6.28: Confusion matrix


Classifier Accuracy Measures

                Predicted C1      Predicted C2
  Actual C1     true positive     false negative
  Actual C2     false positive    true negative

  classes              buy_computer = yes  buy_computer = no  total  recognition (%)
  buy_computer = yes   6954                46                 7000   99.34
  buy_computer = no    412                 2588               3000   86.27
  total                7366                2634               10000  95.42

 Accuracy of a classifier M, acc(M): percentage of test set tuples that are correctly classified by the model M
 Error rate (misclassification rate) of M = 1 − acc(M)
 Given m classes, CMi,j, an entry in a confusion matrix, indicates the # of tuples in class i that are labeled by the classifier as class j
 Alternative accuracy measures (e.g., for cancer diagnosis):
   sensitivity = t-pos/pos        /* true positive recognition rate */
   specificity = t-neg/neg        /* true negative recognition rate */
   precision = t-pos/(t-pos + f-pos)
   accuracy = (t-pos + t-neg)/N
            = sensitivity × pos/(pos + neg) + specificity × neg/(pos + neg)
   (N.B. N = pos + neg = total number of examples)
 This model can also be used for cost-benefit analysis
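A sketch computing these measures from the four confusion-matrix counts, checked against the table above:

```python
def accuracy_measures(tp, fn, fp, tn):
    """Sensitivity, specificity, precision, and accuracy from the
    confusion-matrix counts (t-pos, f-neg, f-pos, t-neg)."""
    pos, neg = tp + fn, fp + tn
    return {
        "sensitivity": tp / pos,
        "specificity": tn / neg,
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (pos + neg),
    }

# the buys_computer confusion matrix from the slide
print(accuracy_measures(tp=6954, fn=46, fp=412, tn=2588))
# sensitivity 0.9934, specificity 0.8627, precision 0.9441, accuracy 0.9542
```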


Predictor Error Measures

 Measure predictor accuracy: measure how far off the predicted value is from the actual known value
 Loss function: measures the error between yi and the predicted value yi′
   Absolute error: |yi − yi′|
   Squared error: (yi − yi′)²
 Test error (generalization error): the average loss over the test set
   Mean absolute error:     Σ_{i=1..d} |yi − yi′| / d
   Mean squared error:      Σ_{i=1..d} (yi − yi′)² / d
   Relative absolute error: Σ_{i=1..d} |yi − yi′| / Σ_{i=1..d} |yi − ȳ|
   Relative squared error:  Σ_{i=1..d} (yi − yi′)² / Σ_{i=1..d} (yi − ȳ)²
 The mean squared error exaggerates the presence of outliers
 Popularly used: the (square) root mean squared error and, similarly, the root relative squared error
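These definitions in a short sketch (toy values are illustrative):

```python
import math

def error_measures(actual, predicted):
    """MAE, MSE, RMSE, and relative absolute/squared error, per the slide."""
    d = len(actual)
    ybar = sum(actual) / d
    abs_err = [abs(y - yp) for y, yp in zip(actual, predicted)]
    sq_err = [(y - yp) ** 2 for y, yp in zip(actual, predicted)]
    return {
        "MAE": sum(abs_err) / d,
        "MSE": sum(sq_err) / d,
        "RMSE": math.sqrt(sum(sq_err) / d),
        "rel_abs": sum(abs_err) / sum(abs(y - ybar) for y in actual),
        "rel_sq": sum(sq_err) / sum((y - ybar) ** 2 for y in actual),
    }

print(error_measures([30, 57, 64], [33, 55, 61]))  # toy values
```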
Evaluating the Accuracy of a Classifier or Predictor

 Holdout method
   Given data is randomly partitioned into two independent sets
     Training set (e.g., 2/3) for model construction
     Test set (e.g., 1/3) for accuracy estimation
   Random sampling: a variation of holdout
     Repeat holdout k times; accuracy = avg. of the accuracies obtained
 Cross-validation (k-fold, where k = 10 is most popular)
   Randomly partition the data into k mutually exclusive subsets, each of approximately equal size
   At the i-th iteration, use Di as the test set and the others as the training set
   Leave-one-out: k folds where k = # of tuples, for small-sized data
   Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as that in the initial data
Fig 6.29: Estimating accuracy with the holdout method


Chapter 6. Classification and Prediction


Ensemble Methods: Increasing the Accuracy

 Ensemble methods
   Use a combination of models to increase accuracy
   Combine a series of k learned models, M1, M2, …, Mk, with the aim of creating an improved model M*
 Popular ensemble methods
   Bagging: averaging the prediction over a collection of classifiers
   Boosting: weighted vote with a collection of classifiers
Bagging: Bootstrap Aggregation

 Analogy: diagnosis based on multiple doctors’ majority vote
 Training
   Given a set D of d tuples, at each iteration i, a training set Di of d tuples is sampled with replacement from D (i.e., a bootstrap sample)
   A classifier model Mi is learned for each training set Di
 Classification: classify an unknown sample X
   Each classifier Mi returns its class prediction
   The bagged classifier M* counts the votes and assigns the class with the most votes to X
 Prediction: can be applied to the prediction of continuous values by taking the average value of each prediction for a given test tuple
 Accuracy
   Often significantly better than a single classifier derived from D
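A minimal bagging sketch; `learner(train)` is a hypothetical function that trains any base classifier and returns a `predict(x)` callable:

```python
import random
from collections import Counter

def bagging(rows, labels, learner, n_models=25, seed=0):
    """Train n_models classifiers, each on a bootstrap sample of D
    (d tuples drawn with replacement), and majority-vote their predictions."""
    rng = random.Random(seed)
    d = len(rows)
    models = []
    for _ in range(n_models):
        sample = [rng.randrange(d) for _ in range(d)]  # with replacement
        models.append(learner([(rows[j], labels[j]) for j in sample]))

    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]  # class with the most votes

    return predict
```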
Boosting

 Analogy: consult several doctors, based on a combination of weighted diagnoses—weights assigned based on previous diagnosis accuracy
 How boosting works
   Weights are assigned to each training tuple
   A series of k classifiers is iteratively learned
   After a classifier Mi is learned, the weights are updated to allow the subsequent classifier, Mi+1, to pay more attention to the training tuples that were misclassified by Mi
   The final M* combines the votes of each individual classifier, where the weight of each classifier’s vote is a function of its accuracy
 The boosting algorithm can be extended for the prediction of continuous values
 Compared with bagging: boosting tends to achieve greater accuracy, but it also risks overfitting the model to misclassified data
Chapter 6. Classification and Prediction




Summary (I)
 Classification and prediction are two forms of data analysis
that can be used to extract models describing important
data classes or to predict future data trends.
 Effective and scalable methods have been developed for
decision trees induction, Naive Bayesian classification,
Bayesian belief network, rule-based classifier,
Backpropagation, Support Vector Machine (SVM), associative
classification, nearest neighbor classifiers, and case-based
reasoning, and other classification methods such as genetic
algorithms, rough set and fuzzy set approaches.
 Linear, nonlinear, and generalized linear models of regression can be used for prediction. Many nonlinear problems can be converted to linear problems by performing transformations on the predictor variables. Regression trees and model trees may also be used for prediction.
Summary (II)
 Stratified k-fold cross-validation is a recommended method for
accuracy estimation. Bagging and boosting can be used to
increase overall accuracy by learning and combining a series
of individual models.
 Significance tests and ROC curves are useful for model
selection
 There have been numerous comparisons of the different
classification and prediction methods, and the matter remains
a research topic
 No single method has been found to be superior over all
others for all data sets
 Issues such as accuracy, training time, robustness, interpretability, and scalability must be considered and can involve trade-offs, further complicating the quest for an overall superior method
References (1)
 C. Apte and S. Weiss. Data mining with decision trees and decision rules.
Future Generation Computer Systems, 13, 1997.
 C. M. Bishop, Neural Networks for Pattern Recognition. Oxford University
Press, 1995.
 L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and
Regression Trees. Wadsworth International Group, 1984.
 C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern
Recognition. Data Mining and Knowledge Discovery, 2(2): 121-168, 1998.
 P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from
partitioned data for scaling machine learning. KDD'95.
 W. Cohen. Fast effective rule induction. ICML'95.
 G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule
groups for gene expression data. SIGMOD'05.
 A. J. Dobson. An Introduction to Generalized Linear Models. Chapman
and Hall, 1990.
 G. Dong and J. Li. Efficient mining of emerging patterns: Discovering
trends and differences. KDD'99.



References (2)
 R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2ed. John Wiley
and Sons, 2001
 U. M. Fayyad. Branching on attribute values in decision tree generation.
AAAI’94.
 Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line
learning and an application to boosting. J. Computer and System
Sciences, 1997.
 J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast
decision tree construction of large datasets. VLDB’98.
 J. Gehrke, V. Gant, R. Ramakrishnan, and W.-Y. Loh, BOAT -- Optimistic
Decision Tree Construction. SIGMOD'99.
 T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical
Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001.
 D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian
networks: The combination of knowledge and statistical data. Machine
Learning, 1995.
 M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree induction: Efficient classification in data mining. RIDE'97.
 B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining. KDD'98.

References (3)
 T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy,
complexity, and training time of thirty-three old and new
classification algorithms. Machine Learning, 2000.
 J. Magidson. The Chaid approach to segmentation modeling: Chi-
squared automatic interaction detection. In R. P. Bagozzi, editor,
Advanced Methods of Marketing Research, Blackwell Business, 1994.
 M. Mehta, R. Agrawal, and J. Rissanen. SLIQ : A fast scalable classifier for
data mining. EDBT'96.
 T. M. Mitchell. Machine Learning. McGraw Hill, 1997.
 S. K. Murthy, Automatic Construction of Decision Trees from Data: A
Multi-Disciplinary Survey, Data Mining and Knowledge Discovery 2(4): 345-
389, 1998
 J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106,
1986.
 J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML’93.
 J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann,
1993.
References (4)
 R. Rastogi and K. Shim. Public: A decision tree classifier that integrates
building and pruning. VLDB’98.
 J. Shafer, R. Agrawal, and M. Mehta. SPRINT : A scalable parallel classifier
for data mining. VLDB’96.
 J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan
Kaufmann, 1990.
 P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison
Wesley, 2005.
 S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn:
Classification and Prediction Methods from Statistics, Neural Nets,
Machine Learning, and Expert Systems. Morgan Kaufman, 1991.
 S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann,
1997.
 I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools
and Techniques, 2ed. Morgan Kaufmann, 2005.
 X. Yin and J. Han. CPAR: Classification based on predictive association
rules. SDM'03
 H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with hierarchical clusters. KDD'03.
Thank You