
Chapter 8.

Classification:
Basic Concept
HUI-YIN CHANG (張彙音)
Supervised vs. Unsupervised Learning
Supervised learning (classification)
◦ Supervision: The training data (observations, measurements,
etc.) are accompanied by labels indicating the class of the
observations
◦ New data is classified based on the training set

Unsupervised learning (clustering)
◦ The class labels of the training data are unknown
◦ Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
Prediction Problems: Classification vs. Numeric Prediction
Classification
◦ predicts categorical class labels (discrete or nominal)
◦ classifies data (constructs a model) based on the training set
and the values (class labels) in a classifying attribute and uses
it in classifying new data
Numeric Prediction
◦ models continuous-valued functions, i.e., predicts unknown or
missing values
Typical applications
◦ Credit/loan approval
◦ Medical diagnosis: whether a tumor is cancerous or benign
◦ Fraud detection: whether a transaction is fraudulent
◦ Web page categorization: which category a page belongs to
Classification—A Two-Step Process
Model construction: describing a set of predetermined classes
◦ Each tuple/sample is assumed to belong to a predefined class, as determined
by the class label attribute
◦ The set of tuples used for model construction is the training set
◦ The model is represented as classification rules, decision trees, or
mathematical formulae
Model usage: for classifying future or unknown objects
◦ Estimate accuracy of the model
◦ The known label of test sample is compared with the classified result from
the model
◦ Accuracy rate is the percentage of test set samples that are correctly
classified by the model
◦ Test set is independent of training set (otherwise overfitting)
◦ If the accuracy is acceptable, use the model to classify new data
Note: If the test set is used to select models, it is called validation (test) set
Process (1): Model Construction
Training data:

NAME    RANK            YEARS   TENURED
Mike    Assistant Prof  3       no
Mary    Assistant Prof  7       yes
Bill    Professor       2       yes
Jim     Associate Prof  7       yes
Dave    Assistant Prof  6       no
Anne    Associate Prof  3       no

A classification algorithm is applied to the training data to produce the classifier (model), e.g.:

IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’
Process (2): Using the Model in Prediction

The classifier is applied to testing data (to estimate accuracy) and then to unseen data.

Testing data:

NAME     RANK            YEARS   TENURED
Tom      Assistant Prof  2       no
Merlisa  Associate Prof  7       no
George   Professor       5       yes
Joseph   Assistant Prof  7       yes

Unseen data: (Jeff, Professor, 4) → Tenured?
Decision Tree Induction: An Example
❑ Training data set: Buys_computer
❑ The data set follows an example of Quinlan’s ID3 (Playing Tennis)

age      income   student   credit_rating   buys_computer
<=30     high     no        fair            no
<=30     high     no        excellent       no
31…40    high     no        fair            yes
>40      medium   no        fair            yes
>40      low      yes       fair            yes
>40      low      yes       excellent       no
31…40    low      yes       excellent       yes
<=30     medium   no        fair            no
<=30     low      yes       fair            yes
>40      medium   yes       fair            yes
<=30     medium   yes       excellent       yes
31…40    medium   no        excellent       yes
31…40    high     yes       fair            yes
>40      medium   no        excellent       no

❑ Resulting tree:
age?
◦ <=30 → student? (no → buys_computer = no; yes → buys_computer = yes)
◦ 31..40 → buys_computer = yes
◦ >40 → credit_rating? (excellent → buys_computer = no; fair → buys_computer = yes)
Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
◦ Tree is constructed in a top-down recursive divide-and-conquer
manner
◦ At start, all the training examples are at the root
◦ Attributes are categorical (if continuous-valued, they are discretized
in advance)
◦ Examples are partitioned recursively based on selected attributes
◦ Test attributes are selected on the basis of a heuristic or statistical
measure (e.g., information gain)
Conditions for stopping partitioning
◦ All samples for a given node belong to the same class
◦ There are no remaining attributes for further partitioning –
majority voting is employed for classifying the leaf
◦ There are no samples left
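A minimal sketch of this greedy, top-down procedure (plain Python, no external libraries; the `best_attribute` argument is a placeholder for whichever selection measure is used, e.g., the information-gain computation shown later):

```python
from collections import Counter

def majority_class(rows, target):
    # Majority voting over the class-label column.
    return Counter(r[target] for r in rows).most_common(1)[0][0]

def build_tree(rows, attributes, target, best_attribute):
    labels = {r[target] for r in rows}
    # Stop: all samples at this node belong to the same class.
    if len(labels) == 1:
        return labels.pop()
    # Stop: no remaining attributes -> majority voting at the leaf.
    if not attributes:
        return majority_class(rows, target)
    # Select a test attribute with a heuristic measure (e.g., information gain).
    attr = best_attribute(rows, attributes, target)
    tree = {attr: {}}
    # Partition the examples recursively on the selected attribute.
    for value in {r[attr] for r in rows}:
        subset = [r for r in rows if r[attr] == value]
        remaining = [a for a in attributes if a != attr]
        tree[attr][value] = build_tree(subset, remaining, target, best_attribute)
    return tree
```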
Brief Review of Entropy

[Figure: entropy of a two-class distribution (m = 2) as a function of the class probability; it peaks at 1 bit when the two classes are equally likely.]
Attribute Selection Measure:
Information Gain (ID3/C4.5)
◼ Select the attribute with the highest information gain
◼ Let pi be the probability that an arbitrary tuple in D belongs to
class Ci, estimated by |Ci, D|/|D|
◼ Expected information (entropy) needed to classify a tuple in D:

Info(D) = − Σ_{i=1}^{m} p_i log2(p_i)

◼ Information needed (after using A to split D into v partitions) to classify D:

Info_A(D) = Σ_{j=1}^{v} (|D_j| / |D|) × Info(D_j)

◼ Information gained by branching on attribute A:

Gain(A) = Info(D) − Info_A(D)

(Training data: the 14-tuple buys_computer table shown earlier.)
Attribute Selection: Information Gain
Class P: buys_computer = “yes” (9 tuples); Class N: buys_computer = “no” (5 tuples)

Info(D) = I(9,5) = −(9/14) log2(9/14) − (5/14) log2(5/14) = 0.940

age     pi   ni   I(pi, ni)
<=30    2    3    0.971
31…40   4    0    0
>40     3    2    0.971

Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694

(5/14) I(2,3) means the branch “age <=30” has 5 of the 14 samples, with 2 yes and 3 no.

Hence Gain(age) = Info(D) − Info_age(D) = 0.246
(Training data: the buys_computer table shown earlier.)

Similarly,

Gain(income) = 0.029
Gain(student) = 0.151
Gain(credit_rating) = 0.048

Select the attribute with the maximum gain.
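As a check on these numbers, a small sketch that computes the gain of each attribute on the 14-tuple buys_computer table (plain Python; `data` is just the table above):

```python
from collections import Counter
from math import log2

# (age, income, student, credit_rating, buys_computer)
data = [
    ("<=30", "high", "no", "fair", "no"), ("<=30", "high", "no", "excellent", "no"),
    ("31...40", "high", "no", "fair", "yes"), (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"), (">40", "low", "yes", "excellent", "no"),
    ("31...40", "low", "yes", "excellent", "yes"), ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"), (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"), ("31...40", "medium", "no", "excellent", "yes"),
    ("31...40", "high", "yes", "fair", "yes"), (">40", "medium", "no", "excellent", "no"),
]

def info(labels):
    # Expected information (entropy): -sum p_i log2 p_i
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(col):
    labels = [row[-1] for row in data]
    values = {row[col] for row in data}
    # Info_A(D): weighted entropy of the partitions induced by the attribute.
    info_a = sum(
        len(part) / len(data) * info([r[-1] for r in part])
        for part in ([r for r in data if r[col] == v] for v in values)
    )
    return info(labels) - info_a

for name, col in [("age", 0), ("income", 1), ("student", 2), ("credit_rating", 3)]:
    print(name, round(gain(col), 3))  # age 0.246, income 0.029, student 0.151, credit_rating 0.048
```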
Select “age” as the first splitting node
What about the next level?
Splitting on age partitions the data as follows:

Branch age <=30 (5 tuples):
age     income   student   credit_rating   buys_computer
<=30    high     no        fair            no
<=30    high     no        excellent       no
<=30    medium   no        fair            no
<=30    low      yes       fair            yes
<=30    medium   yes       excellent       yes

Branch age 31..40 (4 tuples): all are buys_computer = yes, so this branch becomes a leaf.

Branch age >40 (5 tuples):
age     income   student   credit_rating   buys_computer
>40     medium   no        fair            yes
>40     low      yes       fair            yes
>40     low      yes       excellent       no
>40     medium   yes       fair            yes
>40     medium   no        excellent       no

Repeating the attribute selection within each impure branch yields student? under age <=30 and credit_rating? under age >40, giving the final tree shown earlier.
Computing Information-Gain for
Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute
Must determine the best split point for A
◦ Sort the values of A in increasing order
◦ Typically, the midpoint between each pair of adjacent values is considered as a possible split point
◦ (a_i + a_{i+1}) / 2 is the midpoint between the values a_i and a_{i+1}
◦ The point with the minimum expected information requirement for A is selected as the split-point for A
Split:
◦ D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the
set of tuples in D satisfying A > split-point
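A small sketch of this midpoint search (plain Python; `values` and `labels` are parallel lists holding attribute A and the class label, and the example numbers at the bottom are made up for illustration):

```python
from collections import Counter
from math import log2

def info(labels):
    # Expected information (entropy) of a list of class labels.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_split_point(values, labels):
    """Return (split_point, expected_info) minimizing Info_A(D) over all midpoints."""
    pairs = sorted(zip(values, labels))            # sort the values of A in increasing order
    best = (None, float("inf"))
    for (a_i, _), (a_next, _) in zip(pairs, pairs[1:]):
        if a_i == a_next:
            continue
        split = (a_i + a_next) / 2                 # midpoint between adjacent values
        left = [lab for v, lab in pairs if v <= split]
        right = [lab for v, lab in pairs if v > split]
        expected = (len(left) / len(pairs)) * info(left) + \
                   (len(right) / len(pairs)) * info(right)
        if expected < best[1]:
            best = (split, expected)
    return best

# Toy usage: hypothetical ages with a yes/no class label.
print(best_split_point([22, 25, 30, 35, 40, 46], ["no", "no", "yes", "yes", "yes", "no"]))
```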
Gain Ratio for Attribute Selection (C4.5)
Information gain measure is biased towards attributes with a large
number of values
◦ e.g., what is the information gain using user ID? (p.340)
C4.5 (a successor of ID3) uses gain ratio to overcome the problem
(normalization of information gain):

SplitInfo_A(D) = − Σ_{j=1}^{v} (|D_j| / |D|) log2(|D_j| / |D|)

◦ GainRatio(A) = Gain(A) / SplitInfo_A(D)

Ex.
◦ gain_ratio(income) = 0.029 / 1.557 = 0.019


The attribute with the maximum gain ratio is selected as the splitting
attribute
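A minimal sketch of the normalization; the partition sizes for income (4 high, 6 medium, 4 low) come from the training table, and 0.029 is the gain reported on the earlier slide:

```python
from math import log2

def split_info(partition_sizes):
    # SplitInfo_A(D) = -sum (|Dj|/|D|) log2(|Dj|/|D|)
    total = sum(partition_sizes)
    return -sum((s / total) * log2(s / total) for s in partition_sizes)

income_split = split_info([4, 6, 4])      # income partitions D into |high|=4, |medium|=6, |low|=4
print(income_split)                       # ~1.557

gain_income = 0.029                       # Gain(income) from the information-gain slide
print(gain_income / income_split)         # gain ratio ~0.019
```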
Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index, gini(D), is defined as

gini(D) = 1 − Σ_{i=1}^{n} p_i^2

where p_i is the relative frequency of class i in D. (Well suited to binary trees.)

If a data set D is split on A into two subsets D1 and D2, the gini index of the split is defined as

gini_A(D) = (|D1| / |D|) gini(D1) + (|D2| / |D|) gini(D2)

Reduction in impurity:

Δgini(A) = gini(D) − gini_A(D)

The attribute that provides the smallest gini_split(D) (or, equivalently, the largest reduction in impurity) is chosen to split the node (this requires enumerating all possible splitting points for each attribute).
Computation of Gini Index
Ex. D has 9 tuples with buys_computer = “yes” and 5 with “no”:

gini(D) = 1 − (9/14)^2 − (5/14)^2 = 0.459

Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 tuples in D2: {high}:

gini_{income ∈ {low,medium}}(D) = (10/14) gini(D1) + (4/14) gini(D2)

Gini_{low,high} is 0.458 and Gini_{medium,high} is 0.450, so we split on {low, medium} (and {high}) since it has the lowest Gini index.

All attributes are assumed continuous-valued
May need other tools, e.g., clustering, to get the possible split values
Can be modified for categorical attributes
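The same numbers can be reproduced with a short sketch (plain Python; the class counts 9/5, 7/3 and 2/2 are read straight off the buys_computer table):

```python
def gini(counts):
    # gini(D) = 1 - sum p_i^2, where counts are per-class tuple counts.
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(gini([9, 5]))                  # 0.459 for the full data set (9 yes, 5 no)

def gini_split(part1, part2):
    # Weighted Gini index of a binary split D1, D2.
    n = sum(part1) + sum(part2)
    return sum(part1) / n * gini(part1) + sum(part2) / n * gini(part2)

# income in {low, medium}: 10 tuples (7 yes, 3 no) vs income in {high}: 4 tuples (2 yes, 2 no)
print(gini_split([7, 3], [2, 2]))    # ~0.443
```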
Comparing Attribute Selection Measures

The three measures, in general, return good results but


◦ Information gain:
◦ biased towards multivalued attributes
◦ Gain ratio:
◦ tends to prefer unbalanced splits in which one partition is much smaller than the
others
◦ Gini index:
◦ biased to multivalued attributes
◦ has difficulty when # of classes is large
◦ tends to favor tests that result in equal-sized partitions and purity in both
partitions
Using IF-THEN Rules for Classification
Represent the knowledge in the form of IF-THEN rules
R: IF age = youth AND student = yes THEN buys_computer = yes
◦ Rule antecedent/precondition vs. rule consequent
Assessment of a rule: coverage and accuracy
◦ ncovers = # of tuples covered by R
◦ ncorrect = # of tuples correctly classified by R
coverage(R) = ncovers /|D| /* D: training data set */
accuracy(R) = ncorrect / ncovers
If more than one rule is triggered, conflict resolution is needed
◦ Size ordering: assign the highest priority to the triggering rule that has the “toughest” requirement (i.e., with the most attribute tests)
◦ Class-based ordering: decreasing order of prevalence or misclassification cost
per class
◦ Rule-based ordering (decision list): rules are organized into one long priority
list, according to some measure of rule quality or by experts
Rule Extraction from a Decision Tree
◼ Rules are easier to understand than large trees
◼ One rule is created for each path from the root to a leaf
◼ Each attribute-value pair along a path forms a conjunction; the leaf holds the class prediction
◼ Rules are mutually exclusive and exhaustive

[Figure: the buys_computer decision tree, with age? at the root and branches <=30 (student?), 31..40 (yes), and >40 (credit_rating?).]
Example: Rule extraction from our buys_computer decision-tree
IF age = young AND student = no THEN buys_computer = no
IF age = young AND student = yes THEN buys_computer = yes
IF age = mid-age THEN buys_computer = yes
IF age = old AND credit_rating = excellent THEN buys_computer = no
IF age = old AND credit_rating = fair THEN buys_computer = yes
Rule Induction: Sequential Covering Method
Sequential covering algorithm: Extracts rules directly from training data
Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER
Rules are learned sequentially; each rule for a given class Ci will cover many tuples of Ci but none (or few) of the tuples of other classes
Steps:
◦ Rules are learned one at a time
◦ Each time a rule is learned, the tuples covered by the rules are
removed
◦ Repeat the process on the remaining tuples until termination
condition, e.g., when no more training examples or when the quality
of a rule returned is below a user-specified threshold
Compared with decision-tree induction: decision-tree induction learns a set of rules simultaneously
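A toy sketch of the learn-one-rule-then-remove-covered-tuples loop (plain Python; rules here are single attribute tests over rows represented as dicts with a "class" key, whereas real systems such as RIPPER grow and prune multi-condition rules):

```python
def learn_one_rule(rows, target_class, attributes):
    """Greedily pick the single (attribute, value) test with the best accuracy for target_class."""
    best, best_acc = None, 0.0
    for attr in attributes:
        for value in {r[attr] for r in rows}:
            covered = [r for r in rows if r[attr] == value]
            correct = [r for r in covered if r["class"] == target_class]
            acc = len(correct) / len(covered)
            if acc > best_acc:
                best, best_acc = (attr, value), acc
    return best

def sequential_covering(rows, target_class, attributes, min_quality=0.6):
    rules, remaining = [], list(rows)
    # Learn rules one at a time while tuples of the target class remain.
    while any(r["class"] == target_class for r in remaining):
        rule = learn_one_rule(remaining, target_class, attributes)
        if rule is None:
            break
        attr, value = rule
        covered = [r for r in remaining if r[attr] == value]
        accuracy = sum(r["class"] == target_class for r in covered) / len(covered)
        if accuracy < min_quality:         # stop when rule quality falls below a threshold
            break
        rules.append(rule)
        remaining = [r for r in remaining if r[attr] != value]   # remove covered tuples
    return rules
```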
Other Attribute Selection Measures
CHAID: a popular decision tree algorithm, measure based on χ2 test for
independence

C-SEP: performs better than info. gain and gini index in certain cases

G-statistic: has a close approximation to χ2 distribution

MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred):
◦ The best tree as the one that requires the fewest # of bits to both (1) encode the
tree, and (2) encode the exceptions to the tree

Multivariate splits (partition based on multiple variable combinations)


◦ CART: finds multivariate splits based on a linear comb. of attrs.

Which attribute selection measure is the best?
◦ Most give good results; none is significantly superior to the others
Overfitting and Tree Pruning
Overfitting: An induced tree may overfit the training data
◦ Too many branches, some may reflect anomalies due to noise
or outliers
◦ Poor accuracy for unseen samples
Two approaches to avoid overfitting
◦ Prepruning: halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
◦ Difficult to choose an appropriate threshold

◦ Postpruning: Remove branches from a “fully grown” tree—get


a sequence of progressively pruned trees
◦ Use a set of data different from the training data to decide which is the “best pruned tree”
Enhancements to Basic Decision Tree Induction

Allow for continuous-valued attributes


◦ Dynamically define new discrete-valued attributes that partition
the continuous attribute value into a discrete set of intervals
Handle missing attribute values
◦ Assign the most common value of the attribute
◦ Assign probability to each of the possible values
Attribute construction
◦ Create new attributes based on existing ones that are sparsely
represented
◦ This reduces fragmentation, repetition, and replication
Classification in Large Databases
Classification—a classical problem extensively studied by
statisticians and machine learning researchers
Scalability: Classifying data sets with millions of examples and
hundreds of attributes with reasonable speed
Why is decision tree induction popular?
◦ relatively faster learning speed (than other classification
methods)
◦ convertible to simple and easy to understand classification rules
◦ can use SQL queries for accessing databases
◦ comparable classification accuracy with other methods
RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
◦ Builds an AVC-list (attribute, value, class label)
Scalability Framework for RainForest
Separates the scalability aspects from the criteria that
determine the quality of the tree

Builds an AVC-list: AVC (Attribute, Value, Class_label)

AVC-set (of an attribute X )


◦ Projection of training dataset onto the attribute X and class
label where counts of individual class label are aggregated

AVC-group (of a node n )


◦ Set of AVC-sets of all predictor attributes at the node n
Rainforest: Training Set and Its AVC Sets
Training examples: the buys_computer table shown earlier.

AVC-set on age (Buy_Computer counts):
age      yes   no
<=30     2     3
31..40   4     0
>40      3     2

AVC-set on income:
income   yes   no
high     2     2
medium   4     2
low      3     1

AVC-set on student:
student   yes   no
yes       6     1
no        3     4

AVC-set on credit_rating:
credit_rating   yes   no
fair            6     2
excellent       3     3
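An AVC-set is just an (attribute value × class label) count table, so it can be sketched with a counter (plain Python, reusing the `data` list from the information-gain snippet above):

```python
from collections import Counter

def avc_set(data, attr_index, class_index=-1):
    # Project the training set onto one attribute and the class label,
    # aggregating the counts of each class label per attribute value.
    counts = Counter((row[attr_index], row[class_index]) for row in data)
    values = {row[attr_index] for row in data}
    return {v: {"yes": counts[(v, "yes")], "no": counts[(v, "no")]} for v in values}

# e.g. avc_set(data, 0) -> {'<=30': {'yes': 2, 'no': 3}, '31...40': {'yes': 4, 'no': 0}, '>40': {'yes': 3, 'no': 2}}
```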
BOAT (Bootstrapped Optimistic Algorithm for
Tree Construction)
Use a statistical technique called bootstrapping to create
several smaller samples (subsets), each fits in memory

Each subset is used to create a tree, resulting in several trees

These trees are examined and used to construct a new tree T’


◦ It turns out that T’ is very close to the tree that would be
generated using the whole data set together

Advantage: requires only two scans of the DB; it is an incremental algorithm.

Presentation of Classification Results
Bayesian Classification: Why?
A statistical classifier: performs probabilistic prediction, i.e.,
predicts class membership probabilities
Foundation: Based on Bayes’ Theorem.
Performance: A simple Bayesian classifier, naïve Bayesian
classifier, has comparable performance with decision tree and
selected neural network classifiers
Incremental: Each training example can incrementally
increase/decrease the probability that a hypothesis is correct —
prior knowledge can be combined with observed data
Standard: Even when Bayesian methods are computationally
intractable, they can provide a standard of optimal decision
making against which other methods can be measured
Bayes’ Theorem
⚫ Bayes’ theorem is a theorem about the conditional probabilities of random events A and B:

P(A|B) = P(B|A) P(A) / P(B)

where A and B are random events and P(B) is not zero. P(A|B) is the probability that event A occurs given that event B has occurred.
In Bayes’ theorem, each term has a conventional name:
◦ P(A|B) is the conditional probability of A given that B has occurred, also called the posterior probability of A.
◦ P(A) is the prior (or marginal) probability of A; it does not take any information about B into account.
◦ P(B|A) is the conditional probability of B given that A has occurred, also called the posterior probability of B.

⚫ Bayes’ theorem is simply the conversion from “the conditional probability of B given that A has occurred” to “the conditional probability of A given that B has occurred.”
⚫ For example, to compute the probability that a person has lung cancer given that he smokes, P(lung cancer | smoking): the more lung-cancer patients who smoke, i.e., the larger P(smoking | lung cancer) is, the larger P(lung cancer | smoking) will be.
Source: https://zh.m.wikipedia.org/zh-tw/%E8%B4%9D%E5%8F%B6%E6%96%AF%E5%AE%9A%E7%90%86
Source: https://medium.com/qiubingcheng/%E5%AF%A6%E4%BD%9C%E5%96%AE%E7%B4%94%E8%B2%9D%E6%B0%8F%E5%88%86%E9%A1%9E%E5%99%A8-
%E4%B8%A6%E6%87%89%E7%94%A8%E6%96%BC%E5%9E%83%E5%9C%BE%E8%A8%8A%E6%81%AF%E5%88%86%E9%A1%9E-6b26834c4fd8
Classification Is to Derive the Maximum Posteriori
Let D be a training set of tuples and their associated class labels,
and each tuple is represented by an n-D attribute vector X = (x1,
x2, …, xn)
Suppose there are m classes C1, C2, …, Cm.
Classification is to derive the maximum posteriori, i.e., the maximal
P(Ci|X)
This can be derived from Bayes’ theorem:

P(C_i|X) = P(X|C_i) P(C_i) / P(X)

Since P(X) is constant for all classes, only

P(C_i|X) ∝ P(X|C_i) P(C_i)

needs to be maximized.
Naïve Bayes Classifier
A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes):

P(X|C_i) = Π_{k=1}^{n} P(x_k|C_i) = P(x_1|C_i) × P(x_2|C_i) × … × P(x_n|C_i)

This greatly reduces the computation cost: only the class distribution needs to be counted.
If A_k is categorical, P(x_k|C_i) is the number of tuples in C_i having value x_k for A_k, divided by |C_i,D| (the number of tuples of C_i in D).
If A_k is continuous-valued, P(x_k|C_i) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ:

g(x, μ, σ) = (1 / (√(2π) σ)) e^{−(x−μ)² / (2σ²)}

and P(x_k|C_i) = g(x_k, μ_{C_i}, σ_{C_i})
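For a continuous attribute, the class-conditional likelihood can be sketched directly from this Gaussian formula (plain Python; the `ages_in_class` list is a hypothetical set of training values of the attribute within one class):

```python
from math import exp, pi, sqrt
from statistics import mean, stdev

def gaussian(x, mu, sigma):
    # g(x, mu, sigma) = (1 / (sqrt(2*pi) * sigma)) * exp(-(x - mu)^2 / (2 * sigma^2))
    return (1.0 / (sqrt(2 * pi) * sigma)) * exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Hypothetical ages of the training tuples in class C_i = "buys_computer = yes"
ages_in_class = [25, 28, 32, 35, 38, 41, 45]
mu, sigma = mean(ages_in_class), stdev(ages_in_class)
print(gaussian(30, mu, sigma))   # P(age = 30 | buys_computer = yes) under the Gaussian assumption
```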
Naïve Bayes Classifier: Training Dataset
Classes:
C1: buys_computer = ‘yes’
C2: buys_computer = ‘no’

Data to be classified:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Training data: the buys_computer table shown earlier.
Naïve Bayes Classifier: An Example
P(C_i):
P(buys_computer = “yes”) = 9/14 = 0.643
P(buys_computer = “no”) = 5/14 = 0.357

Compute P(X|C_i) for each class:
P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222
P(age = “<=30” | buys_computer = “no”) = 3/5 = 0.6
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4

X = (age <= 30, income = medium, student = yes, credit_rating = fair)

P(X|C_i):
P(X | buys_computer = “yes”) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
P(X | buys_computer = “no”) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019

P(X|C_i) × P(C_i):
P(X | buys_computer = “yes”) × P(buys_computer = “yes”) = 0.028
P(X | buys_computer = “no”) × P(buys_computer = “no”) = 0.007

Therefore, X belongs to class “buys_computer = yes”.
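The whole example reduces to multiplying relative frequencies; a quick sketch with the counts read off the table:

```python
# Class priors
p_yes, p_no = 9 / 14, 5 / 14

# Class-conditional likelihoods for X = (age<=30, income=medium, student=yes, credit_rating=fair)
likelihood_yes = (2 / 9) * (4 / 9) * (6 / 9) * (6 / 9)   # ~0.044
likelihood_no = (3 / 5) * (2 / 5) * (1 / 5) * (2 / 5)    # ~0.019

print(likelihood_yes * p_yes)   # ~0.028
print(likelihood_no * p_no)     # ~0.007  -> predict buys_computer = "yes"
```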
Practice
X = (age = 31…40, income = medium, student = yes, credit_rating = fair)

P(age = “31…40” | buys_computer = “yes”) = 4/9
P(age = “31…40” | buys_computer = “no”) = 0/5
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4

P(X|C_i):
P(X | buys_computer = “yes”) = (4/9) × 0.444 × 0.667 × 0.667
P(X | buys_computer = “no”) = (0/5) × 0.4 × 0.2 × 0.4 = 0

P(X|C_i) × P(C_i):
P(X | buys_computer = “yes”) × P(buys_computer = “yes”)
P(X | buys_computer = “no”) × P(buys_computer = “no”) = 0 × 0.357 = 0
Avoiding the Zero-Probability Problem
Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero:

P(X|C_i) = Π_{k=1}^{n} P(x_k|C_i)

Ex. Suppose a dataset with 1000 tuples: income = low (0), income = medium (990), and income = high (10)
Use the Laplacian correction (or Laplacian estimator)
◦ Add 1 to each case:
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003
◦ The “corrected” probability estimates are close to their “uncorrected” counterparts
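A minimal sketch of the correction on the income example (add 1 to each value count; the denominator grows by the number of distinct values):

```python
def laplace_probs(counts):
    # Add 1 to each count; the denominator grows by the number of distinct values.
    total = sum(counts.values()) + len(counts)
    return {value: (c + 1) / total for value, c in counts.items()}

print(laplace_probs({"low": 0, "medium": 990, "high": 10}))
# {'low': 1/1003, 'medium': 991/1003, 'high': 11/1003}
```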
Naïve Bayes Classifier: Comments
Advantages
◦ Easy to implement
◦ Good results obtained in most of the cases
Disadvantages
◦ Assumption: class conditional independence, therefore loss of
accuracy
◦ Practically, dependencies exist among variables
◦ E.g., in hospitals, a patient’s profile (age, family history, etc.), symptoms (fever, cough, etc.), and disease (lung cancer, diabetes, etc.)
◦ Dependencies among these cannot be modeled by the naïve Bayes classifier

How to deal with these dependencies? Bayesian belief networks (Chapter 9)
Model Evaluation and Selection
Evaluation metrics: How can we measure accuracy? Other metrics to consider?
Use validation test set of class-labeled tuples instead of training set when
assessing accuracy
Methods for estimating a classifier’s accuracy:
◦ Holdout method, random subsampling
◦ Cross-validation
◦ Bootstrap
Comparing classifiers:
◦ Confidence intervals
◦ Cost-benefit analysis and ROC Curves

Classifier Evaluation Metrics: Confusion Matrix
Confusion matrix (rows: actual class, columns: predicted class):

Actual \ Predicted   C1                     ¬C1
C1                   True Positives (TP)    False Negatives (FN)
¬C1                  False Positives (FP)   True Negatives (TN)

Example confusion matrix:

Actual \ Predicted    buy_computer = yes   buy_computer = no   Total
buy_computer = yes    6954                 46                  7000
buy_computer = no     412                  2588                3000
Total                 7366                 2634                10000

Given m classes, an entry CM_{i,j} in a confusion matrix indicates the number of tuples in class i that were labeled by the classifier as class j.
May have extra rows/columns to provide totals.
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
A \ P   C     ¬C
C       TP    FN    P
¬C      FP    TN    N
        P’    N’    All

Classifier accuracy (recognition rate): percentage of test set tuples that are correctly classified
Accuracy = (TP + TN) / All
Error rate = 1 − accuracy = (FP + FN) / All

◼ Class imbalance problem:
◼ One class may be rare, e.g., fraud or HIV-positive
◼ Significant majority of the negative class and minority of the positive class
◼ Sensitivity: true positive recognition rate; Sensitivity = TP/P
◼ Specificity: true negative recognition rate; Specificity = TN/N
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
Actual \ Predicted   cancer = yes   cancer = no   Total
cancer = yes         90             210           300
cancer = no          140            9560          9700
Total                230            9770          10000

What are the accuracy, error rate, sensitivity, and specificity?
Accuracy = (TP + TN) / All
Error rate = (FP + FN) / All
Sensitivity = TP/P
Specificity = TN/N
Classifier Evaluation Metrics:
Precision and Recall, and F-measures
Precision: exactness – what % of the tuples that the classifier labeled as positive are actually positive?

Precision = TP / (TP + FP)

Recall: completeness – what % of the positive tuples did the classifier label as positive?

Recall = TP / (TP + FN) = TP / P

A perfect score is 1.0
There is an inverse relationship between precision and recall
F measure (F1 or F-score): harmonic mean of precision and recall,

F1 = 2 × precision × recall / (precision + recall)

F_β: weighted measure of precision and recall
◦ assigns β times as much weight to recall as to precision

F_β = (1 + β²) × precision × recall / (β² × precision + recall)
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
Actual \ Predicted   cancer = yes   cancer = no   Total   Recognition (%)
cancer = yes         90             210           300     30.00 (sensitivity)
cancer = no          140            9560          9700    98.56 (specificity)
Total                230            9770          10000   96.40 (accuracy)

What are the precision, recall, and F-score?
◦ Precision = 90/230 = 39.13%
◦ Recall = 90/300 = 30.00%
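These follow directly from the cancer confusion matrix (TP = 90, FN = 210, FP = 140, TN = 9560); a quick sketch:

```python
TP, FN, FP, TN = 90, 210, 140, 9560

precision = TP / (TP + FP)                                  # 90/230 ~ 0.3913
recall = TP / (TP + FN)                                     # 90/300 = 0.30
f1 = 2 * precision * recall / (precision + recall)          # harmonic mean ~ 0.34

print(round(precision, 4), round(recall, 4), round(f1, 4))
```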
Evaluating Classifier Accuracy
⚫ Dividing a dataset

⚫ Holdout
⚫ K-fold
⚫ Leave one out cross validation
⚫ Random subsampling
⚫ Bootstrap

Source: https://ithelp.ithome.com.tw/articles/10278851
Holdout cross validation

Advantages:
⚫ Simple to implement.
⚫ The validation set can be used to evaluate what the model has learned during training.
⚫ The test set can be used to evaluate the model’s generalization ability.

Disadvantages:
⚫ When the dataset has large variability, the validation and test sets may not be sufficient to evaluate the model.
⚫ Not suitable for imbalanced datasets.
Source: https://ithelp.ithome.com.tw/articles/10278851
K-fold cross validation

Advantages:
⚫ Reduces the bias of model training with respect to the dataset.
⚫ The training and validation data are fully utilized for learning.

Disadvantages:
⚫ Not suitable for imbalanced datasets.
⚫ Using plain K-fold to search for hyperparameters causes data leakage and biases the training results, because the same data are used for validation in every fold.
⚫ Computing the model’s error on the same validation set once the best hyperparameters have been found can introduce significant bias and raises overfitting concerns.
Source: https://ithelp.ithome.com.tw/articles/10278851
Leave-One-Out cross validation
⚫ Advantages:
Simple, easy to understand, and easy to implement.

⚫ Disadvantages:
Requires much more training time.

Source: https://ithelp.ithome.com.tw/articles/10278851
Bootstrapping cross validation

⚫ Uniform sampling with replacement from the given training set: every time a sample is selected, it is equally likely to be selected again and added to the training set once more. For example, if each training round samples ten examples, some of those ten records may well be drawn repeatedly. The records that are never drawn become the test set, used to evaluate the trained model.

Source: https://ithelp.ithome.com.tw/articles/10278851
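A sketch of the holdout, K-fold, and leave-one-out procedures described above, assuming scikit-learn is available (the breast-cancer dataset and the decision-tree classifier are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, KFold, cross_val_score, LeaveOneOut
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)        # placeholder data set
clf = DecisionTreeClassifier(random_state=0)

# Holdout: a single train/test split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
print(clf.fit(X_train, y_train).score(X_test, y_test))

# K-fold cross-validation: average accuracy over k = 10 folds.
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(scores.mean())

# Leave-one-out is K-fold with k = |D| (expensive on large data sets):
# loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
```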
Example: ROC curve

https://openai.com/blog/chatgpt/
Model Selection: ROC Curves
⚫ ROC: Receiver operating characteristic
⚫ Compares how the true positive rate (TPR) and the false positive rate (FPR) change under various decision thresholds.

A \ P   C     ¬C
C       TP    FN    P
¬C      FP    TN    N
        P’    N’    All

True positive rate = TP / (TP + FN) = TP / P
False positive rate = FP / (FP + TN) = FP / N

◼ The vertical axis represents the true positive rate
◼ The horizontal axis represents the false positive rate
◼ The plot also shows a diagonal line
◼ A model with perfect accuracy will have an area of 1.0

Source: https://ithelp.ithome.com.tw/articles/10229049
Model Selection: ROC Curves
Viewed from the two class distributions, adjusting the decision threshold changes TP (true positives) and FP (false positives) accordingly.

Source: https://ithelp.ithome.com.tw/articles/10229049

True positive rate = TP / (TP + FN) = TP / P
False positive rate = FP / (FP + TN) = FP / N

[Figure: a worked example ranking tuples by predicted probability and plotting the resulting (FPR, TPR) points.]

Steps:
1. Sort the predicted probabilities in descending order
2. Compute TP, FP, TN, and FN at each threshold
3. Compute TPR and FPR

- Youden’s index can be used to decide the best threshold
- Youden’s index = TPR − FPR = Sensitivity + Specificity − 1

Source: https://zhuanlan.zhihu.com/p/147919317
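A sketch of these three steps (plain Python; `scores` are predicted probabilities for the positive class and `labels` the true classes, with made-up example values):

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs obtained by sweeping the decision threshold."""
    P = sum(labels)                  # number of positive tuples
    N = len(labels) - P              # number of negative tuples
    # 1. Sort tuples by predicted probability in descending order.
    ranked = sorted(zip(scores, labels), reverse=True)
    points, tp, fp = [(0.0, 0.0)], 0, 0
    # 2./3. Lower the threshold one tuple at a time, updating TP/FP and hence TPR/FPR.
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / N, tp / P))
    return points

scores = [0.90, 0.80, 0.70, 0.55, 0.45, 0.40, 0.30, 0.20, 0.15, 0.10]
labels = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
for fpr, tpr in roc_points(scores, labels):
    print(fpr, tpr)   # Youden's index at each threshold is tpr - fpr
```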
Issues Affecting Model Selection
Accuracy
◦ classifier accuracy: predicting class label
Speed
◦ time to construct the model (training time)
◦ time to use the model (classification/prediction time)
Robustness: handling noise and missing values
Scalability: efficiency in disk-resident databases
Interpretability
◦ understanding and insight provided by the model
Other measures, e.g., goodness of rules, such as decision tree size or
compactness of classification rules

Thanks for Your Attention
Q&A
