Slide 07 Chapter8 Classification Basic Concept
Classification:
Basic Concept
HUI-YIN CHANG (張彙音)
Supervised vs. Unsupervised Learning
Supervised learning (classification)
◦ Supervision: The training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
◦ New data is classified based on the training set
(Figure: the trained classifier is applied first to testing data, then to unseen data.)

Testing data:
NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes

Unseen data: (Jeff, Professor, 4) → Tenured?
Decision Tree Induction: An Example
❑ Training data set: Buys_computer
❑ The data set follows an example of Quinlan's ID3 (Playing Tennis)

age      income   student  credit_rating  buys_computer
<=30     high     no       fair           no
<=30     high     no       excellent      no
31…40    high     no       fair           yes
>40      medium   no       fair           yes
>40      low      yes      fair           yes
>40      low      yes      excellent      no
31…40    low      yes      excellent      yes
<=30     medium   no       fair           no
<=30     low      yes      fair           yes
>40      medium   yes      fair           yes
<=30     medium   yes      excellent      yes
31…40    medium   no       excellent      yes
31…40    high     yes      fair           yes
>40      medium   no       excellent      no

❑ Resulting tree: age? at the root with branches <=30, 31…40, and >40; the <=30 branch splits on student? (no → no, yes → yes), the 31…40 branch is the leaf yes, and the >40 branch splits on credit_rating? (excellent → no, fair → yes)
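As a concrete illustration (a sketch of mine, not from the slides), the training table above can be fed to scikit-learn, assuming scikit-learn and pandas are installed; criterion="entropy" mirrors ID3's information-gain style of splitting.

```python
# Minimal sketch: fit a decision tree to the buys_computer table.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier, export_text

rows = [
    ("<=30",    "high",   "no",  "fair",      "no"),
    ("<=30",    "high",   "no",  "excellent", "no"),
    ("31...40", "high",   "no",  "fair",      "yes"),
    (">40",     "medium", "no",  "fair",      "yes"),
    (">40",     "low",    "yes", "fair",      "yes"),
    (">40",     "low",    "yes", "excellent", "no"),
    ("31...40", "low",    "yes", "excellent", "yes"),
    ("<=30",    "medium", "no",  "fair",      "no"),
    ("<=30",    "low",    "yes", "fair",      "yes"),
    (">40",     "medium", "yes", "fair",      "yes"),
    ("<=30",    "medium", "yes", "excellent", "yes"),
    ("31...40", "medium", "no",  "excellent", "yes"),
    ("31...40", "high",   "yes", "fair",      "yes"),
    (">40",     "medium", "no",  "excellent", "no"),
]
cols = ["age", "income", "student", "credit_rating", "buys_computer"]
df = pd.DataFrame(rows, columns=cols)

# Encode the categorical attributes as integers for scikit-learn
X = OrdinalEncoder().fit_transform(df[cols[:-1]])
y = df["buys_computer"]

# criterion="entropy" uses information gain, in the spirit of ID3
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=cols[:-1]))
```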
Attribute Selection Measure:
Information Gain (ID3/C4.5)
◼ Select the attribute with the highest information gain
◼ Let pi be the probability that an arbitrary tuple in D belongs to
class Ci, estimated by |Ci, D|/|D|
◼ Expected information (entropy) needed to classify a tuple in D:
  $Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)$
◼ Information needed (after using A to split D into v partitions) to classify D:
  $Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)$
◼ Information gained by branching on attribute A:
  $Gain(A) = Info(D) - Info_A(D)$
(Figure: the final buys_computer tree, with age? at the root, student? under the <=30 branch, a yes leaf under 31…40, and credit rating? under the >40 branch, shown alongside the >40 partition of the training table.)
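A small sketch (mine, not from the slides) that evaluates the formulas above on the buys_computer table; it reproduces Gain(age) ≈ 0.247, the largest gain of any attribute, which is why age is chosen as the root.

```python
# Compute Info(D), Info_A(D), and Gain(A) from (attribute value, class) pairs.
from collections import Counter
from math import log2

def info(labels):
    """Info(D) = -sum_i p_i * log2(p_i) over the class labels in D."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain(pairs):
    """Gain(A) = Info(D) - Info_A(D)."""
    labels = [c for _, c in pairs]
    n = len(labels)
    partitions = {}                      # the partitions D_j induced by A
    for v, c in pairs:
        partitions.setdefault(v, []).append(c)
    info_a = sum(len(dj) / n * info(dj) for dj in partitions.values())
    return info(labels) - info_a

age  = ["<=30", "<=30", "31...40", ">40", ">40", ">40", "31...40",
        "<=30", "<=30", ">40", "<=30", "31...40", "31...40", ">40"]
buys = ["no", "no", "yes", "yes", "yes", "no", "yes",
        "no", "yes", "yes", "yes", "yes", "yes", "no"]
print(round(gain(list(zip(age, buys))), 3))   # ~0.247: age is the best first split
```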
Computing Information-Gain for
Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute
Must determine the best split point for A
◦ Sort the values of A in increasing order
◦ Typically, the midpoint between each pair of adjacent values is
considered as a possible split point
◦ (ai+ai+1)/2 is the midpoint between the values of ai and ai+1
◦ The midpoint that minimizes the expected information requirement is chosen as the split point for A
Gain ratio (C4.5) normalizes information gain to reduce the bias toward many-valued attributes: GainRatio(A) = Gain(A)/SplitInfo(A), where $SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \log_2\left(\frac{|D_j|}{|D|}\right)$
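A tiny sketch (the values are made up, not from the slides) of enumerating the candidate split points as midpoints of adjacent sorted values:

```python
# Candidate split points for a continuous attribute A.
values = [60, 70, 75, 85, 90, 95]        # hypothetical values of A
v = sorted(set(values))
midpoints = [(a + b) / 2 for a, b in zip(v, v[1:])]
print(midpoints)                          # [65.0, 72.5, 80.0, 87.5, 92.5]
```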
Gini Index (CART)
If a data set D contains examples from n classes, the gini index gini(D) is defined as
$gini(D) = 1 - \sum_{i=1}^{n} p_i^2$
where $p_i$ is the relative frequency of class i in D (well suited to binary trees)
If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as
$gini_A(D) = \frac{|D_1|}{|D|}\, gini(D_1) + \frac{|D_2|}{|D|}\, gini(D_2)$
Reduction in impurity:
$\Delta gini(A) = gini(D) - gini_A(D)$
The attribute that provides the smallest $gini_{split}(D)$ (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)
Computation of Gini Index
Ex. D has 9 tuples in buys_computer = "yes" and 5 in "no":
$gini(D) = 1 - \left(\frac{9}{14}\right)^2 - \left(\frac{5}{14}\right)^2 = 0.459$
Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2: {high}:
$gini_{income \in \{low,\,medium\}}(D) = \frac{10}{14}\, gini(D_1) + \frac{4}{14}\, gini(D_2)$
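A quick sketch (not from the slides) checking these numbers; the class counts inside D1 and D2 are read off the 14-tuple training table above.

```python
# gini(D) = 1 - sum_i p_i^2, computed from raw class counts.
def gini(counts):
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

print(round(gini([9, 5]), 3))                      # 0.459 = gini(D)
# D1 = {low, medium}: 10 tuples (7 yes, 3 no); D2 = {high}: 4 tuples (2 yes, 2 no)
g_split = 10 / 14 * gini([7, 3]) + 4 / 14 * gini([2, 2])
print(round(g_split, 3))                           # 0.443
```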
Rule Extraction from a Decision Tree
◼ One rule is created for each path from the root to a leaf
◼ Each attribute-value pair along a path forms a conjunction: the leaf holds the class prediction
◼ Rules are mutually exclusive and exhaustive
(Figure: the buys_computer tree: age? with branches <=30 → student?, 31..40 → yes, and >40 → credit rating?)
Example: Rule extraction from our buys_computer decision-tree
IF age = young AND student = no THEN buys_computer = no
IF age = young AND student = yes THEN buys_computer = yes
IF age = mid-age THEN buys_computer = yes
IF age = old AND credit_rating = excellent THEN buys_computer = no
IF age = old AND credit_rating = fair THEN buys_computer = yes
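For illustration only (my sketch, not part of the slides), the five rules translate directly into a small Python function; note that income never appears, since no path of the tree tests it.

```python
# The extracted buys_computer rules as straight-line code.
def buys_computer(age, student, credit_rating):
    if age == "young":
        return "yes" if student == "yes" else "no"
    if age == "mid-age":
        return "yes"
    # age == "old": decide on credit_rating
    return "no" if credit_rating == "excellent" else "yes"

print(buys_computer("young", "yes", "fair"))    # yes
print(buys_computer("old", "no", "excellent"))  # no
```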
Rule Induction: Sequential Covering Method
Sequential covering algorithm: Extracts rules directly from training data
Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER
Rules are learned sequentially; each rule for a given class Ci should cover many tuples of Ci but none (or few) of the tuples of other classes
Steps:
◦ Rules are learned one at a time
◦ Each time a rule is learned, the tuples covered by the rule are removed
◦ Repeat the process on the remaining tuples until a termination condition holds, e.g., no more training examples remain or the quality of a returned rule falls below a user-specified threshold
Contrast with decision-tree induction, which learns a set of rules simultaneously (the loop is sketched below)
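A bare skeleton of the loop (an assumption-laden sketch, not FOIL, AQ, CN2, or RIPPER themselves); learn_one_rule() and covers() are hypothetical helpers standing in for the rule-growing and rule-matching steps that each of those algorithms defines in its own way.

```python
# Sequential covering: learn rules one at a time, removing covered tuples.
def sequential_covering(examples, target_class, min_quality=0.8):
    rules = []
    remaining = list(examples)
    while any(e["class"] == target_class for e in remaining):
        rule = learn_one_rule(remaining, target_class)  # hypothetical helper
        if rule.quality < min_quality:  # stop when quality drops below threshold
            break
        rules.append(rule)
        # Remove the tuples the new rule covers, then repeat on the rest
        remaining = [e for e in remaining
                     if not covers(rule, e)]            # hypothetical helper
    return rules
```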
Other Attribute Selection Measures
CHAID: a popular decision tree algorithm, measure based on χ2 test for
independence
C-SEP: performs better than info. gain and gini index in certain cases
MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred):
◦ The best tree as the one that requires the fewest # of bits to both (1) encode the
tree, and (2) encode the exceptions to the tree
Presentation of Classification Results
Bayesian Classification: Why?
A statistical classifier: performs probabilistic prediction, i.e.,
predicts class membership probabilities
Foundation: Based on Bayes’ Theorem.
Performance: A simple Bayesian classifier, naïve Bayesian
classifier, has comparable performance with decision tree and
selected neural network classifiers
Incremental: Each training example can incrementally
increase/decrease the probability that a hypothesis is correct —
prior knowledge can be combined with observed data
Standard: Even when Bayesian methods are computationally
intractable, they can provide a standard of optimal decision
making against which other methods can be measured
Bayes' Theorem
⚫ Bayes' theorem is a theorem about the conditional probabilities of two random events A and B:
$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}$
where A and B are random events and P(B) is non-zero. P(A|B) is the probability that event A occurs given that event B has occurred.
Each term in Bayes' theorem has a conventional name:
P(A|B) is the conditional probability of A given that B has occurred; it is also called the posterior probability of A.
P(A) is the prior (or marginal) probability of A; it does not take any information about B into account.
P(B|A) is the conditional probability of B given that A has occurred; it can also be called the posterior probability of B.
⚫ Bayes' theorem is simply the conversion between "the conditional probability of B given that A has occurred" and "the conditional probability of A given that B has occurred".
⚫ For example, to compute the probability that a person has lung cancer given that they smoke, P(lung cancer | smoking): the more lung-cancer patients who also smoke, i.e., the larger P(smoking | lung cancer) is, the larger P(lung cancer | smoking) will be.
Source: https://zh.m.wikipedia.org/zh-tw/%E8%B4%9D%E5%8F%B6%E6%96%AF%E5%AE%9A%E7%90%86
Source: https://medium.com/qiubingcheng/%E5%AF%A6%E4%BD%9C%E5%96%AE%E7%B4%94%E8%B2%9D%E6%B0%8F%E5%88%86%E9%A1%9E%E5%99%A8-
%E4%B8%A6%E6%87%89%E7%94%A8%E6%96%BC%E5%9E%83%E5%9C%BE%E8%A8%8A%E6%81%AF%E5%88%86%E9%A1%9E-6b26834c4fd8
Classification Is to Derive the Maximum Posteriori
Let D be a training set of tuples and their associated class labels,
and each tuple is represented by an n-D attribute vector X = (x1,
x2, …, xn)
Suppose there are m classes C1, C2, …, Cm.
Classification is to derive the maximum posteriori, i.e., the maximal
P(Ci|X)
This can be derived from Bayes' theorem:
$P(C_i|X) = \frac{P(X|C_i)\,P(C_i)}{P(X)}$
Since P(X) is the same for all classes, only $P(X|C_i)\,P(C_i)$ needs to be maximized. The naïve assumption of class-conditional independence gives $P(X|C_i) = \prod_{k=1}^{n} P(x_k|C_i)$; for a continuous-valued attribute, $P(x_k|C_i)$ is typically computed from a Gaussian density, $P(x_k|C_i) = g(x_k, \mu_{C_i}, \sigma_{C_i})$ with $g(x, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
Naïve Bayes Classifier: Training Dataset
Class:
C1: buys_computer = 'yes'
C2: buys_computer = 'no'
Training data set: the 14-tuple buys_computer table shown earlier (age, income, student, credit_rating, buys_computer)
Data to be classified:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
Naïve Bayes Classifier: An Example
P(Ci): P(buys_computer = "yes") = 9/14 = 0.643
P(buys_computer = "no") = 5/14 = 0.357
Compute P(X|Ci) for each class:
P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
P(X|Ci): P(X|buys_computer = "yes") = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
P(X|buys_computer = "no") = 0.6 x 0.4 x 0.2 x 0.4 = 0.019
P(X|Ci)*P(Ci): P(X|buys_computer = "yes") * P(buys_computer = "yes") = 0.028
P(X|buys_computer = "no") * P(buys_computer = "no") = 0.007
Therefore, X belongs to class ("buys_computer = yes")
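The same computation as a short Python sketch (mine, not from the slides); the conditional probabilities are exactly the counts listed above.

```python
# Naïve Bayes by hand: multiply the per-attribute likelihoods, then the prior.
priors = {"yes": 9 / 14, "no": 5 / 14}
cond = {  # P(attribute value | class), counted from the training table
    "yes": {"age<=30": 2/9, "income=medium": 4/9, "student=yes": 6/9, "credit=fair": 6/9},
    "no":  {"age<=30": 3/5, "income=medium": 2/5, "student=yes": 1/5, "credit=fair": 2/5},
}
for c in ("yes", "no"):
    likelihood = 1.0
    for p in cond[c].values():
        likelihood *= p      # naïve class-conditional independence assumption
    print(c, round(likelihood, 3), round(likelihood * priors[c], 3))
# yes 0.044 0.028   /   no 0.019 0.007   ->  predict buys_computer = "yes"
```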
Practice
X = (age = 30-40, income = medium, student = yes, credit_rating = fair)
P(age = “30-40” | buys_computer = “yes”) = 4/9
P(age = “30-40” | buys_computer = “no”) = 0/5
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4
Note that P(age = “30-40” | buys_computer = “no”) = 0 forces P(X|no) to 0; in practice a Laplacian correction (adding 1 to each count) avoids such zero probabilities.
Classifier Evaluation Metrics: Confusion Matrix
Confusion Matrix:
Actual class\Predicted class C1 ¬ C1
C1 True Positives (TP) False Negatives (FN)
¬ C1 False Positives (FP) True Negatives (TN)
◼ Accuracy (recognition rate) = (TP + TN)/All
◼ Error rate = (FP + FN)/All
◼ Sensitivity (true positive recognition rate) = TP/P
◼ Specificity (true negative recognition rate) = TN/N
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
Actual Class\Predicted class cancer = yes cancer = no Total
cancer = yes 90 210 300
cancer = no 140 9560 9700
Total 230 9770 10000
Classifier Evaluation Metrics: Accuracy,
Error Rate, Sensitivity and Specificity
Actual Class\Predicted class cancer = yes cancer = no Total Recognition(%)
cancer = yes 90 210 300 30.00 (sensitivity)
cancer = no 140 9560 9700 98.56 (specificity)
Total 230 9770 10000 96.50 (accuracy)
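A sketch (not from the slides) computing the three metrics from the cancer confusion matrix above:

```python
# Metrics from the confusion matrix: sensitivity, specificity, accuracy.
TP, FN = 90, 210      # actual cancer = yes row
FP, TN = 140, 9560    # actual cancer = no row
P, N = TP + FN, FP + TN
print(f"sensitivity = {TP / P:.2%}")               # 30.00%
print(f"specificity = {TN / N:.2%}")               # 98.56%
print(f"accuracy    = {(TP + TN) / (P + N):.2%}")  # 96.50%
```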
⚫ Holdout
⚫ K-fold
⚫ Leave one out cross validation
⚫ Random subsampling
⚫ Bootstrap
Source: https://ithelp.ithome.com.tw/articles/10278851
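A sketch (not from the slides) of these evaluation schemes via scikit-learn's model_selection API, assuming scikit-learn is installed and using the bundled iris data as a stand-in dataset; bootstrap resampling has no one-line helper there, so it is left out.

```python
# Holdout, K-fold, leave-one-out, and random subsampling with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import (KFold, LeaveOneOut, ShuffleSplit,
                                     cross_val_score, train_test_split)
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)

# Holdout: a single train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print("holdout:", clf.fit(X_tr, y_tr).score(X_te, y_te))

# K-fold, leave-one-out, and random subsampling
for cv in (KFold(n_splits=10, shuffle=True, random_state=0),
           LeaveOneOut(),
           ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)):
    print(type(cv).__name__, cross_val_score(clf, X, y, cv=cv).mean())
```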
Holdout cross validation
Advantages:
⚫ Simple to implement.
⚫ The validation set can be used to assess what the model learns during training.
⚫ The test set can be used to assess the model's generalization ability.
Disadvantages:
⚫ When the data set has high variance, the validation and test sets may not be sufficient to evaluate the model.
⚫ Not suitable for imbalanced data sets.
Source: https://ithelp.ithome.com.tw/articles/10278851
K-fold cross validation
Advantages:
⚫ Reduces the bias of model training toward any particular data split.
⚫ The training and validation data are fully utilized for learning.
Disadvantages:
⚫ Not suitable for imbalanced data sets.
⚫ Using plain K-fold to search for hyperparameters leaks data and biases the training results, because the same folds are reused for validation in every round.
⚫ Computing the model's error on the same validation sets on which the best hyperparameters were selected can introduce substantial bias and raises overfitting concerns.
Source: https://ithelp.ithome.com.tw/articles/10278851
Leave-One-Out cross validation
⚫ Advantages: simple, easy to understand, and easy to implement.
⚫ Disadvantages: requires much more training time.
Source: https://ithelp.ithome.com.tw/articles/10278851
Bootstrapping cross validation
⚫ Uniform sampling with replacement from the given training set: each time a sample is selected, it is equally likely to be selected again and added to the training set once more. For instance, if each training round samples ten examples, some of those ten may well be drawn more than once; the examples that are never drawn form the test set, used to evaluate the trained model.
Source: https://ithelp.ithome.com.tw/articles/10278851
Example: ROC curve
https://openai.com/blog/chatgpt/
Model Selection: ROC Curves
⚫ ROC: Receiver Operating Characteristic curve
⚫ Compares how the true positive rate (TPR) and the false positive rate (FPR) vary against each other under different decision thresholds.
Source: https://ithelp.ithome.com.tw/articles/10229049
Model Selection: ROC Curves
Viewed in terms of the two class distributions: as the decision threshold moves, TP (true positives) and FP (false positives) change accordingly.
Source: https://ithelp.ithome.com.tw/articles/10229049
True positive rate = TP/(TP + FN) = TP/P
False positive rate = FP/(FP + TN) = FP/N

Actual\Predicted   C    ¬C   Total
C                  TP   FN   P
¬C                 FP   TN   N
Total              P'   N'   All
(Figure: a worked table of sorted prediction scores with the TPR/FPR point computed at each threshold, traced out as the ROC curve.)
Step:
1. Sort the predicted probabilities in descending order
2. Compute TP, FP, TN, and FN
3. Compute TPR and FPR
Source: https://zhuanlan.zhihu.com/p/147919317
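The three steps above as a NumPy sketch (the scores and labels are made up for illustration, not taken from the source):

```python
# Trace the ROC curve: sort by score, then accumulate TP/P and FP/N.
import numpy as np

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.54, 0.53, 0.51, 0.4, 0.3])
labels = np.array([1,   1,   0,   1,   1,    0,    1,    0,    1,   0])

order = np.argsort(-scores)          # 1. sort scores in descending order
labels = labels[order]
P, N = labels.sum(), (labels == 0).sum()
tpr = np.cumsum(labels) / P          # 2-3. TP/P after each threshold
fpr = np.cumsum(labels == 0) / N     #      FP/N after each threshold
for x, y in zip(fpr, tpr):
    print(f"FPR={x:.2f}  TPR={y:.2f}")   # the points tracing the ROC curve
```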
Issues Affecting Model Selection
Accuracy
◦ classifier accuracy: predicting class label
Speed
◦ time to construct the model (training time)
◦ time to use the model (classification/prediction time)
Robustness: handling noise and missing values
Scalability: efficiency in disk-resident databases
Interpretability
◦ understanding and insight provided by the model
Other measures, e.g., goodness of rules, such as decision tree size or
compactness of classification rules
Thanks for Your Attention
Q&A