
Classification & Prediction



Classification and Prediction

 What is classification? What is prediction?


 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by back propagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Classification vs. Prediction
 Classification:
  Predicts categorical class labels
  Constructs a model based on the training set and the values (class labels) of a classifying attribute, and uses it to classify new data
 Prediction:
  Models continuous-valued functions, i.e., predicts unknown or missing values
 Typical applications:
  Credit approval
  Target marketing
  Medical diagnosis
  Treatment effectiveness analysis


Classification—A Two-Step Process

 Model construction: describing a set of predetermined classes
  Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
  The set of tuples used for model construction: the training set
  The model is represented as classification rules, decision trees, or mathematical formulae
 Model usage: classifying future or unknown objects
  Estimate the accuracy of the model
   The known label of each test sample is compared with the classified result from the model
   Accuracy rate is the percentage of test set samples that are correctly classified by the model
   The test set is independent of the training set; otherwise over-fitting will occur
Classification Process (1): Model Construction

Training data are fed to a classification algorithm, which produces the classifier (model).

NAME   RANK            YEARS   TENURED
Mike   Assistant Prof  3       no
Mary   Assistant Prof  7       yes
Bill   Professor       2       yes
Jim    Associate Prof  7       yes
Dave   Assistant Prof  6       no
Anne   Associate Prof  3       no

Learned model: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
Classification Process (2): Use the Model in Prediction

The classifier is applied to the testing data and then to unseen data.

NAME     RANK            YEARS   TENURED
Tom      Assistant Prof  2       no
Merlisa  Associate Prof  7       no
George   Professor       5       yes
Joseph   Assistant Prof  7       yes

Unseen data: (Jeff, Professor, 4). Tenured? By the learned rule (rank = 'professor'), the classifier predicts 'yes'.
Supervised vs. Unsupervised Learning

 Supervised learning (Classification)


 Supervision: The training data (observations,
measurements, etc.) are accompanied by labels
indicating the class of the observations
 New data is classified based on the training set
 Unsupervised learning (Clustering)
 The class labels of the training data are unknown
 Given a set of measurements, observations, etc. with
the aim of establishing the existence of classes or
clusters in the data
Classification and Prediction

 What is classification? What is prediction?


 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Issues regarding classification and
prediction (1): Data Preparation

 Data cleaning
 Preprocess data in order to reduce noise and handle
missing values
 Relevance analysis (feature selection)
 Remove the irrelevant or redundant attributes
 Data transformation
 Generalize and/or normalize data



Issues regarding classification and prediction
(2): Evaluating Classification Methods

 Predictive accuracy
 Speed and scalability
 time to construct the model

 time to use the model

 Robustness
 handling noise and missing values

 Scalability
 efficiency in disk-resident databases

 Interpretability:
 understanding and insight provided by the model

 Goodness of rules
 decision tree size

 compactness of classification rules



Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Classification by Decision Tree
Induction
 Decision tree
 A flow-chart-like tree structure

 Internal node denotes a test on an attribute

 Branch represents an outcome of the test

 Leaf nodes represent class labels or class distribution

 Decision tree generation consists of two phases


 Tree construction

 At start, all the training examples are at the root

 Partition examples recursively based on selected attributes

 Tree pruning

 Identify and remove branches that reflect noise or outliers

 Use of decision tree: Classifying an unknown sample


 Test the attribute values of the sample against the decision tree



Decision Tree Example

For example, you may visit a doctor and your doctor may ask you to
describe your symptoms. You respond by saying you have a stuffy nose.
In trying to diagnose your condition the doctor may ask you further
questions such as whether you are suffering from extreme exhaustion.
Answering yes would suggest you have the flu, whereas answering no would
suggest you have a cold. This line of questioning is common to many
decision-making processes and can be shown visually as a decision tree.

Decision tree for the diagnosis of cold and flu


Decision Tree Example

Decision tree generated from a data set of cars



Reasons for using Decision Tree

There are many reasons to use decision trees:

 Easy to understand: Decision trees are widely used to explain how decisions are reached based on multiple criteria.
 Categorical and continuous variables: Decision trees can be generated using either categorical data or continuous data.
 Complex relationships: A decision tree can partition a data set into distinct regions based on ranges or specific values.



Disadvantages of Decision Tree

 Computationally expensive: Building decision trees can be computationally expensive, particularly when analyzing a large data set with many continuous variables.
 Difficult to optimize: Generating a useful decision tree automatically can be challenging, since large and complex trees are easily generated, while trees that are too small may not capture enough information. Generating the 'best' tree through optimization is difficult.


Training Dataset

This follows an example from Quinlan's ID3.

age     income   student   credit_rating
<=30    high     no        fair
<=30    high     no        excellent
31…40   high     no        fair
>40     medium   no        fair
>40     low      yes       fair
>40     low      yes       excellent
31…40   low      yes       excellent
<=30    medium   no        fair
<=30    low      yes       fair
>40     medium   yes       fair
<=30    medium   yes       excellent
31…40   medium   no        excellent
31…40   high     yes       fair
>40     medium   no        excellent
Output: A Decision Tree for “buys_computer”

age?
  <=30: student?
    no: no
    yes: yes
  31…40: yes
  >40: credit rating?
    excellent: no
    fair: yes


Algorithm for Decision Tree Induction
 Basic algorithm (a greedy algorithm)
  The tree is constructed in a top-down, recursive, divide-and-conquer manner
  At the start, all the training examples are at the root
  Attributes are categorical (continuous-valued attributes are discretized in advance)
  Examples are partitioned recursively based on selected attributes
  Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
 Conditions for stopping partitioning
  All samples for a given node belong to the same class
  There are no remaining attributes for further partitioning (majority voting is employed to classify the leaf)
  There are no samples left
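To make the procedure concrete, here is a minimal Python sketch of this greedy top-down induction, assuming categorical attributes; the names (build_tree, choose_attribute) and the nested-dict tree representation are illustrative, not from the original slides.

```python
from collections import Counter

def build_tree(examples, attributes, choose_attribute):
    """Greedy top-down decision tree induction (ID3-style sketch).

    examples: list of (features_dict, label); attributes: list of names;
    choose_attribute: heuristic, e.g. information gain (see a later slide).
    """
    labels = [label for _, label in examples]
    # Stop: all samples at this node belong to the same class
    if len(set(labels)) == 1:
        return labels[0]
    # Stop: no remaining attributes -> majority voting
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    best = choose_attribute(examples, attributes)
    node = {"attribute": best, "branches": {}}
    remaining = [a for a in attributes if a != best]
    for value in {f[best] for f, _ in examples}:
        subset = [(f, y) for f, y in examples if f[best] == value]
        # "No samples left" is handled by iterating observed values only
        node["branches"][value] = build_tree(subset, remaining, choose_attribute)
    return node
```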


Attribute Selection Measure

 Information gain (ID3/C4.5)
  All attributes are assumed to be categorical
  Can be modified for continuous-valued attributes
 Gini index (IBM IntelligentMiner)
  All attributes are assumed to be continuous-valued
  Assume there exist several possible split values for each attribute
  May need other tools, such as clustering, to get the possible split values
  Can be modified for categorical attributes


Information Gain (ID3/C4.5)

 Select the attribute with the highest information gain


 Assume there are two classes, P and N
 Let the set of examples S contain p elements of class P
and n elements of class N
 The amount of information, needed to decide if an
arbitrary example in S belongs to P or N is defined as

    I(p, n) = -\frac{p}{p+n} \log_2 \frac{p}{p+n} - \frac{n}{p+n} \log_2 \frac{n}{p+n}


Class-labeled training tuples from ALLelectronics
customer database
RID   age          income   student   credit_rating   class: buys_computer
1     youth        high     no        fair            no
2     youth        high     no        excellent       no
3     middle_aged  high     no        fair            yes
4     senior       medium   no        fair            yes
5     senior       low      yes       fair            yes
6     senior       low      yes       excellent       no
7     middle_aged  low      yes       excellent       yes
8     youth        medium   no        fair            no
9     youth        low      yes       fair            yes
10    senior       medium   yes       fair            yes
11    youth        medium   yes       excellent       yes
12    middle_aged  medium   no        excellent       yes
13    middle_aged  high     yes       fair            yes
14    senior       medium   no        excellent       no
Information Gain in Decision
Tree Induction

 Assume that using attribute A, a set S will be partitioned into sets {S_1, S_2, …, S_v}
 If S_i contains p_i examples of P and n_i examples of N, the entropy, or the expected information needed to classify objects in all subtrees S_i, is

    E(A) = \sum_{i=1}^{v} \frac{p_i + n_i}{p + n} I(p_i, n_i)

 The encoding information that would be gained by branching on A is

    Gain(A) = I(p, n) - E(A)


Attribute Selection by Information
Gain Computation
 Class P: buys_computer = "yes"
 Class N: buys_computer = "no"
 I(p, n) = I(9, 5) = 0.940
 Compute the entropy for age:

    age     p_i   n_i   I(p_i, n_i)
    <=30    2     3     0.971
    31…40   4     0     0
    >40     3     2     0.971

    E(age) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694

 Hence

    Gain(age) = I(p, n) - E(age) = 0.246

 Similarly,

    Gain(income) = 0.029
    Gain(student) = 0.151
    Gain(credit_rating) = 0.048
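The gain values above can be checked with a short Python sketch; the rows list below encodes the (age, buys_computer) pairs from the training set, and the code is illustrative rather than taken from the text.

```python
from math import log2

def info(counts):
    """I(p, n, ...): expected information for a class distribution."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c)

# (age, class) pairs from the buys_computer training set above
rows = [("<=30", "no"), ("<=30", "no"), ("31...40", "yes"), (">40", "yes"),
        (">40", "yes"), (">40", "no"), ("31...40", "yes"), ("<=30", "no"),
        ("<=30", "yes"), (">40", "yes"), ("<=30", "yes"), ("31...40", "yes"),
        ("31...40", "yes"), (">40", "no")]

i_pn = info([9, 5])                      # I(9, 5) = 0.940
e_age = 0.0
for v in {age for age, _ in rows}:
    subset = [c for age, c in rows if age == v]
    p, n = subset.count("yes"), subset.count("no")
    e_age += (p + n) / len(rows) * info([p, n])
print(round(i_pn - e_age, 3))            # Gain(age) = 0.246
```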


Gini Index (IBM IntelligentMiner)
 If a data set T contains examples from n classes, the gini index gini(T) is defined as

    gini(T) = 1 - \sum_{j=1}^{n} p_j^2

 where p_j is the relative frequency of class j in T.
 If a data set T is split into two subsets T_1 and T_2 with sizes N_1 and N_2 respectively, the gini index of the split data is defined as

    gini_{split}(T) = \frac{N_1}{N} gini(T_1) + \frac{N_2}{N} gini(T_2)

 The attribute that provides the smallest gini_{split}(T) is chosen to split the node (this requires enumerating all possible splitting points for each attribute).
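A small sketch of both formulas in Python; the example split (age <= 30 vs. the rest) uses the buys_computer class labels from the training set above, and the function names are illustrative.

```python
def gini(labels):
    """gini(T) = 1 - sum_j p_j^2 over class relative frequencies."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_split(left, right):
    """Weighted gini of a binary split into subsets T1 and T2."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

# Example: the 14 buys_computer labels split on age <= 30 vs. the rest
left  = ["no", "no", "no", "yes", "yes"]      # age <= 30
right = ["yes"] * 7 + ["no"] * 2              # age 31...40 or > 40
print(gini_split(left, right))                # smaller is better
```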
Extracting Classification Rules from Trees

 Represent the knowledge in the form of IF-THEN rules


 One rule is created for each path from the root to a leaf
 Each attribute-value pair along a path forms a conjunction
 The leaf node holds the class prediction
 Rules are easier for humans to understand
 Example:
    IF age = "<=30" AND student = "no" THEN buys_computer = "no"
    IF age = "<=30" AND student = "yes" THEN buys_computer = "yes"
    IF age = "31…40" THEN buys_computer = "yes"
    IF age = ">40" AND credit_rating = "excellent" THEN buys_computer = "no"
    IF age = ">40" AND credit_rating = "fair" THEN buys_computer = "yes"

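One rule per root-to-leaf path can be produced with a short recursive walk. This sketch assumes the nested-dict tree representation used in the earlier build_tree sketch; the function name extract_rules is illustrative.

```python
def extract_rules(node, conditions=()):
    """One rule per root-to-leaf path; conditions along a path conjoin."""
    if not isinstance(node, dict):            # leaf: holds the class prediction
        body = " AND ".join(f'{a} = "{v}"' for a, v in conditions) or "TRUE"
        return [f'IF {body} THEN class = "{node}"']
    rules = []
    for value, child in node["branches"].items():
        rules += extract_rules(child, conditions + ((node["attribute"], value),))
    return rules

tree = {"attribute": "age", "branches": {
    "<=30": {"attribute": "student", "branches": {"no": "no", "yes": "yes"}},
    "31...40": "yes",
    ">40": {"attribute": "credit_rating",
            "branches": {"excellent": "no", "fair": "yes"}}}}
for rule in extract_rules(tree):
    print(rule)
```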


Avoid Overfitting in Classification
 The generated tree may overfit the training data
  Too many branches, some of which may reflect anomalies due to noise or outliers
  The result is poor accuracy for unseen samples
 Two approaches to avoid overfitting
  Prepruning: Halt tree construction early. Do not split a node if this would result in the goodness measure falling below a threshold
   Difficult to choose an appropriate threshold
  Postpruning: Remove branches from a "fully grown" tree to obtain a sequence of progressively pruned trees
   Use a set of data different from the training data to decide which is the "best pruned tree"
Approaches to Determine the Final
Tree Size
 Separate training (2/3) and testing (1/3) sets
 Use cross validation, e.g., 10-fold cross validation
 Use all the data for training
 but apply a statistical test (e.g., chi-square) to
estimate whether expanding or pruning a node
may improve the entire distribution
 Use minimum description length (MDL) principle:
 halting growth of the tree when the encoding is
minimized
Enhancements to basic decision
tree induction
 Allow for continuous-valued attributes
  Dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals
 Handle missing attribute values
  Assign the most common value of the attribute
  Assign a probability to each of the possible values
 Attribute construction
  Create new attributes based on existing ones that are sparsely represented
  This reduces fragmentation, repetition, and replication


Classification in Large Databases

 Classification: a classical problem extensively studied by statisticians and machine learning researchers
 Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed
 Why decision tree induction in data mining?
  Relatively faster learning speed than other classification methods
  Convertible to simple and easy-to-understand classification rules
  Can use SQL queries for accessing databases
  Classification accuracy comparable with other methods


Scalable Decision Tree Induction
Methods in Data Mining Studies
 SLIQ (EDBT’96 — Mehta et al.)
  Builds an index for each attribute; only the class list and the current attribute list reside in memory
 SPRINT (VLDB’96 — J. Shafer et al.)
  Constructs an attribute-list data structure
 PUBLIC (VLDB’98 — Rastogi & Shim)
  Integrates tree splitting and tree pruning: stops growing the tree earlier
 RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
  Separates the scalability aspects from the criteria that determine the quality of the tree
  Builds an AVC-list (attribute, value, class label)


Data Cube-Based Decision-Tree
Induction

 Integration of generalization with decision-tree induction (Kamber et al.'97)
 Classification at primitive concept levels
  E.g., precise temperature, humidity, outlook, etc.
  Low-level concepts, scattered classes, bushy classification trees
  Semantic interpretation problems
 Cube-based multi-level classification
  Relevance analysis at multiple levels
  Information-gain analysis with dimension + level


Presentation of Classification Results



Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Bayesian Classification: Why?

 Probabilistic learning: Calculates explicit probabilities for hypotheses; among the most practical approaches to certain types of learning problems
 Incremental: Each training example can incrementally increase or decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
 Probabilistic prediction: Predicts multiple hypotheses, weighted by their probabilities
 Standard: Even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured


Bayesian Theorem

 Given training data D, the posterior probability of a hypothesis h, P(h|D), follows Bayes' theorem:

    P(h|D) = \frac{P(D|h) P(h)}{P(D)}

 MAP (maximum a posteriori) hypothesis:

    h_{MAP} = \arg\max_{h \in H} P(h|D) = \arg\max_{h \in H} P(D|h) P(h)

 Practical difficulty: requires initial knowledge of many probabilities and incurs significant computational cost
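A tiny numeric illustration of selecting the MAP hypothesis; the priors and likelihoods below are invented for the example.

```python
# Hypothetical priors P(h) and likelihoods P(D|h) for three hypotheses
priors      = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
likelihoods = {"h1": 0.2, "h2": 0.6, "h3": 0.4}

# h_MAP = argmax_h P(D|h) P(h); P(D) is a common factor and can be ignored
h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])
print(h_map)  # "h2": 0.6 * 0.3 = 0.18 beats 0.10 and 0.08
```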


Bayesian classification
 The classification problem may be formalized using a posteriori probabilities:
  P(C|X) = probability that the sample tuple X = <x_1, …, x_k> is of class C
  E.g., P(class = N | outlook = sunny, windy = true, …)
 Idea: assign to sample X the class label C such that P(C|X) is maximal


Estimating a-posteriori probabilities

 Bayes' theorem:
    P(C|X) = P(X|C) · P(C) / P(X)
 P(X) is constant for all classes
 P(C) = relative frequency of class C samples
 Choose C such that P(C|X) is maximum, i.e., C such that P(X|C) · P(C) is maximum
 Problem: computing P(X|C) directly is infeasible!
Naïve Bayesian Classification

 Naïve assumption: attribute independence
    P(x_1, …, x_k | C) = P(x_1|C) · … · P(x_k|C)
 If the i-th attribute is categorical: P(x_i|C) is estimated as the relative frequency of samples having value x_i as the i-th attribute in class C
 If the i-th attribute is continuous: P(x_i|C) is estimated through a Gaussian density function
 Computationally easy in both cases


Play-tennis example: estimating P(x_i | C)

Outlook    Temperature   Humidity   Windy   Class
sunny      hot           high       false   N
sunny      hot           high       true    N
overcast   hot           high       false   P
rain       mild          high       false   P
rain       cool          normal     false   P
rain       cool          normal     true    N
overcast   cool          normal     true    P
sunny      mild          high       false   N
sunny      cool          normal     false   P
rain       mild          normal     false   P
sunny      mild          normal     true    P
overcast   mild          high       true    P
overcast   hot           normal     false   P
rain       mild          high       true    N

P(p) = 9/14, P(n) = 5/14

outlook:      P(sunny|p) = 2/9      P(sunny|n) = 3/5
              P(overcast|p) = 4/9   P(overcast|n) = 0
              P(rain|p) = 3/9       P(rain|n) = 2/5
temperature:  P(hot|p) = 2/9        P(hot|n) = 2/5
              P(mild|p) = 4/9       P(mild|n) = 2/5
              P(cool|p) = 3/9       P(cool|n) = 1/5
humidity:     P(high|p) = 3/9       P(high|n) = 4/5
              P(normal|p) = 6/9     P(normal|n) = 2/5
windy:        P(true|p) = 3/9       P(true|n) = 3/5
              P(false|p) = 6/9      P(false|n) = 2/5
Play-tennis example: classifying X
 An unseen sample X = <rain, hot, high, false>

 P(X|p)·P(p) =
P(rain|p)·P(hot|p)·P(high|p)·P(false|p)·P(p) =
3/9·2/9·3/9·6/9·9/14 = 0.010582
 P(X|n)·P(n) =
P(rain|n)·P(hot|n)·P(high|n)·P(false|n)·P(n) =
2/5·2/5·4/5·2/5·5/14 = 0.018286

 Sample X is classified in class n (don’t play)

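These two products follow directly from the frequency estimates on the previous slide; the sketch below hard-codes those fractions (it is an illustration, not code from the text).

```python
from functools import reduce

# Conditional probabilities for X = <rain, hot, high, false>, from the table
p_given_p = [3/9, 2/9, 3/9, 6/9]   # rain, hot, high, windy=false | class p
p_given_n = [2/5, 2/5, 4/5, 2/5]   # rain, hot, high, windy=false | class n

score_p = reduce(lambda a, b: a * b, p_given_p) * 9/14   # P(X|p) P(p)
score_n = reduce(lambda a, b: a * b, p_given_n) * 5/14   # P(X|n) P(n)
print(round(score_p, 6), round(score_n, 6))  # 0.010582 0.018286 -> class n
```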


The independence hypothesis…

 … makes computation possible


 … yields optimal classifiers when satisfied
 … but is seldom satisfied in practice, as attributes
(variables) are often correlated.
 Attempts to overcome this limitation:
 Bayesian networks, that combine Bayesian reasoning
with causal relationships between attributes
 Decision trees, that reason on one attribute at a time, considering the most important attributes first



Bayesian Belief Networks (I)

Figure: a directed acyclic graph over the variables FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, and Dyspnea.

The conditional probability table for the variable LungCancer:

        (FH, S)   (FH, ~S)   (~FH, S)   (~FH, ~S)
LC      0.8       0.5        0.7        0.1
~LC     0.2       0.5        0.3        0.9


Bayesian Belief Networks (II)
 A Bayesian belief network allows a subset of the variables to be conditionally independent
 A graphical model of causal relationships
 Several cases of learning Bayesian belief networks
 Given both network structure and all the variables:
easy
 Given network structure but only some variables
 When the network structure is not known in advance



Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Neural Networks
 Advantages
 prediction accuracy is generally high

 robust, works when training examples contain errors

 output may be discrete, real-valued, or a vector of several discrete or real-valued attributes
 fast evaluation of the learned target function

 Criticism
 long training time

 difficult to understand the learned function (weights)

 not easy to incorporate domain knowledge



A Neuron

Figure: an n-dimensional input vector x = (x_0, x_1, …, x_n) is combined with a weight vector w = (w_0, w_1, …, w_n) into a weighted sum, offset by a bias -\mu_k, and passed through an activation function f to produce the output y.

 The n-dimensional input vector x is mapped into variable y by means of the scalar product and a nonlinear function mapping:

    y = f\left( \sum_i w_i x_i - \mu_k \right)


Network Training

 The ultimate objective of training:
  Obtain a set of weights that makes almost all the tuples in the training data classified correctly
 Steps:
  Initialize weights with random values
  Feed the input tuples into the network one by one
  For each unit:
   Compute the net input to the unit as a linear combination of all the inputs to the unit
   Compute the output value using the activation function
   Compute the error
   Update the weights and the bias
Multi-Layer Perceptron

Figure: a feed-forward network with an input layer (input vector x_i), hidden nodes, and output nodes (output vector), connected by weights w_{ij}.

For a unit j:

    I_j = \sum_i w_{ij} O_i + \theta_j          (net input)

    O_j = \frac{1}{1 + e^{-I_j}}                (output)

Error terms:

    Err_j = O_j (1 - O_j)(T_j - O_j)            (output node, with target T_j)

    Err_j = O_j (1 - O_j) \sum_k Err_k w_{jk}   (hidden node)

Updates, with learning rate l:

    w_{ij} = w_{ij} + (l) \, Err_j \, O_i

    \theta_j = \theta_j + (l) \, Err_j
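A minimal sketch of these update rules for a network with one hidden layer, written in plain Python; the layer sizes, learning rate, and variable names are illustrative assumptions.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def backprop_step(x, target, w_ih, w_ho, lr=0.5):
    """One forward/backward pass for a 1-hidden-layer network.

    w_ih[j]: input->hidden weights (last entry of each row is the bias);
    w_ho:    hidden->output weights (last entry is the bias).
    """
    # Forward: I_j = sum_i w_ij O_i + theta_j ; O_j = sigmoid(I_j)
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0]))) for row in w_ih]
    o = sigmoid(sum(w * hi for w, hi in zip(w_ho, h + [1.0])))
    # Errors: output Err = O(1-O)(T-O); hidden Err_j = O_j(1-O_j) Err w_jo
    err_o = o * (1 - o) * (target - o)
    err_h = [hj * (1 - hj) * err_o * w_ho[j] for j, hj in enumerate(h)]
    # Updates: w += lr * Err * O ; bias += lr * Err
    w_ho[:] = [w + lr * err_o * hi for w, hi in zip(w_ho, h + [1.0])]
    for j, row in enumerate(w_ih):
        row[:] = [w + lr * err_h[j] * xi for w, xi in zip(row, x + [1.0])]
    return o

random.seed(0)
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # 2 in + bias
w_ho = [random.uniform(-1, 1) for _ in range(3)]                      # 2 hid + bias
for _ in range(1000):
    backprop_step([1.0, 0.0], 1.0, w_ih, w_ho)
print(backprop_step([1.0, 0.0], 1.0, w_ih, w_ho))  # approaches the target 1.0
```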
Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Association-Based Classification

 Several methods for association-based classification:
  ARCS: Quantitative association mining and clustering of association rules (Lent et al.'97)
   It beats C4.5 in (mainly) scalability and also accuracy
  Associative classification (Liu et al.'98)
   It mines high-support and high-confidence rules of the form "cond_set => y", where y is a class label
  CAEP (Classification by Aggregating Emerging Patterns) (Dong et al.'99)
   Emerging patterns (EPs): itemsets whose support increases significantly from one class to another
   Mine EPs based on minimum support and growth rate


Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Other Classification Methods

 k-nearest neighbor classifier
 Case-based reasoning
 Genetic algorithms
 Rough set approach
 Fuzzy set approaches


Instance-Based Methods
 Instance-based learning:
  Store training examples and delay the processing ("lazy evaluation") until a new instance must be classified
 Typical approaches:
  k-nearest neighbor approach
   Instances represented as points in a Euclidean space
  Locally weighted regression
   Constructs a local approximation
  Case-based reasoning
   Uses symbolic representations and knowledge-based inference
The k-Nearest Neighbor Algorithm
 All instances correspond to points in the n-dimensional space
 The nearest neighbors are defined in terms of Euclidean distance
 The target function may be discrete- or real-valued
 For discrete-valued targets, k-NN returns the most common value among the k training examples nearest to x_q
 Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples

Figure: a Voronoi diagram of "+" and "−" training points around a query point x_q.
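A short Python sketch of k-NN classification under Euclidean distance; it also includes the distance-weighted variant discussed on the next slide. The function names and sample points are illustrative.

```python
import math
from collections import defaultdict

def knn_classify(query, examples, k=3, weighted=False):
    """examples: list of (point, label). Returns the (weighted) majority label."""
    dists = sorted((math.dist(query, x), y) for x, y in examples)
    votes = defaultdict(float)
    for d, label in dists[:k]:
        # Distance-weighted variant: w = 1 / d(x_q, x_i)^2
        votes[label] += 1.0 / (d * d + 1e-12) if weighted else 1.0
    return max(votes, key=votes.get)

examples = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"),
            ((4.0, 4.0), "-"), ((4.2, 3.9), "-"), ((3.8, 4.1), "-")]
print(knn_classify((1.1, 1.0), examples, k=3))   # "+"
```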
Discussion on the k-NN Algorithm
 The k-NN algorithm for continuous-valued target functions:
  Calculate the mean value of the k nearest neighbors
 Distance-weighted nearest neighbor algorithm:
  Weight the contribution of each of the k neighbors according to their distance to the query point x_q, giving greater weight to closer neighbors:

    w = \frac{1}{d(x_q, x_i)^2}

  Similarly for real-valued target functions
 Robust to noisy data: averaging over the k nearest neighbors smooths out noise
 Curse of dimensionality: the distance between neighbors can be dominated by irrelevant attributes
  To overcome it, stretch the axes or eliminate the least relevant attributes
Case-Based Reasoning
 Also uses lazy evaluation and analysis of similar instances
 Difference: instances are not "points in a Euclidean space"
 Example: the water faucet problem in CADET (Sycara et al.'92)
 Methodology:
  Instances represented by rich symbolic descriptions (e.g., function graphs)
  Multiple retrieved cases may be combined
  Tight coupling between case retrieval, knowledge-based reasoning, and problem solving
 Research issues:
  Indexing based on a syntactic similarity measure and, on failure, backtracking and adapting to additional cases
Remarks on Lazy vs. Eager Learning
 Instance-based learning: lazy evaluation
 Decision-tree and Bayesian classification: eager evaluation
 Key differences:
  A lazy method may consider the query instance x_q when deciding how to generalize beyond the training data D
  An eager method cannot, since it has already chosen its global approximation before seeing the query
 Efficiency: lazy methods spend less time training but more time predicting
 Accuracy:
  A lazy method effectively uses a richer hypothesis space, since it uses many local linear functions to form its implicit global approximation to the target function
  An eager method must commit to a single hypothesis that covers the entire instance space
Genetic Algorithms
 GA: based on an analogy to biological evolution
 Each rule is represented by a string of bits
 An initial population is created consisting of randomly generated rules
  e.g., IF A_1 AND NOT A_2 THEN C_2 can be encoded as 100
 Based on the notion of survival of the fittest, a new population is formed consisting of the fittest rules and their offspring
 The fitness of a rule is represented by its classification accuracy on a set of training examples
 Offspring are generated by crossover and mutation
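A toy sketch of crossover, mutation, and survival-of-the-fittest selection on bit-string rules; the fitness function here (bit agreement with a fixed target string) merely stands in for classification accuracy on training examples.

```python
import random

random.seed(1)

def crossover(a, b):
    """One-point crossover of two bit-string rules."""
    p = random.randrange(1, len(a))
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(rule, rate=0.05):
    """Flip each bit with a small probability."""
    return "".join(b if random.random() > rate else "10"[int(b)] for b in rule)

def evolve(population, fitness, generations=50):
    """Survival of the fittest: keep the best rules, breed offspring."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            children.extend(crossover(a, b))
        population = parents + [mutate(c) for c in children]
    return max(population, key=fitness)

# Stand-in fitness: fraction of bits matching a "perfect" rule 10110
target = "10110"
fitness = lambda r: sum(x == y for x, y in zip(r, target)) / len(target)
population = ["".join(random.choice("01") for _ in range(5)) for _ in range(8)]
print(evolve(population, fitness))   # converges toward "10110"
```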


Rough Set Approach
 Rough sets are used to approximately or “roughly”
define equivalent classes
 A rough set for a given class C is approximated by two
sets: a lower approximation (certain to be in C) and an
upper approximation (cannot be described as not
belonging to C)
 Finding the minimal subsets (reducts) of attributes (for
feature reduction) is NP-hard but a discernibility matrix
is used to reduce the computation intensity



Fuzzy Set Approaches

 Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as in a fuzzy membership graph)
 Attribute values are converted to fuzzy values
  e.g., income is mapped into the discrete categories {low, medium, high}, with fuzzy values calculated for each
 For a given new sample, more than one fuzzy value may apply
 Each applicable rule contributes a vote for membership in the categories
 Typically, the truth values for each predicted category are summed
Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



What Is Prediction?

 Prediction is similar to classification


 First, construct a model
 Second, use model to predict unknown value
 Major method for prediction is regression
 Linear and multiple regression
 Non-linear regression
 Prediction is different from classification:
  Classification predicts categorical class labels
  Prediction models continuous-valued functions


Predictive Modeling in Databases
 Predictive modeling: Predict data values or construct
generalized linear models based on the database data.
 One can only predict value ranges or category distributions
 Method outline:
 Minimal generalization
 Attribute relevance analysis
 Generalized linear model construction
 Prediction
 Determine the major factors that influence the prediction
  Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.
 Multi-level prediction: drill-down and roll-up analysis
Regression Analysis and Log-Linear Models in Prediction
 Linear regression: Y = \alpha + \beta X
  The two parameters \alpha and \beta specify the line and are estimated from the data at hand
  Apply the least squares criterion to the known values of Y_1, Y_2, …, X_1, X_2, …
 Multiple regression: Y = b_0 + b_1 X_1 + b_2 X_2
  Many nonlinear functions can be transformed into the above
 Log-linear models:
  The multi-way table of joint probabilities is approximated by a product of lower-order tables
  Probability: p(a, b, c, d) = \alpha_{ab} \beta_{ac} \chi_{ad} \delta_{bcd}
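A sketch of estimating \alpha and \beta by the least squares criterion, using the standard closed-form solution; the sample data are invented.

```python
def least_squares(xs, ys):
    """Fit Y = alpha + beta * X by the least squares criterion."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    beta = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
         / sum((x - mean_x) ** 2 for x in xs)
    alpha = mean_y - beta * mean_x
    return alpha, beta

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]         # roughly Y = 2X
alpha, beta = least_squares(xs, ys)
print(round(alpha, 2), round(beta, 2))  # close to 0 and 2
```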


Locally Weighted Regression
 Construct an explicit approximation to f over a local region surrounding the query instance x_q
 Locally weighted linear regression:
  The target function f is approximated near x_q using the linear function:

    \hat{f}(x) = w_0 + w_1 a_1(x) + … + w_n a_n(x)

  Minimize the squared error over the k nearest neighbors of x_q, with a distance-decreasing weight K:

    E(x_q) = \frac{1}{2} \sum_{x \in kNN(x_q)} \left( f(x) - \hat{f}(x) \right)^2 K(d(x_q, x))

  The gradient descent training rule:

    \Delta w_j = \eta \sum_{x \in kNN(x_q)} K(d(x_q, x)) \left( f(x) - \hat{f}(x) \right) a_j(x)

 In most cases, the target function is approximated by a constant, linear, or quadratic function
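A sketch of locally weighted linear regression using numpy, with a Gaussian kernel as the distance-decreasing weight K; the kernel width tau and the data are illustrative choices.

```python
import numpy as np

def lwr_predict(xq, X, y, tau=1.0):
    """Approximate f near x_q by weighting each training point by K(d(x_q, x))."""
    A = np.column_stack([np.ones(len(X)), X])        # features [1, a_1(x)]
    K = np.exp(-((X - xq) ** 2) / (2 * tau ** 2))    # distance-decreasing weights
    W = np.diag(K)
    # Weighted least squares: minimize sum K (f(x) - f_hat(x))^2
    w = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return w[0] + w[1] * xq

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 50)
y = np.sin(X) + 0.1 * rng.standard_normal(50)
print(lwr_predict(3.0, X, y))   # close to sin(3.0) ~ 0.141
```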


Prediction: Numerical Data



Prediction: Categorical Data



Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Classification Accuracy: Estimating Error
Rates
 Partition: Training-and-testing
 use two independent data sets, e.g., training set
(2/3), test set(1/3)
 used for data set with large number of samples
 Cross-validation
  Divide the data set into k subsamples
  Use k-1 subsamples as training data and one subsample as test data: k-fold cross-validation
  Used for data sets of moderate size
 Bootstrapping (leave-one-out)
  For small data sets
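A sketch of k-fold cross-validation; train and evaluate are placeholders for any classifier and error measure.

```python
def k_fold_error(samples, k, train, evaluate):
    """Average error over k folds: each subsample serves once as test data."""
    folds = [samples[i::k] for i in range(k)]        # k disjoint subsamples
    errors = []
    for i in range(k):
        test = folds[i]
        training = [s for j, f in enumerate(folds) if j != i for s in f]
        model = train(training)                      # fit on k-1 subsamples
        errors.append(evaluate(model, test))         # error on held-out fold
    return sum(errors) / k

# Toy usage: "model" is the majority label, error is the misclassification rate
samples = [(x, "yes" if x > 5 else "no") for x in range(20)]
train = lambda data: max(set(y for _, y in data), key=[y for _, y in data].count)
evaluate = lambda m, test: sum(y != m for _, y in test) / len(test)
print(k_fold_error(samples, 10, train, evaluate))
```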
Boosting and Bagging

 Boosting increases classification accuracy
  Applicable to decision trees or Bayesian classifiers
  Learn a series of classifiers, where each classifier in the series pays more attention to the examples misclassified by its predecessor
 Boosting requires only linear time and constant space


Boosting Technique (II) — Algorithm

 Assign every example an equal weight 1/N
 For t = 1, 2, …, T do:
  Obtain a hypothesis (classifier) h(t) under the weights w(t)
  Calculate the error of h(t) and re-weight the examples based on the error
  Normalize w(t+1) to sum to 1
 Output a weighted sum of all the hypotheses, with each hypothesis weighted according to its accuracy on the training set
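The slide leaves the exact re-weighting and combination rules open; the sketch below fills them in with the standard AdaBoost choices (stated as an assumption, not as the slide's prescription), for labels in {-1, +1}.

```python
import math

def boost(examples, weak_learner, T=10):
    """AdaBoost-style loop; examples are (x, y) with y in {-1, +1}."""
    n = len(examples)
    w = [1.0 / n] * n                          # equal initial weights 1/N
    ensemble = []
    for _ in range(T):
        h = weak_learner(examples, w)          # hypothesis under weights w
        err = sum(wi for wi, (x, y) in zip(w, examples) if h(x) != y)
        if err == 0 or err >= 0.5:
            break
        alpha = 0.5 * math.log((1 - err) / err)    # vote weighted by accuracy
        ensemble.append((alpha, h))
        # Misclassified examples get more attention in the next round
        w = [wi * math.exp(alpha if h(x) != y else -alpha)
             for wi, (x, y) in zip(w, examples)]
        total = sum(w)                         # normalize w to sum to 1
        w = [wi / total for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```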
Chapter 7. Classification and
Prediction
 What is classification? What is prediction?
 Issues regarding classification and prediction
 Classification by decision tree induction
 Bayesian Classification
 Classification by backpropagation
 Classification based on concepts from association rule
mining
 Other Classification Methods
 Prediction
 Classification accuracy
 Summary



Summary

 Classification is an extensively studied problem (mainly in


statistics, machine learning & neural networks)
 Classification is probably one of the most widely used
data mining techniques with a lot of extensions
 Scalability is still an important issue for database
applications: thus combining classification with database
techniques should be a promising topic
 Research directions: classification of non-relational data, e.g., text, spatial, and multimedia data



References (I)
 C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future
Generation Computer Systems, 13, 1997.
 L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.
Wadsworth International Group, 1984.
 P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for
scaling machine learning. In Proc. 1st Int. Conf. Knowledge Discovery and Data Mining
(KDD'95), pages 39-44, Montreal, Canada, August 1995.
 U. M. Fayyad. Branching on attribute values in decision tree generation. In Proc. 1994 AAAI
Conf., pages 601-606, AAAI Press, 1994.
 J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision tree
construction of large datasets. In Proc. 1998 Int. Conf. Very Large Data Bases, pages 416-
427, New York, NY, August 1998.
 M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree
induction: Efficient classification in data mining. In Proc. 1997 Int. Workshop Research
Issues on Data Engineering (RIDE'97), pages 111-120, Birmingham, England, April 1997.



References (II)
 J. Magidson. The Chaid approach to segmentation modeling: Chi-squared automatic
interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research,
pages 118-159. Blackwell Business, Cambridge, Massachusetts, 1994.
 M. Mehta, R. Agrawal, and J. Rissanen. SLIQ : A fast scalable classifier for data mining.
In Proc. 1996 Int. Conf. Extending Database Technology (EDBT'96), Avignon, France,
March 1996.
 S. K. Murthy. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, 2(4):345-389, 1998.
 J. R. Quinlan. Bagging, boosting, and C4.5. In Proc. 13th Natl. Conf. on Artificial
Intelligence (AAAI'96), 725-730, Portland, OR, Aug. 1996.
 R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and
pruning. In Proc. 1998 Int. Conf. Very Large Data Bases, 404-415, New York, NY, August
1998.
 J. Shafer, R. Agrawal, and M. Mehta. SPRINT : A scalable parallel classifier for data
mining. In Proc. 1996 Int. Conf. Very Large Data Bases, 544-555, Bombay, India, Sept.
1996.
 S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and
Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems.
Morgan Kaufmann, 1991.



http://www.cs.sfu.ca/~han

Thank you !!!


