Introduction to Data Mining
Pang-Ning Tan
Michigan State University
Michael Steinbach
University of Minnesota
Vipin Kumar
University of Minnesota
and Army High Performance Computing Research Center
Access the latest information about Addison-Wesley titles from our World Wide Web site:
http://www.aw-bc.com/computing
Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and Addison-Wesley was aware of a trademark claim, the designations have been printed in initial caps or all caps.
The programs and applications presented in this book have been included for their instructional value. They have been tested with care, but are not guaranteed for any particular purpose. The publisher does not offer any warranties or representations, nor does it accept any liabilities with respect to the programs or applications.
For information on obtaining permission for use of material in this work, please submit a written request to Pearson Education, Inc., Rights and Contract Department, 75 Arlington Street, Suite 300, Boston, MA 02116, or fax your request to (617) 848-7047.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or any other media embodiments now known or hereafter to become known, without the prior written permission of the publisher. Printed in the United States of America.
ISBN 0-321-42052-7
2 3 4 5 6 7 8 9 10-HAM-08 07 06
To our families
Preface
Advances in data generation and collection are producing data sets of massive size in commerce and a variety of scientific disciplines. Data warehouses store details of the sales and operations of businesses, Earth-orbiting satellites beam high-resolution images and sensor data back to Earth, and genomics experiments generate sequence, structural, and functional data for an increasing number of organisms. The ease with which data can now be gathered and stored has created a new attitude toward data analysis: Gather whatever data you can whenever and wherever possible. It has become an article of faith that the gathered data will have value, either for the purpose that initially motivated its collection or for purposes not yet envisioned.
The field of data mining grew out of the limitations of current data analysis techniques in handling the challenges posed by these new types of data sets. Data mining does not replace other areas of data analysis, but rather takes them as the foundation for much of its work. While some areas of data mining, such as association analysis, are unique to the field, other areas, such as clustering, classification, and anomaly detection, build upon a long history of work on these topics in other fields. Indeed, the willingness of data mining researchers to draw upon existing techniques has contributed to the strength and breadth of the field, as well as to its rapid growth.
Another strength of the field has been its emphasis on collaboration with researchers in other areas. The challenges of analyzing new types of data cannot be met by simply applying data analysis techniques in isolation from those who understand the data and the domain in which it resides. Often, skill in building multidisciplinary teams has been as responsible for the success of data mining projects as the creation of new and innovative algorithms. Just as, historically, many developments in statistics were driven by the needs of agriculture, industry, medicine, and business, many of the developments in data mining are being driven by the needs of those same fields.
This book began as a set of notes and lecture slides for a data mining course that has been offered at the University of Minnesota since Spring 1998 to upper-division undergraduate and graduate students. Presentation slides
and exercises developed in these offerings grew with time and served as a basis for the book. A survey of clustering techniques in data mining, originally written in preparation for research in the area, served as a starting point for one of the chapters in the book. Over time, the clustering chapter was joined by chapters on data, classification, association analysis, and anomaly detection. The book in its current form has been class tested at the home institutions of the authors, the University of Minnesota and Michigan State University, as well as several other universities.
A number of data mining books appeared in the meantime, but were not completely satisfactory for our students, primarily graduate and undergraduate students in computer science, but including students from industry and a wide variety of other disciplines. Their mathematical and computer backgrounds varied considerably, but they shared a common goal: to learn about data mining as directly as possible in order to quickly apply it to problems in their own domains. Thus, texts with extensive mathematical or statistical prerequisites were unappealing to many of them, as were texts that required a substantial database background. The book that evolved in response to these students' needs focuses as directly as possible on the key concepts of data mining by illustrating them with examples, simple descriptions of key algorithms, and exercises.
To help the readers better understand the concepts that have been presented, we provide an extensive set of examples, figures, and exercises. Bibliographic notes are included at the end of each chapter for readers who are interested in more advanced topics, historically important papers, and recent trends. The book also contains a comprehensive subject and author index.
Janardan, Rong Jin, George Karypis, Haesun Park, William F. Punch, Shashi Shekhar, and Jaideep Srivastava. The collaborators on our many data mining projects, who also have our gratitude, include Ramesh Agrawal, Steve Cannon, Piet C. de Groen, Fran Hill, Yongdae Kim, Steve Klooster, Kerry Long, Nihar Mahapatra, Chris Potter, Jonathan Shapiro, Kevin Silverstein, Nevin Young, and Zhi-Li Zhang.
The departments of Computer Science and Engineering at the University of Minnesota and Michigan State University provided computing resources and a supportive environment for this project. ARDA, ARL, ARO, DOE, NASA, and NSF provided research support for Pang-Ning Tan, Michael Steinbach, and Vipin Kumar. In particular, Kamal Abdali, Dick Brackney, Jagdish Chandra, Joe Coughlan, Michael Coyle, Stephen Davis, Frederica Darema, Richard Hirsch, Chandrika Kamath, Raju Namburu, N. Radhakrishnan, James Sidoran, Bhavani Thuraisingham, Walt Tiernin, Maria Zemankova, and Xiaodong Zhang have been supportive of our research in data mining and high-performance computing.
It was a pleasure working with the helpful staff at Pearson Education. In particular, we would like to thank Michelle Brown, Matt Goldstein, Katherine Harutunian, Marilyn Lloyd, Kathy Smith, and Joyce Wells. We would also like to thank George Nichols, who helped with the artwork, and Paul Anagnostopoulos, who provided LaTeX support. We are grateful to the following Pearson reviewers: Chien-Chung Chan (University of Akron), Zhengxin Chen (University of Nebraska at Omaha), Chris Clifton (Purdue University), Joydeep Ghosh (University of Texas, Austin), Nazli Goharian (Illinois Institute of Technology), J. Michael Hardin (University of Alabama), James Hearne (Western Washington University), Hillol Kargupta (University of Maryland, Baltimore County and Agnik, LLC), Eamonn Keogh (University of California-Riverside), Bing Liu (University of Illinois at Chicago), Mariofanna Milanova (University of Arkansas at Little Rock), Srinivasan Parthasarathy (Ohio State University), Zbigniew W. Ras (University of North Carolina at Charlotte), Xintao Wu (University of North Carolina at Charlotte), and Mohammed J. Zaki (Rensselaer Polytechnic Institute).
Contents
Preface vii
1 Introduction 1
1.1 What Is Data Mining? 2
1.2 Motivating Challenges 4
1.3 The Origins of Data Mining 6
1.4 Data Mining Tasks 7
1.5 Scope and Organization of the Book 11
1.6 Bibliographic Notes 13
1.7 Exercises 16
2 Data 19
2.1 Types of Data 22
2.1.1 Attributes and Measurement 23
2.1.2 Types of Data Sets 29
2.2 Data Quality 36
2.2.1 Measurement and Data Collection Issues 37
2.2.2 Issues Related to Applications 43
2.3 Data Preprocessing 44
2.3.1 Aggregation 45
2.3.2 Sampling 47
2.3.3 Dimensionality Reduction 50
2.3.4 Feature Subset Selection 52
2.3.5 Feature Creation 55
2.3.6 Discretization and Binarization 57
2.3.7 Variable Transformation 63
2.4 Measures of Similarity and Dissimilarity 65
2.4.1 Basics 66
2.4.2 Similarity and Dissimilarity between Simple Attributes 67
2.4.3 Dissimilarities between Data Objects 69
2.4.4 Similarities between Data Objects 72
3 Exploring Data 97
3.1 The Iris Data Set 98
3.2 Summary Statistics 98
3.2.1 Frequencies and the Mode 99
3.2.2 Percentiles 100
3.2.3 Measures of Location: Mean and Median 101
3.2.4 Measures of Spread: Range and Variance 102
3.2.5 Multivariate Summary Statistics 104
3.2.6 Other Ways to Summarize the Data 105
3.3 Visualization 105
3.3.1 Motivations for Visualization 105
3.3.2 General Concepts 106
3.3.3 Techniques 110
3.3.4 Visualizing Higher-Dimensional Data 124
3.3.5 Do's and Don'ts 130
3.4 OLAP and Multidimensional Data Analysis 131
3.4.1 Representing Iris Data as a Multidimensional Array 131
3.4.2 Multidimensional Data: The General Case 133
3.4.3 Analyzing Multidimensional Data 135
3.4.4 Final Comments on Multidimensional Data Analysis 139
3.5 Bibliographic Notes 139
3.6 Exercises 141
4 Classification: Basic Concepts, Decision Trees, and Model Evaluation 145
4.1 Preliminaries 146
4.2 General Approach to Solving a Classification Problem 148
4.3 Decision Tree Induction 150
4.3.1 How a Decision Tree Works 150
4.3.2 How to Build a Decision Tree 151
4.3.3 Methods for Expressing Attribute Test Conditions 155
4.3.4 Measures for Selecting the Best Split 158
4.3.5 Algorithm for Decision Tree Induction 164
4.3.6 An Example: Web Robot Detection 166
9.1.3 Cluster Characteristics 573
9.1.4 General Characteristics of Clustering Algorithms 575
9.2 Prototype-Based Clustering 577
9.2.1 Fuzzy Clustering 577
9.2.2 Clustering Using Mixture Models 583
9.2.3 Self-Organizing Maps (SOM) 594
9.3 Density-Based Clustering 600
9.3.1 Grid-Based Clustering 601
9.3.2 Subspace Clustering 604
9.3.3 DENCLUE: A Kernel-Based Scheme for Density-Based Clustering 608
9.4 Graph-Based Clustering 612
9.4.1 Sparsification 613
9.4.2 Minimum Spanning Tree (MST) Clustering 614
9.4.3 OPOSSUM: Optimal Partitioning of Sparse Similarities Using METIS 616
9.4.4 Chameleon: Hierarchical Clustering with Dynamic Modeling 616
9.4.5 Shared Nearest Neighbor Similarity 622
9.4.6 The Jarvis-Patrick Clustering Algorithm 625
9.4.7 SNN Density 627
9.4.8 SNN Density-Based Clustering 629
9.5 Scalable Clustering Algorithms 630
9.5.1 Scalability: General Issues and Approaches 630
9.5.2 BIRCH 633
9.5.3 CURE 635
9.6 Which Clustering Algorithm? 639
9.7 Bibliographic Notes 643
9.8 Exercises 647
Introduction
Rapid advances in data collection and storage technology have enabled organizations to accumulate vast amounts of data. However, extracting useful information has proven extremely challenging. Often, traditional data analysis tools and techniques cannot be used because of the massive size of a data set. Sometimes, the non-traditional nature of the data means that traditional approaches cannot be applied even if the data set is relatively small. In other situations, the questions that need to be answered cannot be addressed using existing data analysis techniques, and thus, new methods need to be developed.
Data mining is a technology that blends traditional data analysis methods with sophisticated algorithms for processing large volumes of data. It has also opened up exciting opportunities for exploring and analyzing new types of data and for analyzing old types of data in new ways. In this introductory chapter, we present an overview of data mining and outline the key topics to be covered in this book. We start with a description of some well-known applications that require new techniques for data analysis.
answer important business questions such as "Who are the most profitable customers?" "What products can be cross-sold or up-sold?" and "What is the revenue outlook of the company for next year?" Some of these questions motivated the creation of association analysis (Chapters 6 and 7), a new data analysis technique.
Figure 1.1. The process of knowledge discovery in databases (KDD).
The input data can be stored in a variety of formats (flat files, spreadsheets, or relational tables) and may reside in a centralized data repository or be distributed across multiple sites. The purpose of preprocessing is to transform the raw input data into an appropriate format for subsequent analysis. The steps involved in data preprocessing include fusing data from multiple sources, cleaning data to remove noise and duplicate observations, and selecting records and features that are relevant to the data mining task at hand. Because of the many ways data can be collected and stored, data
experiment and often represent opportunistic samples of the data, rather than random samples. Also, the data sets frequently involve non-traditional types of data and data distributions.
Figure 1.2. Data mining as a confluence of many disciplines.
1.4 Data Mining Tasks
Figure 1.3 illustrates four of the core data mining tasks that are described
in the remainder of this book.
Figure 1.3. Four of the core data mining tasks.
Predictive modeling refers to the task of building a model for the target variable as a function of the explanatory variables. There are two types of predictive modeling tasks: classification, which is used for discrete target variables, and regression, which is used for continuous target variables. For example, predicting whether a Web user will make a purchase at an online bookstore is a classification task because the target variable is binary-valued. On the other hand, forecasting the future price of a stock is a regression task because price is a continuous-valued attribute. The goal of both tasks is to learn a model that minimizes the error between the predicted and true values of the target variable. Predictive modeling can be used to identify customers that will respond to a marketing campaign, predict disturbances in the Earth's ecosystem, or judge whether a patient has a particular disease based on the results of medical tests.
Example 1.1 (Predicting the Type of a Flower). Consider the task of predicting a species of flower based on the characteristics of the flower. In particular, consider classifying an Iris flower as to whether it belongs to one of the following three Iris species: Setosa, Versicolour, or Virginica. To perform this task, we need a data set containing the characteristics of various flowers of these three species. A data set with this type of information is the well-known Iris data set from the UCI Machine Learning Repository at http://www.ics.uci.edu/~mlearn. In addition to the species of a flower, this data set contains four other attributes: sepal width, sepal length, petal length, and petal width. (The Iris data set and its attributes are described further in Section 3.1.) Figure 1.4 shows a plot of petal width versus petal length for the 150 flowers in the Iris data set. Petal width is broken into the categories low, medium, and high, which correspond to the intervals [0, 0.75), [0.75, 1.75), and [1.75, ∞), respectively. Also, petal length is broken into the categories low, medium, and high, which correspond to the intervals [0, 2.5), [2.5, 5), and [5, ∞), respectively. Based on these categories of petal width and length, the following rules can be derived:

If petal width and petal length are low, then the species is Setosa.
If petal width and petal length are medium, then the species is Versicolour.
If petal width and petal length are high, then the species is Virginica.
While these rules do not classify all the flowers, they do a good (but not perfect) job of classifying most of the flowers. Note that flowers from the Setosa species are well separated from the Versicolour and Virginica species with respect to petal width and length, but the latter two species overlap somewhat with respect to these attributes.
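The rules above can be written down directly as a small program. The following sketch is illustrative only: the interval boundaries come from the text, and flowers whose petal width and petal length fall into different categories are left unclassified, reflecting the fact that the rules do not cover every flower.

```python
def petal_category(value, breakpoints):
    """Map a continuous measurement to low / medium / high using interval breakpoints."""
    low_cut, high_cut = breakpoints
    if value < low_cut:
        return "low"
    elif value < high_cut:
        return "medium"
    else:
        return "high"

def classify_iris(petal_width, petal_length):
    """Apply the interval-based rules of Example 1.1."""
    width = petal_category(petal_width, (0.75, 1.75))   # petal width intervals from the text
    length = petal_category(petal_length, (2.5, 5.0))   # petal length intervals from the text
    if width == length == "low":
        return "Setosa"
    if width == length == "medium":
        return "Versicolour"
    if width == length == "high":
        return "Virginica"
    return "unclassified"   # the rules do not cover every flower

# Example usage with made-up measurements (in centimeters):
print(classify_iris(0.2, 1.4))   # Setosa
print(classify_iris(1.3, 4.5))   # Versicolour
print(classify_iris(2.1, 5.8))   # Virginica
```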
Figure 1.4. Petal width versus petal length for 150 Iris flowers.
than observations that belong to other clusters. Clustering has been used to group sets of related customers, find areas of the ocean that have a significant impact on the Earth's climate, and compress data.
Example 1.3 (Document Clustering). The collection of news articles shown in Table 1.2 can be grouped based on their respective topics. Each article is represented as a set of word-frequency pairs (w, c), where w is a word and c is the number of times the word appears in the article. There are two natural clusters in the data set. The first cluster consists of the first four articles, which correspond to news about the economy, while the second cluster contains the last four articles, which correspond to news about health care. A good clustering algorithm should be able to identify these two clusters based on the similarity between words that appear in the articles.
Table 1.2. Collection of news articles.
Article | Words
1 | dollar: 1, industry: 4, country: 2, loan: 3, deal: 2, government: 2
2 | machinery: 2, labor: 3, market: 4, industry: 2, work: 3, country: 1
3 | job: 5, inflation: 3, rise: 2, jobless: 2, market: 3, country: 2, index: 3
4 | domestic: 3, forecast: 2, gain: 1, market: 2, sale: 3, price: 2
5 | patient: 4, symptom: 2, drug: 3, health: 2, clinic: 2, doctor: 2
6 | pharmaceutical: 2, company: 3, drug: 2, vaccine: 1, flu: 3
7 | death: 2, cancer: 4, drug: 3, public: 4, health: 3, director: 2
8 | medical: 2, cost: 3, increase: 2, patient: 2, health: 3, care: 1
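To make the similarity between word-frequency representations concrete, the following sketch computes the cosine similarity between a few of the articles in Table 1.2 (only a subset of the articles is included). Cosine similarity is one common choice for document data, used here purely for illustration of why a similarity-based clustering algorithm would separate the economy articles from the health care articles.

```python
import math

# Two articles from each cluster in Table 1.2, as word: frequency dictionaries.
articles = {
    1: {"dollar": 1, "industry": 4, "country": 2, "loan": 3, "deal": 2, "government": 2},
    2: {"machinery": 2, "labor": 3, "market": 4, "industry": 2, "work": 3, "country": 1},
    5: {"patient": 4, "symptom": 2, "drug": 3, "health": 2, "clinic": 2, "doctor": 2},
    6: {"pharmaceutical": 2, "company": 3, "drug": 2, "vaccine": 1, "flu": 3},
}

def cosine_similarity(a, b):
    """Cosine similarity between two sparse word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# Articles 1 and 2 (economy) share several words, while articles 1 and 5
# (economy vs. health care) share none, so a similarity-based clustering
# algorithm would tend to place 1 and 2 in the same cluster.
print(cosine_similarity(articles[1], articles[2]))  # moderately high
print(cosine_similarity(articles[1], articles[5]))  # 0.0 -- no words in common
```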
fiers. The multiclass and imbalanced class problems are also discussed. These topics can be covered independently.
Association analysis is explored in Chapters 6 and 7. Chapter 6 describes the basics of association analysis: frequent itemsets, association rules, and some of the algorithms used to generate them. Specific types of frequent itemsets (maximal, closed, and hyperclique) that are important for data mining are also discussed, and the chapter concludes with a discussion of evaluation measures for association analysis. Chapter 7 considers a variety of more advanced topics, including how association analysis can be applied to categorical and continuous data or to data that has a concept hierarchy. (A concept hierarchy is a hierarchical categorization of objects, e.g., store items, clothing, shoes, sneakers.) This chapter also describes how association analysis can be extended to find sequential patterns (patterns involving order), patterns in graphs, and negative relationships (if one item is present, then the other is not).
Cluster analysis is discussed in Chapters 8 and 9. Chapter 8 first describes the different types of clusters and then presents three specific clustering techniques: K-means, agglomerative hierarchical clustering, and DBSCAN. This is followed by a discussion of techniques for validating the results of a clustering algorithm. Additional clustering concepts and techniques are explored in Chapter 9, including fuzzy and probabilistic clustering, Self-Organizing Maps (SOM), graph-based clustering, and density-based clustering. There is also a discussion of scalability issues and factors to consider when selecting a clustering algorithm.
The last chapter, Chapter 10, is on anomaly detection. After some basic definitions, several different types of anomaly detection are considered: statistical, distance-based, density-based, and clustering-based. Appendices A through E give a brief review of important topics that are used in portions of the book: linear algebra, dimensionality reduction, statistics, regression, and optimization.
The subject of data mining, while relatively young compared to statistics or machine learning, is already too large to cover in a single book. Selected references to topics that are only briefly covered, such as data quality, are provided in the bibliographic notes of the appropriate chapter. References to topics not covered in this book, such as data mining for streams and privacy-preserving data mining, are provided in the bibliographic notes of this chapter.
1.6 Bibliographic Notes
Glymour et al. [16] consider the lessons that statistics may have for data mining. Smyth et al. [38] describe how the evolution of data mining is being driven by new types of data and applications, such as those involving streams, graphs, and text. Emerging applications in data mining are considered by Han et al. [20], and Smyth [37] describes some research challenges in data mining. A discussion of how developments in data mining research can be turned into practical tools is given by Wu et al. [43]. Data mining standards are the subject of a paper by Grossman et al. [17]. Bradley [3] discusses how data mining algorithms can be scaled to large data sets.
With the emergence of new data mining applications have come new challenges that need to be addressed. For instance, concerns about privacy breaches as a result of data mining have escalated in recent years, particularly in application domains such as Web commerce and health care. As a result, there is growing interest in developing data mining algorithms that maintain user privacy. Developing techniques for mining encrypted or randomized data is known as privacy-preserving data mining. Some general references in this area include papers by Agrawal and Srikant [1], Clifton et al. [7], and Kargupta et al. [27]. Vassilios et al. [39] provide a survey.
Recent years have witnessed a growing number of applications that rapidly generate continuous streams of data. Examples of stream data include network traffic, multimedia streams, and stock prices. Several issues must be considered when mining data streams, such as the limited amount of memory available, the need for online analysis, and the change of the data over time. Data mining for stream data has become an important area in data mining. Some selected publications are Domingos and Hulten [8] (classification), Giannella et al. [15] (association analysis), Guha et al. [19] (clustering), Kifer et al. [28] (change detection), Papadimitriou et al. [32] (time series), and Law et al. [30] (dimensionality reduction).
Bibliography
[1] R. Agrawal and R. Srikant. Privacy-preserving data mining. In Proc. of 2000 ACM-SIGMOD Intl. Conf. on Management of Data, pages 439-450, Dallas, Texas, 2000. ACM Press.
[2] M. J. A. Berry and G. Linoff. Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management. Wiley Computer Publishing, 2nd edition, 2004.
[3] P. S. Bradley, J. Gehrke, R. Ramakrishnan, and R. Srikant. Scaling mining algorithms to large databases. Communications of the ACM, 45(8):38-43, 2002.
[4] S. Chakrabarti. Mining the Web: Discovering Knowledge from Hypertext Data. Morgan Kaufmann, San Francisco, CA, 2003.
[5] M.-S. Chen, J. Han, and P. S. Yu. Data Mining: An Overview from a Database Perspective. IEEE Transactions on Knowledge and Data Engineering, 8(6):866-883, 1996.
[6] V. Cherkassky and F. Mulier. Learning from Data: Concepts, Theory, and Methods. Wiley Interscience, 1998.
[7] C. Clifton, M. Kantarcioglu, and J. Vaidya. Defining privacy for data mining. In National Science Foundation Workshop on Next Generation Data Mining, pages 126-133, Baltimore, MD, November 2002.
[8] P. Domingos and G. Hulten. Mining high-speed data streams. In Proc. of the 6th Intl. Conf. on Knowledge Discovery and Data Mining, pages 71-80, Boston, Massachusetts, 2000. ACM Press.
[9] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, Inc., New York, 2nd edition, 2001.
[10] M. H. Dunham. Data Mining: Introductory and Advanced Topics. Prentice Hall, 2002.
[11] U. M. Fayyad, G. G. Grinstein, and A. Wierse, editors. Information Visualization in Data Mining and Knowledge Discovery. Morgan Kaufmann Publishers, San Francisco, CA, September 2001.
[12] U. M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth. From Data Mining to Knowledge Discovery: An Overview. In Advances in Knowledge Discovery and Data Mining, pages 1-34. AAAI Press, 1996.
[13] U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, editors. Advances in Knowledge Discovery and Data Mining. AAAI/MIT Press, 1996.
[14] J. H. Friedman. Data Mining and Statistics: What's the Connection? Unpublished. www-stat.stanford.edu/~jhf/ftp/dm-stat.ps, 1997.
[15] C. Giannella, J. Han, J. Pei, X. Yan, and P. S. Yu. Mining Frequent Patterns in Data Streams at Multiple Time Granularities. In H. Kargupta, A. Joshi, K. Sivakumar, and Y. Yesha, editors, Next Generation Data Mining, pages 191-212. AAAI/MIT, 2003.
[16] C. Glymour, D. Madigan, D. Pregibon, and P. Smyth. Statistical Themes and Lessons for Data Mining. Data Mining and Knowledge Discovery, 1(1):11-28, 1997.
[17] R. L. Grossman, M. F. Hornick, and G. Meyer. Data mining standards initiatives. Communications of the ACM, 45(8):59-61, 2002.
[18] R. L. Grossman, C. Kamath, P. Kegelmeyer, V. Kumar, and R. Namburu, editors. Data Mining for Scientific and Engineering Applications. Kluwer Academic Publishers, 2001.
[19] S. Guha, A. Meyerson, N. Mishra, R. Motwani, and L. O'Callaghan. Clustering Data Streams: Theory and Practice. IEEE Transactions on Knowledge and Data Engineering, 15(3):515-528, May/June 2003.
[20] J. Han, R. B. Altman, V. Kumar, H. Mannila, and D. Pregibon. Emerging scientific applications in data mining. Communications of the ACM, 45(8):54-58, 2002.
[21] J. Han and M. Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, San Francisco, 2001.
[22] D. J. Hand. Data Mining: Statistics and More? The American Statistician, 52(2):112-118, 1998.
[23] D. J. Hand, H. Mannila, and P. Smyth. Principles of Data Mining. MIT Press, 2001.
[24] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, Prediction. Springer, New York, 2001.
[25] M. Kantardzic. Data Mining: Concepts, Models, Methods, and Algorithms. Wiley-IEEE Press, Piscataway, NJ, 2003.
1.7 Exercises
1. Discuss whether or not each of the following activities is a data mining task.
2. Suppose that you are employed as a data mining consultant for an Internet
search engine company. Describe how data mining can help the company by
giving specific examples of how techniques, such as clustering, classification,
association rule mining, and anomaly detection can be applied.
3. For each of the following data sets, explain whether or not data privacy is an
important issue.
The Type of Data Data sets differ in a number of ways. For example, the attributes used to describe data objects can be of different types (quantitative or qualitative), and data sets may have special characteristics; e.g., some data sets contain time series or objects with explicit relationships to one another. Not surprisingly, the type of data determines which tools and techniques can be used to analyze the data. Furthermore, new research in data mining is often driven by the need to accommodate new application areas and their new types of data.
The Quality of the Data Data is often far from perfect. While most data mining techniques can tolerate some level of imperfection in the data, a focus on understanding and improving data quality typically improves the quality of the resulting analysis. Data quality issues that often need to be addressed include the presence of noise and outliers; missing, inconsistent, or duplicate data; and data that is biased or, in some other way, unrepresentative of the phenomenon or population that the data is supposed to describe.
Preprocessing Steps to Make the Data More Suitable for Data Mining Often, the raw data must be processed in order to make it suitable for analysis. While one objective may be to improve data quality, other goals focus on modifying the data so that it better fits a specified data mining technique or tool. For example, a continuous attribute, e.g., length, may need to be transformed into an attribute with discrete categories, e.g., short, medium, or long, in order to apply a particular technique. As another example, the
Hi,
I've attached the data file that I mentioned in my previous email.
Each line contains the information for a single patient and consists
of five fields. We want to predict the last field using the other fields.
I don't have time to provide any more information about the data
since I'm going out of town for a couple of days, but hopefully that
won't slow you down too much. And if you don't mind, could we
meet when I get back to discuss your preliminary results? I might invite a few other members of my team.
Despite some misgivings, you proceed to analyze the data. The first few rows of the file are as follows:
A brief look at the data reveals nothing strange. You put your doubts aside and start the analysis. There are only 1000 lines, a smaller data file than you had hoped for, but two days later, you feel that you have made some progress. You arrive for the meeting, and while waiting for others to arrive, you strike
Statistician: So, you got the data for all the patients?
Data Miner: Yes. I haven't had much time for analysis, but I
do have a few interesting results.
Statistician: Amazing. There were so many data issues with this set of patients that I couldn't do much.
Data Miner: Oh? I didn't hear about any possible problems.
Statistician: Well, first there is field 5, the variable we want to
predict. It's common knowledge among people who analyze
this type of data that results are better if you work with the
log of the values, but I didn't discover this until later. Was it
mentioned to you?
Data Miner: No.
Statistician: But surely you heard about what happened to field 4? It's supposed to be measured on a scale from 1 to 10, with 0 indicating a missing value, but because of a data entry error, all 10's were changed into 0's. Unfortunately, since some of the patients have missing values for this field, it's impossible to say whether a 0 in this field is a real 0 or a 10. Quite a few of the records have that problem.
Data Miner: Interesting. Were there any other problems?
Statistician: Yes, fields 2 and 3 are basically the same, but I assume that you probably noticed that.
Data Miner: Yes, but these fields were only weak predictors of
field 5.
Statistician: Anyway, given all those problems, I'm surprised
you were able to accomplish anything.
Data Miner: True, but my results are really quite good. Field 1 is a very strong predictor of field 5. I'm surprised that this wasn't noticed before.
Statistician: What? Field 1 is just an identification number.
Data Miner: Nonetheless, my results speak for themselves.
Statistician: Oh, no! I just remembered. We assigned ID numbers after we sorted the records based on field 5. There is a strong connection, but it's meaningless. Sorry.
Table 2.1. A sample data set containing student information.
Student ID | Year | Grade Point Average (GPA)
Although record-based data sets are common, either in flat files or relational database systems, there are other important types of data sets and systems for storing data. In Section 2.1.2, we will discuss some of the types of data sets that are commonly encountered in data mining. However, we first consider attributes.
What Is an Attribute?
sure it. In other words, the values used to represent an attribute may have
properties that are not properties of the attribute itself, and vice versa. This
is illustrated with two examples.
Example 2.3 (Employee Age and ID Number). Two attributes that might be associated with an employee are ID and age (in years). Both of these attributes can be represented as integers. However, while it is reasonable to talk about the average age of an employee, it makes no sense to talk about the average employee ID. Indeed, the only aspect of employees that we want to capture with the ID attribute is that they are distinct. Consequently, the only valid operation for employee IDs is to test whether they are equal. There is no hint of this limitation, however, when integers are used to represent the employee ID attribute. For the age attribute, the properties of the integers used to represent age are very much the properties of the attribute. Even so, the correspondence is not complete since, for example, ages have a maximum, while integers do not.
Example 2.4 (Length of Line Segments). Consider Figure 2.1, which shows some objects (line segments) and how the length attribute of these objects can be mapped to numbers in two different ways. Each successive line segment, going from the top to the bottom, is formed by appending the topmost line segment to itself. Thus, the second line segment from the top is formed by appending the topmost line segment to itself twice, the third line segment from the top is formed by appending the topmost line segment to itself three times, and so forth. In a very real (physical) sense, all the line segments are multiples of the first. This fact is captured by the measurements on the right-hand side of the figure, but not by those on the left-hand side. More specifically, the measurement scale on the left-hand side captures only the ordering of the length attribute, while the scale on the right-hand side captures both the ordering and additivity properties. Thus, an attribute can be measured in a way that does not capture all the properties of the attribute.
The type of an attribute should tell us what properties of the attribute are
reflected in the values used to measure it. Knowing the type of an attribute
is important because it tells us which properties of the measured values are
consistent with the underlying properties of the attribute, and therefore, it
allows us to avoid foolish actions, such as computing the average employee ID.
Note that it is common to refer to the type of an attribute as the type of a
measurement scale.
Figure 2.1. The measurement of the length of line segments on two different scales of measurement. (The left-hand mapping captures only the ordering of the lengths; the right-hand mapping also captures their additivity.)
1. Distinctness: = and ≠
2. Order: <, ≤, >, and ≥
3. Addition: + and −
4. Multiplication: × and /
this does not mean that the operations appropriate for one attribute type are
appropriate for the attribute types above it.
Nominal and ordinal attributes are collectively referred to as categorical or qualitative attributes. As the name suggests, qualitative attributes, such as employee ID, lack most of the properties of numbers. Even if they are represented by numbers, i.e., integers, they should be treated more like symbols. The remaining two types of attributes, interval and ratio, are collectively referred to as quantitative or numeric attributes. Quantitative attributes are represented by numbers and have most of the properties of numbers. Note that quantitative attributes can be integer-valued or continuous.
The types of attributes can also be described in terms of transformations that do not change the meaning of an attribute. Indeed, S. Smith Stevens, the psychologist who originally defined the types of attributes shown in Table 2.2, defined them in terms of these permissible transformations. For example,
Table 2.3. Transformations that define attribute levels.
Attribute Type | Transformation | Comment
Nominal | Any one-to-one mapping, e.g., a permutation of values | If all employee ID numbers are reassigned, it will not make any difference.
Ordinal | An order-preserving change of values, i.e., new_value = f(old_value), where f is a monotonic function. | An attribute encompassing the notion of good, better, best can be represented equally well by the values {1, 2, 3} or by {0.5, 1, 10}.
Interval | new_value = a * old_value + b, a and b constants. | The Fahrenheit and Celsius temperature scales differ in the location of their zero value and the size of a degree (unit).
Ratio | new_value = a * old_value | Length can be measured in meters or feet.
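The following minimal sketch illustrates the interval and ratio transformations from Table 2.3 with two familiar conversions; the constants are the usual temperature and length conversion factors, and the code is only meant to show why ratios are meaningful for ratio attributes but not for interval attributes.

```python
def celsius_to_fahrenheit(c):
    # Interval attribute: new_value = a * old_value + b (a = 9/5, b = 32).
    # The zero point is arbitrary, so only differences between values are meaningful.
    return 9.0 / 5.0 * c + 32.0

def meters_to_feet(m):
    # Ratio attribute: new_value = a * old_value (a is approximately 3.28084).
    # The zero point is fixed, so ratios are preserved under the transformation.
    return 3.28084 * m

print(celsius_to_fahrenheit(20.0))                                 # 68.0
print(meters_to_feet(10.0) / meters_to_feet(5.0))                  # 2.0 -- ratio preserved
print(celsius_to_fahrenheit(20.0) / celsius_to_fahrenheit(10.0))   # not 2.0 -- ratios not meaningful
```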
Asymmetric Attributes
Before providing details of specific kinds of data sets, we discuss three characteristics that apply to many data sets and have a significant impact on the data mining techniques that are used: dimensionality, sparsity, and resolution.
Sparsity For some data sets, such as those with asymmetric features, most attributes of an object have values of 0; in many cases, fewer than 1% of the entries are non-zero. In practical terms, sparsity is an advantage because usually only the non-zero values need to be stored and manipulated. This results in significant savings with respect to computation time and storage. Furthermore, some data mining algorithms work well only for sparse data.
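As a concrete illustration of sparsity, the following sketch stores a small document-term matrix in a compressed sparse format. It assumes NumPy and SciPy are available, and the matrix values are made up.

```python
import numpy as np
from scipy.sparse import csr_matrix

# A small document-term matrix: most entries are zero.
dense = np.array([
    [0, 0, 2, 0, 0, 0, 1],
    [0, 3, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 2, 2, 0],
])

sparse = csr_matrix(dense)   # stores only the non-zero values and their positions
print(sparse.nnz)            # 6 non-zero entries out of 21
print(sparse.data)           # the stored non-zero values: [2 1 3 1 2 2]
print(sparse.toarray())      # recover the dense form when needed
```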
Record Data
Much data mining work assumes that the data set is a collection of records (data objects), each of which consists of a fixed set of data fields (attributes). See Figure 2.2(a). For the most basic form of record data, there is no explicit relationship among records or data fields, and every record (object) has the same set of attributes. Record data is usually stored either in flat files or in relational databases. Relational databases are certainly more than a collection of records, but data mining often does not use any of the additional information available in a relational database. Rather, the database serves as a convenient place to find records. Different types of record data are described below and are illustrated in Figure 2.2.
The Data Matrix If the data objects in a collection of data all have the same fixed set of numeric attributes, then the data objects can be thought of as points (vectors) in a multidimensional space, where each dimension represents a distinct attribute describing the object. A set of such data objects can be interpreted as an m by n matrix, where there are m rows, one for each object,
Figure 2.2. Different variations of record data.
and n columns, one for each attribute. (A representation that has data objects as columns and attributes as rows is also fine.) This matrix is called a data matrix or a pattern matrix. A data matrix is a variation of record data, but because it consists of numeric attributes, standard matrix operations can be applied to transform and manipulate the data. Therefore, the data matrix is the standard data format for most statistical data. Figure 2.2(c) shows a sample data matrix.
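The sketch below shows a small data matrix as an m-by-n NumPy array and applies a standard matrix operation (subtracting the column means); the values are made up and the code is purely illustrative.

```python
import numpy as np

# A data matrix with m = 4 objects (rows) and n = 3 numeric attributes (columns).
X = np.array([
    [1.0, 2.0, 0.5],
    [2.0, 1.5, 0.7],
    [1.5, 3.0, 0.2],
    [3.0, 2.5, 0.9],
])

# Standard matrix operations can transform the data, e.g., centering each
# attribute by subtracting its column mean.
X_centered = X - X.mean(axis=0)
print(X_centered.mean(axis=0))   # approximately all zeros after centering
```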
Graph-Based Data
Data with Objects That Are Graphs If objects have structure, that is, the objects contain subobjects that have relationships, then such objects are frequently represented as graphs. For example, the structure of chemical compounds can be represented by a graph, where the nodes are atoms and the links between nodes are chemical bonds. Figure 2.3(b) shows a ball-and-stick diagram of the chemical compound benzene, which contains atoms of carbon (black) and hydrogen (gray). A graph representation makes it possible to determine which substructures occur frequently in a set of compounds and to ascertain whether the presence of any of these substructures is associated with the presence or absence of certain chemical properties, such as melting point or heat of formation. Substructure mining, which is a branch of data mining that analyzes such data, is considered in Section 7.5.
Ordered Data
For some types of data, the attributes have relationships that involve order
in time or space. Different types of ordered data are described next and are
shown in Figure 2.4.
Figure 2.4. Different variations of ordered data.
C2, and C3; and five different items A, B, C, D, and E. In the top table, each row corresponds to the items purchased at a particular time by each customer. For instance, at time t3, customer C2 purchased items A and D. In the bottom table, the same information is displayed, but each row corresponds to a particular customer. Each row contains information on each transaction involving the customer, where a transaction is considered to be a set of items and the time at which those items were purchased. For example, customer C3 bought items A and C at time t2.
Time Series Data Time series data is a special type of sequential data in which each record is a time series, i.e., a series of measurements taken over time. For example, a financial data set might contain objects that are time series of the daily prices of various stocks. As another example, consider Figure 2.4(c), which shows a time series of the average monthly temperature for Minneapolis during the years 1982 to 1994. When working with temporal data, it is important to consider temporal autocorrelation; i.e., if two measurements are close in time, then the values of those measurements are often very similar.
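Temporal autocorrelation can be checked directly by correlating a series with a shifted copy of itself. The sketch below does this for a synthetic seasonal temperature series; the series itself is made up and is not the Minneapolis data shown in Figure 2.4(c).

```python
import numpy as np

# A synthetic monthly temperature series (degrees Celsius); the values are made up,
# but follow a 12-month seasonal pattern so that neighboring months are similar.
months = np.arange(48)
temps = 10 + 15 * np.sin(2 * np.pi * months / 12) + np.random.normal(0, 1, size=48)

def lag_autocorrelation(x, lag=1):
    """Correlation between the series and a copy of itself shifted by `lag` steps."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Measurements one month apart are highly correlated (temporal autocorrelation),
# while measurements half a cycle apart are strongly negatively correlated.
print(lag_autocorrelation(temps, lag=1))   # high, close to 1
print(lag_autocorrelation(temps, lag=6))   # strongly negative for a 12-month cycle
```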
Spatial Data Some objects have spatial attributes, such as positions or ar-
eas, as well as other types of attributes. An example of spatial data is weather
data (precipitation, temperature, pressure) that is collected for a variety of
geographical locations. An important aspect of spatial data is spatial auto-
correlation; i.e., objects that are physically close tend to be similar in other
ways as well. Thus, two points on the Earth that are close to each other
usually have similar values for temperature and rainfall.
Important examples of spatial data are the science and engineering data
sets that are the result of measurements or model output taken at regularly
or irregularly distributed points on a two- or three-dimensional grid or mesh.
For instance, Earth science data sets record the temperature or pressure mea-
sured at points (grid cells) on latitude-longitude spherical grids of various resolutions, e.g., 1° by 1°. (See Figure 2.4(d).) As another example, in the
simulation of the flow of a gas, the speed and direction of flow can be recorded
for each grid point in the simulation.
Most data mining algorithms are designed for record data or its variations,
such as transaction data and data matrices. Record-oriented techniques can
be applied to non-record data by extracting features from data objects and
using these features to create a record corresponding to each object. Consider
the chemical structure data that was described earlier. Given a set of common
substructures, each compound can be represented as a record with binary
attributes that indicate whether a compound contains a specific substructure.
Such a representation is actually a transaction data set, where the transactions
are the compounds and the items are the substructures.
In some cases, it is easy to represent the data in a record format, but
this type of representation does not capture all the information in the data.
Consider spatio-temporal data consisting of a time series from each point on
a spatial grid. This data is often stored in a data matrix, where each row
represents a location and each column represents a particular point in time.
However, such a representation does not explicitly capture the time relation-
ships that are present among attributes and the spatial relationships that
exist among objects. This does not mean that such a representation is inap-
propriate, but rather that these relationships must be taken into consideration
during the analysis. For example, it would not be a good idea to use a data
mining technique that assumes the attributes are statistically independent of
one another.
It is unrealistic to expect that data will be perfect. There may be problems due
to human error, limitations of measuring devices, or flaws in the data collection
process. Values or even entire data objects may be missing. In other cases,
there may be spurious or duplicate objects; i.e., multiple data objects that all
correspond to a single "real" object. For example, there might be two different
records for a person who has recently lived at two different addresses. Even if
all the data is present and "looks fine," there may be inconsistencies: a person has a height of 2 meters, but weighs only 2 kilograms.
In the next few sections, we focus on aspects of data quality that are related
to data measurement and collection. We begin with a definition of measure-
ment and data collection errors and then consider a variety of problems that
involve measurement error: noise, artifacts, bias, precision, and accuracy. We
conclude by discussing data quality issues that may involve both measurement
and data collection problems: outliers, missing and inconsistent values, and
duplicate data.
The term measurement error refers to any problem resulting from the mea-
surement process. A common problem is that the value recorded differs from
the true value to some extent. For continuous attributes, the numerical dif-
ference of the measured and true value is called the error. The term data
collection error refers to errors such as omitting data objects or attribute
values, or inappropriately including a data object. For example, a study of
animals of a certain species might include animals of a related species that are similar in appearance to the species of interest. Both measurement errors and
data collection errors can be either systematic or random.
We will only consider general types of errors. Within particular domains,
there are certain types of data errors that are commonplace, and there often exist well-developed techniques for detecting and/or correcting these errors.
For example, keyboard errors are common when data is entered manually, and
as a result, many data entry programs have techniques for detecting and, with
human intervention, correcting such errors.
Figure 2.5. Noise in a time series context.
Figure 2.6. (a) Three groups of points. (b) With noise points (+) added.
more noise were added to the time series, its shape would be lost. Figure 2.6 shows a set of data points before and after some noise points (indicated by '+'s) have been added. Notice that some of the noise points are intermixed with the non-noise points.
The term noise is often used in connection with data that has a spatial or temporal component. In such cases, techniques from signal or image processing can frequently be used to reduce noise and thus, help to discover patterns (signals) that might be "lost in the noise." Nonetheless, the elimination of noise is frequently difficult, and much work in data mining focuses on devising robust algorithms that produce acceptable results even when noise is present.
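As a simple illustration of noise with a temporal component, the sketch below adds Gaussian noise to a clean periodic signal and then applies a moving average, one elementary signal-processing technique for reducing noise; the signal and noise level are made up.

```python
import numpy as np

# A clean periodic signal and a noisy version of it; the noise is additive and Gaussian.
t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t)
noisy = signal + np.random.normal(scale=0.3, size=t.shape)

# A simple moving average reduces the noise at the cost of some smoothing of the signal.
window = 5
smoothed = np.convolve(noisy, np.ones(window) / window, mode="same")

# The deviation from the clean signal is typically smaller after smoothing.
print(np.std(noisy - signal), np.std(smoothed - signal))
```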
review the details of working with significant digits, as most readers will have encountered them in previous courses, and they are covered in considerable depth in science, engineering, and statistics textbooks.
Issues such as significant digits, precision, bias, and accuracy are sometimes overlooked, but they are important for data mining as well as statistics and science. Many times, data sets do not come with information on the precision of the data, and furthermore, the programs used for analysis return results without any such information. Nonetheless, without some understanding of the accuracy of the data and the results, an analyst runs the risk of committing serious data analysis blunders.
Outliers
Outliers are either (1) data objects that, in some sense, have characteristics that are different from most of the other data objects in the data set, or (2) values of an attribute that are unusual with respect to the typical values for that attribute. Alternatively, we can speak of anomalous objects or values. There is considerable leeway in the definition of an outlier, and many different definitions have been proposed by the statistics and data mining communities. Furthermore, it is important to distinguish between the notions of noise and outliers. Outliers can be legitimate data objects or values. Thus, unlike noise, outliers may sometimes be of interest. In fraud and network intrusion detection, for example, the goal is to find unusual objects or events from among a large number of normal ones. Chapter 10 discusses anomaly detection in more detail.
Missing Values
Ignore the Missing Value during Analysis Many data mining approaches
can be modified to ignore missing values. For example, suppose that objects
are being clustered and the similarity between pairs of data objects needs to
be calculated. If one or both objects of a pair have missing values for some
attributes, then the similarity can be calculated by using only the attributes
that do not have missing values. It is true that the similarity will only be
approximate, but unless the total number of attributes is small or the num-
ber of missing values is high, this degree of inaccuracy may not matter much.
Likewise, many classification schemes can be modified to work with missing
values.
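A minimal sketch of this idea is shown below: a Euclidean distance that skips attributes marked as missing (NaN) in either object. The rescaling by the fraction of attributes used is one common convention, not the only possible choice.

```python
import numpy as np

def euclidean_ignoring_missing(x, y):
    """Euclidean distance computed only over attributes present (non-NaN) in both objects.
    The result is rescaled to compensate for the attributes that were skipped."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    valid = ~np.isnan(x) & ~np.isnan(y)
    if valid.sum() == 0:
        return np.nan   # no attributes in common; the distance is undefined
    diff = x[valid] - y[valid]
    # Scale up so distances remain comparable when different numbers of attributes are used.
    return np.sqrt(len(x) / valid.sum() * np.sum(diff ** 2))

a = [1.0, 2.0, np.nan, 4.0]
b = [1.5, np.nan, 3.0, 4.5]
print(euclidean_ignoring_missing(a, b))   # uses only the first and last attributes
```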
Inconsistent Values
Duplicate Data
A data set may include data objects that are duplicates, or almost duplicates, of one another. Many people receive duplicate mailings because they appear in a database multiple times under slightly different names. To detect and eliminate such duplicates, two main issues must be addressed. First, if there are two objects that actually represent a single object, then the values of corresponding attributes may differ, and these inconsistent values must be
Figure 2.7. Correlation of SST data between pairs of years. White areas indicate positive correlation. Black areas indicate negative correlation.
issues at the measurement and data collection level, there are many issues that are specific to particular applications and fields. Again, we consider only a few of the general issues.
Relevance The available data must contain the information necessary for
the application. Consider the task of building a model that predicts the acci-
dent rate for drivers. If information about the age and gender of the driver is
omitted, then it is likely that the model will have limited accuracy unless this
information is indirectly available through other attributes.
Making sure that the objects in a data set are relevant is also challenging.
A common problem is sampling bias, which occurs when a sample does not
contain different types of objects in proportion to their actual occurrence in
the population. For example, survey data describes only those who respond to the survey. (Other aspects of sampling are discussed further in Section 2.3.2.) Because the results of a data analysis can reflect only the data that is present, sampling bias will typically result in an erroneous analysis.
Knowledge about the Data Ideally, data sets are accompanied by doc-
umentation that describes different aspects of the data; the quality of this
documentation can either aid or hinder the subsequent analysis. For example,
if the documentation identifies several attributes as being strongly related,
these attributes are likely to provide highly redundant information, and we
may decide to keep just one. (Consider sales tax and purchase price.) If the
documentation is poor, however, and fails to tell us, for example, that the
missing values for a particular field are indicated with a -9999, then our analy-
sis of the data may be faulty. Other important characteristics are the precision
of the data, the type of features (nominal, ordinal, interval, ratio), the scale
of measurement (e.g., meters or feet for length), and the origin of the data.
o Aggregation
o Sampling
o Dimensionality reduction
o Feature subset selection
o Feature creation
o Discretization and binarization
o Variable transformation
Roughly speaking, these items fall into two categories: selecting data ob-
jects and attributes for the analysis or creating/changing the attributes. In
both cases the goal is to improve the data mining analysis with respect to
time, cost, and quality. Details are provided in the following sections.
A quick note on terminology: In the following, we sometimes use synonyms
for attribute, such as feature or variable, in order to follow common usage.
2.3.1 Aggregation
Sometimes "less is more" and this is the case with aggregation, the combining of two or more objects into a single object. Consider a data set consisting of transactions (data objects) recording the daily sales of products in various store locations (Minneapolis, Chicago, Paris, ...) for different days over the course of a year. See Table 2.4. One way to aggregate transactions for this data set is to replace all the transactions of a single store with a single storewide transaction. This reduces the hundreds or thousands of transactions that occur daily at a specific store to a single daily transaction, and the number of data objects is reduced to the number of stores.
An obvious issue is how an aggregate transaction is created; i.e., how the values of each attribute are combined across all the records corresponding to a particular location to create the aggregate transaction that represents the sales of a single store or date. Quantitative attributes, such as price, are typically aggregated by taking a sum or an average. A qualitative attribute, such as item, can either be omitted or summarized as the set of all the items that were sold at that location.
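Using the pandas library, this kind of aggregation can be expressed as a group-by operation. The sketch below is illustrative only; the transactions are made up in the spirit of Table 2.4, with the quantitative attribute summed and the qualitative attribute summarized as a set.

```python
import pandas as pd

# A few transactions in the spirit of Table 2.4 (the values are made up).
transactions = pd.DataFrame({
    "store": ["Chicago", "Chicago", "Minneapolis", "Minneapolis"],
    "date":  ["09/06/04", "09/06/04", "09/06/04", "09/06/04"],
    "item":  ["Watch", "Battery", "Shoes", "Socks"],
    "price": [120.00, 4.50, 55.00, 6.00],
})

# Aggregate to one record per store and date: sum the quantitative attribute (price)
# and summarize the qualitative attribute (item) as the set of items sold.
aggregated = transactions.groupby(["store", "date"]).agg(
    total_sales=("price", "sum"),
    items=("item", lambda s: set(s)),
).reset_index()
print(aggregated)
```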
The data in Table 2.4 can also be viewed as a multidimensional array,
where each attribute is a dimension. From this viewpoint, aggregation is the
Table 2.4. Data set containing information about customer purchases.
Transaction ID | Item | Store Location | Date
... | ... | ... | ...
101123 | Watch | Chicago | 09/06/04
101123 | Battery | Chicago | 09/06/04
101124 | Shoes | Minneapolis | 09/06/04
... | ... | ... | ...
Figure 2.8. Histograms of standard deviation for monthly and yearly precipitation in Australia for the period 1982 to 1993.
2.3.2 Sampling
Sampling Approaches
There are many sampling techniques, but only a few of the most basic ones
and their variations will be covered here. The simplest type of sampling is
simple random sampling. For this type of sampling, there is an equal prob-
ability of selecting any particular item. There are two variations on random
sampling (and other sampling techniques as well): (1) sampling without replacement, where, as each item is selected, it is removed from the set of all objects that together constitute the population, and (2) sampling with replacement, where objects are not removed from the population as they are selected for the sample. In sampling with replacement, the same object can be picked more than once. The samples produced by the two methods are not much different when samples are relatively small compared to the data set size, but sampling with replacement is simpler to analyze since the probability of selecting any object remains constant during the sampling process.
When the population consists of different types of objects, with widely
different numbers of objects, simple random sampling can fail to adequately
represent those types of objects that are less frequent. This can cause prob-
lems when the analysis requires proper representation of all object types. For example, when building classification models for rare classes, it is critical that the rare classes be adequately represented in the sample. Hence, a sampling scheme that can accommodate differing frequencies for the items of interest is needed. Stratified sampling, which starts with prespecified groups of objects, is such an approach. In the simplest version, equal numbers of objects are drawn from each group even though the groups are of different sizes. In another variation, the number of objects drawn from each group is proportional to the size of that group.
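The following sketch contrasts the schemes just described: simple random sampling without and with replacement, and a simple form of stratified sampling in which an equal number of objects is drawn from each group. The population and group labels are made up.

```python
import random

population = list(range(1000))
# A rare class makes up only 5% of the population in this made-up setup.
labels = ["rare" if i < 50 else "common" for i in population]

# Simple random sampling without replacement: each item can be chosen at most once.
sample_without = random.sample(population, 100)

# Simple random sampling with replacement: the same item can be picked more than once.
sample_with = [random.choice(population) for _ in range(100)]

# Stratified sampling: draw a fixed number of objects from each prespecified group,
# so the rare class is adequately represented even though it is infrequent.
groups = {"rare":   [x for x, y in zip(population, labels) if y == "rare"],
          "common": [x for x, y in zip(population, labels) if y == "common"]}
stratified = [x for members in groups.values() for x in random.sample(members, 25)]
print(len(sample_without), len(sample_with), len(stratified))
```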
Example 2.8 (Sampling and Loss of Information). Once a sampling technique has been selected, it is still necessary to choose the sample size. Larger sample sizes increase the probability that a sample will be representative, but they also eliminate much of the advantage of sampling. Conversely, with smaller sample sizes, patterns may be missed or erroneous patterns can be detected. Figure 2.9(a) shows a data set that contains 8000 two-dimensional points, while Figures 2.9(b) and 2.9(c) show samples from this data set of size 2000 and 500, respectively. Although most of the structure of this data set is present in the sample of 2000 points, much of the structure is missing in the sample of 500 points.
Figure 2.9. Example of the loss of structure with sampling.
Figure 2.10. Finding representative points from 10 groups.
Progressive Sampling
Data sets can have a large number of features. Consider a set of documents,
where each document is represented by a vector whose components are the
frequencies with which each word occurs in the document. In such cases,
Another way to reduce the dimensionality is to use only a subset of the fea-
tures. While it might seem that such an approach would lose information, this
is not the case if redundant and irrelevant features are present. Redundant
features duplicate much or all of the information contained in one or more
other attributes. For example, the purchase price of a product and the amount
of sales tax paid contain much of the same information. Irrelevant features
contain almost no useful information for the data mining task at hand. For
instance, students' ID numbers are irrelevant to the task of predicting stu-
dents' grade point averages. Redundant and irrelevant features can reduce
classification accuracy and the quality of the clusters that are found.
While some irrelevant and redundant attributes can be eliminated imme-
diately by using common sense or domain knowledge, selecting the best subset
of features frequently requires a systematic approach. The ideal approach to
feature selection is to try all possible subsets of features as input to the data
mining algorithm of interest, and then take the subset that produces the best
results. This method has the advantage of reflecting the objective and bias of
the data mining algorithm that will eventually be used. Unfortunately, since
the number of subsets involving n attributes is 2^n, such an approach is imprac-
tical in most situations and alternative strategies are needed. There are three
standard approaches to feature selection: embedded, filter, and wrapper.
Wrapper approaches These methods use the target data mining algorithm
as a black box to find the best subset of attributes, in a way similar to that
of the ideal algorithm described above, but typically without enumerating all
possible subsets.
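The following is a minimal sketch of one common wrapper strategy, greedy forward selection, rather than the book's own procedure; the `evaluate` function stands in for the black-box data mining algorithm's score on a feature subset, and all names and the toy scoring rule are illustrative assumptions.

```python
def forward_selection(features, evaluate, max_features=None):
    """Greedy wrapper: repeatedly add the feature that most improves the
    black-box evaluation function, instead of enumerating all 2**n subsets."""
    selected, remaining = [], list(features)
    best_score = float("-inf")
    while remaining and (max_features is None or len(selected) < max_features):
        scores = [(evaluate(selected + [f]), f) for f in remaining]
        score, best_f = max(scores)
        if score <= best_score:          # no further improvement: stop
            break
        best_score = score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy evaluation: pretend only features 'a' and 'c' carry information,
# with a small penalty for using more features.
toy_eval = lambda subset: len(set(subset) & {"a", "c"}) - 0.01 * len(subset)
print(forward_selection(["a", "b", "c", "d"], toy_eval))   # ['c', 'a'] -- only informative features kept
```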
Figure 2.11. Flowchart of a feature subset selection process.
Feature Weighting
Feature Extraction
The creation of a new set of features from the original raw data is known as
feature extraction. Consider a set of photographs, where each photograph
is to be classified according to whether or not it contains a human face. The
raw data is a set of pixels, and as such, is not suitable for many types of
classification algorithms. However, if the data is processed to provide higher-
level features, such as the presence or absence of certain types of edges and
areas that are highly correlated with the presence of human faces, then a much
broader set of classification techniques can be applied to this problem.
Unfortunately, in the sense in which it is most commonly used, feature
extraction is highly domain-specific. For a particular field, such as image
processing, various features and the techniques to extract them have been
developed over a period of time, and often these techniques have limited ap-
plicability to other fields. Consequently, whenever data mining is applied to a
relatively new area, a key task is the development of new features and feature
extraction methods.
Figure 2.12. Application of the Fourier transform to identify the underlying frequencies in time series data. (a) Two time series. (b) Noisy time series. (c) Power spectrum.
A totally different view of the data can reveal important and interesting fea-
tures. Consider, for example, time series data, which often contains periodic
patterns. If there is only a single periodic pattern and not much noise, then
the pattern is easily detected. If, on the other hand, there are a number of
periodic patterns and a significant amount of noise is present, then these pat-
terns are hard to detect. Such patterns can, nonetheless, often be detected
by applying a Fourier transform to the time series in order to change to a
representation in which frequency information is explicit. In the example that
follows, it will not be necessary to know the details of the Fourier transform.
It is enough to know that, for each time series, the Fourier transform produces
a new data object whose attributes are related to frequencies.
Example 2.10 (Fourier Analysis). The time series presented in Figure
2.12(b) is the sum of three other time series, two of which are shown in Figure
2.12(a) and have frequencies of 7 and 17 cycles per second, respectively. The
third time series is random noise. Figure 2.12(c) shows the power spectrum
that can be computed after applying a Fourier transform to the original time
series. (Informally, the power spectrum is proportional to the square of each
frequency attribute.) In spite of the noise, there are two peaks that correspond
to the periods of the two original, non-noisy time series. Again, the main point
is that better features can reveal important aspects of the data.
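A sketch in the spirit of Example 2.10 follows, using numpy's FFT; the sampling rate (512 samples per second), unit amplitudes, and noise level are assumptions, not values from the book.

```python
import numpy as np

fs = 512                                   # assumed sampling rate (samples/second)
t = np.arange(fs) / fs                     # one second of samples
signal = (np.sin(2 * np.pi * 7 * t)        # 7 cycles per second
          + np.sin(2 * np.pi * 17 * t)     # 17 cycles per second
          + np.random.default_rng(0).normal(scale=1.0, size=fs))  # random noise

spectrum = np.fft.rfft(signal)             # Fourier transform of the real-valued series
power = np.abs(spectrum) ** 2              # power spectrum
freqs = np.fft.rfftfreq(fs, d=1 / fs)      # frequency (in Hz) of each attribute

# The two largest peaks in the power spectrum recover the underlying frequencies.
top = freqs[np.argsort(power)[-2:]]
print(np.sort(top))                        # approximately [7.0, 17.0]
```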
Feature Construction
Table 2.5. Conversion of a categorical attribute to three binary attributes.

Categorical Value   Integer Value   x1   x2   x3
awful               0               0    0    0
poor                1               0    0    1
OK                  2               0    1    0
good                3               0    1    1
great               4               1    0    0
Table 2.6. Conversion of a categorical attribute to five asymmetric binary attributes.

Categorical Value   Integer Value   x1   x2   x3   x4   x5
awful               0               1    0    0    0    0
poor                1               0    1    0    0    0
OK                  2               0    0    1    0    0
good                3               0    0    0    1    0
great               4               0    0    0    0    1
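A small sketch of the two conversions shown in Tables 2.5 and 2.6 is given below; the function names are illustrative, not from the text.

```python
values = ["awful", "poor", "OK", "good", "great"]
to_int = {v: i for i, v in enumerate(values)}          # categorical -> integer

def binary_encode(v, width=3):
    """Table 2.5 style: write the integer value in binary using `width` bits."""
    return [int(b) for b in format(to_int[v], f"0{width}b")]

def asymmetric_binary_encode(v):
    """Table 2.6 style: one asymmetric binary attribute per categorical value."""
    return [1 if i == to_int[v] else 0 for i in range(len(values))]

print(binary_encode("good"))              # [0, 1, 1]
print(asymmetric_binary_encode("good"))   # [0, 0, 0, 1, 0]
```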
Binarization
e_i = -\sum_{j=1}^{k} p_{ij} \log_2 p_{ij}
Figure 2.13. Different discretization techniques.
Categorical attributes can sometimes have too many values. If the categorical
attribute is an ordinal attribute, then techniques similar to those for con-
tinuous attributes can be used to reduce the number of categories. If the
categorical attribute is nominal, however, then other approaches are needed.
Consider a university that has a large number of departments. Consequently,
a department name attribute might have dozens of different values. In this
situation, we could use our knowledge of the relationships among different
departments to combine departments into larger groups, such as engineering,
social sciences, or biological sciences. If domain knowledge does not serve as
a useful guide or such an approach results in poor classification performance,
then it is necessary to use a more empirical approach, such as grouping values
Figure 2.14. Discretizing x and y attributes for four groups (classes) of points.
Simple Functions
Normalization or Standardization
2.4.1 Basics
Definitions
Transformations
Table 2.7. Similarity and dissimilarity for simple attributes.

Attribute Type      Dissimilarity                             Similarity
Nominal             d = 0 if x = y, d = 1 if x != y           s = 1 if x = y, s = 0 if x != y
Ordinal             d = |x - y| / (n - 1)                     s = 1 - d
                    (values mapped to integers 0 to n-1,
                    where n is the number of values)
Interval or Ratio   d = |x - y|                               s = -d,  s = 1/(1+d),  s = e^{-d},
                                                              s = 1 - (d - min_d)/(max_d - min_d)
Distances
We first present some examples, and then offer a more formal description of
distances in terms of the properties common to all distances. The Euclidean
distance, d, between two points, x and y, in one-, two-, three-, or higher-
dimensional space, is given by the following familiar formula:
d(x, y) = \sqrt{\sum_{k=1}^{n} (x_k - y_k)^2},                              (2.1)

where n is the number of dimensions (attributes) and x_k and y_k are the kth
attributes (components) of x and y. The Euclidean distance is generalized by
the Minkowski distance,

d(x, y) = \left( \sum_{k=1}^{n} |x_k - y_k|^r \right)^{1/r},                (2.2)
where r is a parameter. The following are the three most common examples
of Minkowski distances.
For r = 1 the Minkowski distance is the Manhattan (L1) distance, and for
r = 2 it is the Euclidean (L2) distance. As r goes to infinity it becomes the
supremum (L_max or L_infinity) distance:

d(x, y) = \lim_{r \to \infty} \left( \sum_{k=1}^{n} |x_k - y_k|^r \right)^{1/r} = \max_k |x_k - y_k|.    (2.3)
The r parameter should not be confused with the number of dimensions (at-
tributes) n. The Euclidean, Manhattan, and supremum distances are defined
for all values of n: 1, 2, 3, ..., and specify different ways of combining the
differences in each dimension (attribute) into an overall distance.
Tables 2.10 and 2.11, respectively, give the proximity matrices for the L1
and L_infinity distances using data from Table 2.8. Notice that all these distance
matrices are symmetric; i.e., the ijth entry is the same as the jith entry. In
Table 2.9, for instance, the fourth row of the first column and the fourth
column of the first row both contain the value 5.1.
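The sketch below computes the Manhattan, Euclidean, and supremum distances with numpy; the point coordinates p1 = (0, 2), p2 = (2, 0), p3 = (3, 1), p4 = (5, 1) are an assumption chosen only because they reproduce the distance matrices of Tables 2.10 and 2.11.

```python
import numpy as np

def minkowski(x, y, r):
    """Minkowski distance; r=1 is Manhattan (L1), r=2 is Euclidean (L2)."""
    return np.sum(np.abs(x - y) ** r) ** (1.0 / r)

def supremum(x, y):
    """Limit of the Minkowski distance as r -> infinity (L_max / L_infinity)."""
    return np.max(np.abs(x - y))

# Hypothetical coordinates for p1..p4, consistent with Tables 2.10 and 2.11.
points = np.array([[0, 2], [2, 0], [3, 1], [5, 1]], dtype=float)

l1 = np.array([[minkowski(a, b, 1) for b in points] for a in points])
linf = np.array([[supremum(a, b) for b in points] for a in points])
print(l1)    # matches Table 2.10
print(linf)  # matches Table 2.11
```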
Distances, such as the Euclidean distance, have some well-known proper-
ties. If d(x, y) is the distance between two points, x and y, then the following
properties hold.
1. Positivity
   (a) d(x, y) >= 0 for all x and y,
   (b) d(x, y) = 0 only if x = y.
Figure 2.15. Four two-dimensional points.
Table 2.10. L1 distance matrix for Table 2.8.

L1    p1    p2    p3    p4
p1    0.0   4.0   4.0   6.0
p2    4.0   0.0   2.0   4.0
p3    4.0   2.0   0.0   2.0
p4    6.0   4.0   2.0   0.0

Table 2.11. L_infinity distance matrix for Table 2.8.

L_inf p1    p2    p3    p4
p1    0.0   2.0   3.0   5.0
p2    2.0   0.0   1.0   3.0
p3    3.0   1.0   0.0   2.0
p4    5.0   3.0   2.0   0.0
2. Symmetry
   d(x, y) = d(y, x) for all x and y.
3. Triangle Inequality
   d(x, z) <= d(x, y) + d(y, z) for all points x, y, and z.
Measures that satisfy all three properties are known as metrics. Some
people only use the term distance for dissimilarity measures that satisfy these
properties, but that practice is often violated. The three properties described
here are useful, as well as mathematically pleasing. Also, if the triangle in-
equality holds, then this property can be used to increase the efficiency of tech-
niques (including clustering) that depend on distances possessing this property.
(See Exercise 25.) Nonetheless, many dissimilarities do not satisfy one or more
of the metric properties. We give two examples of such measures.
For similarities, the triangle inequality (or the analogous property) typically
does not hold, but symmetry and positivity typically do. To be explicit, if
s(x, y) is the similarity between points x and y, then the typical properties of
similarities are the following:
x = (1, 0, 0, 0, 0, 0, 0, 0, 0, 0)
y = (0, 0, 0, 0, 0, 0, 1, 0, 0, 1)

J = f_{11} / (f_{01} + f_{10} + f_{11}) = 0 / (2 + 1 + 0) = 0,

where f_{01}, f_{10}, and f_{11} count the attribute positions in which x and y take
the values 0-1, 1-0, and 1-1, respectively.
Cosine Similarity
Certain common words are ignored, and various processing techniques are used to account for different forms
of the same word, differing document lengths, and different word frequencies.
Even though documents have thousands or tens of thousands of attributes
(terms), each document is sparse since it has relatively few non-zero attributes.
(The normalizations used for documents do not create a non-zero entry where
there was a zero entry; i.e., they preserve sparsity.) Thus, as with transaction
data, similarity should not depend on the number of shared 0 values since
any two documents are likely to "not contain" many of the same words, and
therefore, if 0-0 matches are counted, most documents will be highly similar to
most other documents. Therefore, a similarity measure for documents needs
to ignore 0-0 matches like the Jaccard measure, but also must be able to
handle non-binary vectors. The cosine similarity, defined next, is one of the
most common measures of document similarity. If x and y are two document
vectors, then
cos(x, y) = \frac{x \cdot y}{\|x\| \, \|y\|},                               (2.7)

where \cdot indicates the vector dot product, x \cdot y = \sum_{k=1}^{n} x_k y_k, and \|x\| is the
length of vector x, \|x\| = \sqrt{\sum_{k=1}^{n} x_k^2} = \sqrt{x \cdot x}.
Example 2.18 (Cosine Similarity of Two Document Vectors). This
example calculates the cosine similarity for the following two data objects,
which might represent document vectors:

x = (3, 2, 0, 5, 0, 0, 0, 2, 0, 0)
y = (1, 0, 0, 0, 0, 0, 0, 1, 0, 2)

x \cdot y = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
\|x\| = \sqrt{3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0} = 6.48
\|y\| = \sqrt{1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2} = 2.45
cos(x, y) = 0.31
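A minimal sketch that checks the numbers of Example 2.18 with numpy follows; nothing beyond the two vectors above is assumed.

```python
import numpy as np

def cosine(x, y):
    """Cosine similarity: dot product divided by the product of the vector lengths."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
y = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)

print(np.dot(x, y))            # 5.0
print(np.linalg.norm(x))       # about 6.48
print(np.linalg.norm(y))       # about 2.45
print(round(cosine(x, y), 2))  # 0.31
```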
Figure 2.16. Geometric illustration of the cosine measure.
Cosine similarity can also be written as

cos(x, y) = x' \cdot y',                                                    (2.8)

where x' = x/\|x\| and y' = y/\|y\|. Dividing x and y by their lengths normal-
izes them to have a length of 1. This means that cosine similarity does not take
the magnitude of the two data objects into account when computing similarity.
(Euclidean distance might be a better choice when magnitude is important.)
For vectors with a length of 1, the cosine measure can be calculated by taking
a simple dot product. Consequently, when many cosine similarities between
objects are being computed, normalizing the objects to have unit length can
reduce the time required.
Extended Jaccard Coefficient (Tanimoto Coefficient)

The extended Jaccard coefficient reduces to the Jaccard coefficient for binary
attributes and is defined as

EJ(x, y) = \frac{x \cdot y}{\|x\|^2 + \|y\|^2 - x \cdot y}.                 (2.9)
Correlation
The correlation between two data objects that have binary or continuous vari-
ables is a measure of the linear relationship between the attributes of the
objects. (The calculation of correlation between attributes, which is more
common, can be defined similarly.) More precisely, Pearson's correlation
corr(x, y) = \frac{covariance(x, y)}{standard\_deviation(x) \times standard\_deviation(y)} = \frac{s_{xy}}{s_x s_y},    (2.10)
where we are using the following standard statistical notation and definitions:
covariance(x, y) = s_{xy} = \frac{1}{n-1} \sum_{k=1}^{n} (x_k - \bar{x})(y_k - \bar{y})      (2.11)

standard_deviation(x) = s_x = \sqrt{\frac{1}{n-1} \sum_{k=1}^{n} (x_k - \bar{x})^2}

standard_deviation(y) = s_y = \sqrt{\frac{1}{n-1} \sum_{k=1}^{n} (y_k - \bar{y})^2}

\bar{x} = \frac{1}{n} \sum_{k=1}^{n} x_k is the mean of x

\bar{y} = \frac{1}{n} \sum_{k=1}^{n} y_k is the mean of y
x = (-3, 6, 0, 3, -6)
y = (1, -2, 0, -1, 2)

x = (3, 6, 0, 3, 6)
y = (1, 2, 0, 1, 2)
Figure 2.17. Scatter plots illustrating correlations from -1 to 1.
x = (-3, -2, -1, 0, 1, 2, 3)
y = (9, 4, 1, 0, 1, 4, 9)
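A sketch computing Pearson's correlation for the vector pairs above follows; it uses only the definitions in Equations 2.10 and 2.11.

```python
import numpy as np

def corr(x, y):
    """Pearson's correlation: covariance divided by the product of standard deviations."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxy = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)
    return sxy / (x.std(ddof=1) * y.std(ddof=1))

print(corr([-3, 6, 0, 3, -6], [1, -2, 0, -1, 2]))             # about -1: perfect negative
print(corr([3, 6, 0, 3, 6], [1, 2, 0, 1, 2]))                 # about  1: perfect positive
print(corr([-3, -2, -1, 0, 1, 2, 3], [9, 4, 1, 0, 1, 4, 9]))  # 0: y = x**2 is a nonlinear relationship
```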
taking the dot product. Notice that this is not the same as the standardization
used in other contexts, where we make the transformations x'_k = (x_k - \bar{x})/s_x
and y'_k = (y_k - \bar{y})/s_y.
Let x and y be real numbers and \phi(t) be the real-valued function \phi(t) = t^2. In
that case, the gradient reduces to the derivative and the dot product reduces
to multiplication. Specifically, Equation 2.12 becomes Equation 2.13.
Figure 2.18. Illustration of Bregman divergence.
The Mahalanobis distance between two objects x and y is

mahalanobis(x, y) = (x - y) \, \Sigma^{-1} (x - y)^T,

where \Sigma^{-1} is the inverse of the covariance matrix of the data. Note that the
covariance matrix \Sigma is the matrix whose ijth entry is the covariance of the ith
and jth attributes as defined by Equation 2.11.
Example 2.23. In Figure 2.19, there are 1000 points, whose x and y at-
tributes have a correlation of 0.6. The distance between the two large points
at the opposite ends of the long axis of the ellipse is 14.7 in terms of Euclidean
distance, but only 6 with respect to Mahalanobis distance. In practice, com-
puting the Mahalanobis distance is expensive, but can be worthwhile for data
whose attributes are correlated. If the attributes are relatively uncorrelated,
but have different ranges, then standardizing the variables is sufficient.
Figure 2.19. Set of two-dimensional points. The Mahalanobis distance between the two points represented by large dots is 6; their Euclidean distance is 14.7.
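The sketch below illustrates the Mahalanobis distance on synthetic correlated data rather than the exact points of Figure 2.19; the sample size, seed, covariance, and test points are assumptions.

```python
import numpy as np

def mahalanobis(x, y, data):
    """Mahalanobis distance between x and y given the covariance of `data`."""
    sigma_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = np.asarray(x) - np.asarray(y)
    return float(np.sqrt(diff @ sigma_inv @ diff))

# Synthetic two-dimensional data whose attributes have correlation about 0.6.
rng = np.random.default_rng(0)
cov = [[1.0, 0.6], [0.6, 1.0]]
data = rng.multivariate_normal(mean=[0, 0], cov=cov, size=1000)

p, q = np.array([-2.0, -2.0]), np.array([2.0, 2.0])   # two points along the long axis
print(np.linalg.norm(p - q))     # Euclidean distance: about 5.7
print(mahalanobis(p, q, data))   # smaller, since this direction has high variance
```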
Unfortunately, this approach does not work well if some of the attributes
are asymmetric attributes. For example, if all the attributes are asymmetric
binary attributes, then the similarity measure suggested previously reduces to
the simple matching coefficient, a measure that is not appropriate for asym-
metric binary attributes. The easiest way to fix this problem is to omit asym-
metric attributes from the similarity calculation when their values are 0 for
both of the objects whose similarity is being computed. A similar approach
also works well for handling missing values.
In summary, Algorithm 2.1 is effective for computing an overall similar-
ity between two objects, x and y, with different types of attributes. This
procedure can be easily modified to work with dissimilarities.
Using Weights
In much of the previous discussion, all attributes were treated equally when
computing proximity. This is not desirable when some attributes are more im-
portant to the definition of proximity than others. To address these situations,
the proximity formulas can be modified by weighting the contribution of each attribute.
3: Compute the overall similarity between the two objects using the following for-
mula:

similarity(x, y) = \frac{\sum_{k=1}^{n} \delta_k s_k(x, y)}{\sum_{k=1}^{n} \delta_k},     (2.15)

where s_k(x, y) is the similarity for the kth attribute and \delta_k is 0 if the kth attribute
is an asymmetric attribute and both objects have a value of 0 (or if one object has a
missing value for the attribute), and 1 otherwise. With weights w_k, the Minkowski
distance can likewise be generalized as

d(x, y) = \left( \sum_{k=1}^{n} w_k |x_k - y_k|^r \right)^{1/r}.                          (2.17)
objects have only a few of the characteristics described by the attributes, and
thus, are highly similar in terms of the characteristics they do not have. The
cosine, Jaccard, and extended Jaccard measures are appropriate for such data.
There are other characteristics of data vectors that may need to be consid-
ered. Suppose, for example, that we are interested in comparing time series.
If the magnitude of the time series is important (for example, each time series
represents total sales of the same organization for a different year), then we
could use Euclidean distance. If the time series represent different quantities
(for example, blood pressureand oxygen consumption), then we usually want
to determine if the time series have the same shape, not the same magnitude.
Correlation, which uses a built-in normalization that accounts for differences
in magnitude and level, would be more appropriate.
In some cases, transformation or normalization of the data is important
for obtaining a proper similarity measure since such transformations are not
always present in proximity measures. For instance, time series may have
trends or periodic patterns that significantly impact similarity. Also, a proper
computation of similarity may require that time lags be taken into account.
Finally, two time series may only be similar over specific periods of time. For
example, there is a strong relationship between temperature and the use of
natural gas, but only during the heating season.
Practical considerations can also be important. Sometimes, one or more
proximity measures are already in use in a particular field, and thus, others
will have answered the question of which proximity measures should be used.
Other times, the software package or clustering algorithm being used may
drastically limit the choices. If efficiency is a concern, then we may want to
choose a proximity measure that has a property, such as the triangle inequality,
that can be used to reduce the number of proximity calculations. (See Exercise
25.)
However, if common practice or practical restrictions do not dictate a
choice, then the proper choice of a proximity measure can be a time-consuming
task that requires careful consideration of both domain knowledge and the
purpose for which the measure is being used. A number of different similarity
measures may need to be evaluated to see which ones produce results that
make the most sense.
particular, one of the initial motivations for defining types of attributes was
to be precise about which statistical operations were valid for what sorts of
data. We have presented the view of measurement theory that was initially
described in a classic paper by S. S. Stevens [79]. (Tables 2.2 and 2.3 are
derived from those presented by Stevens [80].) While this is the most common
view and is reasonably easy to understand and apply, there is, of course,
much more to measurement theory. An authoritative discussion can be found
in a three-volume series on the foundations of measurement theory [63, 69,
81]. Also of interest is a wide-ranging article by Hand [55], which discusses
measurement theory and statistics, and is accompanied by comments from
other researchers in the field. Finally, there are many books and articles that
describe measurement issues for particular areas of science and engineering.
Data quality is a broad subject that spans every discipline that uses data.
Discussions of precision, bias, accuracy, and significant figures can be found
in many introductory science, engineering, and statistics textbooks. The view
of data quality as "fitness for use" is explained in more detail in the book by
Redman [76]. Those interested in data quality may also be interested in MIT's
Total Data Quality Management program [70, 84]. However, the knowledge
needed to deal with specific data quality issues in a particular domain is often
best obtained by investigating the data quality practices of researchers in that
field.
Aggregation is a less well-defined subject than many other preprocessing
tasks. However, aggregation is one of the main techniques used by the database
area of Online Analytical Processing (OLAP), which is discussed in Chapter 3.
There has also been relevant work in the area of symbolic data analysis (Bock
and Diday [47]). One of the goals in this area is to summarize traditional record
data in terms of symbolic data objects whose attributes are more complex than
traditional attributes. Specifically, these attributes can have values that are
sets of values (categories), intervals, or sets of values with weights (histograms).
Another goal of symbolic data analysis is to be able to perform clustering,
classification, and other kinds of data analysis on data that consists of symbolic
data objects.
Sampling is a subject that has been well studied in statistics and related
fields. Many introductory statistics books, such as the one by Lindgren [65],
have some discussion on sampling, and there are entire books devoted to the
subject, such as the classic text by Cochran [49]. A survey of sampling for
data mining is provided by Gu and Liu [54], while a survey of sampling for
databases is provided by Olken and Rotem [72]. There are a number of other
data mining and database-related sampling references that may be of interest,
including papers by Palmer and Faloutsos [74], Provost et al. [75], Toivonen
[82], and Zaki et al. [85].
In statistics, the traditional techniques that have been used for dimension-
ality reduction are multidimensional scaling (MDS) (Borg and Groenen [48],
Kruskal and Uslaner [64]) and principal component analysis (PCA) (Jolliffe
[58]), which is similar to singular value decomposition (SVD) (Demmel [50]).
Dimensionality reduction is discussed in more detail in Appendix B.
Discretization is a topic that has been extensively investigated in data
mining. Some classification algorithms only work with categorical data, and
association analysis requires binary data, and thus, there is a significant moti-
vation to investigate how to best binarize or discretize continuous attributes.
For association analysis, we refer the reader to work by Srikant and Agrawal
[78], while some useful references for discretization in the area of classification
include work by Dougherty et al. [51], Elomaa and Rousu [52], Fayyad and
Irani [53], and Hussain et al. [56].
Feature selection is another topic well investigated in data mining. A broad
coverage of this topic is provided in a survey by Molina et al. [71] and two
books by Liu and Motoda [66, 67]. Other useful papers include those by Blum
and Langley [46], Kohavi and John [62], and Liu et al. [68].
It is difficult to provide references for the subject of feature transformations
because practices vary from one discipline to another. Many statistics books
have a discussion of transformations, but typically the discussion is restricted
to a particular purpose, such as ensuring the normality of a variable or making
sure that variables have equal variance. We offer two references: Osborne [73]
and Tukey [83].
While we have covered some of the most commonly used distance and
similarity measures, there are hundreds of such measures and more are being
created all the time. As with so many other topics in this chapter, many of
these measures are specific to particular fields; e.g., in the area of time series see
papers by Kalpakis et al. [59] and Keogh and Pazzani [61]. Clustering books
provide the best general discussions. In particular, see the books by Anderberg
[45], Jain and Dubes [57], Kaufman and Rousseeuw [60], and Sneath and Sokal
[77].
Bibliography
[45] M. R. Anderberg. Cluster Analysis for Applications. Academic Press, New York, December 1973.
[46] A. Blum and P. Langley. Selection of Relevant Features and Examples in Machine Learning. Artificial Intelligence, 97(1-2):245-271, 1997.
[47] H. H. Bock and E. Diday. Analysis of Symbolic Data: Exploratory Methods for Extracting Statistical Information from Complex Data (Studies in Classification, Data Analysis, and Knowledge Organization). Springer-Verlag Telos, January 2000.
[48] I. Borg and P. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer-Verlag, February 1997.
[49] W. G. Cochran. Sampling Techniques. John Wiley & Sons, 3rd edition, July 1977.
[50] J. W. Demmel. Applied Numerical Linear Algebra. Society for Industrial & Applied Mathematics, September 1997.
[51] J. Dougherty, R. Kohavi, and M. Sahami. Supervised and Unsupervised Discretization of Continuous Features. In Proc. of the 12th Intl. Conf. on Machine Learning, pages 194-202, 1995.
[52] T. Elomaa and J. Rousu. General and Efficient Multisplitting of Numerical Attributes. Machine Learning, 36(3):201-244, 1999.
[53] U. M. Fayyad and K. B. Irani. Multi-interval discretization of continuous-valued attributes for classification learning. In Proc. 13th Int. Joint Conf. on Artificial Intelligence, pages 1022-1027. Morgan Kaufmann, 1993.
[54] F. H. Gaohua Gu and H. Liu. Sampling and Its Application in Data Mining: A Survey. Technical Report TRA6/00, National University of Singapore, Singapore, 2000.
[55] D. J. Hand. Statistics and the Theory of Measurement. Journal of the Royal Statistical Society: Series A (Statistics in Society), 159(3):445-492, 1996.
[56] F. Hussain, H. Liu, C. L. Tan, and M. Dash. TRC6/99: Discretization: an enabling technique. Technical report, National University of Singapore, Singapore, 1999.
[57] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall Advanced Reference Series. Prentice Hall, March 1988. Book available online at http://www.cse.msu.edu/~jain/Clustering-Jain-Dubes.pdf.
[58] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, 2nd edition, October 2002.
[59] K. Kalpakis, D. Gada, and V. Puttagunta. Distance Measures for Effective Clustering of ARIMA Time-Series. In Proc. of the 2001 IEEE Intl. Conf. on Data Mining, pages 273-280. IEEE Computer Society, 2001.
[60] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. Wiley Series in Probability and Statistics. John Wiley and Sons, New York, November 1990.
[61] E. J. Keogh and M. J. Pazzani. Scaling up dynamic time warping for datamining applications. In KDD, pages 285-289, 2000.
[62] R. Kohavi and G. H. John. Wrappers for Feature Subset Selection. Artificial Intelligence, 97(1-2):273-324, 1997.
[63] D. Krantz, R. D. Luce, P. Suppes, and A. Tversky. Foundations of Measurements: Volume 1: Additive and Polynomial Representations. Academic Press, New York, 1971.
[64] J. B. Kruskal and E. M. Uslaner. Multidimensional Scaling. Sage Publications, August 1978.
[65] B. W. Lindgren. Statistical Theory. CRC Press, January 1993.
[66] H. Liu and H. Motoda, editors. Feature Extraction, Construction and Selection: A Data Mining Perspective. Kluwer International Series in Engineering and Computer Science, 453. Kluwer Academic Publishers, July 1998.
[67] H. Liu and H. Motoda. Feature Selection for Knowledge Discovery and Data Mining. Kluwer International Series in Engineering and Computer Science, 454. Kluwer Academic Publishers, July 1998.
[68] H. Liu, H. Motoda, and L. Yu. Feature Extraction, Selection, and Construction. In N. Ye, editor, The Handbook of Data Mining, pages 22-41. Lawrence Erlbaum Associates, Inc., Mahwah, NJ, 2003.
[69] R. D. Luce, D. Krantz, P. Suppes, and A. Tversky. Foundations of Measurements: Volume 3: Representation, Axiomatization, and Invariance. Academic Press, New York, 1990.
[70] MIT Total Data Quality Management Program. web.mit.edu/tdqm/www/index.shtml, 2003.
[71] L. C. Molina, L. Belanche, and A. Nebot. Feature Selection Algorithms: A Survey and Experimental Evaluation. In Proc. of the 2002 IEEE Intl. Conf. on Data Mining, 2002.
[72] F. Olken and D. Rotem. Random Sampling from Databases-A Survey. Statistics & Computing, 5(1):25-42, March 1995.
[73] J. Osborne. Notes on the Use of Data Transformations. Practical Assessment, Research & Evaluation, 28(6), 2002.
[74] C. R. Palmer and C. Faloutsos. Density biased sampling: An improved method for data mining and clustering. ACM SIGMOD Record, 29(2):82-92, 2000.
[75] F. J. Provost, D. Jensen, and T. Oates. Efficient Progressive Sampling. In Proc. of the 5th Intl. Conf. on Knowledge Discovery and Data Mining, pages 23-32, 1999.
[76] T. C. Redman. Data Quality: The Field Guide. Digital Press, January 2001.
[77] P. H. A. Sneath and R. R. Sokal. Numerical Taxonomy. Freeman, San Francisco, 1971.
[78] R. Srikant and R. Agrawal. Mining Quantitative Association Rules in Large Relational Tables. In Proc. of 1996 ACM-SIGMOD Intl. Conf. on Management of Data, pages 1-12, Montreal, Quebec, Canada, August 1996.
[79] S. S. Stevens. On the Theory of Scales of Measurement. Science, 103(2684):677-680, June 1946.
[80] S. S. Stevens. Measurement. In G. M. Maranell, editor, Scaling: A Sourcebook for Behavioral Scientists, pages 22-41. Aldine Publishing Co., Chicago, 1974.
[81] P. Suppes, D. Krantz, R. D. Luce, and A. Tversky. Foundations of Measurements: Volume 2: Geometrical, Threshold, and Probabilistic Representations. Academic Press, New York, 1989.
[82] H. Toivonen. Sampling Large Databases for Association Rules. In VLDB96, pages 134-145. Morgan Kaufmann, September 1996.
[83] J. W. Tukey. On the Comparative Anatomy of Transformations. Annals of Mathematical Statistics, 28(3):602-632, September 1957.
[84] R. Y. Wang, M. Ziad, Y. W. Lee, and Y. R. Wang. Data Quality. The Kluwer International Series on Advances in Database Systems, Volume 23. Kluwer Academic Publishers, January 2001.
[85] M. J. Zaki, S. Parthasarathy, W. Li, and M. Ogihara. Evaluation of Sampling for Data Mining of Association Rules. Technical Report TR617, Rensselaer Polytechnic Institute, 1996.
2.6 Exercises
1. In the initial example of Chapter 2, the statistician says, "Yes, fields 2 and 3
are basically the same." Can you tell from the three lines of sample data that
are shown why she says that?
(j) Military rank.
(k) Distance from the center of campus.
(l) Density of a substance in grams per cubic centimeter.
(m) Coat check number. (When you attend an event, you can often give your
coat to someone who, in turn, gives you a number that you can use to
claim your coat when you leave.)
3. You are approached by the marketing director of a local company, who believes
that he has devised a foolproof way to measure customer satisfaction. He
explains his scheme as follows: "It's so simple that I can't believe that no one
has thought of it before. I just keep track of the number of customer complaints
for each product. I read in a data mining book that counts are ratio attributes,
and so, my measure of product satisfaction must be a ratio attribute. But
when I rated the products based on my new customer satisfaction measure and
showed them to my boss, he told me that I had overlooked the obvious, and
that my measure was worthless. I think that he was just mad because our best-
selling product had the worst satisfaction since it had the most complaints.
Could you help me set him straight?"
(a) Who is right, the marketing director or his boss? If you answered, his
boss, what would you do to fix the measure of satisfaction?
(b) What can you say about the attribute type of the original product satis-
faction attribute?
4. A few months later, you are again approached by the same marketing director
as in Exercise 3. This time, he has devised a better approach to measure the
extent to which a customer prefers one product over other, similar products. He
explains, "When we develop new products, we typically create several variations
and evaluate which one customers prefer. Our standard procedure is to give
our test subjects all of the product variations at one time and then ask them to
rank the product variations in order of preference. However, our test subjects
are very indecisive, especially when there are more than two products. As a
result, testing takes forever. I suggested that we perform the comparisons in
pairs and then use these comparisons to get the rankings. Thus, if we have
three product variations, we have the customers compare variations 1 and 2,
then 2 and 3, and finally 3 and 1. Our testing time with my new procedure
is a third of what it was for the old procedure, but the employees conducting
the tests complain that they cannot come up with a consistent ranking from
the results. And my boss wants the latest product evaluations, yesterday. I
should also mention that he was the person who came up with the old product
evaluation approach. Can you help me?"
(a) Is the marketing director in trouble? Will his approach work for gener-
ating an ordinal ranking of the product variations in terms of customer
preference? Explain.
(b) Is there a way to fix the marketing director's approach? More generally,
what can you say about trying to create an ordinal measurement scale
based on pairwise comparisons?
(c) For the original product evaluation scheme, the overall rankings of each
product variation are found by computing its average over all test subjects.
Comment on whether you think that this is a reasonable approach. What
other approaches might you take?
(a) How would you convert this data into a form suitable for association
analysis?
(b) In particular, what type of attributes would you have and how many of
them are there?
9. Many sciences rely on observation instead of (or in addition to) designed ex-
periments. Compare the data quality issues involved in observational science
with those of experimental science and data mining.
10. Discuss the difference between the precision of a measurement and the terms
single and double precision, as they are used in computer science, typically to
represent floating-point numbers that require 32 and 64 bits, respectively.
11. Give at least two advantages to working with data stored in text files instead
of in a binary format.
12. Distinguish between noise and outliers. Be sure to consider the following ques-
tions.
13. Consider the problem of finding the K nearest neighbors of a data object. A
programmer designs Algorithm 2.2 for this task.
(a) Describe the potential problems with this algorithm if there are duplicate
objects in the data set. Assume the distance function will only return a
distance of 0 for objects that are the same.
(b) How would you fix this problem?
14. The following attributes are measured for members of a herd of Asian ele-
phants: weight, height, tusk length, trunk length, and ear area. Based on these
measurements, what sort of similarity measure from Section 2.4 would you use
to compare or group these elephants? Justify your answer and explain any
special circumstances.
15. You are given a set of m objects that is divided into K groups, where the ith
group is of size m_i. If the goal is to obtain a sample of size n < m, what is
the difference between the following two sampling schemes? (Assume sampling
with replacement.)
16. Consider a document-term matrix, where tf_{ij} is the frequency of the ith word
(term) in the jth document and m is the number of documents. Consider the
variable transformation that is defined by

tf'_{ij} = tf_{ij} \times \log \frac{m}{df_i},                              (2.18)

where df_i is the number of documents in which the ith term appears, which
is known as the document frequency of the term. This transformation is
known as the inverse document frequency transformation.
(a) What is the effect of this transformation if a term occurs in one document?
In every document?
(b) What might be the purpose of this transformation?
18. This exercise compares and contrasts some similarity and distance measures.
(a) For binary data, the Ll distance corresponds to the Hamming distance;
that is, the number of bits that are different between two binary vectors.
The Jaccard similarity is a measure of the similarity between two binary
vectors. Compute the Hamming distance and the Jaccard similarity be-
tween the following two binary vectors.
x: 0101010001
y : 0100011000
(c) Suppose that you are comparing how similar two organisms of different
species are in terms of the number of genes they share. Describe which
measure, Hamming or Jaccard, you think would be more appropriate for
comparing the genetic makeup of two organisms. Explain. (Assume that
each animal is represented as a binary vector, where each attribute is 1 if
a particular gene is present in the organism and 0 otherwise.)
(d) If you wanted to compare the genetic makeup of two organisms of the same
species, e.g., two human beings, would you use the Hamming distance,
the Jaccard coefficient, or a different measure of similarity or distance?
Explain. (Note that two human beings share > 99.9% of the same genes.)
19. For the following vectors, x and y, calculate the indicated similarity or distance
measures.
Figure 2.20. Graphs for Exercise 20 (Euclidean distance versus cosine similarity, and Euclidean distance versus correlation).
satisfies the metric axioms given on page 70. A and B are sets and A - B is
the set difference.
22. Discuss how you might map correlation values from the interval [-1,1] to the
interval [0,1]. Note that the type of transformation that you use might depend
on the application that you have in mind. Thus, consider two applications:
clustering time series and predicting the behavior of one time series given an-
other.
23. Given a similarity measure with values in the interval [0,1] describe two ways to
transform this similarity value into a dissimilarity value in the interval [0,infinity].
24. Proximity is typically defined between a pair of objects.
(a) Define two ways in which you might define the proximity among a group
of objects.
(b) How might you define the distance between two sets of points in Euclidean
space?
(c) How might you define the proximity between two sets of data objects?
(Make no assumption about the data objects, except that a proximity
measure is defined between any pair of objects.)
25. You are given a set of points S in Euclidean space, as well as the distance of
each point in S to a point x. (It does not matter if x is in S.)
(a) If the goal is to find all points within a specified distance epsilon of point y,
y != x, explain how you could use the triangle inequality and the already
calculated distances to x to potentially reduce the number of distance
calculations necessary. Hint: The triangle inequality, d(x, z) <= d(x, y) +
d(y, z), can be rewritten as d(x, y) >= d(x, z) - d(y, z).
(b) In general, how would the distance between x and y affect the number of
distance calculations?
(c) Suppose that you can find a small subset of points S', from the original
data set, such that every point in the data set is within a specified distance
epsilon of at least one of the points in S', and that you also have the pairwise
distance matrix for S'. Describe a technique that uses this information to
compute, with a minimum of distance calculations, the set of all points
within a distance of beta of a specified point from the data set.
26. Show that 1 minus the Jaccard similarity is a distance measure between two data
objects, x and y, that satisfies the metric axioms given on page 70. Specifically,
d(x, y) = 1 - J(x, y).
27. Show that the distance measure defined as the angle between two data vectors,
x and y, satisfies the metric axioms given on page 70. Specifically, d(x, y) =
arccos(cos(x, y)).
28. Explain why computing the proximity between two attributes is often simpler
than computing the similarity between two objects.
Exploring Data
The previous chapter addressed high-level data issues that are important in
the knowledge discovery process. This chapter provides an introduction to
data exploration, which is a preliminary investigation of the data in order
to better understand its specific characteristics. Data exploration can aid in
selecting the appropriate preprocessing and data analysis techniques. It can
even address some of the questions typically answered by data mining. For
example, patterns can sometimes be found by visually inspecting the data.
Also, some of the techniques used in data exploration, such as visualization,
can be used to understand and interpret data mining results.
This chapter covers three major topics: summary statistics, visualization,
and On-Line Analytical Processing (OLAP). Summary statistics, such as the
mean and standard deviation of a set of values, and visualization techniques,
such as histograms and scatter plots, are standard methods that are widely
employed for data exploration. OLAP, which is a more recent development,
consists of a set of techniques for exploring multidimensional arrays of values.
OLAP-related analysis functions focus on various ways to create summary
data tables from a multidimensional data array. These techniques include
aggregating data either across various dimensions or across various attribute
values. For instance, if we are given sales information reported according
to product, location, and date, OLAP techniques can be used to create a
summary that describes the sales activity at a particular location by month
and product category.
The topics covered in this chapter have considerable overlap with the area
known as Exploratory Data Analysis (EDA), which was created in the
1970s by the prominent statistician, John Tukey. This chapter, like EDA,
places a heavy emphasis on visualization. Unlike EDA, this chapter does not
include topics such as cluster analysis or anomaly detection. There are two
reasons for this. First, data mining views descriptive data analysis techniques
as an end in themselves, whereas statistics, from which EDA originated, tends
to view hypothesis-based testing as the final goal. Second, cluster analysis
and anomaly detection are large areas and require full chapters for an in-
depth discussion. Hence, cluster analysis is covered in Chapters 8 and 9, while
anomaly detection is discussed in Chapter 10.
The sepals of a flower are the outer structures that protect the more fragile
parts of the flower, such as the petals. In many flowers, the sepals are green,
and only the petals are colorful. For Irises, however, the sepals are also colorful.
As illustrated by the picture of a Virginica Iris in Figure 3.1, the sepals of an
Iris are larger than the petals and are drooping, while the petals are upright.
Figure 3.1. Picture of Iris Virginica. Robert H. Mohlenbrock @ USDA-NRCS PLANTS Database / USDA NRCS. 1995. Northeast wetland flora: Field office guide to plant species. Northeast National Technical Center, Chester, PA. Background removed.
Given a set of unordered categorical values, there is not much that can be done
to further characterize the values except to compute the frequency with which
each value occurs for a particular set of data. Given a categorical attribute x,
which can take values {v_1, ..., v_i, ..., v_k} and a set of m objects, the frequency
of a value v_i is defined as

frequency(v_i) = (number of objects with attribute value v_i) / m.

The mode of a categorical attribute is the value that has the highest frequency.
Table 3.1. Class size for students in a hypothetical college.

Class       Size   Frequency
freshman    200    0.33
sophomore   160    0.27
junior      130    0.22
senior      110    0.18
Categorical attributes often, but not always, have a small number of values,
and consequently, the mode and frequencies of these values can be interesting
and useful. Notice, though, that for the Iris data set and the class attribute,
the three types of flower all have the same frequency, and therefore, the notion
of a mode is not interesting.
For continuous data, the mode, as currently defined, is often not useful
because a single value may not occur more than once. Nonetheless, in some
cases, the mode may indicate important information about the nature of the
values or the presence of missing values. For example, the heights of 20 people
measured to the nearest millimeter will typically not repeat, but if the heights
are measured to the nearest tenth of a meter, then some people may have the
same height. Also, if a unique value is used to indicate a missing value, then
this value will often show up as the mode.
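A minimal sketch of computing frequencies and the mode follows; the class sizes mirror those assumed in Table 3.1 and the function names are illustrative.

```python
from collections import Counter

def frequencies(values):
    """Relative frequency of each categorical value: count / number of objects."""
    m = len(values)
    return {v: count / m for v, count in Counter(values).items()}

def mode(values):
    """Value with the highest frequency."""
    return Counter(values).most_common(1)[0][0]

classes = ["freshman"] * 200 + ["sophomore"] * 160 + ["junior"] * 130 + ["senior"] * 110
print(frequencies(classes))   # {'freshman': 0.333..., 'sophomore': 0.266..., ...}
print(mode(classes))          # 'freshman'
```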
3.2.2 Percentiles
Table 3.2. Percentiles for sepal length, sepal width, petal length, and petal width. (All values are in
centimeters.)

Percentile   Sepal Length   Sepal Width   Petal Length   Petal Width
0            4.3            2.0           1.0            0.1
10           4.8            2.5           1.4            0.2
20           5.0            2.7           1.5            0.2
30           5.2            2.8           1.7            0.4
40           5.5            3.0           3.9            1.2
50           5.8            3.0           4.4            1.3
60           6.1            3.1           4.6            1.5
70           6.3            3.2           5.0            1.8
80           6.6            3.4           5.4            1.9
90           6.9            3.6           5.8            2.2
100          7.9            4.4           6.9            2.5
For continuous data, two of the most widely used summary statistics are the
mean and median, which are measures of the location of a set of values.
Consider a set of m objects and an attribute x. Let {x_1, ..., x_m} be the
attribute values of x for these m objects. As a concrete example, these values
might be the heights of m children. Let {x_(1), ..., x_(m)} represent the values
of x after they have been sorted in non-decreasing order. Thus, x_(1) = min(x)
and x_(m) = max(x). Then, the mean and median are defined as follows:

mean(x) = \bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i                          (3.2)

median(x) = x_{(r+1)} if m is odd, i.e., m = 2r + 1
median(x) = \frac{1}{2}(x_{(r)} + x_{(r+1)}) if m is even, i.e., m = 2r     (3.3)

To summarize, the median is the middle value if there are an odd number
of values, and the average of the two middle values if the number of values
is even. Thus, for seven values, the median is x_{(4)}, while for ten values, the
median is \frac{1}{2}(x_{(5)} + x_{(6)}).
Example 3.4. The means, medians, and trimmed means (p = 20%) of the
four quantitative attributes of the Iris data are given in Table 3.3. The three
measures of location have similar values except for the attribute petal length.

Table 3.3. Means and medians for sepal length, sepal width, petal length, and petal width. (All values
are in centimeters.)

Measure              Sepal Length   Sepal Width   Petal Length   Petal Width
mean                 5.84           3.05          3.76           1.20
median               5.80           3.00          4.35           1.30
trimmed mean (20%)   5.79           3.02          3.72           1.12
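A minimal sketch of the mean, median, and a simple trimmed mean follows; the small data vector is made up, and the trimming rule (drop the top and bottom p percent of the sorted values) is one common convention rather than the book's exact procedure.

```python
import numpy as np

def trimmed_mean(x, p):
    """Drop the top and bottom p percent of the sorted values, then average the rest."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(len(x) * p / 100.0)
    return x[k:len(x) - k].mean() if k > 0 else x.mean()

values = [1.0, 2.0, 2.0, 3.0, 4.0, 5.0, 100.0]   # one large outlier
print(np.mean(values))            # about 16.7, pulled up by the outlier
print(np.median(values))          # 3.0
print(trimmed_mean(values, 20))   # 3.2, much closer to the median
```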
Another set of commonly used summary statistics for continuous data are
those that measure the dispersion or spread of a set of values. Such measures
indicate if the attribute values are widely spread out or if they are relatively
concentrated around a single point such as the mean.
The simplest measure of spread is the range, which, given an attribute x
with a set of m values {x_1, ..., x_m}, is defined as

range(x) = max(x) - min(x) = x_(m) - x_(1).                                 (3.4)
Table 3.4. Range, standard deviation (std), absolute average difference (AAD), median absolute differ-
ence (MAD), and interquartile range (IQR) for sepal length, sepal width, petal length, and petal width.
(All values are in centimeters.)

Measure   Sepal Length   Sepal Width   Petal Length   Petal Width
range     3.6            2.4           5.9            2.4
variance(x) = s_x^2 = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \bar{x})^2        (3.5)
The mean can be distorted by outliers, and since the variance is computed
using the mean, it is also sensitive to outliers. Indeed, the variance is particu-
larly sensitive to outliers since it uses the squared difference between the mean
and other values. As a result, more robust estimates of the spread of a set
of values are often used. Following are the definitions of three such measures:
the absolute average deviation (AAD), the median absolute deviation
(MAD), and the interquartile range (IQR). Table 3.4 shows these measures
for the Iris data set.
AAD(x) = \frac{1}{m} \sum_{i=1}^{m} |x_i - \bar{x}|                         (3.6)

MAD(x) = median\bigl(\{ |x_1 - \bar{x}|, ..., |x_m - \bar{x}| \}\bigr)      (3.7)

IQR(x) = x_{75\%} - x_{25\%}                                                (3.8)
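A sketch computing these spread measures follows, applied to a small made-up sample to show how an outlier inflates the range and variance far more than the robust measures; the helper name is illustrative.

```python
import numpy as np

def spread_measures(x):
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    return {
        "range": x.max() - x.min(),
        "variance": np.sum((x - mean) ** 2) / (len(x) - 1),
        "AAD": np.mean(np.abs(x - mean)),               # average absolute deviation
        "MAD": np.median(np.abs(x - mean)),             # median absolute deviation
        "IQR": np.percentile(x, 75) - np.percentile(x, 25),
    }

print(spread_measures([2.0, 3.0, 4.0, 5.0, 6.0]))
print(spread_measures([2.0, 3.0, 4.0, 5.0, 60.0]))   # outlier: range and variance explode,
                                                     # MAD and IQR change far less
```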
covariance(x_i, x_j) = s_{ij} = \frac{1}{m-1} \sum_{k=1}^{m} (x_{ki} - \bar{x}_i)(x_{kj} - \bar{x}_j),    (3.11)

where x_{ki} and x_{kj} are the values of the ith and jth attributes for the kth object.
Notice that covariance(x_i, x_i) = variance(x_i). Thus, the covariance matrix has
the variances of the attributes along the diagonal.
The covariance of two attributes is a measure of the degree to which two
attributes vary together and depends on the magnitudes of the variables. A
value near 0 indicates that two attributes do not have a (linear) relationship,
but it is not possible to judge the degree of relationship between two variables
by looking only at the value of the covariance. Because the correlation of two
attributes immediately gives an indication of how strongly two attributes are
(linearly) related, correlation is preferred to covariance for data exploration.
(Also see the discussion of correlation in Section 2.4.5.) The ijth entry of the
correlation matrix R is the correlation between the ith and jth attributes
of the data. If x_i and x_j are the ith and jth attributes, then

r_{ij} = correlation(x_i, x_j) = \frac{covariance(x_i, x_j)}{s_i s_j},      (3.12)
where s_i and s_j are the standard deviations of x_i and x_j, respectively. The
diagonal entries of R are correlation(x_i, x_i) = 1, while the other entries are
between -1 and 1. It is also useful to consider correlation matrices that contain
the pairwise correlations of objects instead of attributes.
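A sketch of computing the covariance matrix S (Equation 3.11) and correlation matrix R (Equation 3.12) follows; the synthetic three-attribute data set and seed are assumptions, and numpy's built-ins use the same definitions as the equations above.

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=100)    # strongly related to x1
x3 = rng.normal(size=100)                     # unrelated attribute
data = np.column_stack([x1, x2, x3])          # rows = objects, columns = attributes

S = np.cov(data, rowvar=False)                # covariance matrix (Equation 3.11)
R = np.corrcoef(data, rowvar=False)           # correlation matrix (Equation 3.12)

print(np.round(S, 2))   # variances of the attributes along the diagonal
print(np.round(R, 2))   # 1s on the diagonal; R[0, 1] close to 1, R[0, 2] near 0
```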
3.3 Visualization
Data visualization is the display of information in a graphic or tabular format.
Successful visualization requires that the data (information) be converted into
a visual format so that the characteristics of the data and the relationships
among data items or attributes can be analyzed or reported. The goal of
visualization is the interpretation of the visualized information by a person
and the formation of a mental model of the information.
In everyday life, visual techniques such as graphs and tables are often the
preferred approach used to explain the weather, the economy, and the results
of political elections. Likewise, while algorithmic or mathematical approaches
are often emphasized in most technical disciplines-data mining included-
visual techniques can play a key role in data analysis. In fact, sometimes the
use of visualization techniques in data mining is referred to as visual data
mining.
Figure 3.2. Sea Surface Temperature (SST) for July, 1982.
From this plot, it is easy to see that the ocean temperature is highest at the
equator and lowest at the poles.
Another general motivation for visualization is to make use of the domain
knowledge that is "locked up in people's heads." While the use of domain
knowledge is an important task in data mining, it is often difficult or impossible
to fully utilize such knowledge in statistical or algorithmic tools. In some cases,
an analysis can be performed using non-visual tools, and then the results
presented visually for evaluation by the domain expert. In other cases, having
a domain specialist examine visualizations of the data may be the best way
of finding patterns of interest since, by using domain knowledge, a person can
often quickly eliminate many uninteresting patterns and direct the focus to
the patterns that are important.
relative positions of the objects. Likewise, if there are two or three continuous
attributes that are taken as the coordinates of the data points, then the result-
ing plot often gives considerable insight into the relationships of the attributes
and the data points because data points that are visually close to each other
have similar values for their attributes.
In general, it is difficult to ensure that a mapping of objects and attributes
will result in the relationships being mapped to easily observed relationships
among graphical elements. Indeed, this is one of the most challenging aspects
of visualization. In any given set of data, there are many implicit relationships,
and hence, a key challenge of visualization is to choose a technique that makes
the relationships of interest easily observable.
Arrangement
Table 3.5. A table of nine objects (rows) with six binary attributes (columns).

     1  2  3  4  5  6
1    0  1  0  1  1  0
2    1  0  1  0  0  1
3    0  1  0  1  1  0
4    1  0  1  0  0  1
5    0  1  0  1  1  0
6    1  0  1  0  0  1
7    0  1  0  1  1  0
8    1  0  1  0  0  1
9    0  1  0  1  1  0

Table 3.6. A table of nine objects (rows) with six binary attributes (columns) permuted so that the
relationships of the rows and columns are clear.

     6  1  3  2  5  4
4    1  1  1  0  0  0
2    1  1  1  0  0  0
6    1  1  1  0  0  0
8    1  1  1  0  0  0
5    0  0  0  1  1  1
3    0  0  0  1  1  1
9    0  0  0  1  1  1
1    0  0  0  1  1  1
7    0  0  0  1  1  1
Selection
3.3.3 Techniques
Visualization techniques are often specialized to the type of data being ana-
lyzed. Indeed, new visualization techniques and approaches, as well as special-
ized variations of existing approaches, are being continuously created, typically
in response to new kinds of data and visualization tasks.
Despite this specialization and the ad hoc nature of visualization, there are
some generic ways to classify visualization techniques. One such classification
is based on the number of attributes involved (1, 2, 3, or many) or whether the
data has some special characteristic, such as a hierarchical or graph structure.
Visualization methods can also be classified according to the type of attributes
involved. Yet another classification is based on the type of application: scien-
tific, statistical, or information visualization. The following discussion will use
three categories: visualization of a small number of attributes, visualization of
data with spatial and/or temporal attributes, and visualization of data with
many attributes.
Most of the visualization techniques discussed here can be found in a wide
variety of mathematical and statistical packages, some of which are freely
available. There are also a number of data sets that are freely available on the
World Wide Web. Readers are encouraged to try these visualization techniques
as they proceed through the following sections.
Stem and Leaf Plots Stem and leaf plots can be used to provide insight
into the distribution of one-dimensional integer or continuous data. (We will
assume integer data initially, and then explain how stem and leaf plots can be
applied to continuous data.) For the simplest type of stem and leaf plot, we
split the values into groups, where each group contains those values that are
the same except for the last digit. Each group becomes a stem, while the last
digits of a group are the leaves. Hence, if the values are two-digit integers,
e.g., 35, 36, 42, and 51, then the stems will be the high-order digits, e.g., 3,
4, and 5, while the leaves are the low-order digits, e.g., 1, 2, 5, and 6. By
plotting the stems vertically and leaves horizontally, we can provide a visual
representation of the distribution of the data.
Example 3.7. The set of integers shown in Figure 3.4 is the sepal length in
centimeters (multiplied by 10 to make the values integers) taken from the Iris
data set. For convenience, the values have also been sorted.
The stem and leaf plot for this data is shown in Figure 3.5. Each number in
Figure 3.4 is first put into one of the vertical groups-4, 5, 6, or 7-according
to its ten's digit. Its last digit is then placed to the right of the colon. Often,
especially if the amount of data is larger, it is desirable to split the stems.
For example, instead of placing all values whose ten's digit is 4 in the same
"bucket," the stem 4 is repeated twice; all values 40-44 are put in the bucket
corresponding to the first stem and all values 45-49 are put in the bucket
corresponding to the second stem. This approach is shown in the stem and
leaf plot of Figure 3.6. Other variations are also possible. I
Histograms Stem and leaf plots are a type of histogram, a plot that dis-
plays the distribution of values for attributes by dividing the possible values
into bins and showing the number of objects that fall into each bin. For cate-
gorical data, each value is a bin. If this results in too many values, then values
are combined in some way. For continuous attributes, the range of values is di-
vided into bins-typically, but not necessarily, of equal width-and the values
in each bin are counted.
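A minimal sketch of computing histogram bin counts follows; the equal-width bins, the bin count, and the small made-up sample of sepal lengths (in centimeters) are assumptions that mirror, but do not reproduce, Figures 3.7 and 3.8.

```python
import numpy as np

# A small made-up sample of sepal lengths (cm); the full data of Figure 3.4
# could be used here instead.
values = np.array([4.3, 4.9, 5.0, 5.1, 5.1, 5.8, 6.0, 6.3, 6.7, 7.9])

counts, bin_edges = np.histogram(values, bins=5)   # 5 equal-width bins
for count, left, right in zip(counts, bin_edges[:-1], bin_edges[1:]):
    print(f"{left:.2f} to {right:.2f}: {count}")   # number of objects in each bin
```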
43 44 44 44 45 46 46 46 46 47 47 48 48 48 48 48 49 49 49 49 49 49 50
50 50 50 50 50 50 50 50 50 51 51 51 51 51 51 51 51 51 52 52 52 52 53
54 54 54 54 54 54 55 55 55 55 55 55 55 56 56 56 56 56 56 57 57 57 57
57 57 57 57 58 58 58 58 58 58 58 59 59 59 60 60 60 60 60 60 61 61 61
61 61 61 62 62 62 62 63 63 63 63 63 63 63 63 63 64 64 64 64 64 64 64
65 65 65 65 65 66 66 67 67 67 67 67 67 67 67 68 68 68 69 69 69 69 70
71 72 72 72 73 74 76 77 77 77 77 79
Figure 3.4. Sepal length data from the Iris data set.
4 : 3444566667788888999999
5 : 0000000000111111111222234444445555555666666777777778888888999
6 : 000000111111222233333333344444445555566777777778889999
7 : 0122234677779

Figure 3.5. Stem and leaf plot for the sepal length from the Iris data set.
4 : 3444
4 : 566667788888999999
5 : 000000000011111111122223444444
5 : 5555555666666777777778888888999
6 : 00000011111122223333333334444444
6 : 5555566777777778889999
7 : 0122234
7 : 677779

Figure 3.6. Stem and leaf plot for the sepal length from the Iris data set when buckets corresponding
to digits are split.
Once the counts are available for each bin, a bar plot is constructed such
that each bin is represented by one bar and the area of each bar is proportional
to the number of values (objects) that fall into the corresponding range. If all
intervals are of equal width, then all bars are the same width and the height
of a bar is proportional to the number of values in the corresponding bin.
Example 3.8. Figure 3.7 shows histograms (with 10 bins) for sepal length,
sepal width, petal length, and petal width. Since the shape of a histogram
can depend on the number of bins, histograms for the same data, but with 20
bins, are shown in Figure 3.8.
Figure 3.7. Histograms of four Iris attributes (10 bins). (a) Sepal length. (b) Sepal width. (c) Petal length. (d) Petal width.
Figure 3.8. Histograms of four Iris attributes (20 bins). (a) Sepal length. (b) Sepal width. (c) Petal length. (d) Petal width.
If relative frequencies are plotted instead of counts, the only difference is a change in scale of the y axis, and the shape of the histogram does not change.
Another common variation, especially for unordered categorical data, is the
Pareto histogram, which is the same as a normal histogram except that the
categories are sorted by count so that the count is decreasing from left to right.
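As a hedged sketch of the binning described above, the following Python fragment computes and draws a histogram of sepal length; NumPy, Matplotlib, and scikit-learn's copy of the Iris data are assumptions of ours and are not the tools used to produce the figures in this chapter.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
sepal_length = iris.data[:, 0]

# np.histogram divides the range of values into equal-width bins and
# counts the number of objects that fall into each bin.
counts, bin_edges = np.histogram(sepal_length, bins=10)
print(counts)
print(bin_edges)

# The same counts drawn as a bar plot; changing bins=10 to bins=20
# changes the shape of the histogram, as in Figures 3.7 and 3.8.
plt.hist(sepal_length, bins=10)
plt.xlabel("Sepal length (cm)")
plt.ylabel("Count")
plt.show()
```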
Figure 3.9. Two-dimensional histogram of petal length and width in the Iris data set.
Box Plots Box plots are another method for showing the distribution of the
values of a single numerical attribute. Figure 3.10 shows a labeled box plot for
sepal length. The lower and upper ends of the box indicate the 25th and 75th
percentiles, respectively, while the line inside the box indicates the value of the
50th percentile. The top and bottom lines of the tails indicate the 10th and
90th percentiles. Outliers are shown by "+" marks. Box plots are relatively
compact, and thus, many of them can be shown on the same plot. Simplified
versions of the box plot, which take less space, can also be used.
Example 3.10. The box plots for the first four attributes of the Iris data
set are shown in Figure 3.11. Box plots can also be used to compare how
attributes vary between different classes of objects, as shown in Figure 3.12.
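A minimal Python sketch of a box plot of the Iris attributes follows; Matplotlib and scikit-learn are assumed dependencies, and the whis argument is set only so that the whiskers follow the 10th/90th percentile convention described above (Matplotlib's default whiskers are based on the interquartile range).

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()

# One box per attribute, drawn side by side as in Figure 3.11.
plt.boxplot(iris.data, whis=(10, 90))
plt.xticks(range(1, 5), iris.feature_names, rotation=20)
plt.ylabel("Value (cm)")
plt.show()
```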
Pie Chart A pie chart is similar to a histogram, but is typically used with
categorical attributes that have a relatively small number of values. Instead of
showing the relative frequency of different values with the area or height of a
bar, as in a histogram, a pie chart uses the relative area of a circle to indicate
relative frequency. Although pie charts are common in popular articles, they
Figure 3.10. Description of box plot for sepal length, annotated with the outlier, 90th percentile, 75th percentile, 50th percentile, 25th percentile, and 10th percentile.
Figure 3.11. Box plot for Iris attributes.
Figure 3.12. Box plots of attributes by Iris species.
are used less frequently in technical publications because the size of relative
areas can be hard to judge. Histograms are preferred for technical work.
Example 3.11. Figure 3.13 displays a pie chart that shows the distribution
of Iris species in the Iris data set. In this case, all three flower types have the
same frequency.
Figure 3.13. Distribution of the types of Iris flowers.
A cumulative distribution function (CDF) shows, for each value of an attribute,
the probability that a point is less than that value. For each observed value, an
empirical cumulative distribution function (ECDF) shows the fraction
of points that are less than this value. Since the number of points is finite, the
empirical cumulative distribution function is a step function.
Example 3.12. Figure 3.14 shows the ECDFs of the Iris attributes. The
percentiles of an attribute provide similar information. Figure 3.15 shows the
percentile plots of the four continuous attributes of the Iris data set from
Table 3.2. The reader should compare these figures with the histograms given
in Figures 3.7 and 3.8.
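A short Python sketch of an ECDF follows; NumPy, Matplotlib, and scikit-learn are assumed and were not used for the book's figures.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
petal_length = iris.data[:, 2]

# The ECDF at a value is the fraction of observed points at or below it;
# because the sample is finite, it is a step function.
x = np.sort(petal_length)
y = np.arange(1, len(x) + 1) / len(x)

plt.step(x, y, where="post")
plt.xlabel("Petal length (cm)")
plt.ylabel("Cumulative fraction of points")
plt.show()
```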
Scatter Plots Most people are familiar with scatter plots to some extent,
and they were used in Section 2.4.5 to illustrate linear correlation. Each data
object is plotted as a point in the plane using the values of the two attributes
as x and y coordinates. It is assumed that the attributes are either integer- or
real-valued.
Example 3.13. Figure 3.16 shows a scatter plot for each pair of attributes
of the Iris data set. The different species of Iris are indicated by different
markers. The arrangement of the scatter plots of pairs of attributes in this
type of tabular format, which is known as a scatter plot matrix, provides
an organized way to examine a number of scatter plots simultaneously.
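The following Python sketch builds a simple scatter plot matrix of the Iris attributes, with one marker per species; Matplotlib and scikit-learn are assumptions of ours, and dedicated functions for this plot exist in several plotting libraries.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X, y, names = iris.data, iris.target, iris.feature_names
n = X.shape[1]

# A grid of n x n axes; cell (i, j) plots attribute j against attribute i,
# with a different marker for each Iris species, as in Figure 3.16.
fig, axes = plt.subplots(n, n, figsize=(8, 8))
markers = ["o", "s", "^"]
for i in range(n):
    for j in range(n):
        ax = axes[i, j]
        for cls, m in zip(range(3), markers):
            ax.scatter(X[y == cls, j], X[y == cls, i], marker=m, s=10)
        if i == n - 1:
            ax.set_xlabel(names[j], fontsize=8)
        if j == 0:
            ax.set_ylabel(names[i], fontsize=8)
plt.show()
```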
Figure 3.14. Empirical CDFs of four Iris attributes.
Figure 3.15. Percentile plots for sepal length, sepal width, petal length, and petal width.
Figure 3.16. Matrix of scatter plots for the four attributes of the Iris data set (sepal length, sepal width, petal length, and petal width).
There are two main uses for scatter plots. First, they graphically show
the relationship between two attributes. In Section 2.4.5, we saw how scatter
plots could be used to judge the degreeof linear correlation. (SeeFigure 2.17.)
Scatter plots can also be used to detect non-linear relationships, either directly
or by using a scatter plot of the transformed attributes.
Second, when class labels are available, they can be used to investigate the
degree to which two attributes separate the classes. If it is possible to draw a
line (or a more complicated curve) that divides the plane defined by the two
attributes into separate regions that contain mostly objects of one class, then
it is possible to construct an accurate classifier based on the specified pair of
attributes. If not, then more attributes or more sophisticated methods are
neededto build a classifier. In Figure 3.16, many of the pairs of attributes (for
example, petal width and petal length) provide a moderate separation of the
Iris species.
Example 3.14. There are two separate approaches for displaying three at-
tributes of a data set with a scatter plot. First, each object can be displayed
according to the values of three, instead of two attributes. Figure 3.17 shows a
three-dimensional scatter plot for three attributes in the Iris data set. Second,
one of the attributes can be associated with some characteristic of the marker,
such as its size, color, or shape. Figure 3.18 shows a plot of three attributes
of the Iris data set, where one of the attributes, sepal width, is mapped to the
size of the marker.
Figure 3.17. Three-dimensional scatter plot of sepal width, sepal length, and petal width.
Figure 3.18. Scatter plot of petal length versus petal width, with the size of the marker indicating sepal width.
Figure 3.19. Contour plot of SST for December 1998.
made at various points in time. In addition, data may have only a temporal
component, such as time series data that gives the daily prices of stocks.
Surface Plots Like contour plots, surface plots use two attributes for the
x and y coordinates. The third attribute is used to indicate the height above
the plane defined by the first two attributes. While such graphs can be useful,
they require that a value of the third attribute be defined for all combinations
of values for the first two attributes, at least over some range. Also, if the
surface is too irregular, then it can be difficult to see all the information,
unless the plot is viewed interactively. Thus, surface plots are often used to
describe mathematical functions or physical surfaces that vary in a relatively
smooth manner.
Figure 3.20. Density of a set of 12 points.
Example 3.16. Figure 3.20 shows a surface plot of the density around a set
of 12 points. This example is further discussed in Section 9.3.3.
Vector Field Plots In some data, a characteristic may have both a mag-
nitude and a direction associated with it. For example, consider the flow of a
substance or the change of density with location. In these situations, it can be
useful to have a plot that displays both direction and magnitude. This type
of plot is known as a vector plot.
Example 3.17. Figure 3.21 shows a contour plot of the density of the two
smaller density peaks from Figure 3.20(b), annotated with the density gradient
vectors.
Figure 3.21. Vector plot of the gradient (change) in density for the bottom two density peaks of Figure 3.20.
of plots that we have described so far. However, separate "slices" of the data
can be displayed by showing a set of plots, one for each month. By examining
the change in a particular area from one month to another, it is possible to
notice changesthat occur, including those that may be due to seasonalfactors.
Example 3.18. The underlying data set for this example consists of the av-
erage monthly sea level pressure (SLP) from 1982 to 1999 on a 2.5° by 2.5°
latitude-longitude grid. The twelve monthly plots of pressure for one year are
shown in Figure 3.22. In this example, we are interested in slices for a par-
ticular month in the year 1982. More generally, we can consider slices of the
data along any arbitrary dimension.
Figure 3.22. Monthly plots of sea level pressure over the 12 months of 1982.
This section considers visualization techniques that can display more than the
handful of dimensions that can be observed with the techniques just discussed.
However, even these techniques are somewhat limited in that they only show
some aspects of the data.
Example 3.19. Figure 3.23 shows the standardized data matrix for the Iris
data set. The first 50 rows represent Iris flowers of the species Setosa, the next
50 Versicolour, and the last 50 Virginica. The Setosa flowers have petal width
and length well below the average, while the Versicolour flowers have petal
width and length around average. The Virginica flowers have petal width and
length above average.
It can also be useful to look for structure in the plot of a proximity matrix
for a set of data objects. Again, it is useful to sort the rows and columns of
the similarity matrix (when class labels are known) so that all the objects of a
class are together. This allows a visual evaluation of the cohesivenessof each
class and its separation from other classes.
Example 3.20. Figure 3.24 shows the correlation matrix for the Iris data
set. Again, the rows and columns are organized so that all the flowers of a
particular species are together. The flowers in each group are most similar
to each other, but Versicolour and Virginica are more similar to one another
than to Setosa.
If class labels are not known, various techniques (matrix reordering and
seriation) can be used to rearrange the rows and columns of the similarity
matrix so that groups of highly similar objects and attributes are together
and can be visually identified. Effectively, this is a simple kind of clustering.
See Section 8.5.3 for a discussion of how a proximity matrix can be used to
investigate the cluster structure of data.
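The following Python sketch reproduces the idea of Example 3.20: sort the Iris flowers by species and display the object-to-object correlation matrix as an image. NumPy, Matplotlib, and scikit-learn are assumed dependencies.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target

# Sort the objects by class label so that all flowers of a species are
# contiguous, then display the flower-to-flower correlation matrix.
order = np.argsort(y)
X_sorted = X[order]

# np.corrcoef treats each row as a variable, so the result is a
# 150 x 150 matrix of correlations between flowers.
corr = np.corrcoef(X_sorted)

plt.imshow(corr)
plt.colorbar(label="Correlation")
plt.xlabel("Flower (sorted by species)")
plt.ylabel("Flower (sorted by species)")
plt.show()
```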
Figure 3.27. Star coordinates graph and Chernoff face of the 150th flower of the Iris data set. (a) Star graph of Iris 150. (b) Chernoff face of Iris 150.
Figure 3.28. Plot of 15 Iris flowers using star coordinates.
Figure 3.29. Chernoff faces for the same 15 flowers of the Iris data set.
Despite the visual appeal of these sorts of diagrams, they do not scale well,
and thus, they are of limited use for many data mining problems. Nonetheless,
they may still be of use as a means to quickly compare small sets of objects
that have been selected by other techniques.
ACCENT Principles The following are the ACCENT principles for ef-
fective graphical display put forth by D. A. Burn (as adapted by Michael
Friendly):
Clarity Ability to visually distinguish all the elements of a graph. Are the
most important elements or relations visually most prominent?
Necessity The need for the graph, and the graphical elements. Is the graph
a more useful way to represent the data than alternatives (table, text)?
Are all the graph elements necessary to convey the relations?
Graphical excellence is that which gives to the viewer the greatest num-
ber of ideas in the shortest time with the least ink in the smallest space.
Most data sets can be represented as a table, where each row is an object and
each column is an attribute. In many cases, it is also possible to view the data
as a multidimensional array. We illustrate this approach by representing the
Iris data set as a multidimensional array.
Table 3.7 was created by discretizing the petal length and petal width
attributes to have values of low, medium, and high and then counting the
number of flowers from the Iris data set that have particular combinations
of petal width, petal length, and species type. (For petal width, the cat-
egories low, medium, and high correspond to the intervals [0, 0.75), [0.75,
1.75), and [1.75, ∞), respectively. For petal length, the categories low, medium,
and high correspond to the intervals [0, 2.5), [2.5, 5), and [5, ∞), respectively.)
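A short Python sketch of this discretization and counting follows; pandas and scikit-learn are assumed dependencies (the scikit-learn species names differ slightly in spelling from those used in the book).

```python
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
petal_length = iris.data[:, 2]
petal_width = iris.data[:, 3]
species = iris.target_names[iris.target]

# Discretize petal width and length into the intervals given above.
width_cat = pd.cut(petal_width, bins=[0, 0.75, 1.75, np.inf],
                   labels=["low", "medium", "high"], right=False)
length_cat = pd.cut(petal_length, bins=[0, 2.5, 5, np.inf],
                    labels=["low", "medium", "high"], right=False)

# Count the flowers for each combination of petal length, petal width,
# and species type, as in Table 3.7.
counts = pd.crosstab(index=[length_cat, width_cat], columns=species,
                     rownames=["petal length", "petal width"],
                     colnames=["species"])
print(counts)
```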
Table 3.7. Number of flowers having a particular combination of petal width, petal length, and species type.
Petal Length | Petal Width | Species Type | Count
low          | low         | Setosa       | 40
low          | medium      | Setosa       | 2
medium       | low         | Setosa       | 2
medium       | medium      | Versicolour  | 43
Figure 3.30. A multidimensional data representation for the Iris data set.
Table 3.8. Cross-tabulation of flowers according to petal length and width for flowers of the Setosa species.
Table 3.9. Cross-tabulation of flowers according to petal length and width for flowers of the Versicolour species.
                 width
                 low  medium  high
length  low      0    0       0
        medium   0    43      3
        high     0    2       2
Table 3.10. Cross-tabulation of flowers according to petal length and width for flowers of the Virginica species.
Table 3.11. Sales revenue (in dollars) of products for various locations and times.
Product ID | Location    | Date          | Revenue
...        | ...         | ...           | ...
1          | Minneapolis | Oct. 18, 2004 | $250
1          | Chicago     | Oct. 18, 2004 | $79
...        | ...         | ...           | ...
Figure 3.31. Multidimensional data representation for sales data.
Table 3.12. Totals that result from summing over all locations for a fixed time and product.
Table 3.13. Table 3.12 with marginal totals.
are three sets of totals that result from summing over only one dimension and
each set of totals can be displayed as a two-dimensional table.
If we sum over two dimensions (perhaps starting with one of the arrays
of totals obtained by summing over one dimension), then we will obtain a
multidimensional array of totals with $n - 2$ dimensions. There will be $\binom{n}{2}$
distinct arrays of such totals. For the sales example, there will be $\binom{3}{2} = 3$
arrays of totals that result from summing over location and product, location
and time, or product and time. In general, summing over $k$ dimensions yields
$\binom{n}{k}$ arrays of totals, each with dimension $n - k$.
A multidimensional representation of the data, together with all possible
totals (aggregates), is known as a data cube. Despite the name, the size of
each dimension-the number of attribute values-does not need to be equal.
Also, a data cube may have either more or fewer than three dimensions. More
importantly, a data cube is a generalization of what is known in statistical
terminology as a cross-tabulation. If marginal totals were added, Tables
3.8, 3.9, or 3.10 would be typical examples of cross tabulations.
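As a hedged illustration of these aggregates, the following Python sketch sums a small, hypothetical three-dimensional sales cube over subsets of its dimensions; NumPy is an assumed dependency and the data are randomly generated, not the book's sales data.

```python
import numpy as np

# A hypothetical data cube with dimensions (location, product, date):
# 2 locations x 3 products x 4 dates of sales revenue.
rng = np.random.default_rng(0)
cube = rng.integers(0, 100, size=(2, 3, 4))

# Summing over one dimension produces an (n-1)-dimensional array of totals;
# there are C(3,1) = 3 such arrays, one per dimension summed over.
totals_over_location = cube.sum(axis=0)   # shape (3, 4)
totals_over_product = cube.sum(axis=1)    # shape (2, 4)
totals_over_date = cube.sum(axis=2)       # shape (2, 3)

# Summing over two dimensions gives C(3,2) = 3 one-dimensional arrays,
# and summing over all three gives the grand total.
totals_by_date = cube.sum(axis=(0, 1))    # shape (4,)
grand_total = cube.sum()
print(totals_over_location, totals_by_date, grand_total, sep="\n")
```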
[99], SPSS [102], R [96], and S-PLUS [98]), and mathematics software (MAT-
LAB [94] and Mathematica [93]). Most of the graphics in this chapter were
generated using MATLAB. The statistics package R is freely available as an
open source software package from the R project.
The literature on visualization is extensive, covering many fields and many
decades. One of the classics of the field is the book by Tufte [103]. The book
by Spence [101], which strongly influenced the visualization portion of this
chapter, is a useful reference for information visualization-both principles and
techniques. This book also provides a thorough discussion of many dynamic
visualization techniques that were not covered in this chapter. Two other
books on visualization that may also be of interest are those by Card et al.
[87] and Fayyad et al. [89].
Finally, there is a great deal of information available about data visualiza-
tion on the World Wide Web. Since Web sites come and go frequently, the best
strategy is a search using "information visualization," "data visualization," or
"statistical graphics." However, we do want to single out for attention "The
Gallery of Data Visualization," by Friendly [90]. The ACCENT Principles for
effective graphical display as stated in this chapter can be found there, or as
originally presented in the article by Burn [86].
There are a variety of graphical techniques that can be used to explore
whether the distribution of the data is Gaussian or some other specified dis-
tribution. Also, there are plots that display whether the observed values are
statistically significant in some sense. We have not covered any of these tech-
niques here and refer the reader to the previously mentioned statistical and
mathematical packages.
Multidimensional analysis has been around in a variety of forms for some
time. One of the original papers was a white paper by Codd [88], the father
of relational databases. The data cube was introduced by Gray et al. [91],
who described various operations for creating and manipulating data cubes
within a relational database framework. A comparison of statistical databases
and OLAP is given by Shoshani [100]. Specific information on OLAP can
be found in documentation from database vendors and many popular books.
Many database textbooks also have general discussions of OLAP, often in the
context of data warehousing. For example, see the text by Ramakrishnan and
Gehrke [97].
Bibliography
[86] D. A. Burn. Designing Effective Statistical Graphs. In C. R. Rao, editor, Handbook of
Statistics 9. Elsevier/North-Holland, Amsterdam, The Netherlands, September 1993.
[87] S. K. Card, J. D. MacKinlay, and B. Shneiderman, editors. Readings in Information
Visualization: Using Vision to Think. Morgan Kaufmann Publishers, San Francisco,
CA, January 1999.
[88] E. F. Codd, S. B. Codd, and C. T. Smalley. Providing OLAP (On-line Analytical
Processing) to User-Analysts: An IT Mandate. White Paper, E. F. Codd and Associates,
1993.
[89] U. M. Fayyad, G. G. Grinstein, and A. Wierse, editors. Information Visualization in
Data Mining and Knowledge Discovery. Morgan Kaufmann Publishers, San Francisco,
CA, September 2001.
[90] M. Friendly. Gallery of Data Visualization. http://www.math.yorku.ca/SCS/Gallery/,
2005.
[91] J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F. Pellow,
and H. Pirahesh. Data Cube: A Relational Aggregation Operator Generalizing Group-
By, Cross-Tab, and Sub-Totals. Data Mining and Knowledge Discovery, 1(1):
29-53, 1997.
[92] B. W. Lindgren. Statistical Theory. CRC Press, January 1993.
[93] Mathematica 5.1. Wolfram Research, Inc. http://www.wolfram.com/, 2005.
[94] MATLAB 7.0. The MathWorks, Inc. http://www.mathworks.com, 2005.
[95] Microsoft Excel 2003. Microsoft, Inc. http://www.microsoft.com/, 2003.
[96] R: A language and environment for statistical computing and graphics. The R Project
for Statistical Computing. http://www.r-project.org/, 2005.
[97] R. Ramakrishnan and J. Gehrke. Database Management Systems. McGraw-Hill, 3rd
edition, August 2002.
[98] S-PLUS. Insightful Corporation. http://www.insightful.com, 2005.
[99] SAS: Statistical Analysis System. SAS Institute Inc. http://www.sas.com/, 2005.
[100] A. Shoshani. OLAP and statistical databases: similarities and differences. In Proc.
of the Sixteenth ACM SIGACT-SIGMOD-SIGART Symp. on Principles of Database
Systems, pages 185-196. ACM Press, 1997.
[101] R. Spence. Information Visualization. ACM Press, New York, December 2000.
[102] SPSS: Statistical Package for the Social Sciences. SPSS, Inc. http://www.spss.com/,
2005.
[103] E. R. Tufte. The Visual Display of Quantitative Information. Graphics Press, Cheshire,
CT, March 1986.
[104] J. W. Tukey. Exploratory Data Analysis. Addison-Wesley, 1977.
[105] P. Velleman and D. Hoaglin. The ABC's of EDA: Applications, Basics, and Computing
of Exploratory Data Analysis. Duxbury, 1981.
3.6 Exercises
1. Obtain one of the data sets available at the UCI Machine Learning Repository
and apply as many of the different visualization techniques described in the
chapter as possible. The bibliographic notes and book Web site provide pointers
to visualization software.
2. Identify at least two advantages and two disadvantages of using color to visually
represent information.
3. What are the arrangement issues that arise with respect to three-dimensional
plots?
4. Discuss the advantages and disadvantages of using sampling to reduce the num-
ber of data objects that need to be displayed. Would simple random sampling
(without replacement) be a good approach to sampling? Why or why not?
5. Describe how you would create visualizations to display information that de-
scribes the following types of systems.
(a) Computer networks. Be sure to include both the static aspects of the
network, such as connectivity, and the dynamic aspects, such as traffic.
(b) The distribution of specific plant and animal species around the world for
a specific moment in time.
(c) The use of computer resources, such as processor time, main memory, and
disk, for a set of benchmark database programs.
(d) The change in occupation of workers in a particular country over the last
thirty years. Assume that you have yearly information about each person
that also includes gender and level of education.
6. Describe one advantage and one disadvantage of a stem and leaf plot with
respect to a standard histogram.
7. How might you address the problem that a histogram depends on the number
and location of the bins?
8. Describe how a box plot can give information about whether the value of an
attribute is symmetrically distributed. What can you say about the symmetry
of the distributions of the attributes shown in Figure 3.11?
9. Compare sepal length, sepal width, petal length, and petal width, using Figure
3.12.
10. Comment on the use of a box plot to explore a data set with four attributes:
age, weight, height, and income.
11. Give a possible explanation as to why most of the values of petal length and
width fall in the buckets along the diagonal in Figure 3.9.
12. Use Figures 3.14 and 3.15 to identify a characteristic shared by the petal width
and petal length attributes.
13. Simple line plots, such as that displayed in Figure 2.12 on page 56, which
shows two time series, can be used to effectively display high-dimensional data.
For example, in Figure 2.12 it is easy to tell that the frequencies of the two
time series are different. What characteristic of time series allows the effective
visualization of high-dimensional data?
14. Describe the types of situations that produce sparse or dense data cubes. Illus-
trate with examples other than those used in the book.
15. How might you extend the notion of multidimensional data analysis so that the
target variable is a qualitative variable? In other words, what sorts of summary
statistics or data visualizations would be of interest?
16. Construct a data cube from Table 3.14. Is this a dense or sparse data cube? If
it is sparse, identify the cells that are empty.
Table 3.14. Fact table for Exercise 16.
Product ID | Location ID | Number Sold
1          | 1           | 10
1          | 3           | 6
2          | 1           | 5
2          | 2           | 22
Figure 4.1. Classification of galaxies. The images are from the NASA website.
Figure 4.2. Classification as the task of mapping an input attribute set x into its class label y.
4.1 Preliminaries
The input data for a classification task is a collection of records. Each record,
also known as an instance or example, is characterized by a tuple (x, y), where
x is the attribute set and y is a special attribute, designated as the class label
(also known as category or target attribute). Table 4.1 shows a sample data set
used for classifying vertebrates into one of the following categories: mammal,
bird, fish, reptile, or amphibian. The attribute set includes properties of a
vertebrate such as its body temperature, skin cover, method of reproduction,
ability to fly, and ability to live in water. Although the attributes presented
in Table 4.1 are mostly discrete, the attribute set can also contain continuous
features. The class label, on the other hand, must be a discrete attribute.
This is a key characteristic that distinguishes classification from regression,
a predictive modeling task in which y is a continuous attribute. Regression
techniques are covered in Appendix D.
Definition 4.1 (Classification). Classification is the task of learning a tar-
get function f that maps each attribute set x to one of the predefined class
labels y.
The target function is also known informally as a classification model.
A classification model is useful for the following purposes.
Table 4.1. The vertebrate data set.
Name          | Body Temperature | Skin Cover | Gives Birth | Aquatic Creature | Aerial Creature | Has Legs | Hibernates | Class Label
human         | warm-blooded     | hair       | yes         | no               | no              | yes      | no         | mammal
python        | cold-blooded     | scales     | no          | no               | no              | no       | yes        | reptile
salmon        | cold-blooded     | scales     | no          | yes              | no              | no       | no         | fish
whale         | warm-blooded     | hair       | yes         | yes              | no              | no       | no         | mammal
frog          | cold-blooded     | none       | no          | semi             | no              | yes      | yes        | amphibian
komodo dragon | cold-blooded     | scales     | no          | no               | no              | yes      | no         | reptile
bat           | warm-blooded     | hair       | yes         | no               | yes             | yes      | yes        | mammal
pigeon        | warm-blooded     | feathers   | no          | no               | yes             | yes      | no         | bird
cat           | warm-blooded     | fur        | yes         | no               | no              | yes      | no         | mammal
leopard shark | cold-blooded     | scales     | yes         | yes              | no              | no       | no         | fish
turtle        | cold-blooded     | scales     | no          | semi             | no              | yes      | no         | reptile
penguin       | warm-blooded     | feathers   | no          | semi             | no              | yes      | no         | bird
porcupine     | warm-blooded     | quills     | yes         | no               | no              | yes      | yes        | mammal
eel           | cold-blooded     | scales     | no          | yes              | no              | no       | no         | fish
salamander    | cold-blooded     | none       | no          | semi             | no              | yes      | yes        | amphibian
summarizes the data shown in Table 4.1 and explains what features define a
vertebrate as a mammal, reptile, bird, fish, or amphibian.
We can use a classification model built from the data set shown in Table 4.1
to determine the class to which the creature belongs.
turn, is a subclass of mammals) are also ignored. The remainder of this chapter
focusesonly on binary or nominal class labels.
Figure 4.3. General approach for building a classification model.
Table 4.2. Confusion matrix for a 2-class problem.
                          Predicted Class
                          Class = 1    Class = 0
Actual Class  Class = 1   f11          f10
              Class = 0   f01          f00
Most classification algorithms seek models that attain the highest accuracy, or
equivalently, the lowest error rate when applied to the test set. We will revisit
the topic of model evaluation in Section 4.5.
o A root node that has no incoming edges and zero or more outgoing
edges.
o Internal nodes, each of which has exactly one incoming edge and two
or more outgoing edges.
o Leaf or terminal nodes, each of which has exactly one incoming edge
and no outgoing edges.
In a decision tree, each leaf node is assigned a class label. The non-
terminal nodes, which include the root and other internal nodes, contain
attribute test conditions to separate records that have different characteris-
tics. For example, the root node shown in Figure 4.4 uses the attribute Body
Figure 4.4. A decision tree for the mammal classification problem.
Figure 4.5. Classifying an unlabeled vertebrate. The dashed lines represent the outcomes of applying various attribute test conditions on the unlabeled vertebrate. The vertebrate is eventually assigned to the Non-mammal class.
optimum decisions about which attribute to use for partitioning the data. One
such algorithm is Hunt's algorithm, which is the basis of many existing de-
cision tree induction algorithms, including ID3, C4.5, and CART. This section
presents a high-level discussion of Hunt's algorithm and illustrates some of its
design issues.
Hunt's Algorithm
Step 1: If all the records in D_t belong to the same class y_t, then t is a leaf
node labeled as y_t.
Step 2: If D_t contains records that belong to more than one class, an at-
tribute test condition is selected to partition the records into smaller
subsets. A child node is created for each outcome of the test condi-
tion and the records in D_t are distributed to the children based on the
outcomes. The algorithm is then recursively applied to each child node.
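A minimal Python sketch of these two steps follows for categorical attributes. The toy records, the dictionary-based tree representation, and the policy of simply taking the first remaining attribute (rather than an impurity-based choice, which the following sections develop) are our own simplifications.

```python
from collections import Counter

def hunt(records, labels, attributes):
    """Recursive sketch of Hunt's algorithm; leaves are class labels."""
    # Step 1: if all records belong to the same class, create a leaf node.
    if len(set(labels)) == 1:
        return labels[0]
    # If no attributes remain, label the leaf with the majority class.
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    # Step 2: pick an attribute test condition (here, the first remaining
    # attribute), split the records by its outcomes, and recurse.
    attr, remaining = attributes[0], attributes[1:]
    node = {"attribute": attr, "children": {}}
    for value in set(r[attr] for r in records):
        subset = [(r, l) for r, l in zip(records, labels) if r[attr] == value]
        child_records = [r for r, _ in subset]
        child_labels = [l for _, l in subset]
        node["children"][value] = hunt(child_records, child_labels, remaining)
    return node

# Hypothetical toy data: each record is a dict of attribute values.
records = [{"body_temp": "warm", "gives_birth": "yes"},
           {"body_temp": "warm", "gives_birth": "no"},
           {"body_temp": "cold", "gives_birth": "no"}]
labels = ["mammal", "non-mammal", "non-mammal"]
print(hunt(records, labels, ["body_temp", "gives_birth"]))
```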
Figure 4.6. Training set for predicting borrowers who will default on loan payments.
Figure 4.7. Hunt's algorithm for inducing decision trees.
A learning algorithm for inducing decision trees must address the following
two issues.
Figure 4.8. Test condition for binary attributes.
Figure 4.9. Test conditions for nominal attributes. (a) Multiway split. (b) Binary split by grouping attribute values.
Nominal Attributes Since a nominal attribute can have many values, its
test condition can be expressed in two ways, as shown in Figure 4.9. For
a multiway split (Figure 4.9(a)), the number of outcomes depends on the
number of distinct values for the corresponding attribute. For example, if
an attribute such as marital status has three distinct values-single, married,
or divorced-its test condition will produce a three-way split. On the other
hand, some decision tree algorithms, such as CART, produce only binary splits
by considering all $2^{k-1} - 1$ ways of creating a binary partition of k attribute
values. Figure 4.9(b) illustrates three different ways of grouping the attribute
values for marital status into two subsets.
Figure 4.10. Different ways of grouping ordinal attribute values.
the same partition while Medium and Extra Large are combined into another
partition.
Figure 4.11. Test condition for continuous attributes.
Figure 4.12. Multiway versus binary splits.
$$\text{Entropy}(t) = -\sum_{i=0}^{c-1} p(i|t)\,\log_2 p(i|t), \qquad (4.3)$$
$$\text{Gini}(t) = 1 - \sum_{i=0}^{c-1} [p(i|t)]^2, \qquad (4.4)$$
$$\text{Classification error}(t) = 1 - \max_{i}\,[p(i|t)], \qquad (4.5)$$
where $c$ is the number of classes and $0 \log_2 0 = 0$ in entropy calculations.
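The following Python functions are a direct transcription of Equations 4.3-4.5; NumPy is an assumed dependency.

```python
import numpy as np

def class_probabilities(labels):
    """Fractions p(i|t) of records belonging to each class at a node."""
    _, counts = np.unique(labels, return_counts=True)
    return counts / counts.sum()

def entropy(labels):
    p = class_probabilities(labels)     # zero-count classes never appear,
    return -np.sum(p * np.log2(p))      # so 0 log 0 terms do not arise

def gini(labels):
    p = class_probabilities(labels)
    return 1.0 - np.sum(p ** 2)

def classification_error(labels):
    p = class_probabilities(labels)
    return 1.0 - p.max()

# A node with an equal number of records from two classes has maximum
# impurity: entropy = 1, Gini = 0.5, classification error = 0.5.
node = [0, 0, 0, 1, 1, 1]
print(entropy(node), gini(node), classification_error(node))
```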
Figure 4.13. Comparison among the impurity measures for binary classification problems.
The preceding examples, along with Figure 4.13, illustrate the consistency
among different impurity measures. Based on these calculations, node N1 has
the lowest impurity value, followed by N2 and N3. Despite their consistency,
the attribute chosen as the test condition may vary depending on the choice
of impurity measure, as will be shown in Exercise 3 on page 198.
To determine how well a test condition performs, we need to compare the
degree of impurity of the parent node (before splitting) with the degree of
impurity of the child nodes (after splitting). The larger their difference, the
better the test condition. The gain, A, is a criterion that can be used to
determine the goodnessof a split:
$$\Delta = I(\text{parent}) - \sum_{j=1}^{k} \frac{N(v_j)}{N}\, I(v_j), \qquad (4.6)$$
where $I(\cdot)$ is the impurity measure of a given node, $N$ is the total number of
records at the parent node, $k$ is the number of attribute values, and $N(v_j)$
is the number of records associated with the child node $v_j$. Decision tree
induction algorithms often choose a test condition that maximizes the gain
$\Delta$. Since $I(\text{parent})$ is the same for all test conditions, maximizing the gain is
equivalent to minimizing the weighted average impurity measures of the child
nodes. Finally, when entropy is used as the impurity measure in Equation 4.6,
the difference in entropy is known as the information gain, $\Delta_{\text{info}}$.
Consider the diagram shown in Figure 4.14. Suppose there are two ways to
split the data into smaller subsets. Before splitting, the Gini index is 0.5 since
there are an equal number of records from both classes. If attribute A is chosen
to split the data, the Gini index for node N1 is 0.4898, and for node N2, it
is 0.480. The weighted average of the Gini index for the descendent nodes is
$(7/12) \times 0.4898 + (5/12) \times 0.480 = 0.486$. Similarly, we can show that the
weighted average of the Gini index for attribute B is 0.375. Since the subsets
for attribute B have a smaller Gini index, it is preferred over attribute A.
Figure 4.14. Splitting binary attributes.
Figure 4.15. Splitting nominal attributes.
For the binary grouping of the Car Type attribute shown in Figure 4.15, the Gini index of {Sports,
Luxury} is 0.4922 and the Gini index of {Family} is 0.3750. The weighted
average Gini index for the grouping is computed by weighting each partition by its fraction of records, as in Equation 4.6.
Figure 4.16. Splitting continuous attributes.
For the multiway split, the Gini index is computed for every attribute value.
Since Gini({Family}) = 0.375, Gini({Sports}) = 0, and Gini({Luxury}) =
0.219, the overall Gini index for the multiway split is the weighted average of these values.
The multiway split has a smaller Gini index compared to both two-way splits.
This result is not surprising becausethe two-way split actually merges some
of the outcomes of a multiway split, and thus, results in less pure subsets.
Consider the example shown in Figure 4.16, in which the test condition Annual
Income $\leq v$ is used to split the training records for the loan default classifica-
tion problem. A brute-force method for finding $v$ is to consider every value of
the attribute in the $N$ records as a candidate split position. For each candidate
$v$, the data set is scanned once to count the number of records with annual
income less than or greater than $v$. We then compute the Gini index for each
candidate and choose the one that gives the lowest value. This approach is
computationally expensive because it requires $O(N)$ operations to compute
the Gini index at each candidate split position. Since there are $N$ candidates,
the overall complexity of this task is $O(N^2)$. To reduce the complexity, the
training records are sorted based on their annual income, a computation that
requires $O(N \log N)$ time. Candidate split positions are identified by taking
the midpoints between two adjacent sorted values: 55, 65, 72, and so on. How-
ever, unlike the brute-force approach, we do not have to examine all $N$ records
when evaluating the Gini index of a candidate split position.
For the first candidate, $v = 55$, none of the records has annual income less
than $55K. As a result, the Gini index for the descendent node with Annual
Income < $55K is zero. On the other hand, the number of records with annual
income greater than or equal to $55K is 3 (for class Yes) and 7 (for class No),
respectively. Thus, the Gini index for this node is 0.420. The overall Gini
index for this candidate split position is equal to 0 × 0 + 1 × 0.420 = 0.420.
For the second candidate, $v = 65$, we can determine its class distribution
by updating the distribution of the previous candidate. More specifically, the
new distribution is obtained by examining the class label of the record with
the lowest annual income (i.e., $60K). Since the class label for this record is
No, the count for class No is increased from 0 to 1 (for Annual Income < $65K)
and is decreased from 7 to 6 (for Annual Income > $65K). The distribution
for class Yes remains unchanged. The new weighted-average Gini index for
this candidate split position is 0.400.
This procedure is repeated until the Gini index values for all candidates are
computed, as shown in Figure 4.16. The best split position corresponds to the
one that produces the smallest Gini index, i.e., $v = 97$. This procedure is less
expensive because it requires a constant amount of time to update the class
distribution at each candidate split position. It can be further optimized by
considering only candidate split positions located between two adjacent records
with different class labels. For example, because the first three sorted records
(with annual incomes $60K, $70K, and $75K) have identical class labels, the
best split position should not reside between $60K and $75K. Therefore, the
candidate split positions at v = $55K, $65K, $72K, $87K, $92K, $110K, $122K,
$172K, and $230K are ignored because they are located between two adjacent
records with the same class labels. This approach allows us to reduce the
number of candidate split positions from 11 to 2.
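A Python sketch of this sorted, incremental scan follows; the income values echo the example above, but the class labels are hypothetical, so the resulting best split need not equal $v = 97$.

```python
import numpy as np

def gini_from_counts(counts, total):
    """Gini index of a partition given its class counts."""
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

def best_gini_split(values, labels):
    """One pass over the sorted records, moving one record at a time from the
    right partition to the left and updating class counts in constant time."""
    order = np.argsort(values)
    values = np.asarray(values)[order]
    labels = np.asarray(labels)[order]
    n = len(values)
    classes = np.unique(labels)
    left = {c: 0 for c in classes}
    right = {c: int(np.sum(labels == c)) for c in classes}

    best_split, best_gini = None, np.inf
    for i in range(n - 1):
        left[labels[i]] += 1
        right[labels[i]] -= 1
        if values[i] == values[i + 1]:
            continue  # no midpoint between identical values
        candidate = (values[i] + values[i + 1]) / 2.0
        n_left, n_right = i + 1, n - i - 1
        weighted = (n_left / n) * gini_from_counts(left, n_left) \
                 + (n_right / n) * gini_from_counts(right, n_right)
        if weighted < best_gini:
            best_split, best_gini = candidate, weighted
    return best_split, best_gini

# Annual incomes (in $K); the default labels below are hypothetical.
incomes = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
defaulted = ["No", "No", "No", "Yes", "Yes", "No", "No", "No", "No", "Yes"]
print(best_gini_split(incomes, defaulted))
```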
Gain Ratio
Impurity measures such as entropy and Gini index tend to favor attributes that
have a large number of distinct values. Figure 4.12 shows three alternative
test conditions for partitioning the data set given in Exercise 2 on page 198.
Comparing the first test condition, Gender, with the second, Car Type, it
is easy to see that Car Type seems to provide a better way of splitting the
data since it produces purer descendent nodes. However, if we compare both
conditions with Customer ID, the latter appears to produce purer partitions.
Yet Customer ID is not a predictive attribute because its value is unique for
each record. Even in a less extreme situation, a test condition that results in a
large number of outcomes may not be desirable because the number of records
associated with each partition is too small to enable us to make any reliable
predictions.
There are two strategies for overcoming this problem. The first strategy is
to restrict the test conditions to binary splits only. This strategy is employed
by decision tree algorithms such as CART. Another strategy is to modify the
splitting criterion to take into account the number of outcomes produced by
the attribute test condition. For example, in the C4.5 decision tree algorithm,
a splitting criterion known as gain ratio is used to determine the goodness
of a split. This criterion is defined as follows:
$$\text{Gain ratio} = \frac{\Delta_{\text{info}}}{\text{Split Info}}. \qquad (4.7)$$
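The following Python sketch computes the gain ratio, assuming the usual C4.5 definition of split information, $-\sum_i P(v_i) \log_2 P(v_i)$, where $P(v_i)$ is the fraction of records sent to child $v_i$; that definition is not restated in the surrounding text and is an assumption of ours, as is NumPy.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(parent_labels, child_label_lists):
    """Information gain divided by split information (Equation 4.7)."""
    n = len(parent_labels)
    fractions = np.array([len(c) / n for c in child_label_lists])
    children_entropy = sum(f * entropy(c)
                           for f, c in zip(fractions, child_label_lists))
    delta_info = entropy(parent_labels) - children_entropy
    split_info = -np.sum(fractions * np.log2(fractions))
    return delta_info / split_info

# A split into many tiny pure partitions (e.g., on Customer ID) has a large
# split information, which penalizes its otherwise perfect information gain.
parent = list("aabbcc")
print(gain_ratio(parent, [["a"], ["a"], ["b"], ["b"], ["c"], ["c"]]))
print(gain_ratio(parent, [["a", "a"], ["b", "b"], ["c", "c"]]))
```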
tree (Steps 11 and 12) until the stopping criterion is met (Step 1). The details
of this algorithm are explained below:
$$\text{leaf.label} = \operatorname{argmax}_{i}\; p(i|t),$$
where the argmax operator returns the argument $i$ that maximizes the
expression $p(i|t)$. Besides providing the information needed to determine
the class label of a leaf node, the fraction $p(i|t)$ can also be used to es-
timate the probability that a record assigned to the leaf node $t$ belongs
to class $i$. Sections 5.7.2 and 5.7.3 describe how such probability esti-
mates can be used to determine the performance of a decision tree under
different cost functions.
Figure 4.17. Input data for Web robot detection. The attributes include the total number of pages retrieved in a Web session, the total number of image pages retrieved, the total amount of time spent by the Web site visitor, pages retrieved more than once in a Web session, and requests made using the HEAD method.
Web usage mining is the task of applying data mining techniques to extract
useful patterns from Web access logs. These patterns can reveal interesting
characteristics of site visitors; e.g., people who repeatedly visit a Web site and
view the same product description page are more likely to buy the product if
certain incentives such as rebates or free shipping are offered.
In Web usage mining, it is important to distinguish accesses made by hu-
man users from those due to Web robots. A Web robot (also known as a Web
crawler) is a software program that automatically locates and retrieves infor-
mation from the Internet by following the hyperlinks embedded in Web pages.
These programs are deployed by search engine portals to gather the documents
necessary for indexing the Web. Web robot accesses must be discarded before
applying Web mining techniques to analyze human browsing behavior.
2. Unlike human users, Web robots seldom retrieve the image pages asso-
ciated with a Web document.
Decision Tree:
depth = 1:
|   breadth > 7: class 1
|   breadth <= 7:
|   |   breadth <= 3:
|   |   |   ImagePages > 0.375: class 0
|   |   |   ImagePages <= 0.375:
|   |   |   |   totalPages <= 6: class 1
|   |   |   |   totalPages > 6:
|   |   |   |   |   breadth <= 1: class 1
|   |   |   |   |   breadth > 1: class 0
|   |   breadth > 3:
|   |   |   MultiIP = 0:
|   |   |   |   ImagePages <= 0.1333: class 1
|   |   |   |   ImagePages > 0.1333:
|   |   |   |   |   breadth <= 6: class 0
|   |   |   |   |   breadth > 6: class 1
|   |   |   MultiIP = 1:
|   |   |   |   TotalTime <= 361: class 0
|   |   |   |   TotalTime > 361: class 1
depth > 1:
|   MultiAgent = 0:
|   |   depth > 2: class 0
|   |   depth <= 2:
|   |   |   MultiIP = 1: class 0
|   |   |   MultiIP = 0:
|   |   |   |   breadth <= 6: class 0
|   |   |   |   breadth > 6:
|   |   |   |   |   RepeatedAccess <= 0.322: class 0
|   |   |   |   |   RepeatedAccess > 0.322: class 1
|   MultiAgent = 1:
|   |   totalPages <= 81: class 0
|   |   totalPages > 81: class 1
Figure 4.18. Decision tree model for Web robot detection.
4. Web robots are more likely to make repeated requests for the same doc-
ument since the Web pages retrieved by human users are often cached
by the browser.
6. Decision tree algorithms are quite robust to the presence of noise, espe-
cially when methods for avoiding overfitting, as described in Section 4.4,
are employed.
7. The presence of redundant attributes does not adversely affect the ac-
curacy of decision trees. An attribute is redundant if it is strongly cor-
related with another attribute in the data. One of the two redundant
attributes will not be used for splitting once the other attribute has been
chosen. However, if the data set contains many irrelevant attributes, i.e.,
attributes that are not useful for the classificationtask, then some of the
irrelevant attributes may be accidently chosen during the tree-growing
process, which results in a decision tree that is larger than necessary.
Feature selection techniques can help to improve the accuracy of deci-
sion trees by eliminating the irrelevant attributes during preprocessing.
We will investigate the issue of too many irrelevant attributes in Section
4.4.3.
Figure 4.19. Tree replication problem. The same subtree can appear at different branches.
10. The test conditions described so far in this chapter involve using only a
single attribute at a time. As a consequence, the tree-growing procedure
can be viewed as the process of partitioning the attribute space into
disjoint regions until each region contains records of the same class (see
Figure 4.20). The border between two neighboring regions of different
classes is known as a decision boundary. Since the test condition in-
volves only a single attribute, the decision boundaries are rectilinear; i.e.,
parallel to the "coordinate axes." This limits the expressiveness of the
Figure 4.20. Example of a decision tree and its decision boundaries for a two-dimensional data set.
Figure 4.21. Example of a data set that cannot be partitioned optimally using test conditions involving single attributes.
An oblique decision tree can overcome this limitation by allowing test conditions that involve more than one attribute, such as $x + y < 1$.
Although such techniques are more expressive and can produce more
compact trees, finding the optimal test condition for a given node can
be computationally expensive.
Constructive induction provides another way to partition the data
into homogeneous, nonrectangular regions (see Section 2.3.5 on page 57).
This approach creates composite attributes representing an arithmetic
or logical combination of the existing attributes. The new attributes
provide a better discrimination of the classes and are augmented to the
data set prior to decision tree induction. Unlike the oblique decision tree
approach, constructive induction is less expensive because it identifies all
the relevant combinations of attributes once, prior to constructing the
decision tree. In contrast, an oblique decision tree must determine the
right attribute combination dynamically, every time an internal node is
expanded. However, constructive induction can introduce attribute re-
dundancy in the data since the new attribute is a combination of several
existing attributes.
11. Studies have shown that the choice of impurity measure has little effect
on the performance of decision tree induction algorithms. This is because
many impurity measures are quite consistent with each other, as shown
in Figure 4.13 on page 159. Indeed, the strategy used to prune the
tree has a greater impact on the final tree than the choice of impurity
measure.
Figure 4.22. Example of a data set with binary classes.
seen before. In other words, a good model must have low training error as
well as low generalization error. This is important because a model that fits
the training data too well can have a poorer generalization error than a model
with a higher training error. Such a situation is known as model overfitting.
Figure 4.23. Training and test error rates.
Notice that the training and test error rates of the model are large when the
size of the tree is very small. This situation is known as model underfitting.
Underfitting occurs becausethe model has yet to learn the true structure of
the data. As a result, it performs poorly on both the training and the test
sets. As the number of nodes in the decision tree increases, the tree will have
fewer training and test errors. However, once the tree becomes too large, its
test error rate begins to increase even though its training error rate continues
to decrease. This phenomenon is known as model overfitting.
To understand the overfitting phenomenon, note that the training error of
a model can be reduced by increasing the model complexity. For example, the
leaf nodes of the tree can be expanded until it perfectly fits the training data.
Although the training error for such a complex tree is zero, the test error can
be large because the tree may contain nodes that accidently fit some of the
noise points in the training data. Such nodes can degrade the performance
of the tree because they do not generalize well to the test examples. Figure
4.24 shows the structure of two decision trees with different number of nodes.
The tree that contains the smaller number of nodes has a higher training error
rate, but a lower test error rate compared to the more complex tree.
Overfitting and underfitting are two pathologies that are related to the
model complexity. The remainder of this section examines some of the poten-
tial causes of model overfitting.
Figure 4.24. Decision trees with different model complexities. (a) Decision tree with 11 leaf nodes. (b) Decision tree with 24 leaf nodes.
Table 4.3. An example training set for classifying mammals. Class labels with asterisk symbols represent mislabeled records.
Table 4.4. An example test set for classifying mammals.
Name           | Body Temperature | Gives Birth | Four-legged | Hibernates | Class Label
human          | warm-blooded     | yes         | no          | no         | yes
pigeon         | warm-blooded     | no          | no          | no         | no
elephant       | warm-blooded     | yes         | yes         | no         | yes
leopard shark  | cold-blooded     | yes         | no          | no         | no
turtle         | cold-blooded     | no          | yes         | no         | no
penguin        | cold-blooded     | no          | no          | no         | no
eel            | cold-blooded     | no          | no          | no         | no
dolphin        | warm-blooded     | yes         | no          | no         | yes
spiny anteater | warm-blooded     | no          | yes         | yes        | yes
gila monster   | cold-blooded     | no          | yes         | yes        | no
Figure 4.25. Decision tree induced from the data set shown in Table 4.3.
the test set is 30%. Both humans and dolphins were misclassified as non-
mammals because their attribute values for Body Temperature, Gives Birth,
and Four-legged are identical to the mislabeled records in the training set.
Spiny anteaters, on the other hand, represent an exceptional case in which the
class label of a test record contradicts the class labels of other similar records
in the training set. Errors due to exceptional cases are often unavoidable and
establish the minimum error rate achievable by any classifier.
In contrast, the decision tree M2 shown in Figure 4.25(b) has a lower test
error rate (10%) even though its training error rate is somewhat higher (20%).
It is evident that the first decision tree, M1, has overfitted the training data
because there is a simpler model with lower error rate on the test set. The
Four-legged attribute test condition in model M1 is spurious because it fits
the mislabeled training records, which leads to the misclassification of records
in the test set.
Table 4.5. An example training set for classifying mammals.
Name       | Body Temperature | Gives Birth | Four-legged | Hibernates | Class Label
salamander | cold-blooded     | no          | yes         | yes        | no
guppy      | cold-blooded     | yes         | no          | no         | no
eagle      | warm-blooded     | no          | no          | no         | no
poorwill   | warm-blooded     | no          | no          | yes        | no
platypus   | warm-blooded     | no          | yes         | yes        | yes
Figure 4.26. Decision tree induced from the data set shown in Table 4.5.
which is very high. Although each analyst has a low probability of predicting
at least eight times correctly, putting them together, we have a high probability
of finding an analyst who can do so. Furthermore, there is no guarantee in the
future that such an analyst will continue to make accurate predictions through
random guessing.
How does the multiple comparison procedure relate to model overfitting?
Many learning algorithms explore a set of independent alternatives, $\{\gamma_i\}$, and
then choose an alternative, $\gamma_{\max}$, that maximizes a given criterion function.
The algorithm will add $\gamma_{\max}$ to the current model in order to improve its
overall performance. This procedure is repeated until no further improvement
is observed. As an example, during decision tree growing, multiple tests are
performed to determine which attribute can best split the training data. The
attribute that leads to the best split is chosen to extend the tree as long as
the observed improvement is statistically significant.
Let $T_0$ be the initial decision tree and $T_x$ be the new tree after inserting an
internal node for attribute $x$. In principle, $x$ can be added to the tree if the
observed gain, $\Delta(T_0, T_x)$, is greater than some predefined threshold $\alpha$. If there
is only one attribute test condition to be evaluated, then we can avoid inserting
spurious nodes by choosing a large enough value of $\alpha$. However, in practice,
more than one test condition is available and the decision tree algorithm must
choose the best attribute $x_{\max}$ from a set of candidates, $\{x_1, x_2, \ldots, x_k\}$, to
partition the data. In this situation, the algorithm is actually using a multiple
comparison procedure to decide whether a decision tree should be extended.
More specifically, it is testing for $\Delta(T_0, T_{x_{\max}}) > \alpha$ instead of $\Delta(T_0, T_x) > \alpha$.
As the number of alternatives, $k$, increases, so does our chance of finding
$\Delta(T_0, T_{x_{\max}}) > \alpha$. Unless the gain function $\Delta$ or threshold $\alpha$ is modified to
account for $k$, the algorithm may inadvertently add spurious nodes to the
model, which leads to model overfitting.
This effect becomes more pronounced when the number of training records
from which $x_{\max}$ is chosen is small, because the variance of $\Delta(T_0, T_{x_{\max}})$ is high
when fewer examples are available for training. As a result, the probability of
finding $\Delta(T_0, T_{x_{\max}}) > \alpha$ increases when there are very few training records.
This often happens when the decision tree grows deeper, which in turn reduces
the number of records covered by the nodes and increases the likelihood of
adding unnecessary nodes into the tree. Failure to compensate for the large
number of alternatives or the small number of training records will therefore
lead to model overfitting.
Figure 4.27. Example of two decision trees, $T_L$ and $T_R$, generated from the same training data.
As previously noted, the chance for model overfitting increases as the model
becomes more complex. For this reason, we should prefer simpler models, a
strategy that agrees with a well-known principle known as Occam's razor or
the principle of parsimony:
Definition 4.2. Occam's Razor: Given two models with the same general-
ization errors, the simpler model is preferred over the more complex model.
Occam's razor is intuitive because the additional components in a complex
model stand a greater chance of being fitted purely by chance. In the words of
Einstein, "Everything should be made as simple as possible, but not simpler."
Next, we present two methods for incorporating model complexity into the
evaluation of classification models.
$$e_g(T) = \frac{\sum_{i=1}^{k}\,[e(t_i) + \Omega(t_i)]}{\sum_{i=1}^{k} n(t_i)} = \frac{e(T) + \Omega(T)}{N_t},$$
where $k$ is the number of leaf nodes, $e(T)$ is the overall training error of the
decision tree, $N_t$ is the total number of training records, and $\Omega(t_i)$ is the penalty
term associated with each node $t_i$.
Example 4.2. Consider the binary decision trees shown in Figure 4.27. If
the penalty term is equal to 0.5, then the pessimistic error estimate for the
left tree is
$$e_g(T_L) = \frac{4 + 7 \times 0.5}{24} = \frac{7.5}{24} = 0.3125,$$
and the pessimistic error estimate for the right tree is
$$e_g(T_R) = \frac{6 + 4 \times 0.5}{24} = \frac{8}{24} = 0.3333.$$
Figure 4.28. The minimum description length (MDL) principle.
Thus, the left tree has a better pessimistic error rate than the right tree. For
binary trees, a penalty term of 0.5 means a node should always be expanded
into its two child nodes as long as it improves the classification of at least one
training record because expanding a node, which is equivalent to adding 0.5
to the overall error, is less costly than committing one training error.
If $\Omega(t) = 1$ for all the nodes $t$, the pessimistic error estimate for the left
tree is $e_g(T_L) = 11/24 = 0.458$, while the pessimistic error estimate for the
right tree is $e_g(T_R) = 10/24 = 0.417$. The right tree therefore has a better
pessimistic error rate than the left tree. Thus, a node should not be expanded
into its child nodes unless it reduces the misclassification error for more than
one training record.
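The following Python helper is a direct transcription of the pessimistic error estimate above; the numbers reproduce the two penalty settings of Example 4.2.

```python
def pessimistic_error(total_errors, num_leaves, num_records, penalty=0.5):
    """Pessimistic error estimate e_g(T) = (e(T) + Omega(T)) / N_t, where the
    total penalty Omega(T) is `penalty` times the number of leaf nodes."""
    return (total_errors + penalty * num_leaves) / num_records

# Example 4.2: the left tree has 7 leaves and 4 training errors, the right
# tree has 4 leaves and 6 training errors, both trained on 24 records.
print(pessimistic_error(4, 7, 24, penalty=0.5))   # 0.3125
print(pessimistic_error(6, 4, 24, penalty=0.5))   # 0.3333...
print(pessimistic_error(4, 7, 24, penalty=1.0))   # 11/24 = 0.458...
print(pessimistic_error(6, 4, 24, penalty=1.0))   # 10/24 = 0.417...
```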
form before being transmitted to B. If the model is 100% accurate, then the
cost of transmission is equivalent to the cost of encoding the model. Otherwise,
A must also transmit information about which record is classified incorrectly
by the model. Thus, the overall cost of transmission is
$$\text{Cost(model, data)} = \text{Cost(model)} + \text{Cost(data|model)},$$
where the first term on the right-hand side is the cost of encoding the model,
while the second term represents the cost of encoding the mislabeled records.
According to the MDL principle, we should seek a model that minimizes the
overall cost function. An example showing how to compute the total descrip-
tion length of a decision tree is given by Exercise 9 on page 202.
$$e_{\text{upper}}(N, e, \alpha) = \frac{e + \frac{z_{\alpha/2}^2}{2N} + z_{\alpha/2}\sqrt{\frac{e(1-e)}{N} + \frac{z_{\alpha/2}^2}{4N^2}}}{1 + \frac{z_{\alpha/2}^2}{N}}, \qquad (4.10)$$
where $\alpha$ is the confidence level, $z_{\alpha/2}$ is the standardized value from a standard
normal distribution, and $N$ is the total number of training records used to
compute $e$. By replacing $\alpha = 25\%$, $N = 7$, and $e = 2/7$, the upper bound for
the error rate is $e_{\text{upper}}(7, 2/7, 0.25) = 0.503$, which corresponds to $7 \times 0.503 =
3.521$ errors. If we expand the node into its child nodes as shown in $T_L$, the
training error rates for the child nodes are $1/4 = 0.250$ and $1/3 = 0.333$,
respectively. Using Equation 4.10, the upper bounds of these error rates are
$e_{\text{upper}}(4, 1/4, 0.25) = 0.537$ and $e_{\text{upper}}(3, 1/3, 0.25) = 0.650$, respectively. The
overall training error of the child nodes is $4 \times 0.537 + 3 \times 0.650 = 4.098$, which
is larger than the estimated error for the corresponding node in $T_R$.
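A small Python sketch of Equation 4.10 follows; SciPy's normal quantile function is an assumed dependency (any other source of $z_{\alpha/2}$ would serve), and the sample values echo the calculations above.

```python
from math import sqrt
from scipy.stats import norm

def error_upper_bound(n, e, alpha):
    """Upper bound of Equation 4.10 on the error rate at a node, based on the
    normal approximation to the binomial distribution."""
    z = norm.ppf(1 - alpha / 2)   # standardized value z_{alpha/2}
    numerator = e + z * z / (2 * n) + z * sqrt(e * (1 - e) / n
                                               + z * z / (4 * n * n))
    return numerator / (1 + z * z / n)

# Values from the example: a node with 7 records and 2 errors, alpha = 25%.
print(error_upper_bound(7, 2 / 7, 0.25))   # about 0.503
print(error_upper_bound(4, 1 / 4, 0.25))   # about 0.537
print(error_upper_bound(3, 1 / 3, 0.25))   # about 0.650
```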
In this approach, instead of using the training set to estimate the generalization
error, the original training data is divided into two smaller subsets. One of
the subsets is used for training, while the other, known as the validation set,
is used for estimating the generalization error. Typically, two-thirds of the
training set is reserved for model building, while the remaining one-third is
used for error estimation.
This approach is typically used with classification techniques that can be
parameterized to obtain models with different levels of complexity. The com-
plexity of the best model can be estimated by adjusting the parameter of the
learning algorithm (e.g., the pruning level of a decision tree) until the empir-
ical model produced by the learning algorithm attains the lowest error rate
on the validation set. Although this approach provides a better way for esti-
mating how well the model performs on previously unseen records, less data
is available for training.
Figure 4.29. Decision tree for Web robot detection (using attributes such as
ImagePages, breadth, depth, MultiIP, MultiAgent, totalPages, TotalTime, and
RepeatedAccess) and its simplified version after post-pruning.
even if no significant gain is obtained using one of the existing attribute test
conditions, subsequent splitting may result in better subtrees.
depth = 1 have been replaced by one of the branches involving the attribute
ImagePages. This approach is also known as subtree raising. The depth > 1
and MultiAgent = 0 subtree has been replaced by a leaf node assigned to
class 0. This approach is known as subtree replacement. The subtree for
depth > 1 and MultiAgent = 1 remains intact.
of each other. Because the training and test sets are subsets of the original
data, a class that is overrepresented in one subset will be underrepresented in
the other, and vice versa.
The holdout method can be repeated several times to improve the estimation
of a classifier's performance. This approach is known as random subsampling.
Let acc_i be the model accuracy during the i-th iteration. The overall accuracy
is given by acc_{sub} = \sum_{i=1}^{k} acc_i / k. Random subsampling still encounters some
of the problems associated with the holdout method because it does not utilize
as much data as possible for training. It also has no control over the number of
times each record is used for testing and training. Consequently, some records
might be used for training more often than others.
4.5.3 Cross-Validation
An alternative to random subsampling is cross-validation. In this approach,
each record is used the same number of times for training and exactly once
for testing. To illustrate this method, suppose we partition the data into two
equal-sized subsets. First, we choose one of the subsets for training and the
other for testing. We then swap the roles of the subsets so that the previous
training set becomes the test set and vice versa. This approach is called a two-
fold cross-validation. The total error is obtained by summing up the errors for
both runs. In this example, each record is used exactly once for training and
once for testing. The k-fold cross-validation method generalizes this approach
by segmenting the data into k equal-sized partitions. During each run, one of
the partitions is chosen for testing, while the rest of them are used for training.
This procedure is repeated k times so that each partition is used for testing
exactly once. Again, the total error is found by summing up the errors for
all k runs. A special case of the k-fold cross-validation method sets k = N,
the size of the data set. In this so-called leave-one-out approach, each test
set contains only one record. This approach has the advantage of utilizing
as much data as possible for training. In addition, the test sets are mutually
exclusive and they effectively cover the entire data set. The drawback of this
approach is that it is computationally expensive to repeat the procedure N
times. Furthermore, since each test set contains only one record, the variance
of the estimated performance metric tends to be high.
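A minimal sketch of the k-fold procedure is given below, under the assumption that the caller supplies a training routine and an error-counting routine; the toy majority-class model at the end is purely illustrative and not part of the text.

import random

# k-fold cross-validation. train_fn(train) returns a model;
# test_fn(model, test) returns the number of errors on the test fold.
def k_fold_cv(records, k, train_fn, test_fn, seed=0):
    data = list(records)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]          # k (nearly) equal-sized partitions
    total_errors = 0
    for i in range(k):
        test = folds[i]
        train = [r for j, f in enumerate(folds) if j != i for r in f]
        model = train_fn(train)
        total_errors += test_fn(model, test)        # each record is tested exactly once
    return total_errors / len(data)                 # overall error rate

# Toy demonstration with a majority-class "model" on labeled records (x, y).
data = [(i, 1 if i % 3 else 0) for i in range(30)]
train_fn = lambda tr: max(set(y for _, y in tr), key=[y for _, y in tr].count)
test_fn = lambda m, te: sum(1 for _, y in te if y != m)
print(k_fold_cv(data, 10, train_fn, test_fn))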
4.5.4 Bootstrap
The methods presented so far assume that the training records are sampled
without replacement. As a result, there are no duplicate records in the training
and test sets. In the bootstrap approach, the training records are sampled
with replacement; i.e., a record already chosen for training is put back into
the original pool of records so that it is equally likely to be redrawn. If the
original data has N records, it can be shown that, on average, a bootstrap
sample of size N contains about 63.2% of the records in the original data. This
approximation follows from the fact that the probability a record is chosen by
a bootstrap sample is 1 − (1 − 1/N)^N. When N is sufficiently large, the
probability asymptotically approaches 1 − e^{−1} = 0.632. Records that are not
included in the bootstrap sample become part of the test set. The model
induced from the training set is then applied to the test set to obtain an
estimate of the accuracy of the bootstrap sample, ε_i. The sampling procedure
is then repeated b times to generate b bootstrap samples.
There are several variations to the bootstrap sampling approach in terms
of how the overall accuracy of the classifier is computed. One of the more
widely used approaches is the .632 bootstrap, which computes the overall
accuracy by combining the accuracies of each bootstrap sample (ε_i) with the
accuracy computed from a training set that contains all the labeled examples
in the original data (acc_s):

Accuracy, acc_{boot} = \frac{1}{b} \sum_{i=1}^{b} (0.632 \times \varepsilon_i + 0.368 \times acc_s).     (4.11)
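The sketch below illustrates Equation 4.11 under the assumption that acc_fn trains a model on its first argument and reports its accuracy on its second; it is an illustration, not an implementation from the text.

import random

# .632 bootstrap accuracy estimate (Equation 4.11).
# acc_s is the accuracy of a model trained and tested on the full data set.
def bootstrap_632(records, b, acc_fn, seed=0):
    rng = random.Random(seed)
    n = len(records)
    acc_s = acc_fn(records, records)
    total = 0.0
    for _ in range(b):
        sample = [records[rng.randrange(n)] for _ in range(n)]   # sample with replacement
        chosen = set(id(r) for r in sample)
        test = [r for r in records if id(r) not in chosen]       # out-of-bag records
        eps_i = acc_fn(sample, test) if test else acc_s
        total += 0.632 * eps_i + 0.368 * acc_s
    return total / b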
The preceding example raises two key questions regarding the statistical
significance of the performance metrics:
The first question relates to the issue of estimating the confidence interval of a
given model accuracy. The second question relates to the issue of testing the
statistical significance of the observed deviation. These issues are investigated
in the remainder of this section.
If X denotes the number of heads observed in N flips of a coin that shows
heads with probability p in each flip, then

P(X = v) = \binom{N}{v} p^{v} (1 - p)^{N - v}.

For example, if the coin is fair (p = 0.5) and is flipped fifty times, then the
probability that the head shows up 20 times is

P(X = 20) = \binom{50}{20} 0.5^{20} (1 - 0.5)^{30} = 0.0419.

If the experiment is repeated many times, then the average number of heads
expected to show up is 50 × 0.5 = 25, while its variance is 50 × 0.5 × 0.5 = 12.5.
The task of predicting the class labels of test records can also be consid-
ered as a binomial experiment. Given a test set that contains N records, let
X be the number of records correctly predicted by a model and p be the true
accuracy of the model. By modeling the prediction task as a binomial experi-
ment, X has a binomial distribution with mean Np and variance Np(1 − p).
It can be shown that the empirical accuracy, acc = X/N, also has a binomial
distribution with mean p and variance p(1 − p)/N (see Exercise 12). Although
the binomial distribution can be used to estimate the confidence interval for
acc, it is often approximated by a normal distribution when N is sufficiently
large. Based on the normal distribution, the following confidence interval for
acc can be derived:
P\Big(-z_{\alpha/2} \le \frac{acc - p}{\sqrt{p(1-p)/N}} \le z_{1-\alpha/2}\Big) = 1 - \alpha,     (4.12)

where z_{α/2} and z_{1−α/2} are the upper and lower bounds obtained from a stan-
dard normal distribution at confidence level (1 − α). Since a standard normal
distribution is symmetric around z = 0, it follows that z_{α/2} = z_{1−α/2}. Rear-
ranging this inequality leads to the following confidence interval for p:

\frac{2 \times N \times acc + z_{\alpha/2}^2 \pm z_{\alpha/2}\sqrt{z_{\alpha/2}^2 + 4\,N\,acc - 4\,N\,acc^2}}{2(N + z_{\alpha/2}^2)}.     (4.13)
The following table shows the values of z_{α/2} at different confidence levels:

1 − α       0.8     0.9     0.95    0.98    0.99
z_{α/2}     1.28    1.65    1.96    2.33    2.58
Example 4.4. Consider a model that has an accuracy of 80% when evaluated
on 100 test records. What is the confidence interval for its true accuracy at a
95% confidence level? The confidence level of 95% corresponds to z_{α/2} = 1.96
according to the table given above. Inserting this term into Equation 4.13
yields a confidence interval between 71.1% and 86.7%. The following table
shows the confidence interval when the number of records, N, increases:
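The interval in Example 4.4 follows directly from Equation 4.13; the sketch below is illustrative only, and the function name and default z value are not from the text.

from math import sqrt

# Confidence interval for the true accuracy p (Equation 4.13).
# acc: empirical accuracy, n: number of test records, z: z_{alpha/2}.
def accuracy_interval(acc, n, z=1.96):
    center = 2 * n * acc + z * z
    half = z * sqrt(z * z + 4 * n * acc - 4 * n * acc * acc)
    denom = 2 * (n + z * z)
    return (center - half) / denom, (center + half) / denom

print(accuracy_interval(0.8, 100))   # approximately (0.711, 0.867), as in Example 4.4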
Consider a pair of models, M_1 and M_2, that are evaluated on two independent
test sets, D_1 and D_2. Let n_1 denote the number of records in D_1 and n_2 denote
the number of records in D_2. In addition, suppose the error rate for M_1 on
D_1 is e_1 and the error rate for M_2 on D_2 is e_2. Our goal is to test whether the
observed difference between e_1 and e_2 is statistically significant.
Assuming that n_1 and n_2 are sufficiently large, the error rates e_1 and e_2
can be approximated using normal distributions. If the observed difference in
the error rate is denoted as d = e_1 − e_2, then d is also normally distributed
with mean d_t, its true difference, and variance \sigma_d^2. The variance of d can be
computed as follows:

\sigma_d^2 \approx \hat{\sigma}_d^2 = \frac{e_1(1 - e_1)}{n_1} + \frac{e_2(1 - e_2)}{n_2},     (4.14)

where e_1(1 − e_1)/n_1 and e_2(1 − e_2)/n_2 are the variances of the error rates.
Finally, at the (1 − α)% confidence level, it can be shown that the confidence
interval for the true difference d_t is given by the following equation:

d_t = d \pm z_{\alpha/2}\,\hat{\sigma}_d.     (4.15)
Example 4.5. Consider the problem described at the beginning of this sec-
tion. Model M_A has an error rate of e_1 = 0.15 when applied to N_1 = 30
test records, while model M_B has an error rate of e_2 = 0.25 when applied
to N_2 = 5000 test records. The observed difference in their error rates is
d = |0.15 − 0.25| = 0.1. In this example, we are performing a two-sided test
to check whether d_t = 0 or d_t ≠ 0. The estimated variance of the observed
difference in error rates can be computed as follows:

\hat{\sigma}_d^2 = \frac{0.15(1 - 0.15)}{30} + \frac{0.25(1 - 0.25)}{5000} = 0.0043,

or \hat{\sigma}_d = 0.0655. Inserting this value into Equation 4.15, we obtain the following
confidence interval for d_t at 95% confidence level:

d_t = 0.1 \pm 1.96 \times 0.0655 = 0.1 \pm 0.128.

As the interval spans the value zero, we can conclude that the observed differ-
ence is not statistically significant at a 95% confidence level.
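A small sketch of the test in Example 4.5 (Equations 4.14 and 4.15); the function is illustrative only and not an implementation from the text.

from math import sqrt

# Confidence interval for the true difference between two error rates
# measured on independent test sets (Equations 4.14 and 4.15).
def difference_interval(e1, n1, e2, n2, z=1.96):
    d = abs(e1 - e2)
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2
    half = z * sqrt(var)
    return d - half, d + half

low, high = difference_interval(0.15, 30, 0.25, 5000)
print(low, high)   # roughly -0.028 to 0.228; the interval spans zero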
The t-distribution can then be used to compute the confidence interval for the
true accuracy difference:

d_t^{cv} = \bar{d}_{cv} \pm t_{(1-\alpha),k-1}\,\hat{\sigma}_{d_{cv}}.     (4.16)

The coefficient t_{(1−α),k−1} is obtained from a probability table with two input
parameters, its confidence level (1 − α) and the number of degrees of freedom,
k − 1. The probability table for the t-distribution is shown in Table 4.6.
Example 4.6. Suppose the estimated difference in the accuracy of models
generated by two classification techniques has a mean equal to 0.05 and a
standard deviation equal to 0.002. If the accuracy is estimated using a 30-fold
cross-validation approach, then at a 95% confidence level, the true accuracy
difference is

d_t^{cv} = 0.05 \pm 2.04 \times 0.002.     (4.17)
Table 4.6. Probability table for the t-distribution.

                         (1 − α)
k − 1     0.8     0.9     0.95    0.98    0.99
1         3.08    6.31    12.7    31.8    63.7
2         1.89    2.92    4.30    6.96    9.92
4         1.53    2.13    2.78    3.75    4.60
9         1.38    1.83    2.26    2.82    3.25
14        1.34    1.76    2.14    2.62    2.98
19        1.33    1.73    2.09    2.54    2.86
24        1.32    1.71    2.06    2.49    2.80
29        1.31    1.70    2.04    2.46    2.76
Since the confidence interval does not span the value zero, the observed dif-
ference between the techniques is statistically significant. r
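The interval in Example 4.6 reduces to simple arithmetic once the t coefficient is read off Table 4.6; the sketch below assumes the mean and standard deviation of the fold-by-fold accuracy differences have already been computed.

# Confidence interval for the true accuracy difference under k-fold
# cross-validation (Equation 4.16), with the t coefficient from Table 4.6.
def cv_difference_interval(d_mean, d_std, t_coeff):
    return d_mean - t_coeff * d_std, d_mean + t_coeff * d_std

# Example 4.6: 30-fold cross-validation gives k - 1 = 29 degrees of freedom,
# so t_(0.95),29 = 2.04 according to Table 4.6.
print(cv_difference_interval(0.05, 0.002, 2.04))   # (0.04592, 0.05408), excludes zero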
primary reasons for overfitting [135, 140], Jensen and Cohen [123] argued
that overfitting is the result of using incorrect hypothesis tests in a multiple
comparison procedure.
Schapire [149] defined generalization error as "the probability of misclas-
sifying a new example" and test error as "the fraction of mistakes on a newly
sampled test set." Generalization error can therefore be considered as the ex-
pected test error of a classifier. Generalization error may sometimes refer to
the true error [136] of a model, i.e., its expected error for randomly drawn
data points from the same population distribution where the training set is
sampled. These definitions are in fact equivalent if both the training and test
sets are gathered from the same population distribution, which is often the
case in many data mining and machine learning applications.
The Occam's razor principle is often attributed to the philosopher William
of Occam. Domingos [113] cautioned against the pitfall of misinterpreting
Occam's razor as comparing models with similar training errors, instead of
generalization errors. A survey of decision tree-pruning methods to avoid
overfitting is given by Breslow and Aha [109] and Esposito et al. [116]. Some
of the typical pruning methods include reduced error pruning [144], pessimistic
error pruning [144], minimum error pruning [141], critical value pruning [134],
cost-complexity pruning [108], and error-based pruning [145]. Quinlan and
Rivest proposed using the minimum description length principle for decision
tree pruning in [146].
Kohavi [127] performed an extensive empirical study to compare the
performance metrics obtained using different estimation methods such as ran-
dom subsampling, bootstrapping, and k-fold cross-validation. Their results
suggest that the best estimation method is based on the ten-fold stratified
cross-validation. Efron and Tibshirani [115] provided a theoretical and empir-
ical comparison between cross-validation and a bootstrap method known as
the 632+ rule.
Current techniques such as C4.5 require that the entire training data set fit
into main memory. There has been considerable effort to develop parallel and
scalable versions of decision tree induction algorithms. Some of the proposed
algorithms include SLIQ by Mehta et al. [131], SPRINT by Shafer et al. [151],
CMP by Wang and Zaniolo [153], CLOUDS by Alsabti et al. [106], RainForest
by Gehrke et al. [119], and ScalParC by Joshi et al. [124]. A general survey
of parallel algorithms for data mining is available in [129].
Bibliography
[106] K. Alsabti, S. Ranka, and V. Singh. CLOUDS: A Decision Tree Classifier for Large
Datasets. In Proc. of the 4th Intl. Conf. on Knowledge Discovery and Data Mining,
pages 2-8, New York, NY, August 1998.
[107] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press,
Oxford, U.K., 1995.
[108] L. Breiman, J. H. Friedman, R. Olshen, and C. J. Stone. Classification and Regression
Trees. Chapman & Hall, New York, 1984.
[109] L. A. Breslow and D. W. Aha. Simplifying Decision Trees: A Survey. Knowledge
Engineering Review, 12(1):1-40, 1997.
[110] W. Buntine. Learning classification trees. In Artificial Intelligence Frontiers in Statis-
tics, pages 182-201. Chapman & Hall, London, 1993.
[111] E. Cantú-Paz and C. Kamath. Using evolutionary algorithms to induce oblique decision
trees. In Proc. of the Genetic and Evolutionary Computation Conf., pages 1053-1060,
San Francisco, CA, 2000.
[112] V. Cherkassky and F. Mulier. Learning from Data: Concepts, Theory, and Methods.
Wiley Interscience, 1998.
[113] P. Domingos. The Role of Occam's Razor in Knowledge Discovery. Data Mining and
Knowledge Discovery, 3(4):409-425, 1999.
[114] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons,
Inc., New York, 2nd edition, 2001.
[115] B. Efron and R. Tibshirani. Cross-validation and the Bootstrap: Estimating the Error
Rate of a Prediction Rule. Technical report, Stanford University, 1995.
[116] F. Esposito, D. Malerba, and G. Semeraro. A Comparative Analysis of Methods for
Pruning Decision Trees. IEEE Trans. Pattern Analysis and Machine Intelligence,
19(5):476-491, May 1997.
[117] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of
Eugenics, 7:179-188, 1936.
[118] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New
York, 1990.
[119] J. Gehrke, R. Ramakrishnan, and V. Ganti. RainForest - A Framework for Fast De-
cision Tree Construction of Large Datasets. Data Mining and Knowledge Discovery,
4(2/3):127-162, 2000.
[120] T. Hastie, R. Tibshirani, and J. H. Friedman. The Elements of Statistical Learning:
Data Mining, Inference, Prediction. Springer, New York, 2001.
[121] D. Heath, S. Kasif, and S. Salzberg. Induction of Oblique Decision Trees. In Proc. of
the 13th Intl. Joint Conf. on Artificial Intelligence, pages 1002-1007, Chambery, France,
August 1993.
[122] A. K. Jain, R. P. W. Duin, and J. Mao. Statistical Pattern Recognition: A Review.
IEEE Trans. Patt. Anal. and Mach. Intellig., 22(1):4-37, 2000.
[123] D. Jensen and P. R. Cohen. Multiple Comparisons in Induction Algorithms. Machine
Learning, 38(3):309-338, March 2000.
[124] M. V. Joshi, G. Karypis, and V. Kumar. ScalParC: A New Scalable and Efficient
Parallel Classification Algorithm for Mining Large Datasets. In Proc. of 12th Intl.
Parallel Processing Symp. (IPPS/SPDP), pages 573-579, Orlando, FL, April 1998.
[125] G. V. Kass. An Exploratory Technique for Investigating Large Quantities of Categor-
ical Data. Applied Statistics, 29:119-127, 1980.
[126]
[127]
[128]
[129]
[132] R. S. Michalski. A theory and methodology of inductive learning. Artificial Intelligence,
20:111-116, 1983.
[133] D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Machine Learning, Neural and
Statistical Classification. Ellis Horwood, Upper Saddle River, NJ, 1994.
[134] J. Mingers. Expert Systems - Rule Induction with Statistical Data. J Operational
Research Society, 38:39-47, 1987.
[135] J. Mingers. An empirical comparison of pruning methods for decision tree induction.
Machine Learning, 4:227-243, 1989.
[136] T. Mitchell. Machine Learning. McGraw-Hill, Boston, MA, 1997.
[137] B. M. E. Moret. Decision Trees and Diagrams. Computing Surveys, 14(4):593-623,
1982.
[138] S. K. Murthy. Automatic Construction of Decision Trees from Data: A Multi-
Disciplinary Survey. Data Mining and Knowledge Discovery, 2(4):345-389, 1998.
[139] S. K. Murthy, S. Kasif, and S. Salzberg. A system for induction of oblique decision
trees. J of Artificial Intelligence Research, 2:1-33, 1994.
[140] T. Niblett. Constructing decision trees in noisy domains. In Proc. of the 2nd European
Working Session on Learning, pages 67-78, Bled, Yugoslavia, May 1987.
[141] T. Niblett and I. Bratko. Learning Decision Rules in Noisy Domains. In Research and
Development in Expert Systems III, Cambridge, 1986. Cambridge University Press.
[142] K. R. Pattipati and M. G. Alexandridis. Application of heuristic search and information
theory to sequential fault diagnosis. IEEE Trans. on Systems, Man, and Cybernetics,
20(4):872-882, 1990.
[143] J. R. Quinlan. Discovering rules by induction from large collection of examples. In
D. Michie, editor, Expert Systems in the Micro Electronic Age. Edinburgh University
Press, Edinburgh, UK, 1979.
[144] J. R. Quinlan. Simplifying Decision Trees. Intl. J. Man-Machine Studies, 27:221-234,
1987.
[145] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan-Kaufmann Publishers,
San Mateo, CA, 1993.
[146] J. R. Quinlan and R. L. Rivest. Inferring Decision Trees using the Minimum Descrip-
tion Length Principle. Information and Computation, 80(3):227-248, 1989.
4.8 Exercises
1. Draw the full decision tree for the parity function of four Boolean attributes,
A, B, C, and D. Is it possible to simplify the tree?
2. Consider the training examples shown in Table 4.7 for a binary classification
problem.
(a) Compute the Gini index for the overall collection of training examples.
(b) Compute the Gini index for the Customer ID attribute.
(c) Compute the Gini index for the Gender attribute.
(d) Compute the Gini index for the Car Type attribute using multiway split.
(e) Compute the Gini index for the Shirt Size attribute using multiway
split.
(f) Which attribute is better, Gender, Car Type, or Shirt Size?
(g) Explain why Customer ID should not be used as the attribute test con-
dition even though it has the lowest Gini.
3. Consider the training examples shown in Table 4.8 for a binary classification
problem.
(a) What is the entropy of this collection of training examples with respect
to the positive class?
Table 4.7. Data set for Exercise 2.
Customer ID   Gender   Car Type   Shirt Size    Class
1             M        Family     Small         C0
2             M        Sports     Medium        C0
3             M        Sports     Medium        C0
4             M        Sports     Large         C0
5             M        Sports     Extra Large   C0
6             M        Sports     Extra Large   C0
7             F        Sports     Small         C0
8             F        Sports     Small         C0
9             F        Sports     Medium        C0
10            F        Luxury     Large         C0
11            M        Family     Large         C1
12            M        Family     Extra Large   C1
13            M        Family     Medium        C1
14            M        Luxury     Extra Large   C1
15            F        Luxury     Small         C1
16            F        Luxury     Small         C1
17            F        Luxury     Medium        C1
18            F        Luxury     Medium        C1
19            F        Luxury     Medium        C1
20            F        Luxury     Large         C1
Table 4.8. Data set for Exercise 3.
Instance   a1   a2   a3    Target Class
1          T    T    1.0   +
2          T    T    6.0   +
3          T    F    5.0   −
4          F    F    4.0   +
5          F    T    7.0   −
6          F    T    3.0   −
7          F    F    8.0   −
8          T    F    7.0   +
9          F    T    5.0   −
(b) What are the information gains of a1 and a2 relative to these training
examples?
(c) For a3, which is a continuous attribute, compute the information gain for
every possible split.
(d) What is the best split (among a1, a2, and a3) according to the information
gain?
(e) What is the best split (between a1 and a2) according to the classification
error rate?
(f) What is the best split (between a1 and a2) according to the Gini index?
4. Show that the entropy of a node never increases after splitting it into smaller
successor nodes.
5. Consider the following data set for a binary class problem.
A   B   Class Label
T   F   +
T   T   +
T   T   +
T   F   −
T   T   +
F   F   −
F   F   −
F   F   −
T   T   −
T   F   −
(a) Calculate the information gain when splitting on A and B. Which at-
tribute would the decision tree induction algorithm choose?
(b) Calculate the gain in the Gini index when splitting on A and B. Which
attribute would the decision tree induction algorithm choose?
(c) Figure 4.13 shows that entropy and the Gini index are both monotonously
increasing on the range [0, 0.5] and they are both monotonously decreasing
on the range [0.5, 1]. Is it possible that information gain and the gain in
the Gini index favor different attributes? Explain.
(a) Compute a two-level decision tree using the greedy approach described in
this chapter. Use the classification error rate as the criterion for splitting.
What is the overall error rate of the induced tree?
(b) Repeat part (a) using X as the first splitting attribute and then choose the
best remaining attribute for splitting at each of the two successor nodes.
What is the error rate of the induced tree?
(c) Compare the results of parts (a) and (b). Comment on the suitability of
the greedy heuristic used for splitting attribute selection.
7. The following table summarizes a data set with three attributes A, B, C and
two class labels +, −. Build a two-level decision tree.

                   Number of Instances
A   B   C          +         −
T   T   T          5         0
F   T   T          0         20
T   F   T          20        0
F   F   T          0         5
T   T   F          0         0
F   T   F          25        0
T   F   F          0         0
F   F   F          0         25
(a) According to the classification error rate, which attribute would be chosen
as the first splitting attribute? For each attribute, show the contingency
table and the gains in classification error rate.
(b) Repeat for the two children of the root node.
(c) How many instances are misclassified by the resulting decision tree?
(d) Repeat parts (a), (b), and (c) using C as the splitting attribute.
(e) Use the results in parts (c) and (d) to conclude about the greedy nature
of the decision tree induction algorithm.
(a) Compute the generalization error rate of the tree using the optimistic
approach.
(b) Compute the generalization error rate of the tree using the pessimistic
approach. (For simplicity, use the strategy of adding a factor of 0.5 to
each leaf node.)
(c) Compute the generalization error rate of the tree using the validation set
shown above. This approach is known as reduced error pruning.
Figure 4.30. Decision tree and data sets for Exercise 8 (a training set of ten
instances and a validation set of five instances, each with binary attributes A,
B, C and a class label).
9. Consider the decision trees shown in Figure 4.31. Assume they are generated
from a data set that contains 16 binary attributes and 3 classes, C1, C2, and
C3.

Figure 4.31. Decision trees for Exercise 9.
Cost(tree, data) = Cost(tree) + Cost(data | tree).

Each internal node of the tree is encoded by the ID of the splitting at-
tribute. If there are m attributes, the cost of encoding each attribute is
log2 m bits.
Each leaf is encoded using the ID of the class it is associated with. If
there are k classes, the cost of encoding a class is log2 k bits.
Cost(tree) is the cost of encoding all the nodes in the tree. To simplify the
computation, you can assume that the total cost of the tree is obtained
by adding up the costs of encoding each internal node and each leaf node.
Cost(data|tree) is encoded using the classification errors the tree commits
on the training set. Each error is encoded by log2 n bits, where n is the
total number of training instances.
10. While the .632 bootstrap approach is useful for obtaining a reliable estimate of
model accuracy, it has a known limitation [127]. Consider a two-class problem,
where there are equal numbers of positive and negative examples in the data.
Suppose the class labels for the examples are generated randomly. The classifier
used is an unpruned decision tree (i.e., a perfect memorizer). Determine the
accuracy of the classifier using each of the following methods.
(a) The holdout method, where two-thirds of the data are used for training
and the remaining one-third are used for testing.
(b) Ten-fold cross-validation.
(c) The .632 bootstrap method.
(d) From the results in parts (a), (b), and (c), which method provides a more
reliable evaluation of the classifier's accuracy?
11. Consider the following approach for testing whether a classifier A beats another
classifier B. Let N be the size of a given data set, p_A be the accuracy of classifier
A, p_B be the accuracy of classifier B, and p = (p_A + p_B)/2 be the average
accuracy for both classifiers. To test whether classifier A is significantly better
than B, the following Z-statistic is used:

Z = \frac{p_A - p_B}{\sqrt{\frac{2p(1 - p)}{N}}}.
Table 4.9 compares the accuracies of three different classifiers, decision tree
classifiers, naive Bayes classifiers, and support vector machines, on various data
sets. (The latter two classifiers are described in Chapter 5.)

Table 4.9. Comparing the accuracy of various classification methods.
Data Set     Size (N)   Decision Tree (%)   naive Bayes (%)   Support vector machine (%)
Anneal       898        92.09               79.62             87.19
Australia    690        65.51               76.81             84.78
Auto         205        81.95               58.05             70.13
Summarize the performance of the classifiers given in Table 4.9 using the fol-
lowing 3 × 3 table:
Each cell in the table contains the number of wins, losses, and draws when
comparing the classifier in a given row to the classifier in a given column.
12. Let X be a binomial random variable with mean Np and variance Np(1 − p).
Show that the ratio X/N also has a binomial distribution with mean p and
variance p(1 − p)/N.
Classification:
Alternative Techniques
The previous chapter described a simple, yet quite effective, classification tech-
nique known as decision tree induction. Issues such as model overfitting and
classifier evaluation were also discussed in great detail. This chapter presents
alternative techniques for building classification models, from simple tech-
niques such as rule-based and nearest-neighbor classifiers to more advanced
techniques such as support vector machines and ensemble methods. Other
key issues such as the class imbalance and multiclass problems are also dis-
cussed at the end of the chapter.
Table 5.1. Example of a rule set for the vertebrate classification problem.
r1: (Gives Birth = no) ∧ (Aerial Creature = yes) → Birds
r2: (Gives Birth = no) ∧ (Aquatic Creature = yes) → Fishes
r3: (Gives Birth = yes) ∧ (Body Temperature = warm-blooded) → Mammals
r4: (Gives Birth = no) ∧ (Aerial Creature = no) → Reptiles
r5: (Aquatic Creature = semi) → Amphibians
The left-hand side of the rule is called the rule antecedent or precondition.
It contains a conjunction of attribute tests. The quality of a classification rule
can be evaluated using its coverage and accuracy:

Coverage(r) = \frac{|A|}{|D|},
Accuracy(r) = \frac{|A \cap y|}{|A|},     (5.3)

where |A| is the number of records that satisfy the rule antecedent, |A ∩ y| is
the number of records that satisfy both the antecedent and consequent, and
|D| is the total number of records.
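As a rough illustration of Equation 5.3, the sketch below computes coverage and accuracy for a rule over a list of records. The three-record data set and the dictionary-based record format are hypothetical, chosen in the spirit of Table 5.2 rather than copied from it.

# Coverage and accuracy of a rule (Equation 5.3). A rule is represented as
# (antecedent, consequent), where the antecedent is a dict of attribute tests.
def coverage_and_accuracy(rule, records):
    antecedent, consequent = rule
    covered = [r for r in records
               if all(r.get(a) == v for a, v in antecedent.items())]
    coverage = len(covered) / len(records)
    accuracy = (sum(1 for r in covered if r["class"] == consequent) / len(covered)
                if covered else 0.0)
    return coverage, accuracy

# Hypothetical mini data set in the spirit of Table 5.2.
records = [
    {"Gives Birth": "yes", "Body Temperature": "warm-blooded", "class": "Mammals"},
    {"Gives Birth": "no",  "Body Temperature": "cold-blooded", "class": "Reptiles"},
    {"Gives Birth": "yes", "Body Temperature": "warm-blooded", "class": "Mammals"},
]
rule = ({"Gives Birth": "yes", "Body Temperature": "warm-blooded"}, "Mammals")
print(coverage_and_accuracy(rule, records))   # (0.666..., 1.0)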
Table 5.2. The vertebrate data set.
Name           Body Temperature   Skin Cover   Gives Birth   Aquatic Creature   Aerial Creature   Has Legs   Hibernates   Class Label
human          warm-blooded       hair         yes           no                 no                yes        no           Mammals
python         cold-blooded       scales       no            no                 no                no         yes          Reptiles
salmon         cold-blooded       scales       no            yes                no                no         no           Fishes
whale          warm-blooded       hair         yes           yes                no                no         no           Mammals
frog           cold-blooded       none         no            semi               no                yes        yes          Amphibians
komodo dragon  cold-blooded       scales       no            no                 no                yes        no           Reptiles
bat            warm-blooded       hair         yes           no                 yes               yes        yes          Mammals
pigeon         warm-blooded       feathers     no            no                 yes               yes        no           Birds
cat            warm-blooded       fur          yes           no                 no                yes        no           Mammals
guppy          cold-blooded       scales       yes           yes                no                no         no           Fishes
alligator      cold-blooded       scales       no            semi               no                yes        no           Reptiles
penguin        warm-blooded       feathers     no            semi               no                yes        no           Birds
porcupine      warm-blooded       quills       yes           no                 no                yes        yes          Mammals
eel            cold-blooded       scales       no            yes                no                no         no           Fishes
salamander     cold-blooded       none         no            semi               no                yes        yes          Amphibians
Example 5.1. Consider the data set shown in Table 5.2. The rule

(Gives Birth = yes) ∧ (Body Temperature = warm-blooded) → Mammals

has a coverage of 33% since five of the fifteen records support the rule an-
tecedent. The rule accuracy is 100% because all five vertebrates covered by
the rule are mammals.
The second vertebrate, which is a turtle, triggers the rules r4 and r5.
Since the classes predicted by the rules are contradictory (reptiles versus
amphibians), their conflicting classes must be resolved.
The previous example illustrates two important properties of the rule set gen-
erated by a rule-based classifier.
Mutually Exclusive Rules The rules in a rule set R are mutually exclusive
if no two rules in R are triggered by the same record. This property ensures
that every record is covered by at most one rule in R. An example of a
mutually exclusive rule set is shown in Table 5.3.

Table 5.3. Example of a mutually exclusive and exhaustive rule set.
Ordered Rules In this approach, the rules in a rule set are ordered in
decreasing order of their priority, which can be defined in many ways (e.g.,
based on accuracy, coverage, total description length, or the order in which
the rules are generated). An ordered rule set is also known as a decision
list. When a test record is presented, it is classified by the highest-ranked rule
that covers the record. This avoids the problem of having conflicting classes
predicted by multiple classification rules.
has the following interpretation: If the vertebrate does not have any feathers
or cannot fly, and is cold-blooded and semi-aquatic, then it is an amphibian.
Figure 5.1. Comparison between rule-based and class-based ordering schemes
(the same vertebrate rules arranged under a rule-based ordering and under a
class-based ordering).
The additional conditions (that the vertebrate does not have any feathers or
cannot fly, and is cold-blooded) are due to the fact that the vertebrate does
not satisfy the first three rules. If the number of rules is large, interpreting the
meaning of the rules residing near the bottom of the list can be a cumbersome
task.
There are two broad classes of methods for extracting classification rules: (1)
direct methods, which extract classification rules directly from data, and (2)
indirect methods, which extract classification rules from other classification
models, such as decision trees and neural networks.
Direct methods partition the attribute space into smaller subspaces so that
all the records that belong to a subspace can be classified using a single classi-
fication rule. Indirect methods use the classification rules to provide a succinct
description of more complex classification models. Detailed discussions of these
methods are presented in Sections 5.1.4 and 5.1.5, respectively.
Figure 5.2. An example of the sequential covering algorithm.
Learn-One-Rule Function

Figure 5.3. General-to-specific and specific-to-general rule-growing strategies.
conjuncts are subsequently added to improve the rule's quality. Figure 5.3(a)
shows the general-to-specific rule-growing strategy for the vertebrate classifi-
cation problem. The conjunct Body Temperature=warm-blooded is initially
chosen to form the rule antecedent. The algorithm then explores all the possi-
ble candidates and greedily chooses the next conjunct, Gives Birth=yes, to
be added into the rule antecedent. This process continues until the stopping
criterion is met (e.g., when the added conjunct does not improve the quality
of the rule).
For the specific-to-general strategy, one of the positive examples is ran-
domly chosen as the initial seed for the rule-growing process. During the
refinement step, the rule is generalized by removing one of its conjuncts so
that it can cover more positive examples. Figure 5.3(b) shows the specific-to-
general approach for the vertebrate classification problem. Suppose a positive
example for mammals is chosen as the initial seed. The initial rule contains
the same conjuncts as the attribute values of the seed. To improve its cov-
erage, the rule is generalized by removing the conjunct Hibernate=no. The
refinement step is repeated until the stopping criterion is met, e.g., when the
rule starts covering negative examples.
The previous approaches may produce suboptimal rules because the rules
are grown in a greedy fashion. To avoid this problem, a beam search may be
used, where k of the best candidate rules are maintained by the algorithm.
Each candidate rule is then grown separately by adding (or removing) a con-
junct from its antecedent. The quality of the candidates is evaluated and the
k best candidates are chosen for the next iteration.
The accuracies for r1 and r2 are 90.9% and 100%, respectively. However,
r1 is the better rule despite its lower accuracy. The high accuracy for r2 is
potentially spurious because the coverage of the rule is too low.
1. A statistical test can be used to prune rules that have poor coverage.
For example, we may compute the following likelihood ratio statistic:

R = 2 \sum_{i=1}^{k} f_i \log(f_i / e_i),

where k is the number of classes, f_i is the observed frequency of class i
examples covered by the rule, and e_i is the expected frequency of a rule
that makes random predictions.
2. An evaluation metric that takes into account the rule coverage can be
used. Examples of such metrics include the Laplace and m-estimate
measures:

Laplace = \frac{f_+ + 1}{n + k},     (5.4)

m\text{-estimate} = \frac{f_+ + k\,p_+}{n + k},     (5.5)

where n is the number of examples covered by the rule, f_+ is the number
of positive examples covered by the rule, k is the total number of classes,
and p_+ is the prior probability for the positive class. Note that the m-
estimate is equivalent to the Laplace measure by choosing p_+ = 1/k.
Depending on the rule coverage, these measures capture the trade-off
between rule accuracy and the prior probability of the positive class. If
the rule does not cover any training example, then the Laplace mea-
sure reduces to 1/k, which is the prior probability of the positive class
assuming a uniform class distribution. The m-estimate also reduces to
the prior probability (p_+) when n = 0. However, if the rule coverage
is large, then both measures asymptotically approach the rule accuracy,
f_+/n. Going back to the previous example, the Laplace measure for
r1 is 51/57 = 89.47%, which is quite close to its accuracy. Conversely,
the Laplace measure for r2 (75%) is significantly lower than its accuracy
because r2 has a much lower coverage.
3. An evaluation metric that takes into account the support count of the
rule can be used. One such metric is the FOIL's information gain.
The support count of a rule corresponds to the number of positive exam-
ples covered by the rule. Suppose the rule r : A → + covers p_0 positive
examples and n_0 negative examples. After adding a new conjunct B, the
extended rule r' : A ∧ B → + covers p_1 positive examples and n_1 neg-
ative examples. Given this information, the FOIL's information gain of
the extended rule is defined as follows:

FOIL's information gain = p_1 \times \Big( \log_2 \frac{p_1}{p_1 + n_1} - \log_2 \frac{p_0}{p_0 + n_0} \Big).
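The three measures above are straightforward to compute, as the sketch below shows. The counts used for r1 (50 positives, 5 negatives) and r2 (2 positives, 0 negatives) are assumed values chosen to be consistent with the accuracies quoted in the text, and the FOIL gain call uses purely hypothetical counts.

from math import log2

# Rule evaluation measures: Laplace (Eq. 5.4), m-estimate (Eq. 5.5),
# and FOIL's information gain.
def laplace(f_pos, n, k):
    return (f_pos + 1) / (n + k)

def m_estimate_rule(f_pos, n, k, p_pos):
    return (f_pos + k * p_pos) / (n + k)

def foil_gain(p0, n0, p1, n1):
    return p1 * (log2(p1 / (p1 + n1)) - log2(p0 / (p0 + n0)))

print(laplace(50, 55, 2))          # 51/57 = 0.8947, close to r1's 90.9% accuracy
print(laplace(2, 2, 2))            # 3/4  = 0.75, well below r2's 100% accuracy
print(foil_gain(29, 21, 12, 3))    # hypothetical counts, for illustration only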
Figure 5.4. Elimination of training records by the sequential covering algorithm.
R1, R2, and R3 represent regions covered by three different rules.
Figure 5.4 shows three possible rules, R1, R2, and R3, extracted from a
data set that contains 29 positive examples and 21 negative examples. The
accuracies of R1, R2, and R3 are 12/15 (80%), 7/10 (70%), and 8/12 (66.7%),
respectively. R1 is generated first because it has the highest accuracy. After
generating R1, it is clear that the positive examples covered by the rule must be
removed so that the next rule generated by the algorithm is different from R1.
Next, suppose the algorithm is given the choice of generating either R2 or R3.
Even though R2 has higher accuracy than R3, R1 and R3 together cover 18
positive examples and 5 negative examples (resulting in an overall accuracy of
78.3%), whereas R1 and R2 together cover 19 positive examples and 6 negative
examples (resulting in an overall accuracy of 76%). The incremental impact of
R2 or R3 on accuracy is more evident when the positive and negative examples
covered by R1 are removed before computing their accuracies. In particular, if
positive examples covered by R1 are not removed, then we may overestimate
the effective accuracy of R3, and if negative examples are not removed, then
we may underestimate the accuracy of R3. In the latter case, we might end up
preferring R2 over R3 even though half of the false positive errors committed
by R3 have already been accounted for by the preceding rule, R1.
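The record-elimination step just described is the heart of sequential covering. A compact skeleton of the overall loop, under the assumption that a learn_one_rule routine is available, might look as follows; it is an illustration, not the book's pseudocode.

# Skeleton of the sequential covering algorithm. learn_one_rule(records, target)
# is assumed to return (rule, covered_records) for the current positive class.
def sequential_covering(records, target_class, learn_one_rule, min_positive=1):
    remaining = list(records)
    rule_set = []
    while sum(1 for r in remaining if r["class"] == target_class) >= min_positive:
        rule, covered = learn_one_rule(remaining, target_class)
        if not covered:                       # stopping criterion: rule covers nothing
            break
        rule_set.append(rule)
        covered_ids = set(id(r) for r in covered)
        remaining = [r for r in remaining if id(r) not in covered_ids]  # remove covered records
    return rule_set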
RIPPER Algorithm
To illustrate the direct method, we consider a widely used rule induction algo-
rithm called RIPPER. This algorithm scales almost linearly with the number
of training examples and is particularly suited for building models from data
sets with imbalanced class distributions. RIPPER also works well with noisy
data sets because it uses a validation set to prevent model overfitting.
For two-class problems, RIPPER chooses the majority class as its default
class and learns the rules for detecting the minority class. For multiclass prob-
lems, the classes are ordered according to their frequencies. Let (y_1, y_2, ..., y_c)
be the ordered classes, where y_1 is the least frequent class and y_c is the most
frequent class. During the first iteration, instances that belong to y_1 are la-
beled as positive examples, while those that belong to other classes are labeled
as negative examples. The sequential covering method is used to generate rules
that discriminate between the positive and negative examples. Next, RIPPER
extracts rules that distinguish y_2 from other remaining classes. This process
is repeated until we are left with y_c, which is designated as the default class.
Building the Rule Set After generating a rule, all the positive and negative
examples covered by the rule are eliminated. The rule is then added into the
rule set as long as it does not violate the stopping condition, which is based
on the minimum description length principle. If the new rule increases the
total description length of the rule set by at least d bits, then RIPPER stops
adding rules into its rule set (by default, d is chosen to be 64 bits). Another
stopping condition used by RIPPER is that the error rate of the rule on the
validation set must not exceed 50%.
Rule Set
r1: (P = No, Q = No) → −
r2: (P = No, Q = Yes) → +
r3: (P = Yes, R = No) → +
r4: (P = Yes, R = Yes, Q = No) → −
r5: (P = Yes, R = Yes, Q = Yes) → +
Example 5.2. Consider the following three rules from Figure 5.5:

r2: (P = No) ∧ (Q = Yes) → +
r3: (P = Yes) ∧ (R = No) → +
r5: (P = Yes) ∧ (R = Yes) ∧ (Q = Yes) → +

Observe that the rule set always predicts a positive class when the value of Q
is Yes. Therefore, we may simplify the rules as follows:

r2': (Q = Yes) → +
r3: (P = Yes) ∧ (R = No) → +
Rule-Based Classifier:
(Gives Birth=No, Aerial Creature=Yes) => Birds
(Gives Birth=No, Aquatic Creature=Yes) => Fishes
(Gives Birth=Yes) => Mammals
(Gives Birth=No, Aerial Creature=No, Aquatic Creature=No) => Reptiles
( ) => Amphibians

Figure 5.6. Classification rules extracted from a decision tree for the vertebrate
classification problem.
Rule Generation Classification rules are extracted for every path from the
root to one of the leaf nodes in the decision tree. Given a classification rule
r : A → y, we consider a simplified rule, r' : A' → y, where A' is obtained
by removing one of the conjuncts in A. The simplified rule with the lowest
pessimistic error rate is retained provided its error rate is less than that of the
original rule. The rule-pruning step is repeated until the pessimistic error of
the rule cannot be improved further. Because some of the rules may become
identical after pruning, the duplicate rules must be discarded.
Rule Ordering After generating the rule set, C4.5rules uses the class-based
ordering scheme to order the extracted rules. Rules that predict the same class
are grouped together into the same subset. The total description length for
each subset is computed, and the classes are arranged in increasing order of
their total description length. The class that has the smallest description
length is given the highest priority because it is expected to contain the best
set of rules. The total description length for a class is given by L_exception +
g × L_model, where L_exception is the number of bits needed to encode the misclassified
examples, L_model is the number of bits needed to encode the model, and g is a
tuning parameter whose default value is 0.5. The tuning parameter depends
on the number of redundant attributes present in the model. The value of the
tuning parameter is small if the model contains many redundant attributes.
Figure 5.7. The 1-, 2-, and 3-nearest neighbors of an instance.

Figure 5.8. k-nearest neighbor classification with large k.
5.2.1 Algorithm
A high-level summary of the nearest-neighbor classification method is given in
Algorithm 5.2. The algorithm computes the distance (or similarity) between
each test example z = (x', y') and all the training examples (x, y) ∈ D to
determine its nearest-neighbor list, D_z. Such computation can be costly if the
number of training examples is large. However, efficient indexing techniques
are available to reduce the amount of computations needed to find the nearest
neighbors of a test example.
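A minimal sketch of the nearest-neighbor decision rule with majority voting is shown below; the training points and the use of Euclidean distance are assumptions made for illustration, not a specification from the text.

from collections import Counter
from math import dist

# k-nearest-neighbor classification with majority voting.
# train is a list of (attribute_vector, label) pairs; z is the test example.
def knn_classify(train, z, k):
    neighbors = sorted(train, key=lambda xy: dist(xy[0], z))[:k]   # nearest-neighbor list D_z
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"), ((5.0, 5.0), "-"), ((5.5, 4.5), "-")]
print(knn_classify(train, (1.1, 1.1), k=3))   # "+"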
a global model that fits the entire input space. Because the classification
decisions are made locally, nearest-neighbor classifiers (with small values
of k) are quite susceptible to noise.
knowledge of the classes with new evidence gathered from data. The use of the
Bayes theorem for solving classification problems will be explained, followed
by a description of two implementations of Bayesian classifiers: naive Bayes
and the Bayesian belief network.
This question can be answered by using the well-known Bayes theorem. For
completeness, we begin with some basic definitions from probability theory.
Readers who are unfamiliar with concepts in probability may refer to Appendix
C for a brief review of this topic.
Let X and Y be a pair of random variables. Their joint probability, P(X =
x, Y = y), refers to the probability that variable X will take on the value
x and variable Y will take on the value y. A conditional probability is the
probability that a random variable will take on a particular value given that the
outcome for another random variable is known. For example, the conditional
probability P(Y = y | X = x) refers to the probability that the variable Y will
take on the value y, given that the variable X is observed to have the value x.
The joint and conditional probabilities for X and Y are related in the following
way:

P(X, Y) = P(Y|X) × P(X) = P(X|Y) × P(Y).     (5.9)

Rearranging the last two expressions in Equation 5.9 leads to the following
formula, known as the Bayes theorem:

P(Y|X) = \frac{P(X|Y)\,P(Y)}{P(X)}.     (5.10)
The Bayes theorem can be used to solve the prediction problem stated
at the beginning of this section. For notational convenience, let X be the
random variable that represents the team hosting the match and Y be the
random variable that represents the winner of the match. Both X and Y can
take on values from the set {0, 1}. We can summarize the information given
in the problem as follows: P(Y = 1) = 0.35, P(Y = 0) = 0.65, P(X = 1|Y = 1) = 0.75,
and P(X = 1|Y = 0) = 0.3. The posterior probability that Team 1 wins the next
match it hosts is therefore

P(Y = 1|X = 1) = \frac{P(X = 1|Y = 1) \times P(Y = 1)}{P(X = 1)}
               = \frac{P(X = 1|Y = 1) \times P(Y = 1)}{P(X = 1, Y = 1) + P(X = 1, Y = 0)}
               = \frac{P(X = 1|Y = 1)\,P(Y = 1)}{P(X = 1|Y = 1)\,P(Y = 1) + P(X = 1|Y = 0)\,P(Y = 0)}
               = \frac{0.75 \times 0.35}{0.75 \times 0.35 + 0.3 \times 0.65}
               = 0.5738,

where the law of total probability (see Equation C.5 on page 722) was applied
in the second line. Furthermore, P(Y = 0|X = 1) = 1 − P(Y = 1|X = 1) =
0.4262. Since P(Y = 1|X = 1) > P(Y = 0|X = 1), Team 1 has a better
chance than Team 0 of winning the next match.
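The computation above can be reproduced in a few lines of code; the probability values are the ones used in the worked example, and the variable names are illustrative.

# Posterior probability from the Bayes theorem (Equation 5.10).
p_y1 = 0.35                 # prior probability that Team 1 wins
p_y0 = 0.65
p_x1_given_y1 = 0.75        # probability that Team 1 hosts, given Team 1 wins
p_x1_given_y0 = 0.30        # probability that Team 1 hosts, given Team 0 wins

evidence = p_x1_given_y1 * p_y1 + p_x1_given_y0 * p_y0   # P(X=1), law of total probability
posterior = p_x1_given_y1 * p_y1 / evidence
print(round(posterior, 4))   # 0.5738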
Before describing how the Bayes theorem can be used for classification, let
us formalize the classification problem from a statistical perspective. Let X
denote the attribute set and Y denote the class variable. If the class variable
has a non-deterministic relationship with the attributes, then we can treat
X and Y as random variables and capture their relationship probabilistically
using P(Y|X). This conditional probability is also known as the posterior
probability for Y, as opposed to its prior probability, P(Y).
During the training phase, we need to learn the posterior probabilities
P(Y|X) for every combination of X and Y based on information gathered
from the training data. By knowing these probabilities, a test record X' can
be classified by finding the class Y' that maximizes the posterior probability,
Figure 5.9. Training set for predicting the loan default problem.
Suppose we are given a test record with the following attribute set: X =
(Home Owner = No, Marital Status = Married, Annual Income = $120K). To
classify the record, we need to compute the posterior probabilities P(Yes|X)
and P(No|X) based on information available in the training data. If P(Yes|X) >
P(No|X), then the record is classified as Yes; otherwise, it is classified as No.
Estimating the posterior probabilities accurately for every possible combi-
nation of class label and attribute value is a difficult problem because it re-
quires a very large training set, even for a moderate number of attributes. The
Bayes theorem is useful because it allows us to express the posterior probabil-
ity in terms of the prior probability P(Y), the class-conditional probability
P(X|Y), and the evidence, P(X):

P(Y|X) = \frac{P(X|Y) \times P(Y)}{P(X)}.     (5.11)

When comparing the posterior probabilities for different values of Y, the de-
nominator term, P(X), is always constant, and thus, can be ignored. The
prior probability P(Y) can be easily estimated from the training set by com-
puting the fraction of training records that belong to each class. To estimate
the class-conditional probabilities P(X|Y), we present two implementations of
Bayesian classification methods: the naive Bayes classifier and the Bayesian
belief network. These implementations are described in Sections 5.3.3 and
5.3.5, respectively.
A naive Bayes classifier estimates the class-conditional probability by assuming
that the attributes are conditionally independent, given the class label y:

P(X|Y = y) = \prod_{i=1}^{d} P(X_i | Y = y),     (5.12)
Conditional Independence
Before delving into the details of how a naive Bayes classifier works, let us
examine the notion of conditional independence. Let X, Y, and Z denote
three sets of random variables. The variables in X are said to be conditionally
independent of Y, given Z, if the following condition holds:
P(X | Y, Z) = P(X | Z).     (5.13)

The conditional independence between X and Y can also be written as

P(X, Y | Z) = \frac{P(X, Y, Z)}{P(Z)}
            = \frac{P(X, Y, Z)}{P(Y, Z)} \times \frac{P(Y, Z)}{P(Z)}
            = P(X | Y, Z) \times P(Y | Z)
            = P(X | Z) \times P(Y | Z),     (5.14)

where Equation 5.13 was used to obtain the last line of Equation 5.14.
Since P(X) is fixed for every Y, it is sufficient to choose the class that maxi-
mizes the numerator term, P(Y) \prod_{i=1}^{d} P(X_i|Y). In the next two subsections,
we describe several approaches for estimating the conditional probabilities
P(X_i|Y) for categorical and continuous attributes.
There are two ways to estimate the class-conditional probabilities for contin-
uous attributes in naive Bayes classifiers:
1. We can discretize each continuous attribute and then replace the con-
tinuous attribute value with its corresponding discrete interval. This
approach transforms the continuous attributes into ordinal attributes.
The conditional probability P(X_i|Y = y) is estimated by computing
the fraction of training records belonging to class y that falls within the
corresponding interval for X_i. The estimation error depends on the dis-
cretization strategy (as described in Section 2.3.6 on page 57), as well as
the number of discrete intervals. If the number of intervals is too large,
there are too few training records in each interval to provide a reliable
estimate for P(X_i|Y). On the other hand, if the number of intervals
is too small, then some intervals may aggregate records from different
classes and we may miss the correct decision boundary.
2. We can assume a certain form of probability distribution for the contin-
uous attribute and estimate its parameters from the training data. A
Gaussian distribution is usually chosen:

P(X_i = x_i | Y = y_j) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}} \exp\Big(-\frac{(x_i - \mu_{ij})^2}{2\sigma_{ij}^2}\Big).     (5.16)
The sample mean and variance of Annual Income for the records that belong
to the class No are

\bar{x} = \frac{125 + 100 + 70 + \cdots + 75}{7} = 110,

s^2 = \frac{(125 - 110)^2 + (100 - 110)^2 + \cdots + (75 - 110)^2}{6} = 2975,

s = \sqrt{2975} = 54.54.
Given a test record with taxable income equal to $120K, we can compute
its class-conditional probability as follows:

P(Income = 120 | No) = \frac{1}{\sqrt{2\pi}\,(54.54)} \exp\Big(-\frac{(120 - 110)^2}{2 \times 2975}\Big) = 0.0072.
P(x_i \le X_i \le x_i + \epsilon \mid Y = y_j) = \int_{x_i}^{x_i+\epsilon} f(X_i; \mu_{ij}, \sigma_{ij})\,dX_i
\approx f(x_i; \mu_{ij}, \sigma_{ij}) \times \epsilon.     (5.17)
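A sketch of the Gaussian estimate in Equation 5.16 is given below; it reproduces the value computed above for Annual Income = $120K given class No, and the function name is illustrative.

from math import exp, pi, sqrt

# Gaussian estimate of a class-conditional probability for a continuous
# attribute (Equation 5.16), using the sample mean and variance computed above.
def gaussian_density(x, mean, variance):
    return exp(-(x - mean) ** 2 / (2 * variance)) / (sqrt(2 * pi) * sqrt(variance))

print(round(gaussian_density(120, 110, 2975), 4))   # 0.0072, i.e., P(Income = 120K | No)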
Consider the data set shown in Figure 5.10(a). We can compute the class-
conditional probability for each categorical attribute, along with the sample
mean and variance for the continuous attribute using the methodology de-
scribed in the previous subsections. These probabilities are summarized in
Figure 5.10(b).
To predict the class label of a test record X = (Home Owner = No, Marital
Status = Married, Income = $120K), we need to compute the posterior prob-
abilities P(No|X) and P(Yes|X). Recall from our earlier discussion that these
posterior probabilities can be estimated by computing the product between
the prior probability P(Y) and the class-conditional probabilities \prod_i P(X_i|Y),
which corresponds to the numerator of the right-hand side term in Equation
5.15.
The prior probabilities of each class can be estimated by calculating the
fraction of training records that belong to each class. Since there are three
records that belong to the class Yes and seven records that belong to the class
P(Home Owner = Yes|No) = 3/7
P(Home Owner = No|No) = 4/7
P(Home Owner = Yes|Yes) = 0
P(Home Owner = No|Yes) = 1
P(Marital Status = Single|No) = 2/7
P(Marital Status = Divorced|No) = 1/7
P(Marital Status = Married|No) = 4/7
P(Marital Status = Single|Yes) = 2/3
P(Marital Status = Divorced|Yes) = 1/3
P(Marital Status = Married|Yes) = 0
For Annual Income:
If class = No: sample mean = 110, sample variance = 2975
If class = Yes: sample mean = 90, sample variance = 25

Figure 5.10. The naive Bayes classifier for the loan classification problem
(panel (a) shows the training set; panel (b) summarizes the estimated
probabilities listed above).
No, P(Yes) = 0.3 and P(No) = 0.7. Using the information provided in Figure
5.10(b), the class-conditional probabilities can be computed as follows:
The naive Bayes classifier will not be able to classify the record. This prob-
lem can be addressed by using the m-estimate approach for estimating the
conditional probabilities:

P(x_i|y_j) = \frac{n_c + m\,p}{n + m},     (5.18)

where n is the total number of instances from class y_j, n_c is the number of
training examples from class y_j that take on the value x_i, m is a parameter
known as the equivalent sample size, and p is a user-specified parameter. If
there is no training set available (i.e., n = 0), then P(x_i|y_j) = p. Therefore
p can be regarded as the prior probability of observing the attribute value
x_i among records with class y_j. The equivalent sample size determines the
tradeoff between the prior probability p and the observed probability n_c/n.
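Equation 5.18 is a one-line computation, as the sketch below shows; the call reproduces the m-estimate of P(Marital Status = Married | Yes) with m = 3 and p = 1/3 discussed in the example that follows.

# m-estimate of a conditional probability (Equation 5.18).
# n_c: class examples with the attribute value, n: class examples,
# m: equivalent sample size, p: user-specified prior.
def m_estimate(n_c, n, m, p):
    return (n_c + m * p) / (n + m)

print(m_estimate(0, 3, 3, 1 / 3))   # 1/6, roughly 0.167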
In the example given in the previous section, the conditional probability
P(Status = Married|Yes) = 0 because none of the training records for the
class has the particular attribute value. Using the m-estimate approach with
m = 3 and p = 1/3, the conditional probability is no longer zero:

P(Marital Status = Married|Yes) = \frac{0 + 3 \times 1/3}{3 + 3} = 1/6.
If we assume p = 1/3 for all attributes of class Yes and p = 2/3 for all
attributes of class No, then
They are robust to isolated noise points because such points are averaged
out when estimating conditional probabilities from data. Naive Bayes
classifiers can also handle missing values by ignoring the example during
model building and classification.
P(Y = 0|A = 0, B = 0) = \frac{P(A = 0|Y = 0)\,P(B = 0|Y = 0)\,P(Y = 0)}{P(A = 0, B = 0)}
                      = \frac{0.16 \times P(Y = 0)}{P(A = 0, B = 0)},

P(Y = 1|A = 0, B = 0) = \frac{P(A = 0|Y = 1)\,P(B = 0|Y = 1)\,P(Y = 1)}{P(A = 0, B = 0)}
                      = \frac{0.36 \times P(Y = 1)}{P(A = 0, B = 0)}.

If P(Y = 0) = P(Y = 1), then the naive Bayes classifier would assign
the record to class 1. However, the truth is

P(A = 0, B = 0|Y = 0) = P(A = 0|Y = 0) = 0.4,

because A and B are perfectly correlated when Y = 0, so that

P(Y = 0|A = 0, B = 0) = \frac{P(A = 0, B = 0|Y = 0)\,P(Y = 0)}{P(A = 0, B = 0)}
                      = \frac{0.4 \times P(Y = 0)}{P(A = 0, B = 0)},

which is larger than that for Y = 1. The record should have been
classified as class 0.
Figure 5.11. Comparing the likelihood functions of a crocodile and an alligator.
P(X|Crocodile) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\Big[-\frac{1}{2}\Big(\frac{X - 15}{\sigma}\Big)^2\Big],     (5.19)

P(X|Alligator) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\Big[-\frac{1}{2}\Big(\frac{X - 12}{\sigma}\Big)^2\Big].     (5.20)

Figure 5.11 shows a comparison between the class-conditional probabilities
for a crocodile and an alligator. Assuming that their prior probabilities are
the same, the ideal decision boundary is located at some length \hat{x} such that

\Big(\frac{\hat{x} - 15}{\sigma}\Big)^2 = \Big(\frac{\hat{x} - 12}{\sigma}\Big)^2,

which can be solved to yield \hat{x} = 13.5. The decision boundary for this example
is located halfway between the two means.
Figure 5.12. Representing probabilistic relationships using directed acyclic graphs.
When the prior probabilities are different, the decision boundary shifts
toward the class with lower prior probability (see Exercise 10 on page 319).
Furthermore, the minimum error rate attainable by any classifier on the given
data can also be computed. The ideal decision boundary in the preceding
example classifies all creatures whose lengths are less than \hat{x} as alligators and
those whose lengths are greater than \hat{x} as crocodiles. The error rate of the
classifier is given by the sum of the area under the posterior probability curve
for crocodiles (from length 0 to \hat{x}) and the area under the posterior probability
curve for alligators (from \hat{x} to \infty):
Model Representation
1. If a node X does not have any parents, then the table contains only the
prior probability P(X).
2. If a node X has only one parent, Y, then the table contains the condi-
tional probability P(X|Y).
3. If a node X has multiple parents, {Y_1, Y_2, ..., Y_k}, then the table contains
the conditional probability P(X|Y_1, Y_2, ..., Y_k).
Figure 5.13. A Bayesian belief network for detecting heart disease and heartburn
in patients.
Model Building
Model building in Bayesian networks involves two steps: (1) creating the struc-
ture of the network, and (2) estimating the probability values in the tables
associatedwith each node. The network topology can be obtained by encod-
ing the subjective knowledge of domain experts. Algorithm 5.3 presents a
systematic procedure for inducing the topology of a Bayesian network.
Example 5.4. Consider the variables shown in Figure 5.13. After performing Step 1, let us assume that the variables are ordered in the following way: (E, D, HD, Hb, CP, BP). From Steps 2 to 7, starting with variable D, we obtain the following conditional probabilities:

• P(D | E) is simplified to P(D).
Suppose we are interested in using the BBN shown in Figure 5.13 to diagnose whether a person has heart disease. The following cases illustrate how the diagnosis can be made under different scenarios.

Without any prior information, we can determine whether the person is likely to have heart disease by computing the prior probabilities P(HD = Yes) and P(HD = No). To simplify the notation, let α ∈ {Yes, No} denote the binary values of Exercise and β ∈ {Healthy, Unhealthy} denote the binary values of Diet.
P(HD=\text{Yes}) = \sum_{\alpha} \sum_{\beta} P(HD=\text{Yes} \mid E=\alpha, D=\beta)\, P(E=\alpha, D=\beta)
= \sum_{\alpha} \sum_{\beta} P(HD=\text{Yes} \mid E=\alpha, D=\beta)\, P(E=\alpha)\, P(D=\beta)
= 0.25 \times 0.7 \times 0.25 + 0.45 \times 0.7 \times 0.75 + 0.55 \times 0.3 \times 0.25 + 0.75 \times 0.3 \times 0.75
= 0.49.

Since P(HD = No) = 1 − P(HD = Yes) = 0.51, the person has a slightly higher chance of not getting the disease.
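The marginalization above is easy to reproduce programmatically. The sketch below encodes the conditional probability table P(HD = Yes | E, D) from Figure 5.13 together with priors P(E = Yes) = 0.7 and P(D = Healthy) = 0.25, which are inferred from the arithmetic shown above rather than quoted from the figure:

```python
# Prior probabilities (inferred from the computation above).
P_E = {"Yes": 0.7, "No": 0.3}
P_D = {"Healthy": 0.25, "Unhealthy": 0.75}

# Conditional probability table P(HD=Yes | E, D) from Figure 5.13.
P_HD_yes = {
    ("Yes", "Healthy"): 0.25,
    ("Yes", "Unhealthy"): 0.45,
    ("No", "Healthy"): 0.55,
    ("No", "Unhealthy"): 0.75,
}

# Marginalize over Exercise and Diet:
# P(HD=Yes) = sum_a sum_b P(HD=Yes | E=a, D=b) P(E=a) P(D=b)
p_hd_yes = sum(P_HD_yes[(e, d)] * P_E[e] * P_D[d] for e in P_E for d in P_D)
print(round(p_hd_yes, 2))   # 0.49
```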
If the person has high blood pressure, we can make a diagnosis about heart disease by comparing the posterior probabilities P(HD = Yes | BP = High) against P(HD = No | BP = High). To do this, we must compute P(BP = High):

P(BP=\text{High}) = \sum_{\gamma} P(BP=\text{High} \mid HD=\gamma)\, P(HD=\gamma),

where γ ∈ {Yes, No}. Therefore, the posterior probability the person has heart disease is
Suppose we are told that the person exercises regularly and eats a healthy diet. How does the new information affect our diagnosis? With the new information, the posterior probability that the person has heart disease is
The model therefore suggests that eating healthily and exercising regularly
may reduce a person's risk of getting heart disease.
Characteristics of BBN
3. Bayesian networks are well suited to dealing with incomplete data. Instances with missing attributes can be handled by summing or integrating the probabilities over all possible values of the attribute.
5.4.1 Perceptron
Consider the diagram shown in Figure 5.14. The table on the left shows a data set containing three boolean variables (x_1, x_2, x_3) and an output variable, y, that takes on the value -1 if at least two of the three inputs are zero, and +1 if at least two of the inputs are greater than zero.
x1  x2  x3   y
 1   0   0  -1
 1   0   1   1
 1   1   0   1
 1   1   1   1
 0   0   1  -1
 0   1   0  -1
 0   1   1   1
 0   0   0  -1

Figure 5.14. Modeling a boolean function using a perceptron. (a) Data set. (b) Perceptron.
\hat{y} = \text{sign}(w_d x_d + w_{d-1} x_{d-1} + \cdots + w_2 x_2 + w_1 x_1 - t),   (5.22)

where w_1, w_2, \ldots, w_d are the weights of the input links and x_1, x_2, \ldots, x_d are the input attribute values. The sign function, which acts as an activation function for the output neuron, outputs a value +1 if its argument is positive and -1 if its argument is negative. The perceptron model can be written in a more compact form as follows:

\hat{y} = \text{sign}(\mathbf{w} \cdot \mathbf{x}),   (5.23)

where w_0 = -t, x_0 = 1, and \mathbf{w} \cdot \mathbf{x} is the dot product between the weight vector w and the input attribute vector x.
During the training phase, the weights are adjusted according to the update formula

w_j^{(k+1)} = w_j^{(k)} + \lambda \left( y_i - \hat{y}_i^{(k)} \right) x_{ij},   (5.24)

where w_j^{(k)} is the weight parameter associated with the j-th input link after the k-th iteration, \lambda is a parameter known as the learning rate, and x_{ij} is the value of the j-th attribute of the training example x_i. The justification for the weight update formula is rather intuitive. Equation 5.24 shows that the new weight w_j^{(k+1)} is a combination of the old weight w_j^{(k)} and a term proportional to the prediction error, (y - \hat{y}). If the prediction is correct, then the weight remains unchanged. Otherwise, it is modified in the following ways:

In the weight update formula, links that contribute the most to the error term are the ones that require the largest adjustment. However, the weights should not be changed too drastically because the error term is computed only for the current training example. Otherwise, the adjustments made in earlier iterations will be undone. The learning rate \lambda, a parameter whose value is between 0 and 1, can be used to control the amount of adjustment made in each iteration. If \lambda is close to 0, then the new weight is mostly influenced by the value of the old weight. On the other hand, if \lambda is close to 1, then the new weight is sensitive to the amount of adjustment performed in the current iteration. In some cases, an adaptive \lambda value can be used; initially, \lambda is moderately large during the first few iterations and then gradually decreases in subsequent iterations.
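A minimal sketch of the perceptron learning rule of Equation 5.24 is given below, applied to the data set of Figure 5.14. The learning rate, number of epochs, and tie-breaking convention (predict +1 when the weighted sum is exactly zero) are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def train_perceptron(X, y, lam=0.1, epochs=100):
    """Perceptron learning: w_j <- w_j + lam * (y_i - y_hat_i) * x_ij (Equation 5.24).

    A bias input x_0 = 1 is prepended so that the threshold is learned as w_0.
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])    # add the bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            y_hat = 1 if np.dot(w, xi) >= 0 else -1
            w += lam * (yi - y_hat) * xi             # no change when the prediction is correct
    return w

# Data set of Figure 5.14: y = +1 when at least two inputs equal 1, otherwise -1.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1],
              [0, 0, 1], [0, 1, 0], [0, 1, 1], [0, 0, 0]])
y = np.array([-1, 1, 1, 1, -1, -1, 1, -1])

w = train_perceptron(X, y)
Xb = np.hstack([np.ones((X.shape[0], 1)), X])
preds = np.where(Xb @ w >= 0, 1, -1)
print(w, (preds == y).all())   # the learned weights separate this linearly separable data
```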
The perceptron model shown in Equation 5.23 is linear in its parameters w and attributes x. Because of this, the decision boundary of a perceptron, which is obtained by setting \mathbf{w} \cdot \mathbf{x} = 0, is a linear hyperplane that separates the data into two classes, -1 and +1. Figure 5.15 shows the decision boundary obtained by applying the perceptron learning algorithm to the data set given in Figure 5.14. The perceptron learning algorithm is guaranteed to converge to an optimal solution (as long as the learning rate is sufficiently small) for linearly separable classification problems. If the problem is not linearly separable, the algorithm fails to converge. Figure 5.16 shows an example of nonlinearly separable data given by the XOR function. Perceptron cannot find the right solution for this data because there is no linear hyperplane that can perfectly separate the training instances.

Figure 5.15. Perceptron decision boundary for the data given in Figure 5.14.
Figure 5.16. XOR classification problem. No linear hyperplane can separate the two classes.
1. The network may contain several intermediary layers between its input and output layers. Such intermediary layers are called hidden layers and the nodes embedded in these layers are called hidden nodes. The resulting structure is known as a multilayer neural network (see Figure 5.17). In a feed-forward neural network, the nodes in one layer are connected only to the nodes in the next layer. The perceptron is a single-layer, feed-forward neural network because it has only one layer of nodes, the output layer, that performs complex mathematical operations. In a recurrent neural network, the links may connect nodes within the same layer or nodes from one layer to the previous layers.

Figure 5.17. Example of a multilayer feed-forward artificial neural network (ANN).
2. The network may use types of activation functions other than the sign
function. Examples of other activation functions include linear, sigmoid
(logistic), and hyperbolic tangent functions, as shown in Figure 5.18.
These activation functions allow the hidden and output nodes to produce
output values that are nonlinear in their input parameters.
Figure 5.18. Types of activation functions in artificial neural networks.
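The activation functions mentioned above are straightforward to evaluate; the short sketch below is an illustration (not code from the text) that prints their values on a few sample inputs:

```python
import numpy as np

def sign(z):      return np.where(z >= 0, 1.0, -1.0)   # step-like activation
def linear(z):    return z
def sigmoid(z):   return 1.0 / (1.0 + np.exp(-z))       # logistic activation
def tanh_act(z):  return np.tanh(z)                     # hyperbolic tangent

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, f in [("sign", sign), ("linear", linear),
                ("sigmoid", sigmoid), ("tanh", tanh_act)]:
    print(f"{name:>8}: {np.round(f(z), 3)}")
```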
For example, consider the XOR problem described in the previous section. The instances can be classified using two hyperplanes that partition the input space into their respective classes, as shown in Figure 5.19(a). Because a perceptron can create only one hyperplane, it cannot find the optimal solution. This problem can be addressed using a two-layer, feed-forward neural network, as shown in Figure 5.19(b). Intuitively, we can think of each hidden node as a perceptron that tries to construct one of the two hyperplanes, while the output node simply combines the results of the perceptrons to yield the decision boundary shown in Figure 5.19(a).
To learn the weights of an ANN model, we need an efficient algorithm
that converges to the right solution when a sufficient amount of training data
is provided. One approach is to treat each hidden node or output node in
the network as an independent perceptron unit and to apply the same weight
update formula as Equation 5.24. Obviously, this approach will not work
because we lack a priori knowledge about the true outputs of the hidden nodes. This makes it difficult to determine the error term, (y - \hat{y}), associated with each hidden node. A methodology for learning the weights of a neural network based on the gradient descent approach is presented next.

Figure 5.19. A two-layer, feed-forward neural network for the XOR problem.
E(\mathbf{w}) = \frac{1}{2} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^{2}.   (5.25)
Note that the sum of squared errors depends on w because the predicted class \hat{y} is a function of the weights assigned to the hidden and output nodes. Figure 5.20 shows an example of the error surface as a function of its two parameters, w_1 and w_2. This type of error surface is typically encountered when \hat{y}_i is a linear function of its parameters, w. If we substitute \hat{y} = \mathbf{w} \cdot \mathbf{x} into Equation 5.25, then the error function becomes quadratic in its parameters and a global minimum solution can be easily found.
In most cases, the output of an ANN is a nonlinear function of its parameters because of the choice of its activation functions (e.g., sigmoid or tanh function). As a result, it is no longer straightforward to derive a solution for w that is guaranteed to be globally optimal. Greedy algorithms such as those based on the gradient descent method have been developed to efficiently solve the optimization problem. The weight update formula used by the gradient
Figure 5.20. Error surface E(w_1, w_2) for a two-parameter model.
descent method can be written as

w_j \leftarrow w_j - \lambda \frac{\partial E(\mathbf{w})}{\partial w_j},   (5.26)
where \lambda is the learning rate. The second term states that the weight should be increased in a direction that reduces the overall error term. However, because the error function is nonlinear, it is possible that the gradient descent method may get trapped in a local minimum.
The gradient descent method can be used to learn the weights of the output and hidden nodes of a neural network. For hidden nodes, the computation is not trivial because it is difficult to assess their error term, \partial E / \partial w_j, without knowing what their output values should be. A technique known as back-propagation has been developed to address this problem. There are two phases in each iteration of the algorithm: the forward phase and the backward phase. During the forward phase, the weights obtained from the previous iteration are used to compute the output value of each neuron in the network. The computation progresses in the forward direction; i.e., outputs of the neurons at level k are computed prior to computing the outputs at level k + 1. During the backward phase, the weight update formula is applied in the reverse direction. In other words, the weights at level k + 1 are updated before the weights at level k are updated. This back-propagation approach allows us to use the errors for neurons at layer k + 1 to estimate the errors for neurons at layer k.
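The following sketch illustrates the forward and backward phases of back-propagation on the XOR problem using a single hidden layer and sigmoid activations. The hidden-layer size, learning rate, and number of iterations are arbitrary illustrative choices; with an unlucky initialization the two-node topology of Figure 5.19 can stall in a local minimum, so a slightly wider hidden layer is used here:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR data; targets are 0/1 so a sigmoid output unit can be used.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer (four nodes here, for robustness) plus bias weights.
W1 = rng.normal(scale=1.0, size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1)); b2 = np.zeros((1, 1))
lam = 0.5   # learning rate

for _ in range(10000):
    # Forward phase: compute the outputs layer by layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward phase: propagate the error from the output layer back.
    err_out = (out - y) * out * (1 - out)        # dE/d(net) at the output node
    err_hid = (err_out @ W2.T) * h * (1 - h)     # dE/d(net) at the hidden nodes

    W2 -= lam * (h.T @ err_out); b2 -= lam * err_out.sum(axis=0, keepdims=True)
    W1 -= lam * (X.T @ err_hid); b1 -= lam * err_hid.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))   # predictions after training, ideally near [0, 1, 1, 0]
```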
3. The network topology (e.g., the number of hidden layers and hidden
nodes, and feed-forward or recurrent network architecture) must be se-
lected. Note that the target function representation depends on the
weights of the links, the number of hidden nodes and hidden layers, bi-
ases in the nodes, and type of activation function. Finding the right
topology is not an easy task. One way to do this is to start from a fully
connected network with a sufficiently large number of nodes and hid-
den layers, and then repeat the model-building procedure with a smaller
number of nodes. This approach can be very time consuming. Alter-
natively, instead of repeating the model-building procedure, we could
remove some of the nodes and repeat the model evaluation procedure to
select the right model complexity.
1. Multilayer neural networks with at least one hidden layer are universal approximators; i.e., they can be used to approximate any target functions. Since an ANN has a very expressive hypothesis space, it is important to choose the appropriate network topology for a given problem to avoid model overfitting.
3. Neural networks are quite sensitive to the presence of noise in the training data. One approach to handling noise is to use a validation set to determine the generalization error of the model. Another approach is to decrease the weight by some factor at each iteration.

4. The gradient descent method used for learning the weights of an ANN often converges to some local minimum. One way to escape from the local minimum is to add a momentum term to the weight update formula.

5. Training an ANN is a time consuming process, especially when the number of hidden nodes is large. Nevertheless, test examples can be classified rapidly.
Figure 5.21 shows a plot of a data set containing examples that belong to
two different classes,represented as squares and circles. The data set is also
linearly separable; i.e., we can find a hyperplane such that all the squares
reside on one side of the hyperplane and all the circles reside on the other
Figure 5.21. Possible decision boundaries for a linearly separable data set.
side. However, as shown in Figure 5.21, there are infinitely many such hyperplanes possible. Although their training errors are zero, there is no guarantee that the hyperplanes will perform equally well on previously unseen examples. The classifier must choose one of these hyperplanes to represent its decision boundary, based on how well they are expected to perform on test examples.

To get a clearer picture of how the different choices of hyperplanes affect the generalization errors, consider the two decision boundaries, B_1 and B_2, shown in Figure 5.22. Both decision boundaries can separate the training examples into their respective classes without committing any misclassification errors. Each decision boundary B_i is associated with a pair of hyperplanes, denoted as b_{i1} and b_{i2}, respectively. b_{i1} is obtained by moving a parallel hyperplane away from the decision boundary until it touches the closest square(s), whereas b_{i2} is obtained by moving the hyperplane until it touches the closest circle(s). The distance between these two hyperplanes is known as the margin of the classifier. From the diagram shown in Figure 5.22, notice that the margin for B_1 is considerably larger than that for B_2. In this example, B_1 turns out to be the maximum margin hyperplane of the training instances.
Figure 5.22. Margin of a decision boundary.

any slight perturbations to the decision boundary can have quite a significant impact on its classification. Classifiers that produce decision boundaries with small margins are therefore more susceptible to model overfitting and tend to generalize poorly on previously unseen examples.
A more formal explanation relating the margin of a linear classifier to its generalization error is given by a statistical learning principle known as structural risk minimization (SRM). This principle provides an upper bound to the generalization error of a classifier (R) in terms of its training error (R_e), the number of training examples (N), and the model complexity, otherwise known as its capacity (h). More specifically, with a probability of 1 - \eta, the generalization error of the classifier can be at worst
R \leq R_e + \varphi\!\left( \frac{h}{N}, \frac{\log(\eta)}{N} \right).   (5.27)

A linear decision boundary can be written as

\mathbf{w} \cdot \mathbf{x} + b = 0.   (5.28)

For any two points x_a and x_b located on the decision boundary,

\mathbf{w} \cdot \mathbf{x}_a + b = 0,
\mathbf{w} \cdot \mathbf{x}_b + b = 0,

so that, by subtraction,

\mathbf{w} \cdot (\mathbf{x}_a - \mathbf{x}_b) = 0.

Figure 5.23. Decision boundary and margin of SVM.

For any square x_s located above the decision boundary, we can show that

\mathbf{w} \cdot \mathbf{x}_s + b = k,   (5.29)
where k > 0. Similarly, for any circle x_c located below the decision boundary, we can show that

\mathbf{w} \cdot \mathbf{x}_c + b = k',   (5.30)

where k' < 0. If we label all the squares as class +1 and all the circles as class -1, then we can predict the class label y for any test example z in the following way:

y = \begin{cases} +1, & \text{if } \mathbf{w} \cdot \mathbf{z} + b > 0; \\ -1, & \text{if } \mathbf{w} \cdot \mathbf{z} + b < 0. \end{cases}   (5.31)
Consider the square and the circle that are closest to the decision boundary.
Since the square is located above the decision boundary, it must satisfy Equa-
tion 5.29 for some positive value k, whereas the circle must satisfy Equation
5.30 for some negative value k'. We can rescale the parameters w and b of the decision boundary so that the two parallel hyperplanes b_{i1} and b_{i2} can be expressed as follows:

b_{i1}: \mathbf{w} \cdot \mathbf{x} + b = 1,   (5.32)
b_{i2}: \mathbf{w} \cdot \mathbf{x} + b = -1.   (5.33)

The margin of the decision boundary is given by the distance between these two hyperplanes. To compute the margin, let x_1 be a data point located on b_{i1} and x_2 be a data point on b_{i2}, as shown in Figure 5.23. Upon substituting these points into Equations 5.32 and 5.33, the margin d can be computed by subtracting the second equation from the first equation:

\mathbf{w} \cdot (\mathbf{x}_1 - \mathbf{x}_2) = 2,
\|\mathbf{w}\| \times d = 2,
d = \frac{2}{\|\mathbf{w}\|}.   (5.34)
Learning a Linear SVM Model

The training phase of SVM involves estimating the parameters w and b of the decision boundary from the training data. The parameters must be chosen in such a way that the following two conditions are met:

\mathbf{w} \cdot \mathbf{x}_i + b \geq 1 \quad \text{if } y_i = 1,
\mathbf{w} \cdot \mathbf{x}_i + b \leq -1 \quad \text{if } y_i = -1.   (5.35)

These conditions impose the requirements that all training instances from class y = 1 (i.e., the squares) must be located on or above the hyperplane \mathbf{w} \cdot \mathbf{x} + b = 1, while those instances from class y = -1 (i.e., the circles) must be located on or below the hyperplane \mathbf{w} \cdot \mathbf{x} + b = -1. Both inequalities can be summarized in a more compact form as follows:

y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1, \quad i = 1, 2, \ldots, N.   (5.36)
Although the preceding conditions are also applicable to any linear classifiers (including perceptrons), SVM imposes an additional requirement that the margin of its decision boundary must be maximal. Maximizing the margin, however, is equivalent to minimizing the following objective function:

f(\mathbf{w}) = \frac{\|\mathbf{w}\|^{2}}{2}.   (5.37)

Definition 5.1 (Linear SVM: Separable Case). The learning task in SVM can be formalized as the following constrained optimization problem:

\min_{\mathbf{w}} \; \frac{\|\mathbf{w}\|^{2}}{2}
\quad \text{subject to} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1, \quad i = 1, 2, \ldots, N.
Since the objective function is quadratic and the constraints are linear in
the parameters w and b, this is known as a convex optimization problem,
which can be solved using the standard Lagrange multiplier method. Following is a brief sketch of the main ideas for solving the optimization problem. A more detailed discussion is given in Appendix E.
First, we must rewrite the objective function in a form that takes into
account the constraints imposed on its solutions. The new objective function
is known as the Lagrangian for the optimization problem:

L_P = \frac{1}{2}\|\mathbf{w}\|^{2} - \sum_{i=1}^{N} \lambda_i \big( y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \big),   (5.38)

where the parameters \lambda_i are called the Lagrange multipliers. The first term in the Lagrangian is the same as the original objective function, while the second term captures the inequality constraints. To understand why the objective function must be modified, consider the original objective function given in Equation 5.37. It is easy to show that the function is minimized when \mathbf{w} = \mathbf{0}, a null vector whose components are all zeros. Such a solution, however, violates the constraints given in Definition 5.1 because there is no feasible solution for b. The solutions for w and b are infeasible if they violate the inequality constraints; i.e., if y_i(\mathbf{w} \cdot \mathbf{x}_i + b) - 1 < 0. The Lagrangian given in Equation 5.38 incorporates this constraint by subtracting the term from its original objective function. Assuming that \lambda_i \geq 0, it is clear that any infeasible solution may only increase the value of the Lagrangian.
To minimize the Lagrangian, we must take the derivative of L_P with respect to w and b and set them to zero:

\frac{\partial L_P}{\partial \mathbf{w}} = 0 \;\Longrightarrow\; \mathbf{w} = \sum_{i=1}^{N} \lambda_i y_i \mathbf{x}_i,   (5.39)

\frac{\partial L_P}{\partial b} = 0 \;\Longrightarrow\; \sum_{i=1}^{N} \lambda_i y_i = 0.   (5.40)
Because the Lagrange multipliers are unknown, we still cannot solve for w and b. If Definition 5.1 contains only equality instead of inequality constraints, then we can use the N equations from equality constraints along with Equations 5.39 and 5.40 to find the feasible solutions for w, b, and \lambda_i. Note that the
Lagrange multipliers for equality constraints are free parameters that can take
any values.
One way to handle the inequality constraints is to transform them into a
set of equality constraints. This is possible as long as the Lagrange multipliers
are restricted to be non-negative. Such transformation leads to the following
constraints on the Lagrange multipliers, which are known as the Karush-Kuhn-Tucker (KKT) conditions:

\lambda_i \geq 0,   (5.41)
\lambda_i \left[ y_i (\mathbf{w} \cdot \mathbf{x}_i + b) - 1 \right] = 0.   (5.42)
At first glance, it may seem that there are as many Lagrange multipliers as there are training instances. It turns out that many of the Lagrange multipliers become zero after applying the constraint given in Equation 5.42. The constraint states that the Lagrange multiplier \lambda_i must be zero unless the training instance x_i satisfies the equation y_i(\mathbf{w} \cdot \mathbf{x}_i + b) = 1. Such a training instance, with \lambda_i > 0, lies along the hyperplanes b_{i1} or b_{i2} and is known as a support vector. Training instances that do not reside along these hyperplanes have \lambda_i = 0. Equations 5.39 and 5.42 also suggest that the parameters w and b, which define the decision boundary, depend only on the support vectors.
Solving the preceding optimization problem is still quite a daunting task because it involves a large number of parameters: w, b, and \lambda_i. The problem can be simplified by transforming the Lagrangian into a function of the Lagrange multipliers only (this is known as the dual problem). To do this, we first substitute Equations 5.39 and 5.40 into Equation 5.38. This will lead to the following dual formulation of the optimization problem:

L_D = \sum_{i=1}^{N} \lambda_i - \frac{1}{2} \sum_{i,j} \lambda_i \lambda_j y_i y_j \, \mathbf{x}_i \cdot \mathbf{x}_j.   (5.43)
The key differences between the dual and primary Lagrangians are as fol-
lows:
1. The dual Lagrangian involves only the Lagrange multipliers and the
training data, while the primary Lagrangian involves the Lagrange mul-
tipliers as well as parameters of the decision boundary. Nevertheless, the
solutions for both optimization problems are equivalent.
2. The quadratic term in Equation 5.43 has a negative sign, which means that the original minimization problem involving the primary Lagrangian, L_P, has turned into a maximization problem involving the dual Lagrangian, L_D.
For large data sets, the dual optimization problem can be solved using
numerical techniques such as quadratic programming, a topic that is beyond
the scope of this book. Once the \lambda_i's are found, we can use Equations 5.39 and 5.42 to obtain the feasible solutions for w and b. The decision boundary can be expressed as follows:

\left( \sum_{i=1}^{N} \lambda_i y_i \, \mathbf{x}_i \cdot \mathbf{x} \right) + b = 0.   (5.44)

b is obtained by solving Equation 5.42 for the support vectors. Because the \lambda_i's are calculated numerically and can have numerical errors, the value computed for b may not be unique. Instead, it depends on the support vector used in Equation 5.42. In practice, the average value for b is chosen to be the parameter of the decision boundary.
Example 5.5. Consider the two-dimensional data set shown in Figure 5.24,
which contains eight training instances. Using quadratic programming, we can
solve the optimization problem stated in Equation 5.43 to obtain the Lagrange
multiplier \lambda_i for each training instance. The Lagrange multipliers are depicted in the last column of the table. Notice that only the first two instances have non-zero Lagrange multipliers. These instances correspond to the support vectors for this data set.
Let w = (w_1, w_2) and b denote the parameters of the decision boundary. Using Equation 5.39, we can solve for w_1 and w_2 in the following way:

w_1 = \sum_i \lambda_i y_i x_{i1} = 65.5261 \times 1 \times 0.3858 + 65.5261 \times (-1) \times 0.4871 = -6.64,
w_2 = \sum_i \lambda_i y_i x_{i2} = 65.5261 \times 1 \times 0.4687 + 65.5261 \times (-1) \times 0.6110 = -9.32.
The bias term b can be computed using Equation 5.42 for each support vector:
x1      x2      y    Lagrange Multiplier
0.3858  0.4687   1   65.5261
0.4871  0.6110  -1   65.5261
0.9218  0.4103  -1   0
0.7382  0.8936  -1   0
0.1763  0.0579   1   0
0.4057  0.3529   1   0
0.9355  0.8132  -1   0
0.2146  0.0099   1   0

Figure 5.24. Example of a linearly separable data set. (The decision boundary -6.64 x1 - 9.32 x2 + 7.93 = 0 is also shown.)
Once the parameters of the decision boundary are found, a test instance z is classified as follows:

f(\mathbf{z}) = \text{sign}(\mathbf{w} \cdot \mathbf{z} + b) = \text{sign}\!\left( \sum_{i=1}^{N} \lambda_i y_i \, \mathbf{x}_i \cdot \mathbf{z} + b \right).

If f(z) = 1, the test instance is classified as a positive class; otherwise, it is classified as a negative class.
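The computations of Example 5.5 can be verified with a few lines of code. The sketch below rebuilds w from the two support vectors via Equation 5.39, averages b over the support vectors as suggested above, and classifies a hypothetical test instance (the point (0.2, 0.1) is chosen only for illustration):

```python
import numpy as np

# Support vectors and their Lagrange multipliers from the table above;
# the remaining six training instances have lambda = 0 and can be ignored.
X_sv   = np.array([[0.3858, 0.4687],
                   [0.4871, 0.6110]])
y_sv   = np.array([1, -1])
lam_sv = np.array([65.5261, 65.5261])

# w = sum_i lambda_i * y_i * x_i   (Equation 5.39)
w = (lam_sv * y_sv) @ X_sv
# b from the KKT condition y_i (w . x_i + b) = 1, averaged over the support vectors
b = np.mean(y_sv - X_sv @ w)
print(np.round(w, 2), round(b, 2))   # approximately [-6.64 -9.32] and 7.93

# Classify a test instance z with f(z) = sign(w . z + b)
z = np.array([0.2, 0.1])
print(int(np.sign(w @ z + b)))       # +1: the point falls on the positive side
```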
Figure 5.25. Decision boundary of SVM for the nonseparable case.

Figure 5.26. Slack variables for nonseparable data.
While the original objective function given in Equation 5.37 is still applicable, the decision boundary B_1 no longer satisfies all the constraints given in Equation 5.36. The inequality constraints must therefore be relaxed to accommodate the nonlinearly separable data. This can be done by introducing positive-valued slack variables (\xi) into the constraints of the optimization problem, as shown in the following equations:

\mathbf{w} \cdot \mathbf{x}_i + b \geq 1 - \xi_i \quad \text{if } y_i = 1,
\mathbf{w} \cdot \mathbf{x}_i + b \leq -1 + \xi_i \quad \text{if } y_i = -1,   (5.45)

where \forall i: \xi_i \geq 0.
To interpret the meaning of the slack variables \xi_i, consider the diagram shown in Figure 5.26. The circle P is one of the instances that violates the constraints given in Equation 5.35. Let \mathbf{w} \cdot \mathbf{x} + b = -1 + \xi denote a line that is parallel to the decision boundary and passes through the point P. It can be shown that the distance between this line and the hyperplane \mathbf{w} \cdot \mathbf{x} + b = -1 is \xi / \|\mathbf{w}\|. Thus, \xi provides an estimate of the error of the decision boundary on the training example P.
In principle, we can apply the same objective function as before and impose
the conditions given in Equation 5.45 to find the decision boundary. However,
Figure 5.27. A decision boundary that has a wide margin but large training error.
The objective function is therefore modified as follows:

f(\mathbf{w}) = \frac{\|\mathbf{w}\|^{2}}{2} + C \left( \sum_{i=1}^{N} \xi_i \right)^{k},

where C and k are user-specified parameters. The Lagrangian for this constrained optimization problem can be written as follows:

L_P = \frac{\|\mathbf{w}\|^{2}}{2} + C \sum_{i=1}^{N} \xi_i - \sum_{i=1}^{N} \lambda_i \left\{ y_i (\mathbf{w} \cdot \mathbf{x}_i + b) - 1 + \xi_i \right\} - \sum_{i=1}^{N} \mu_i \xi_i,   (5.46)
where the first two terms are the objective function to be minimized, the third
term representsthe inequality constraints associatedwith the slack variables,
and the last term is the result of the non-negativity requirements on the values of the \xi_i's. Furthermore, the inequality constraints can be transformed into equality constraints using the following KKT conditions:

Note that the Lagrange multiplier \lambda_i given in Equation 5.48 is non-vanishing only if the training instance resides along the lines \mathbf{w} \cdot \mathbf{x}_i + b = \pm 1 or has \xi_i > 0. On the other hand, the Lagrange multipliers \mu_i given in Equation 5.49 are zero for any training instances that are misclassified (i.e., having \xi_i > 0). Setting the first-order derivative of L with respect to w, b, and \xi_i to zero would result in the following equations:

\frac{\partial L}{\partial w_j} = w_j - \sum_{i=1}^{N} \lambda_i y_i x_{ij} = 0 \;\Longrightarrow\; w_j = \sum_{i=1}^{N} \lambda_i y_i x_{ij},   (5.50)

\frac{\partial L}{\partial b} = -\sum_{i=1}^{N} \lambda_i y_i = 0 \;\Longrightarrow\; \sum_{i=1}^{N} \lambda_i y_i = 0,   (5.51)

\frac{\partial L}{\partial \xi_i} = C - \lambda_i - \mu_i = 0 \;\Longrightarrow\; \lambda_i + \mu_i = C.   (5.52)
Substituting Equations 5.50, 5.51, and 5.52 into the Lagrangian will produce the following dual Lagrangian:

L_D = \frac{1}{2}\sum_{i,j} \lambda_i \lambda_j y_i y_j \, \mathbf{x}_i \cdot \mathbf{x}_j + C \sum_i \xi_i - \sum_i \lambda_i \Big\{ y_i \Big( \sum_j \lambda_j y_j \, \mathbf{x}_i \cdot \mathbf{x}_j + b \Big) - 1 + \xi_i \Big\} - \sum_i \mu_i \xi_i
= \sum_{i=1}^{N} \lambda_i - \frac{1}{2}\sum_{i,j} \lambda_i \lambda_j y_i y_j \, \mathbf{x}_i \cdot \mathbf{x}_j,

which turns out to be identical to the dual Lagrangian for linearly separable data (see Equation 5.43). Nevertheless, the constraints imposed
on the Lagrange multipliers \lambda_i's are slightly different from those in the linearly separable case. In the linearly separable case, the Lagrange multipliers must be non-negative, i.e., \lambda_i \geq 0. On the other hand, Equation 5.52 suggests that \lambda_i should not exceed C (since both \mu_i and \lambda_i are non-negative). Therefore, the Lagrange multipliers for nonlinearly separable data are restricted to 0 \leq \lambda_i \leq C.
The dual problem can then be solved numerically using quadratic programming techniques to obtain the Lagrange multipliers \lambda_i. These multipliers can be substituted into Equation 5.50 and the KKT conditions to obtain the parameters of the decision boundary.
The SVM formulations described in the previous sections construct a linear de-
cision boundary to separatethe training examplesinto their respectiveclasses.
This section presents a methodology for applying SVM to data sets that have
nonlinear decision boundaries. The trick here is to transform the data from its
original coordinate space in x into a new space \Phi(\mathbf{x}) so that a linear decision
boundary can be used to separate the instances in the transformed space. Af-
ter doing the transformation, we can apply the methodology presented in the
previous sections to find a linear decision boundary in the transformed space.
Attribute TYansformation
y(x_1, x_2) = \begin{cases} 1 & \text{if } \sqrt{(x_1 - 0.5)^2 + (x_2 - 0.5)^2} > 0.2, \\ -1 & \text{otherwise.} \end{cases}   (5.54)

The decision boundary for the data can therefore be written as follows:

\sqrt{(x_1 - 0.5)^2 + (x_2 - 0.5)^2} = 0.2,

which can be further simplified into the following quadratic equation:

x_1^2 - x_1 + x_2^2 - x_2 = -0.46.
Figure 5.28. Classifying data with a nonlinear decision boundary.
In the transformed space, we can find the parameters w = (w_0, w_1, \ldots, w_4) such that:

w_4 x_1^2 + w_3 x_2^2 + w_2 \sqrt{2}\, x_1 + w_1 \sqrt{2}\, x_2 + w_0 = 0.

For illustration purposes, let us plot the graph of x_2^2 - x_2 versus x_1^2 - x_1 for the previously given instances. Figure 5.28(b) shows that in the transformed space, all the circles are located in the lower right-hand side of the diagram. A linear decision boundary can therefore be constructed to separate the instances into their respective classes.
One potential problem with this approach is that it may suffer from the
curse of dimensionality problem often associatedwith high-dimensional data.
We will show how nonlinear SVM avoids this problem (using a method known
as the kernel trick) later in this section.
\min_{\mathbf{w}} \; \frac{\|\mathbf{w}\|^{2}}{2}

Note the similarity between the learning task of a nonlinear SVM to that of a linear SVM (see Definition 5.1 on page 262). The main difference is that, instead of using the original attributes x, the learning task is performed on the transformed attributes \Phi(\mathbf{x}). Following the approach taken in Sections 5.5.2 and 5.5.3 for linear SVM, we may derive the following dual Lagrangian for the constrained optimization problem:
L_D = \sum_{i=1}^{N} \lambda_i - \frac{1}{2} \sum_{i,j} \lambda_i \lambda_j y_i y_j \, \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j).   (5.56)

Once the \lambda_i's are found using quadratic programming techniques, the parameters w and b can be derived using the following equations:

\mathbf{w} = \sum_i \lambda_i y_i \Phi(\mathbf{x}_i),   (5.57)

\lambda_i \left\{ y_i \left( \sum_j \lambda_j y_j \, \Phi(\mathbf{x}_j) \cdot \Phi(\mathbf{x}_i) + b \right) - 1 \right\} = 0,   (5.58)
which are analogous to Equations 5.39 and 5.40 for linear SVM. Finally, a test instance z can be classified using the following equation:

f(\mathbf{z}) = \text{sign}(\mathbf{w} \cdot \Phi(\mathbf{z}) + b) = \text{sign}\!\left( \sum_{i=1}^{N} \lambda_i y_i \, \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{z}) + b \right).   (5.59)

Except for Equation 5.57, note that the rest of the computations (Equations 5.58 and 5.59) involve calculating the dot product (i.e., similarity) between pairs of vectors in the transformed space, \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j). Such computation can be quite cumbersome and may suffer from the curse of dimensionality problem. A breakthrough solution to this problem comes in the form of a method known as the kernel trick.
Kernel Trick

This analysis shows that the dot product in the transformed space can be expressed in terms of a similarity function in the original space:

K(\mathbf{u}, \mathbf{v}) = \Phi(\mathbf{u}) \cdot \Phi(\mathbf{v}) = (\mathbf{u} \cdot \mathbf{v} + 1)^{2}.   (5.61)

Using the kernel function, a test instance z is classified as follows:

f(\mathbf{z}) = \text{sign}\!\left( \sum_{i=1}^{n} \lambda_i y_i \, \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{z}) + b \right)
= \text{sign}\!\left( \sum_{i=1}^{n} \lambda_i y_i \, K(\mathbf{x}_i, \mathbf{z}) + b \right)
= \text{sign}\!\left( \sum_{i=1}^{n} \lambda_i y_i \, (\mathbf{x}_i \cdot \mathbf{z} + 1)^{2} + b \right).   (5.62)
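The identity in Equation 5.61 can be checked numerically. The sketch below uses one possible transformation Φ for two-dimensional inputs (the specific mapping is an assumption chosen for illustration; any mapping whose dot product equals (u·v + 1)² would do) and confirms that the kernel computed in the original space matches the dot product in the transformed space:

```python
import numpy as np

def phi(x):
    """One transformation whose dot product reproduces (u.v + 1)^2 for 2-D inputs."""
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     1.0])

def kernel(u, v):
    return (np.dot(u, v) + 1) ** 2

u = np.array([0.3, -1.2])
v = np.array([2.0, 0.5])

print(kernel(u, v))               # similarity computed in the original space
print(np.dot(phi(u), phi(v)))     # identical value in the transformed space
```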
Mercer's Theorem
The main requirement for the kernel function used in nonlinear SVM is that
there must exist a corresponding transformation such that the kernel function
computed for a pair of vectors is equivalent to the dot product between the
vectors in the transformed space. This requirement can be formally stated in
the form of Mercer's theorem.
Figure 5.29. Decision boundary produced by a nonlinear SVM with polynomial kernel.
Kernel functions that satisfy Theorem 5.1 are called positive definite kernel
functions. Examples of such functions are listed below:
\int K(\mathbf{x}, \mathbf{y})\, g(\mathbf{x})\, g(\mathbf{y})\, d\mathbf{x}\, d\mathbf{y} = \int (\mathbf{x} \cdot \mathbf{y} + 1)^{p}\, g(\mathbf{x})\, g(\mathbf{y})\, d\mathbf{x}\, d\mathbf{y}
= \int \sum_{i=0}^{p} \binom{p}{i} (\mathbf{x} \cdot \mathbf{y})^{i}\, g(\mathbf{x})\, g(\mathbf{y})\, d\mathbf{x}\, d\mathbf{y}
= \sum_{i=0}^{p} \sum_{\alpha_1, \alpha_2, \ldots} \binom{p}{i} \binom{i}{\alpha_1\, \alpha_2\, \cdots} \left[ \int x_1^{\alpha_1} x_2^{\alpha_2} \cdots g(x_1, x_2, \ldots)\, dx_1\, dx_2 \cdots \right]^{2},

which is non-negative.
4. The SVM formulation presented in this chapter is for binary class problems. Some of the methods available to extend SVM to multiclass problems are presented in Section 5.8.
set of base classifiers from training data and performs classification by taking
a vote on the predictions made by each base classifier. This section explains
why ensemble methods tend to perform better than any single classifier and
presents techniques for constructing the classifier ensemble.
\epsilon_{\text{ensemble}} = \sum_{i=13}^{25} \binom{25}{i} \epsilon^{i} (1 - \epsilon)^{25 - i} = 0.06,   (5.66)

which is considerably lower than the error rate of the base classifiers.
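Equation 5.66 is easy to evaluate directly. The sketch below assumes a base classifier error rate of ε = 0.35, the value that reproduces the 0.06 figure quoted above, and also shows that the ensemble is worse than its base classifiers once ε exceeds 0.5:

```python
from math import comb

def ensemble_error(eps, n=25):
    """Probability that a majority of n independent base classifiers,
    each with error rate eps, make a wrong prediction (Equation 5.66)."""
    k = n // 2 + 1   # 13 when n = 25
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(k, n + 1))

print(round(ensemble_error(0.35), 2))   # 0.06
print(round(ensemble_error(0.55), 2))   # larger than the base error once eps > 0.5
```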
Figure 5.30 shows the error rate of an ensemble of twenty-five binary classifiers (\epsilon_{\text{ensemble}}) for different base classifier error rates (\epsilon). The diagonal line represents the case in which the base classifiers are identical, while the solid line represents the case in which the base classifiers are independent. Observe that the ensemble classifier performs worse than the base classifiers when \epsilon is larger than 0.5.

The preceding example illustrates two necessary conditions for an ensemble classifier to perform better than a single classifier: (1) the base classifiers should be independent of each other, and (2) the base classifiers should do better than a classifier that performs random guessing. In practice, it is difficult to ensure total independence among the base classifiers. Nevertheless, improvements in classification accuracies have been observed in ensemble methods in which the base classifiers are slightly correlated.
Figure 5.30. Comparison between errors of base classifiers and errors of the ensemble classifier (horizontal axis: base classifier error; vertical axis: ensemble classifier error).
Figure 5.31. A logical view of the ensemble learning method. (Step 1: create multiple data sets; Step 2: build multiple classifiers; Step 3: combine classifiers.)
3. By manipulating the class labels. This method can be used when the number of classes is sufficiently large. The training data is transformed into a binary class problem by randomly partitioning the class labels into two disjoint subsets, A_0 and A_1. Training examples whose class label belongs to the subset A_0 are assigned to class 0, while those that belong to the subset A_1 are assigned to class 1. The relabeled examples are then used to train a base classifier. By repeating the class-relabeling and model-building steps multiple times, an ensemble of base classifiers is obtained. When a test example is presented, each base classifier C_i is used to predict its class label. If the test example is predicted as class 0, then all the classes that belong to A_0 will receive a vote. Conversely, if it is predicted to be class 1, then all the classes that belong to A_1 will receive a vote. The votes are tallied and the class that receives the highest vote is assigned to the test example. An example of this approach is the error-correcting output coding method described on page 307.
The first three approaches are generic methods that are applicable to any classifiers, whereas the fourth approach depends on the type of classifier used. The base classifiers for most of these approaches can be generated sequentially (one after another) or in parallel (all at once). Algorithm 5.5 shows the steps needed to build an ensemble classifier in a sequential manner. The first step is to create a training set from the original data D. Depending on the type of ensemble method used, the training sets are either identical to or slight modifications of D. The size of the training set is often kept the same as the original data, but the distribution of examples may not be identical; i.e., some examples may appear multiple times in the training set, while others may not appear even once. A base classifier C_i is then constructed from each training set D_i. Ensemble methods work better with unstable classifiers, i.e., base classifiers that are sensitive to minor perturbations in the training set. Examples of unstable classifiers include decision trees, rule-based classifiers, and artificial neural networks. As will be discussed in Section 5.6.3, the variability among training examples is one of the primary sources of errors in a classifier. Aggregating the base classifiers built from different training sets may therefore help to reduce such errors.
Finally, a test example x is classified by combining the predictions made by the base classifiers C_i(x). The class can be obtained by taking a majority vote on the individual predictions or by weighting each prediction with the accuracy of the base classifier.
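A minimal sketch of the two combination schemes is shown below; the predictions and accuracy values are hypothetical and serve only to illustrate how a weighted vote can overturn a simple majority:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base-classifier predictions by a simple majority."""
    return Counter(predictions).most_common(1)[0][0]

def weighted_vote(predictions, accuracies):
    """Weight each base classifier's vote by its (estimated) accuracy."""
    scores = {}
    for label, acc in zip(predictions, accuracies):
        scores[label] = scores.get(label, 0.0) + acc
    return max(scores, key=scores.get)

preds = ["+", "-", "+", "+", "-"]
accs  = [0.6, 0.9, 0.55, 0.5, 0.85]
print(majority_vote(preds))          # '+': three of five votes
print(weighted_vote(preds, accs))    # '-': the two '-' voters are more reliable
```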
where f refers to the amount of force applied and \theta is the angle of the launcher. The task of predicting the class label of a given example can be analyzed using the same approach. For a given classifier, some predictions may turn out to be correct, while others may be completely off the mark. We can decompose the expected error of a classifier as a sum of the three terms given in Equation 5.67, where expected error is the probability that the classifier misclassifies a

Figure 5.32. Bias-variance decomposition.
given example. The remainder of this section examines the meaning of bias, variance, and noise in the context of classification.

A classifier is usually trained to minimize its training error. However, to be useful, the classifier must be able to make an informed guess about the class labels of examples it has never seen before. This requires the classifier to generalize its decision boundary to regions where there are no training examples available, a decision that depends on the design choice of the classifier. For example, a key design issue in decision tree induction is the amount of pruning needed to obtain a tree with low expected error. Figure 5.33 shows two decision trees, T_1 and T_2, that are generated from the same training data, but have different complexities. T_2 is obtained by pruning T_1 until a tree with maximum depth of two is obtained. T_1, on the other hand, performs very little pruning on its decision tree. These design choices will introduce a bias into the classifier that is analogous to the bias of the projectile launcher described in the previous example. In general, the stronger the assumptions made by a classifier about the nature of its decision boundary, the larger the classifier's bias will be. T_2 therefore has a larger bias because it makes stronger assumptions about its decision boundary (which is reflected by the size of the tree) compared to T_1. Other design choices that may introduce a bias into a classifier include the network topology of an artificial neural network and the number of neighbors considered by a nearest-neighbor classifier.
The expected error of a classifier is also affected by variability in the training data because different compositions of the training set may lead to different decision boundaries. This is analogous to the variance in x when different amounts of force are applied to the projectile. The last component of the expected error is associated with the intrinsic noise in the target class. The target class for some domains can be non-deterministic; i.e., instances with the same attribute values can have different class labels. Such errors are unavoidable even when the true decision boundary is known.

The amount of bias and variance contributing to the expected error depend on the type of classifier used. Figure 5.34 compares the decision boundaries produced by a decision tree and a 1-nearest neighbor classifier. For each classifier, we plot the decision boundary obtained by "averaging" the models induced from 100 training sets, each containing 100 examples. The true decision boundary from which the data is generated is also plotted using a dashed line. The difference between the true decision boundary and the "averaged" decision boundary reflects the bias of the classifier. After averaging the models, observe that the difference between the true decision boundary and the decision boundary produced by the 1-nearest neighbor classifier is smaller than
Figure 5.33. Two decision trees with different complexities induced from the same training data. (a) Decision tree T_1. (b) Decision tree T_2.
the observed difference for a decision tree classifier. This result suggests that the bias of a 1-nearest neighbor classifier is lower than the bias of a decision tree classifier.

On the other hand, the 1-nearest neighbor classifier is more sensitive to the composition of its training examples. If we examine the models induced from different training sets, there is more variability in the decision boundary of a 1-nearest neighbor classifier than a decision tree classifier. Therefore, the decision boundary of a decision tree classifier has a lower variance than the 1-nearest neighbor classifier.
5.6.4 Bagging
Figure 5.34. Bias of decision tree and 1-nearest neighbor classifiers. (a) Decision boundary for decision tree. (b) Decision boundary for 1-nearest neighbor.
mately 63% of the original training data because each sample has a probability 1 - (1 - 1/N)^N of being selected in each D_i. If N is sufficiently large, this probability converges to 1 - 1/e \approx 0.632. The basic procedure for bagging is summarized in Algorithm 5.6. After training the k classifiers, a test instance is assigned to the class that receives the highest number of votes.
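The convergence of 1 − (1 − 1/N)^N to 1 − 1/e ≈ 0.632 can be checked directly:

```python
# Probability that a particular training record appears at least once in a
# bootstrap sample of size N drawn with replacement from N records.
for N in (10, 100, 1000, 100000):
    p = 1 - (1 - 1 / N) ** N
    print(N, round(p, 4))   # approaches 1 - 1/e ~ 0.632
```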
To illustrate how bagging works, consider the data set shown in Table 5.4. Let x denote a one-dimensional attribute and y denote the class label. Suppose we apply a classifier that induces only one-level binary decision trees, with a test condition x \leq k, where k is a split point chosen to minimize the entropy of the leaf nodes. Such a tree is also known as a decision stump.

Without bagging, the best decision stump we can produce splits the records at either x \leq 0.35 or x \leq 0.75. Either way, the accuracy of the tree is at most 70%. Suppose we apply the bagging procedure on the data set using ten bootstrap samples. The examples chosen for training in each bagging round are shown in Figure 5.35. On the right-hand side of each table, we also illustrate the decision boundary produced by the classifier.

We classify the entire data set given in Table 5.4 by taking a majority vote among the predictions made by each base classifier. The results of the predictions are shown in Figure 5.36. Since the class labels are either -1 or +1, taking the majority vote is equivalent to summing up the predicted values of y and examining the sign of the resulting sum (refer to the second to last row in Figure 5.36). Notice that the ensemble classifier perfectly classifies all ten examples in the original data.
The preceding example illustrates another advantage of using ensemble methods in terms of enhancing the representation of the target function. Even though each base classifier is a decision stump, combining the classifiers can lead to a decision tree of depth 2.

Bagging improves generalization error by reducing the variance of the base classifiers. The performance of bagging depends on the stability of the base classifier. If a base classifier is unstable, bagging helps to reduce the errors associated with random fluctuations in the training data. If a base classifier is stable, i.e., robust to minor perturbations in the training set, then the error of the ensemble is primarily caused by bias in the base classifier. In this situation, bagging may not be able to improve the performance of the base classifiers significantly. It may even degrade the classifier's performance because the effective size of each training set is about 37% smaller than the original data.

Finally, since every sample has an equal probability of being selected, bagging does not focus on any particular instance of the training data. It is therefore less susceptible to model overfitting when applied to noisy data.
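The sketch below imitates the bagging example with decision stumps. The data follow Table 5.4 as reconstructed from the discussion above; because the bootstrap samples are random, the ensemble accuracy will vary from run to run, though it typically exceeds the 70% ceiling of a single stump:

```python
import numpy as np

rng = np.random.default_rng(1)

# One-dimensional data set of Table 5.4: attribute x and class label y.
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
y = np.array([  1,   1,   1,  -1,  -1,  -1,  -1,   1,   1,   1])

def fit_stump(xs, ys):
    """Pick the split point and left-leaf label with the fewest training errors."""
    best = None
    for k in np.unique(xs):
        for left in (1, -1):
            errors = np.sum(np.where(xs <= k, left, -left) != ys)
            if best is None or errors < best[0]:
                best = (errors, k, left)
    return best[1], best[2]

# Bagging: train one stump per bootstrap sample, then take a majority vote.
votes = np.zeros_like(y, dtype=float)
for _ in range(10):
    idx = rng.integers(0, len(x), size=len(x))     # sample with replacement
    k, left = fit_stump(x[idx], y[idx])
    votes += np.where(x <= k, left, -left)

ensemble_pred = np.where(votes >= 0, 1, -1)
print(ensemble_pred, "accuracy:", (ensemble_pred == y).mean())
```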
5.6.5 Boosting
Boosting is an iterative procedure used to adaptively change the distribution
of training examples so that the base classifiers will focus on examples that are hard to classify. Unlike bagging, boosting assigns a weight to each training
Figure 5.35. Example of bagging. (For each of the ten bagging rounds, the figure lists the bootstrap sample of training records (x, y) drawn from Table 5.4 and the decision stump induced from it, e.g., Round 1: x \leq 0.35 \Rightarrow y = 1, x > 0.35 \Rightarrow y = -1.)
example and may adaptively change the weight at the end of each boosting round. The weights assigned to the training examples can be used in the following ways:

1. They can be used as a sampling distribution to draw a set of bootstrap samples from the original data.

2. They can be used by the base classifier to learn a model that is biased toward higher-weight examples.
Figure 5.36. Example of combining classifiers constructed using the bagging approach. (For each of the ten examples, the predictions of the ten bagging rounds are summed; the sums are 2, 2, 2, -6, -6, -6, -6, 2, 2, 2, and their signs match the true class labels in every case.)
Initially, all the examples are assigned the same weights. However, some examples may be chosen more than once, e.g., examples 3 and 7, because the sampling is done with replacement. A classifier built from the data is then used to classify all the examples. Suppose example 4 is difficult to classify. The weight for this example will be increased in future iterations as it gets misclassified repeatedly. Meanwhile, examples that were not chosen in the previous round, e.g., examples 1 and 5, also have a better chance of being selected in the next round since their predictions in the previous round were likely to be wrong. As the boosting rounds proceed, examples that are the hardest to classify tend to become even more prevalent. The final ensemble is obtained by aggregating the base classifiers obtained from each boosting round.
Over the years, several implementations of the boosting algorithm have
been developed. These algorithms differ in terms of (1) how the weights of
the training examples are updated at the end of each boosting round, and (2)
how the predictions made by each classifierare combined. An implementation
called AdaBoost is explored in the next section.
AdaBoost
The error rate of a base classifier C_i on the weighted training set is defined as

\epsilon_i = \frac{1}{N} \left[ \sum_{j=1}^{N} w_j \, \delta\big( C_i(\mathbf{x}_j) \neq y_j \big) \right],   (5.68)

where \delta(\cdot) = 1 if its argument is true and 0 otherwise. The importance of a classifier C_i is given by the parameter

\alpha_i = \frac{1}{2} \ln\!\left( \frac{1 - \epsilon_i}{\epsilon_i} \right).

Note that \alpha_i has a large positive value if the error rate is close to 0 and a large negative value if the error rate is close to 1, as shown in Figure 5.37.

The \alpha_i parameter is also used to update the weight of the training examples. To illustrate, let w_i^{(j)} denote the weight assigned to example (x_i, y_i) during the j-th boosting round. The weight update mechanism for AdaBoost is given by the equation:

w_i^{(j+1)} = \frac{w_i^{(j)}}{Z_j} \times \begin{cases} \exp(-\alpha_j) & \text{if } C_j(\mathbf{x}_i) = y_i, \\ \exp(\alpha_j) & \text{if } C_j(\mathbf{x}_i) \neq y_i, \end{cases}   (5.69)

where Z_j is a normalization factor.
Figure 5.37. Plot of \alpha as a function of training error \epsilon.

It can be shown that the training error of the ensemble is bounded by the following expression:

\epsilon_{\text{ensemble}} \leq \prod_{i} \left[ 2 \sqrt{\epsilon_i (1 - \epsilon_i)} \right],   (5.70)
where \epsilon_i is the error rate of each base classifier i. If the error rate of the base classifier is less than 50%, we can write \epsilon_i = 0.5 - \gamma_i, where \gamma_i measures how much better the classifier is than random guessing. The bound on the training error of the ensemble becomes

\epsilon_{\text{ensemble}} \leq \prod_{i} \sqrt{1 - 4\gamma_i^{2}} \leq \exp\!\left( -2 \sum_{i} \gamma_i^{2} \right).   (5.71)

If \gamma_i exceeds some positive constant \gamma for every i, then the training error of the ensemble decreases exponentially, which leads to the fast convergence of the algorithm. Nevertheless, because of its tendency to focus on training examples that are wrongly classified, the boosting technique can be quite susceptible to overfitting.
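A sketch of one AdaBoost weight update is given below. It uses the common convention in which the weights are kept normalized and the weighted error is ε = Σ_j w_j δ(C(x_j) ≠ y_j), which may differ slightly from the normalization used in the text; the base classifier's predictions are hypothetical:

```python
import numpy as np

def adaboost_update(weights, y_true, y_pred):
    """One AdaBoost round: weighted error, classifier importance, weight update (Eq. 5.69)."""
    miss = (y_pred != y_true)
    eps = np.sum(weights * miss)                       # weighted error rate
    alpha = 0.5 * np.log((1 - eps) / eps)              # importance of the classifier
    new_w = weights * np.exp(np.where(miss, alpha, -alpha))
    return new_w / new_w.sum(), eps, alpha             # normalization plays the role of Z

# Ten examples with uniform initial weights; the base classifier is assumed
# (hypothetically) to predict +1 everywhere, misclassifying examples 4-7.
w = np.full(10, 0.1)
y_true = np.array([1, 1, 1, -1, -1, -1, -1, 1, 1, 1])
y_pred = np.ones(10, dtype=int)

w, eps, alpha = adaboost_update(w, y_true, y_pred)
print(round(eps, 2), round(alpha, 3))   # 0.4 and about 0.203
print(np.round(w, 3))                   # misclassified examples now carry more weight
```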
(a) Training records chosen during boosting

Boosting Round 1:  x: 0.1  0.4  0.5  0.6  0.6  0.7  0.7  0.7  0.8  1.0
                   y:  1   -1   -1   -1   -1   -1   -1   -1    1    1
Boosting Round 2:  x: 0.1  0.1  0.2  0.2  0.2  0.2  0.3  0.3  0.3  0.3
                   y:  1    1    1    1    1    1    1    1    1    1
Boosting Round 3:  x: 0.2  0.2  0.4  0.4  0.4  0.4  0.5  0.6  0.6  0.7
                   y:  1    1   -1   -1   -1   -1   -1   -1   -1   -1

(b) Weights of training records

Round  x=0.1  x=0.2  x=0.3  x=0.4  x=0.5  x=0.6  x=0.7  x=0.8  x=0.9  x=1.0
1      0.1    0.1    0.1    0.1    0.1    0.1    0.1    0.1    0.1    0.1
2      0.311  0.311  0.311  0.01   0.01   0.01   0.01   0.01   0.01   0.01
3      0.029  0.029  0.029  0.228  0.228  0.228  0.228  0.009  0.009  0.009

Figure 5.38. Example of boosting.
where \bar{\rho} is the average correlation among the trees and s is a quantity that measures the "strength" of the tree classifiers. The strength of a set of classifiers refers to the average performance of the classifiers, where performance is measured probabilistically in terms of the classifier's margin:
Figure 5.39. Example of combining classifiers constructed using the AdaBoost approach. (For each value of x from 0.1 to 1.0, the \alpha-weighted predictions of the three boosting rounds are summed; the sums are 5.16, 5.16, 5.16, -3.08, -3.08, -3.08, -3.08, 0.397, 0.397, and 0.397, and their signs reproduce the true class labels.)
Figure 5.40. Random forests. (Step 1: create multiple data sets from the original training data; Step 2: use a random vector to build multiple decision trees; Step 3: combine the decision trees.)
Each decision tree uses a random vector that is generated from some fixed probability distribution. A random vector can be incorporated into the tree-growing process in many ways. The first approach is to randomly select F input features to split at each node of the decision tree. As a result, instead of examining all the available features, the decision to split a node is determined from these selected F features. The tree is then grown to its entirety without any pruning. This may help reduce the bias present in the resulting tree. Once the trees have been constructed, the predictions are combined using a majority voting scheme. This approach is known as Forest-RI, where RI refers to random input selection. To increase randomness, bagging can also be used to generate bootstrap samples for Forest-RI. The strength and correlation of random forests may depend on the size of F. If F is sufficiently small, then the trees tend to become less correlated. On the other hand, the strength of the tree classifier tends to improve with a larger number of features, F. As a tradeoff, the number of features is commonly chosen to be F = \log_2 d + 1, where d is the number of input features. Since only a subset of the features needs to be examined at each node, this approach helps to significantly reduce the runtime of the algorithm.
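As an illustration of the Forest-RI idea, the sketch below trains a random forest in which each split considers only a random subset of features. It relies on scikit-learn, whose max_features="log2" option approximates (but does not exactly equal) the F = log2 d + 1 heuristic mentioned above, and its accuracy should not be expected to match Table 5.5:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Forest-RI style ensemble: every tree is grown on a bootstrap sample and each
# split considers only a random subset of the features ("log2" of them here).
rf = RandomForestClassifier(n_estimators=50, max_features="log2", random_state=0)

scores = cross_val_score(rf, X, y, cv=10)
print(round(scores.mean(), 4))   # ten-fold cross-validation accuracy
```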
If the number of original features d is too small, then it is difficult to choose an independent set of random features for building the decision trees. One way to increase the feature space is to create linear combinations of the input features. Specifically, at each node, a new feature is generated by randomly selecting L of the input features. The input features are linearly combined using coefficients generated from a uniform distribution in the range of [-1, 1]. At each node, F of such randomly combined new features are generated, and the best of them is subsequently selected to split the node. This approach is known as Forest-RC.

A third approach for generating the random trees is to randomly select one of the F best splits at each node of the decision tree. This approach may potentially generate trees that are more correlated than Forest-RI and Forest-RC, unless F is sufficiently large. It also does not have the runtime savings of Forest-RI and Forest-RC because the algorithm must examine all the splitting features at each node of the decision tree.
It has been shown empirically that the classification accuracies of random
forests are quite comparable to the AdaBoost algorithm. It is also more robust
to noise and runs much faster than the AdaBoost algorithm. The classification
accuracies of various ensemble algorithms are compared in the next section.
Table 5.5. Comparing the accuracy of a decision tree classifier against three ensemble methods.

Data Set (Attributes, Classes, Records)   Decision Tree (%)   Bagging (%)   Boosting (%)   RF (%)
Anneal (39, 6, 898)            92.09   94.43   95.43   95.43
Australia (15, 2, 690)         85.51   87.10   85.22   85.80
Auto (26, 7, 205)              81.95   85.37   85.37   84.39
Breast (11, 2, 699)            95.14   96.42   97.25   96.14
Cleve (14, 2, 303)             76.24   81.52   82.18   82.18
Credit (16, 2, 690)            85.80   86.23   86.09   85.80
Diabetes (9, 2, 769)           72.40   76.30   73.18   75.75
German (21, 2, 1000)           70.90   73.40   73.00   74.50
Glass (10, 7, 214)             67.29   76.17   85.50   78.04
Heart (14, 2, 270)             80.00   81.48   80.74   83.33
Hepatitis (20, 2, 155)         81.94   81.29   83.87   83.23
Horse (23, 2, 369)             85.33   85.87   81.25   85.33
Ionosphere (35, 2, 351)        89.17   92.02   93.73   93.45
Iris (5, 3, 150)               94.67   94.67   94.00   93.33
Labor (17, 2, 57)              78.95   84.21   89.47   84.21
Led7 (8, 10, 3200)             73.34   73.66   73.34   73.06
Lymphography (19, 4, 148)      77.03   79.05   85.14   82.43
Pima (9, 2, 769)               74.35   76.69   73.44   77.60
Sonar (61, 2, 208)             78.85   78.85   84.62   85.58
Tic-tac-toe (10, 2, 958)       83.72   93.84   98.54   95.82
Vehicle (19, 4, 846)           71.04   74.11   78.25   74.94
Waveform (22, 3, 5000)         76.44   83.30   83.90   84.04
Wine (14, 3, 178)              94.38   96.07   97.75   97.75
Zoo (17, 7, 101)               93.07   93.07   95.05   97.03
Table 5.5 shows the empirical results obtained when comparing the performance of a decision tree classifier against bagging, boosting, and random forest. The base classifiers used in each ensemble method consist of fifty decision trees. The classification accuracies reported in this table are obtained from ten-fold cross-validation. Notice that the ensemble classifiers generally outperform a single decision tree classifier on many of the data sets.
Table 5.6. A confusion matrix for a binary classification problem in which the classes are not equally important.

                        Predicted Class
                        +                      -
Actual Class   +   TP (true positive)    FN (false negative)
               -   FP (false positive)   TN (true negative)

denoted as the negative class. A confusion matrix that summarizes the number of instances predicted correctly or incorrectly by a classification model is shown in Table 5.6.

The following terminology is often used when referring to the counts tabulated in a confusion matrix:

• True positive (TP) or f_{++}, which corresponds to the number of positive examples correctly predicted by the classification model.
TPR = TP/(TP + FN).

Finally, the false positive rate (FPR) is the fraction of negative examples predicted as a positive class, i.e., FPR = FP/(FP + TN).

Recall and precision are two widely used metrics employed in applications where successful detection of one of the classes is considered more significant than detection of the other classes. A formal definition of these metrics is given below:

\text{Precision}, \; p = \frac{TP}{TP + FP},   (5.74)

\text{Recall}, \; r = \frac{TP}{TP + FN}.   (5.75)
F_1 = \frac{2rp}{r + p} = \frac{2}{1/r + 1/p}.   (5.76)

The harmonic mean of two numbers x and y tends to be closer to the smaller of the two numbers. Hence, a high value of F_1-measure ensures that both
precision and recall are reasonably high. A comparison among harmonic, ge-
ometric, and arithmetic means is given in the next example.
Example 5.8. Consider two positive numbers a = 1 and b = 5. Their arithmetic
mean is mu_a = (a + b)/2 = 3 and their geometric mean is mu_g = sqrt(ab) = 2.236.
Their harmonic mean is mu_h = (2 x 1 x 5)/6 = 1.667, which is closer to the
smaller value between a and b than the arithmetic and geometric means.
More generally, the F_beta measure can be used to examine the tradeoff between
recall and precision:

    F_beta = (beta^2 + 1) r p / (r + beta^2 p)
           = (beta^2 + 1) TP / ((beta^2 + 1) TP + beta^2 FN + FP)          (5.77)

Both precision and recall are special cases of F_beta, obtained by setting beta = 0 and beta = infinity,
respectively. Low values of beta make F_beta closer to precision, and high values
make it closer to recall.
A more general metric that captures F_beta as well as accuracy is the weighted
accuracy measure, which is defined by the following equation:

    Weighted accuracy = (w1 TP + w4 TN) / (w1 TP + w2 FP + w3 FN + w4 TN).          (5.78)
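To make the definitions in Equations 5.74 through 5.78 concrete, the following is a minimal Python sketch (not taken from the text) that computes them from the four confusion-matrix counts; the count values used at the bottom are hypothetical and chosen only for illustration.

    # Metrics from Equations 5.74-5.78, computed from TP, FP, FN, TN counts.

    def precision(tp, fp):
        return tp / (tp + fp)

    def recall(tp, fn):
        return tp / (tp + fn)

    def f_beta(tp, fp, fn, beta=1.0):
        # F_beta = (beta^2 + 1) TP / ((beta^2 + 1) TP + beta^2 FN + FP)
        b2 = beta * beta
        return (b2 + 1) * tp / ((b2 + 1) * tp + b2 * fn + fp)

    def weighted_accuracy(tp, fp, fn, tn, w1, w2, w3, w4):
        return (w1 * tp + w4 * tn) / (w1 * tp + w2 * fp + w3 * fn + w4 * tn)

    tp, fp, fn, tn = 40, 10, 20, 930          # hypothetical counts
    print(precision(tp, fp), recall(tp, fn))
    print(f_beta(tp, fp, fn, beta=1.0))       # F1: harmonic mean of precision and recall
    print(weighted_accuracy(tp, fp, fn, tn, 1, 1, 1, 1))  # equal weights reduce to accuracy

Setting beta = 1 recovers the F1-measure, while setting all four weights to one recovers the ordinary accuracy measure, as noted above.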
Figure 5.41. ROC curves for two different classifiers, M1 and M2 (True Positive Rate versus False Positive Rate).
There are several critical points along an ROC curve that have well-known
interpretations:
(TPR = 0, FPR = 0): Model predicts every instance to be a negative class.
(TPR = 1, FPR = 1): Model predicts every instance to be a positive class.
(TPR = 1, FPR = 0): The ideal model.
In Figure 5.41, M1 is superior when FPR is less than 0.36, while M2 is superior when FPR is greater than 0.36. Clearly,
neither of these two classifiers dominates the other.
The area under the ROC curve (AUC) provides another approach for eval-
uating which model is better on average. If the model is perfect, then its area
under the ROC curve would equal 1. If the model simply performs random
guessing,then its area under the ROC curve would equal 0.5. A model that
is strictly better than another would have a larger area under the ROC curve.
2. Select the lowest ranked test record (i.e., the record with the lowest output
value). Assign the selected record and those ranked above it to the
positive class. This approach is equivalent to classifying all the test
records as the positive class. Because all the positive examples are classified
correctly and the negative examples are misclassified, TPR = FPR = 1.

3. Select the next test record from the sorted list. Classify the selected
record and those ranked above it as positive, while those ranked below it
as negative. Update the counts of TP and FP by examining the actual
class label of the previously selected record. If the previously selected
record is a positive class, the TP count is decremented and the FP
count remains the same as before. If the previously selected record is a
negative class, the FP count is decremented and the TP count remains the
same as before.

4. Repeat Step 3 and update the TP and FP counts accordingly until the
highest ranked test record is selected.
Figure 5.42 shows an example of how to compute the ROC curve. There
are five positive examples and five negative examples in the test set. The class
Figure 5.42. Constructing an ROC curve. (For each threshold position, the table lists the TP, FP, TN, and FN counts together with the corresponding TPR and FPR.)
Figure 5.43. ROC curve for the data shown in Figure 5.42.
labels of the test records are shown in the first row of the table. The second row
corresponds to the sorted output values for each record. For example, they
may correspond to the posterior probabilities P(+|x) generated by a naive
Bayes classifier. The next six rows contain the counts of TP, FP, TN, and
FN, along with their corresponding TPR and FPR. The table is then filled
from left to right. Initially, all the records are predicted to be positive. Thus,
TP = FP = 5 and TPR = FPR = 1. Next, we assign the test record with
the lowest output value as the negative class. Because the selected record is
actually a positive example, the TP count reduces from 5 to 4 and the FP
count is the same as before. The FPR and TPR are updated accordingly.
This process is repeated until we reach the end of the list, where TPR = 0
and FPR = 0. The ROC curve for this example is shown in Figure 5.43.
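The procedure above translates directly into code. The following Python sketch is not taken from the text; the labels and scores at the bottom are hypothetical, and ties in the output values are ignored for brevity. It sweeps the threshold from the lowest to the highest ranked record, producing the (FPR, TPR) points of the ROC curve, and estimates the area under the curve (AUC) with the trapezoidal rule.

    def roc_points(labels, scores):
        n_pos = sum(1 for y in labels if y == '+')
        n_neg = len(labels) - n_pos
        order = sorted(range(len(labels)), key=lambda i: scores[i])  # lowest output first
        tp, fp = n_pos, n_neg            # start: every record predicted positive
        points = [(1.0, 1.0)]            # (FPR, TPR)
        for i in order:
            if labels[i] == '+':
                tp -= 1                  # a positive record drops below the threshold
            else:
                fp -= 1                  # a negative record drops below the threshold
            points.append((fp / n_neg, tp / n_pos))
        return points                    # ends at (0, 0)

    def auc(points):
        pts = sorted(points)             # trapezoidal rule over the FPR axis
        return sum((x2 - x1) * (y1 + y2) / 2.0
                   for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

    labels = ['+', '-', '+', '-', '-', '+', '-', '+', '-', '+']     # hypothetical test set
    scores = [0.95, 0.91, 0.87, 0.80, 0.74, 0.66, 0.58, 0.45, 0.38, 0.25]
    pts = roc_points(labels, scores)
    print(pts)
    print("AUC =", auc(pts))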
Under the 0/1 cost matrix, i.e., C(+,+) = C(-,-) = 0 and C(+,-) = C(-,+) = 1,
it can be shown that the overall cost is equivalent to the number
of misclassification errors.

    Ct(M1) = 150 x (-1) + 60 x 1 + 40 x 100 = 3910,
    Ct(M2) = 250 x (-1) + 5 x 1 + 45 x 100 = 4255.
Table 5.7. Cost matrix for Example 5.9.

                              Predicted Class
                              Class = +    Class = -
Actual    Class = +           -1           100
Class     Class = -            1             0
Table 5.8. Confusion matrices for two classification models.

Model M1                Predicted Class        Model M2                Predicted Class
                        Class +    Class -                             Class +    Class -
Actual    Class +         150        40         Actual    Class +        250        45
Class     Class -          60       250         Class     Class -          5       200
Notice that despite improving both of its true positive and false positive counts,
model M2 is still inferior because the improvement comes at the expense of
increasing the more costly false negative errors. A standard accuracy measure
would have preferred model M2 over M1.
A cost-sensitive classification technique takes the cost matrix into consideration
during model building and generates a model that has the lowest cost.
For example, if false negative errors are the most costly, the learning algorithm
will try to reduce these errors by extending its decision boundary toward the
negative class, as shown in Figure 5.44. In this way, the generated model can
cover more positive examples, although at the expense of generating additional
false alarms.
Figure 5.44. Modifying the decision boundary (from B1 to B2) to reduce the false negative errors of a classifier.
information can be used to: (1) choose the best attribute to use for splitting
the data, (2) determine whether a subtree should be pruned, (3) manipulate
the weights of the training records so that the learning algorithm converges to
a decision tree that has the lowest cost, and (4) modify the decision rule at
each leaf node. To illustrate the last approach, let p(i|t) denote the fraction of
training records from class i that belong to the leaf node t. A typical decision
rule for a binary classification problem assigns the positive class to node t if
the following condition holds:

    p(+|t) > p(-|t).

The preceding decision rule suggests that the class label of a leaf node depends
on the majority class of the training records that reach the particular node.
Note that this rule assumes that the misclassification costs are identical for
both positive and negative examples. This decision rule is equivalent to the
expression given in Equation 4.8 on page 165.
Instead of taking a majority vote, a cost-sensitive algorithm assigns the
class label i to node t if i minimizes the following expression:

    Sum over j of p(j|t) C(j, i).
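The following is a minimal Python sketch (not the book's pseudocode) of this cost-sensitive decision rule: the leaf is assigned the class that minimizes the expected misclassification cost. The cost matrix is the one from Table 5.7; the class fractions at the leaf are hypothetical.

    # C(actual, predicted), taken from Table 5.7
    cost = {('+', '+'): -1, ('+', '-'): 100,
            ('-', '+'):  1, ('-', '-'):   0}

    def leaf_label(p_leaf, cost):
        """p_leaf maps each class j to p(j|t), the class fraction at leaf t."""
        def expected_cost(i):
            return sum(p_leaf[j] * cost[(j, i)] for j in p_leaf)
        return min(p_leaf, key=expected_cost)

    p_leaf = {'+': 0.45, '-': 0.55}      # a leaf with 45% positive training records
    print(leaf_label(p_leaf, cost))      # '+', because false negatives are very costly

Note that a plain majority vote would label this leaf negative; the high false negative cost shifts the decision toward the positive class, which is exactly the boundary-shifting effect illustrated in Figure 5.44.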
Figure 5.45. Illustrating the effect of oversampling of the rare class.
Sampling is another widely used approach for handling the class imbalance
problem. The idea of sampling is to modify the distribution of instances so
that the rare class is well represented in the training set. Some of the available
techniques for sampling include undersampling, oversampling, and a hybrid
of both approaches. To illustrate these techniques, consider a data set that
contains 100 positive examples and 1000 negative examples.
In the case of undersampling, a random sample of 100 negative examples
is chosen to form the training set along with all the positive examples. One
potential problem with this approach is that some of the useful negative examples
may not be chosen for training, thus resulting in a less than optimal
model. A potential method to overcome this problem is to perform undersampling
multiple times and to induce multiple classifiers, similar to the ensemble
learning approach. Focused undersampling methods may also be used, where
the sampling procedure makes an informed choice with regard to the negative
examples that should be eliminated, e.g., those located far away from the
decision boundary.
Oversampling replicates the positive examples until the training set has an
equal number of positive and negative examples. Figure 5.45 illustrates the
effect of oversampling on the construction of a decision boundary using a classifier
such as a decision tree. Without oversampling, only the positive examples
at the bottom right-hand side of Figure 5.45(a) are classified correctly. The
positive example in the middle of the diagram is misclassified because there
are not enough examples to justify the creation of a new decision boundary
to separate the positive and negative instances. Oversampling provides the
additional examples needed to ensure that the decision boundary surrounding
the positive example is not pruned, as illustrated in Figure 5.45(b).
However, for noisy data, oversampling may cause model overfitting because
some of the noise examples may be replicated many times. In principle, oversampling
does not add any new information into the training set. Replication
of positive examples only prevents the learning algorithm from pruning certain
parts of the model that describe regions that contain very few training examples
(i.e., the small disjuncts). The additional positive examples also tend to
increase the computation time for model building.
The hybrid approach uses a combination of undersampling the majority
class and oversampling the rare class to achieve a uniform class distribution.
Undersampling can be performed using random or focused subsampling. Oversampling,
on the other hand, can be done by replicating the existing positive
examples or generating new positive examples in the neighborhood of the existing
positive examples. In the latter approach, we must first determine the
k-nearest neighbors for each existing positive example. A new positive example
is then generated at some random point along the line segment that
joins the positive example to one of its k-nearest neighbors. This process is
repeated until the desired number of positive examples is reached. Unlike the
data replication approach, the new examples allow us to extend the decision
boundary for the positive class outward, similar to the approach shown in
Figure 5.44. Nevertheless, this approach may still be quite susceptible to model
overfitting.
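The following is a rough Python sketch of the neighborhood-based generation step just described (it resembles SMOTE [165]). The function name, the use of Euclidean distance, and the example points are assumptions made for illustration, not details taken from the text.

    import math
    import random

    def synthetic_positives(positives, k=3, n_new=5, seed=0):
        rng = random.Random(seed)
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        new_examples = []
        for _ in range(n_new):
            x = rng.choice(positives)
            # k nearest positive neighbors of x (excluding x itself)
            neighbors = sorted((p for p in positives if p is not x),
                               key=lambda p: dist(x, p))[:k]
            z = rng.choice(neighbors)
            alpha = rng.random()   # random point on the segment joining x and z
            new_examples.append(tuple(xi + alpha * (zi - xi)
                                      for xi, zi in zip(x, z)))
        return new_examples

    positives = [(1.0, 2.0), (1.2, 1.9), (0.8, 2.3), (1.1, 2.1)]   # hypothetical rare-class points
    print(synthetic_positives(positives, k=2, n_new=3))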
The first two rows in this table correspond to the pair of classes (y_i, y_j) chosen
to build the classifier and the last row represents the predicted class for the test
instance. After combining the predictions, y1 and y4 each receive two votes,
while y2 and y3 each receive only one vote. The test instance is therefore
classified as either y1 or y4, depending on the tie-breaking procedure.
Class    Codeword
y1       1 1 1 1 1 1 1
y2       0 0 0 0 1 1 1
y3       0 0 1 1 0 0 1
y4       0 1 0 1 0 1 0
Each bit of the codeword is used to train a binary classifier. If a test instance
is classified as (0,1,1,1,1,1,1) by the binary classifiers, then the Hamming distance
between the codeword and y1 is 1, while the Hamming distance to the
remaining classes is 3. The test instance is therefore classified as y1.
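The decoding step in this example is just a nearest-codeword search under Hamming distance. The short Python sketch below (not from the text) reproduces the calculation using the codewords listed above.

    codewords = {
        'y1': (1, 1, 1, 1, 1, 1, 1),
        'y2': (0, 0, 0, 0, 1, 1, 1),
        'y3': (0, 0, 1, 1, 0, 0, 1),
        'y4': (0, 1, 0, 1, 0, 1, 0),
    }

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def decode(outputs, codewords):
        # assign the class whose codeword is closest to the classifier outputs
        return min(codewords, key=lambda c: hamming(outputs, codewords[c]))

    outputs = (0, 1, 1, 1, 1, 1, 1)      # predictions of the 7 binary classifiers
    print({c: hamming(outputs, w) for c, w in codewords.items()})   # distances 1, 3, 3, 3
    print(decode(outputs, codewords))    # 'y1'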
binary classifiers. If there is more than one classifier that makes a mistake,
then the ensemble may not be able to compensate for the error.
An important issue is how to design the appropriate set of codewords for
different classes. From coding theory, a vast number of algorithms have been
developed for generating n-bit codewords with bounded Hamming distance.
However, the discussion of these algorithms is beyond the scope of this book.
It is worthwhile mentioning that there is a significant difference between the
design of error-correcting codes for communication tasks and those
used for multiclass learning. For communication, the codewords should maximize
the Hamming distance between the rows so that error correction can
be performed. Multiclass learning, however, requires that both the row-wise and
column-wise distances of the codewords be well separated. A larger
column-wise distance ensures that the binary classifiers are mutually independent,
which is an important requirement for ensemble learning methods.
Table 5.9. Comparison of various rule-based classifiers.

Rule-growing strategy:
  RIPPER: general-to-specific;  CN2 (unordered): general-to-specific;
  CN2 (ordered): general-to-specific;  AQR: general-to-specific (seeded by a positive example).
Evaluation metric:
  RIPPER: FOIL's Info gain;  CN2 (unordered): Laplace;
  CN2 (ordered): entropy and likelihood ratio;  AQR: number of true positives.
Stopping condition for rule-growing:
  RIPPER: all examples belong to the same class;  CN2 (unordered): no performance gain;
  CN2 (ordered): no performance gain;  AQR: rules cover only positive class.
Rule pruning:
  RIPPER: reduced error pruning;  CN2 (unordered): none;  CN2 (ordered): none;  AQR: none.
Instance elimination:
  RIPPER: positive and negative;  CN2 (unordered): positive only;
  CN2 (ordered): positive only;  AQR: positive and negative.
Stopping condition for adding rules:
  RIPPER: error > 50% or based on MDL;  CN2 (unordered): no performance gain;
  CN2 (ordered): no performance gain;  AQR: all positive examples are covered.
Rule set pruning:
  RIPPER: replace or modify rules;  CN2 (unordered): statistical tests;
  CN2 (ordered): none;  AQR: none.
Search strategy:
  RIPPER: greedy;  CN2 (unordered): beam search;  CN2 (ordered): beam search;  AQR: beam search.
[213]. There are several survey articles on SVM, including those written by
Burges [164], Bennett et al. [158], Hearst [193], and Mangasarian [205].
A survey of ensemble methods in machine learning was given by Dietterich [174].
The bagging method was proposed by Breiman [161]. Freund
and Schapire [186] developed the AdaBoost algorithm. Arcing, which stands
for adaptive resampling and combining, is a variant of the boosting algorithm
proposed by Breiman [162]. It uses the non-uniform weights assigned to training
examples to resample the data for building an ensemble of training sets.
Unlike AdaBoost, the votes of the base classifiers are not weighted when determining
the class label of test examples. The random forest method was
introduced by Breiman in [163].
Related work on mining rare and imbalanced data sets can be found in the
survey papers written by Chawla et al. [166] and Weiss [220]. Sampling-based
methods for mining imbalanced data sets have been investigated by many authors,
such as Kubat and Matwin [202], Japkowicz [196], and Drummond and
Holte [179]. Joshi et al. [199] discussed the limitations of boosting algorithms
for rare class modeling. Other algorithms developed for mining rare classes
include SMOTE [165], PNrule [198], and CREDOS [200].
Various alternative metrics that are well suited for class imbalance problems
are available. The precision, recall, and F1-measure are widely used metrics
in information retrieval [216]. ROC analysis was originally used in signal
detection theory. Bradley [160] investigated the use of the area under the ROC
curve as a performance metric for machine learning algorithms. A method
for comparing classifier performance using the convex hull of ROC curves was
suggested by Provost and Fawcett in [210]. Ferri et al. [185] developed a
methodology for performing ROC analysis on decision tree classifiers. They
also proposed a methodology for incorporating the area under the ROC curve
(AUC) as the splitting criterion during the tree-growing process. Joshi [197]
examined the performance of these measures from the perspective of analyzing
rare classes.
A vast amount of literature on cost-sensitive learning can be found in
the online proceedings of the ICML'2000 Workshop on cost-sensitive learning.
The properties of a cost matrix were studied by Elkan in [182].
Margineantu and Dietterich [206] examined various methods for incorporating
cost information into the C4.5 learning algorithm, including wrapper methods,
class distribution-based methods, and loss-based methods. Other cost-sensitive
learning methods that are algorithm-independent include AdaCost
[183], MetaCost [177], and costing [222].
Bibliography
[155] D. W. Aha. A Study of Instance-Based Algorithms for Supervised Learning Tasks: Mathematical, Empirical, and Psychological Evaluations. PhD thesis, University of California, Irvine, 1990.
[156] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing Multiclass to Binary: A Unifying Approach to Margin Classifiers. Journal of Machine Learning Research, 1:113-141, 2000.
[157] R. Andrews, J. Diederich, and A. Tickle. A Survey and Critique of Techniques For Extracting Rules From Trained Artificial Neural Networks. Knowledge Based Systems, 8(6):373-389, 1995.
[158] K. Bennett and C. Campbell. Support Vector Machines: Hype or Hallelujah. SIGKDD Explorations, 2(2):1-13, 2000.
[159] C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, U.K., 1995.
[160] A. P. Bradley. The use of the area under the ROC curve in the Evaluation of Machine Learning Algorithms. Pattern Recognition, 30(7):1145-1149, 1997.
[161] L. Breiman. Bagging Predictors. Machine Learning, 24(2):123-140, 1996.
[162] L. Breiman. Bias, Variance, and Arcing Classifiers. Technical Report 486, University of California, Berkeley, CA, 1996.
[163] L. Breiman. Random Forests. Machine Learning, 45(1):5-32, 2001.
[164] C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2):121-167, 1998.
[165] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 16:321-357, 2002.
[166] N. V. Chawla, N. Japkowicz, and A. Kolcz. Editorial: Special Issue on Learning from Imbalanced Data Sets. SIGKDD Explorations, 6(1):1-6, 2004.
[167] V. Cherkassky and F. Mulier. Learning from Data: Concepts, Theory, and Methods. Wiley Interscience, 1998.
[168] P. Clark and R. Boswell. Rule Induction with CN2: Some Recent Improvements. In Machine Learning: Proc. of the 5th European Conf. (EWSL-91), pages 151-163, 1991.
[169] P. Clark and T. Niblett. The CN2 Induction Algorithm. Machine Learning, 3(4):261-283, 1989.
[170] W. W. Cohen. Fast Effective Rule Induction. In Proc. of the 12th Intl. Conf. on Machine Learning, pages 115-123, Tahoe City, CA, July 1995.
[171] S. Cost and S. Salzberg. A Weighted Nearest Neighbor Algorithm for Learning with Symbolic Features. Machine Learning, 10:57-78, 1993.
[172] T. M. Cover and P. E. Hart. Nearest Neighbor Pattern Classification. IEEE Transactions on Information Theory, 13(1):21-27, 1967.
[194] D. Heckerman. Bayesian Networks for Data Mining. Data Mining and Knowledge Discovery, 1(1):79-119, 1997.
[195] R. C. Holte. Very Simple Classification Rules Perform Well on Most Commonly Used Data Sets. Machine Learning, 11:63-91, 1993.
[196] N. Japkowicz. The Class Imbalance Problem: Significance and Strategies. In Proc. of the 2000 Intl. Conf. on Artificial Intelligence: Special Track on Inductive Learning, volume 1, pages 111-117, Las Vegas, NV, June 2000.
[197] M. V. Joshi. On Evaluating Performance of Classifiers for Rare Classes. In Proc. of the 2002 IEEE Intl. Conf. on Data Mining, Maebashi City, Japan, December 2002.
[198] M. V. Joshi, R. C. Agarwal, and V. Kumar. Mining Needles in a Haystack: Classifying Rare Classes via Two-Phase Rule Induction. In Proc. of 2001 ACM-SIGMOD Intl. Conf. on Management of Data, pages 91-102, Santa Barbara, CA, June 2001.
[199] M. V. Joshi, R. C. Agarwal, and V. Kumar. Predicting rare classes: can boosting make any weak learner strong? In Proc. of the 8th Intl. Conf. on Knowledge Discovery and Data Mining, pages 297-306, Edmonton, Canada, July 2002.
[200] M. V. Joshi and V. Kumar. CREDOS: Classification Using Ripple Down Structure (A Case for Rare Classes). In Proc. of the SIAM Intl. Conf. on Data Mining, pages 321-332, Orlando, FL, April 2004.
[201] E. B. Kong and T. G. Dietterich. Error-Correcting Output Coding Corrects Bias and Variance. In Proc. of the 12th Intl. Conf. on Machine Learning, pages 313-321, Tahoe City, CA, July 1995.
[202] M. Kubat and S. Matwin. Addressing the Curse of Imbalanced Training Sets: One-Sided Selection. In Proc. of the 14th Intl. Conf. on Machine Learning, pages 179-186, Nashville, TN, July 1997.
[203] P. Langley, W. Iba, and K. Thompson. An analysis of Bayesian classifiers. In Proc. of the 10th National Conf. on Artificial Intelligence, pages 223-228, 1992.
[204] D. D. Lewis. Naive Bayes at Forty: The Independence Assumption in Information Retrieval. In Proc. of the 10th European Conf. on Machine Learning (ECML 1998), pages 4-15, 1998.
[205] O. Mangasarian. Data Mining via Support Vector Machines. Technical Report 01-05, Data Mining Institute, May 2001.
[206] D. D. Margineantu and T. G. Dietterich. Learning Decision Trees for Loss Minimization in Multi-Class Problems. Technical Report 99-30-03, Oregon State University, 1999.
[207] R. S. Michalski, I. Mozetic, J. Hong, and N. Lavrac. The Multi-Purpose Incremental Learning System AQ15 and Its Testing Application to Three Medical Domains. In Proc. of 5th National Conf. on Artificial Intelligence, Orlando, August 1986.
[208] T. Mitchell. Machine Learning. McGraw-Hill, Boston, MA, 1997.
[209] S. Muggleton. Foundations of Inductive Logic Programming. Prentice Hall, Englewood Cliffs, NJ, 1995.
[210] F. J. Provost and T. Fawcett. Analysis and Visualization of Classifier Performance: Comparison under Imprecise Class and Cost Distributions. In Proc. of the 3rd Intl. Conf. on Knowledge Discovery and Data Mining, pages 43-48, Newport Beach, CA, August 1997.
[211] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[212] M. Ramoni and P. Sebastiani. Robust Bayes classifiers. Artificial Intelligence, 125:209-226, 2001.
[213] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[214] P. Smyth and R. M. Goodman. An Information Theoretic Approach to Rule Induction from Databases. IEEE Trans. on Knowledge and Data Engineering, 4(4):301-316, 1992.
[215] D. M. J. Tax and R. P. W. Duin. Using Two-Class Classifiers for Multiclass Classification. In Proc. of the 16th Intl. Conf. on Pattern Recognition (ICPR 2002), pages 124-127, Quebec, Canada, August 2002.
[216] C. J. van Rijsbergen. Information Retrieval. Butterworth-Heinemann, Newton, MA, 1978.
[217] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
[218] V. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[219] A. R. Webb. Statistical Pattern Recognition. John Wiley & Sons, 2nd edition, 2002.
[220] G. M. Weiss. Mining with Rarity: A Unifying Framework. SIGKDD Explorations, 6(1):7-19, 2004.
[221] I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, 1999.
[222] B. Zadrozny, J. C. Langford, and N. Abe. Cost-Sensitive Learning by Cost-Proportionate Example Weighting. In Proc. of the 2003 IEEE Intl. Conf. on Data Mining, pages 435-442, Melbourne, FL, August 2003.
5.10 Exercises
1. Consider a binary classification problem with the following set of attributes and
attribute values:
R1: A -> C
R2: A AND B -> C
R2 is obtained by adding a new conjunct, B, to the left-hand side of R1. For
this question, you will be asked to determine whether R2 is preferred over R1
from the perspectives of rule-growing and rule-pruning. To determine whether
a rule should be pruned, IREP computes the following measure:

    v_IREP = (p + (N - n)) / (P + N),

where P is the total number of positive examples in the validation set, N is
the total number of negative examples in the validation set, p is the number of
positive examples in the validation set covered by the rule, and n is the number
of negative examples in the validation set covered by the rule. v_IREP is actually
similar to classification accuracy for the validation set. IREP favors rules that
have higher values of v_IREP. On the other hand, RIPPER applies the following
measure to determine whether a rule should be pruned:

    v_RIPPER = (p - n) / (p + n).
(a) Suppose R1 is covered by 350 positive examples and 150 negative examples,
while R2 is covered by 300 positive examples and 50 negative
examples. Compute the FOIL's information gain for the rule R2 with
respect to R1.
(b) Consider a validation set that contains 500 positive examples and 500
negative examples. For R1, suppose the number of positive examples
covered by the rule is 200, and the number of negative examples covered
by the rule is 50. For R2, suppose the number of positive examples covered
by the rule is 100 and the number of negative examples is 5. Compute
v_IREP for both rules. Which rule does IREP prefer?
(c) Compute v_RIPPER for the previous problem. Which rule does RIPPER
prefer?
4. Consider a training set that contains 100 positive examples and 400 negative
examples. For each of the following candidate rules,
determine which is the best and worst candidate rule according to:
5. Figure 5.4 illustrates the coverage of the classification rules R1, R2, and R3.
Determine which is the best and worst rule according to:
6. (a) Suppose the fraction of undergraduate students who smoke is 15% and
the fraction of graduate students who smoke is 23%. If one-fifth of the
college students are graduate students and the rest are undergraduates,
what is the probability that a student who smokes is a graduate student?
(b) Given the information in part (a), is a randomly chosen college student
more likely to be a graduate or undergraduate student?
(c) Repeat part (b) assuming that the student is a smoker.
(d) Suppose 30% of the graduate students live in a dorm but only 10% of
the undergraduate students live in a dorm. If a student smokes and lives
in the dorm, is he or she more likely to be a graduate or undergraduate
student? You can assume independence between students who live in a
dorm and those who smoke.
Table 5.10. Data set for Exercise 7.

Record    A    B    C    Class
1         0    0    0    +
2         0    0    1    -
3         0    1    1    -
4         0    1    1    -
5         0    0    1    +
6         1    0    1    +
7         1    0    1    -
8         1    0    1    -
9         1    1    1    +
10        1    0    1    +
(a) Estimate the conditional probabilities for P(A = 1|+), P(B = 1|+),
P(C = 1|+), P(A = 1|-), P(B = 1|-), and P(C = 1|-) using the
same approach as in the previous problem.
Table 5.11. Data set for Exercise 8.

Instance    A    B    C    Class
1           0    0    1    -
2           1    0    1    +
3           0    1    0    -
4           1    0    0    -
5           1    0    1    +
6           0    0    1    +
7           1    1    0    -
8           0    0    0    -
9           0    1    0    +
10          1    1    1    +
(b) Use the conditional probabilities in part (a) to predict the class label for
a test sample (A = 1, B = 1, C = 1) using the naive Bayes approach.
(c) Compare P(A = 1), P(B = 1), and P(A = 1, B = 1). State the relationships
between A and B.
(d) Repeat the analysis in part (c) using P(A = 1), P(B = 0), and P(A = 1, B = 0).
(e) Compare P(A = 1, B = 1 | Class = +) against P(A = 1 | Class = +) and
P(B = 1 | Class = +). Are the variables conditionally independent given
the class?
9. (a) Explain how naive Bayes performs on the data set shown in Figure 5.46.
(b) If each class is further divided such that there are four classes (A1, A2,
B1, and B2), will naive Bayes perform better?
(c) How will a decision tree perform on this data set (for the two-class prob-
lem)? What if there are four classes?
10. Repeat the analysis shown in Example 5.3 for finding the location of a decision
boundary using the following information:
11. Figure 5.47 illustrates the Bayesian belief network for the data set shown in
Table 5.12. (Assume that all the attributes are binary).
(a) Draw the probability table for each node in the network.
Figure 5.46. Data set for Exercise 9.
Figure 5.47. Bayesian belief network.
(b) Use the Bayesian network to compute P(Engine = Bad, Air Conditioner = Broken).
12. Given the Bayesian network shown in Figure 5.48, compute the following prob-
abilities:
Table 5.12. Data set for Exercise 11.

Mileage    Engine    Air Conditioner    Number of Records       Number of Records
                                        with Car Value = Hi     with Car Value = Lo
Hi         Good      Working            3                       4
Hi         Good      Broken             1                       2
Hi         Bad       Working            1                       5
Hi         Bad       Broken             0                       4
Lo         Good      Working            1                       0
Lo         Good      Broken             1
Lo         Bad       Working            1                       2
Lo         Bad       Broken             0                       2
P(B = bad) = 0.1,  P(F = empty) = 0.2
Figure 5.48. Bayesian belief network for Exercise 12.
(a) Classify the data point x = 5.0 according to its 1-, 3-, 5-, and 9-nearest
neighbors (using majority vote).
(b) Repeat the previous analysis using the distance-weighted voting approach
described in Section 5.2.1.
Table 5.13. Data set for Exercise 13.

x    0.5    3.0    4.0    4.6    4.9    5.2    5.3    7.0    9.5
y    (the "+" and "-" class label of each point)
where n_ij is the number of examples from class i with attribute value V_j and
n_j is the number of examples with attribute value V_j.
Consider the training set for the loan classification problem shown in Figure
5.9. Use the MVDM measure to compute the distance between every pair of
attribute values for the Home Owner and Marital Status attributes.
1 5 . For each of the Boolean functions given below, state whether the problem is
linearly separable.
16. (a) Demonstrate how the perceptron model can be used to represent the AND
and OR functions between a pair of Boolean variables.
(b) Comment on the disadvantage of using linear functions as activation func-
tions for multilayer neural networks.
17. You are asked to evaluate the performance of two classification models, M1 and
M2. The test set you have chosen contains 26 binary attributes, labeled as A
through Z.
Table 5.14 shows the posterior probabilities obtained by applying the models
to the test set. (Only the posterior probabilities for the positive class are
shown.) As this is a two-class problem, P(-) = 1 - P(+) and P(-|A,...,Z) =
1 - P(+|A,...,Z). Assume that we are mostly interested in detecting instances
from the positive class.
(a) Plot the ROC curve for both M1 and M2. (You should plot them on the
same graph.) Which model do you think is better? Explain your reasons.
(b) For model M1, suppose you choose the cutoff threshold to be t = 0.5. In
other words, any test instance whose posterior probability is greater than
t will be classified as a positive example. Compute the precision, recall,
and F-measure for the model at this threshold value.
Table 5.14. Posterior probabilities for Exercise 17.

Instance    True Class    P(+|A,...,Z, M1)    P(+|A,...,Z, M2)
1           +                                 0.61
2           +             0.69                0.03
3           -             0.44                0.68
4           -             0.55                0.31
5           +             0.67                0.45
6           +             0.47                0.09
7           -             0.08                0.38
8           -             0.15                0.05
9           +             0.45                0.01
10          -             0.35                0.04
(c) Repeat the analysis for part (b) using the same cutoff threshold on model
M2. Compare the F-measure results for both models. Which model is
better? Are the results consistent with what you expect from the ROC
curve?
(d) Repeat part (c) for model M1 using the threshold t = 0.1. Which threshold
do you prefer, t = 0.5 or t = 0.1? Are the results consistent with what
you expect from the ROC curve?
18. Following is a data set that contains two attributes, X and Y, and two class
labels, "+" and "-". Each attribute can take three different values: 0, 1, or 2.

                Number of Instances
X    Y          +         -
0    0          0         100
1    0          0         0
2    0          0         100
0    1          10        100
1    1          10        0
2    1          10        100
0    2          0         100
1    2          0         0
2    2          0         100

The concept for the "+" class is Y = 1 and the concept for the "-" class is
X = 0 OR X = 2.
(a) Build a decision tree on the data set. Does the tree capture the "+" and
"-" concepts?
(b) What are the accuracy, precision, recall, and F1-measure of the decision
tree? (Note that precision, recall, and F1-measure are defined with respect
to the "+" class.)
(c) Build a new decision tree with the following cost function:

    C(i, j) = 0,                                                    if i = j;
              1,                                                    if i = +, j = -;
              (Number of "-" instances)/(Number of "+" instances),  if i = -, j = +.

(Hint: only the leaves of the old decision tree need to be changed.) Does
the decision tree capture the "+" concept?
(d) What are the accuracy, precision, recall, and F1-measure of the new decision
tree?
19. (a) Consider the cost matrix for a two-class problem. Let C(+,+) = C(-,-) =
p, C(+,-) = C(-,+) = q, and q > p. Show that minimizing the cost
function is equivalent to maximizing the classifier's accuracy.
(b) Show that a cost matrix is scale-invariant. For example, if the cost matrix
is rescaled from C(i, j) to beta C(i, j), where beta is the scaling factor, the
decision threshold (Equation 5.82) will remain unchanged.
(c) Show that a cost matrix is translation-invariant. In other words, adding a
constant factor to all entries in the cost matrix will not affect the decision
threshold (Equation 5.82).
20. Consider the task of building a classifier from random data, where the attribute
values are generated randomly irrespective of the class labels. Assume the data
set contains records from two classes, "+" and "-". Half of the data set is used
for training while the remaining half is used for testing.
(a) Suppose there are an equal number of positive and negative records in
the data and the decision tree classifier predicts every test record to be
positive. What is the expected error rate of the classifier on the test data?
(b) Repeat the previous analysis assuming that the classifier predicts each
test record to be positive class with probability 0.8 and negative class
with probability 0.2.
(c) Suppose two-thirds of the data belong to the positive class and the re-
maining one-third belong to the negative class. What is the expected
error of a classifier that predicts every test record to be positive?
(d) Repeat the previous analysis assuming that the classifier predicts each
test record to be positive class with probability 2/3 and negative class
with probability 1/3.
21. Derive the dual Lagrangian for the linear SVM with nonseparable data where
the objective function is
22. Consider the XOR problem where there are four training points:
(1, 1, -), (1, 0, +), (0, 1, +), (0, 0, -).
23. Given the data sets shown in Figure 5.49, explain how the decision tree, naive
Bayes, and k-nearest neighbor classifiers would perform on these data sets.
Figure 5.49. Data set for Exercise 23 (schematic data sets showing distinguishing attributes and noise attributes for Class A and Class B).
Association Analysis: Basic Concepts and Algorithms
Many business enterprises accumulate large quantities of data from their day-
to-day operations. For example, huge amounts of customer purchase data are
collected daily at the checkout counters of grocery stores. Table 6.1 illustrates
an example of such data, commonly known as market basket transactions.
Each row in this table corresponds to a transaction, which contains a unique
identifier labeled TID and a set of items bought by a given customer. Retailers
are interested in analyzing the data to learn about the purchasing behavior
of their customers. Such valuable information can be used to support a variety
of business-related applications such as marketing promotions, inventory
management, and customer relationship management.
This chapter presents a methodology known as association analysis,
which is useful for discovering interesting relationships hidden in large data
sets. The uncovered relationships can be represented in the form of associa-
Table 6.1. An example of market basket transactions.

TID    Items
1      {Bread, Milk}
2      {Bread, Diapers, Beer, Eggs}
3      {Milk, Diapers, Beer, Cola}
4      {Bread, Milk, Diapers, Beer}
5      {Bread, Milk, Diapers, Cola}
tion rules or sets of frequent items. For example, the following rule can be
extracted from the data set shown in Table 6.1:

    {Diapers} -> {Beer}.

The rule suggests that a strong relationship exists between the sale of diapers
and beer because many customers who buy diapers also buy beer. Retailers
can use this type of rule to help them identify new opportunities for cross-
selling their products to the customers.
Besides market basket data, association analysis is also applicable to other
application domains such as bioinformatics, medical diagnosis, Web mining,
and scientific data analysis. In the analysis of Earth science data, for example,
the association patterns may reveal interesting connections among the ocean,
land, and atmospheric processes. Such information may help Earth scientists
develop a better understanding of how the different elements of the Earth
system interact with each other. Even though the techniques presented here
are generally applicable to a wider variety of data sets, for illustrative purposes,
our discussion will focus mainly on market basket data.
There are two key issues that need to be addressed when applying association
analysis to market basket data. First, discovering patterns from a large
transaction data set can be computationally expensive. Second, some of the
discovered patterns are potentially spurious because they may happen simply
by chance. The remainder of this chapter is organized around these two issues.
The first part of the chapter is devoted to explaining the basic concepts
of association analysis and the algorithms used to efficiently mine such patterns.
The second part of the chapter deals with the issue of evaluating the
discovered patterns in order to prevent the generation of spurious results.
Table 6.2. A binary 0/1 representation of market basket data.

TID    Bread    Milk    Diapers    Beer    Eggs    Cola
1      1        1       0          0       0       0
2      1        0       1          1       1       0
3      0        1       1          1       0       1
4      1        1       1          1       0       0
5      1        1       1          0       0       1
This representation is perhaps a very simplistic view of real market basket data
because it ignores certain important aspects of the data, such as the quantity
of items sold or the price paid to purchase them. Methods for handling such
non-binary data will be explained in Chapter 7.
Itemset and Support Count  Let I = {i1, i2, ..., i_d} be the set of all items
in a market basket data and T = {t1, t2, ..., t_N} be the set of all transactions.
Each transaction t_i contains a subset of items chosen from I. In association
analysis, a collection of zero or more items is termed an itemset. If an itemset
contains k items, it is called a k-itemset. For instance, {Beer, Diapers, Milk}
is an example of a 3-itemset. The null (or empty) set is an itemset that does
not contain any items.
The transaction width is defined as the number of items present in a transaction.
A transaction t_j is said to contain an itemset X if X is a subset of
t_j. For example, the second transaction shown in Table 6.2 contains the itemset
{Bread, Diapers} but not {Bread, Milk}. An important property of an
itemset is its support count, which refers to the number of transactions that
contain a particular itemset. Mathematically, the support count, sigma(X), for an
itemset X can be stated as follows:

    sigma(X) = |{t_i | X is a subset of t_i, t_i in T}|,

where the symbol |.| denotes the number of elements in a set. In the data set
shown in Table 6.2, the support count for {Beer, Diapers, Milk} is equal to
two because there are only two transactions that contain all three items.
    Support, s(X -> Y) = sigma(X union Y) / N;                (6.1)
    Confidence, c(X -> Y) = sigma(X union Y) / sigma(X).      (6.2)
Example 6.1. Consider the rule {Milk, Diapers} -> {Beer}. Since the
support count for {Milk, Diapers, Beer} is 2 and the total number of transactions
is 5, the rule's support is 2/5 = 0.4. The rule's confidence is obtained
by dividing the support count for {Milk, Diapers, Beer} by the support count
for {Milk, Diapers}. Since there are 3 transactions that contain milk and diapers,
the confidence for this rule is 2/3 = 0.67.
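The computation in Example 6.1 can be reproduced directly from the transactions in Table 6.1. The following Python sketch (not from the text) applies Equations 6.1 and 6.2; the function names are chosen for illustration only.

    transactions = [
        {'Bread', 'Milk'},
        {'Bread', 'Diapers', 'Beer', 'Eggs'},
        {'Milk', 'Diapers', 'Beer', 'Cola'},
        {'Bread', 'Milk', 'Diapers', 'Beer'},
        {'Bread', 'Milk', 'Diapers', 'Cola'},
    ]

    def support_count(itemset, transactions):
        # number of transactions that contain the itemset
        return sum(1 for t in transactions if itemset <= t)

    def support(antecedent, consequent, transactions):
        return support_count(antecedent | consequent, transactions) / len(transactions)

    def confidence(antecedent, consequent, transactions):
        return (support_count(antecedent | consequent, transactions)
                / support_count(antecedent, transactions))

    X, Y = {'Milk', 'Diapers'}, {'Beer'}
    print(support(X, Y, transactions))     # 2/5 = 0.4
    print(confidence(X, Y, transactions))  # 2/3 = 0.67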
The proof for this equation is left as an exercise to the readers (see Exercise 5
on page 405). Even for the small data set shown in Table 6.1, this approach
requires us to compute the support and confidence for 3^6 - 2^7 + 1 = 602 rules.
More than 80% of the rules are discarded after applying minsup = 20% and
minconf = 50%, thus making most of the computations wasted. To
avoid performing needless computations, it would be useful to prune the rules
early without having to compute their support and confidence values.
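The 602 figure follows from the general count R = 3^d - 2^(d+1) + 1 for a data set with d items (the formula whose derivation is left to Exercise 5); a one-line check reproduces it.

    def num_rules(d):
        # total number of possible association rules for d items
        return 3 ** d - 2 ** (d + 1) + 1

    print(num_rules(6))   # 602, as quoted above for the six items of Table 6.1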
An initial step toward improving the performance of association rule mining
algorithms is to decouple the support and confidence requirements. From
Equation 6.2, notice that the support of a rule X -> Y depends only on
the support of its corresponding itemset, X union Y. For example, the following
rules have identical support because they involve items from the same itemset,
{Beer, Diapers, Milk}:

    {Beer, Diapers} -> {Milk},      {Beer, Milk} -> {Diapers},
    {Diapers, Milk} -> {Beer},      {Beer} -> {Diapers, Milk},
    {Milk} -> {Beer, Diapers},      {Diapers} -> {Beer, Milk}.

If the itemset is infrequent, then all six candidate rules can be pruned immediately
without our having to compute their confidence values.
Therefore, a common strategy adopted by many association rule mining
algorithms is to decompose the problem into two major subtasks:
1. Frequent Itemset Generation, whose objective is to find all the itemsets
that satisfy the minsup threshold. These itemsets are called frequent
itemsets.
2. Rule Generation, whose objective is to extract all the high-confidence
rules from the frequent itemsets found in the previous step.
Figure 6.1. An itemset lattice.
Figure 6.2. Counting the support of candidate itemsets.
To illustrate the idea behind the Apriori principle, consider the itemset
lattice shown in Figure 6.3. Suppose {c, d, e} is a frequent itemset. Clearly,
any transaction that contains {c, d, e} must also contain its subsets, {c, d},
{c, e}, {d, e}, {c}, {d}, and {e}. As a result, if {c, d, e} is frequent, then
all subsets of {c, d, e} (i.e., the shaded itemsets in this figure) must also be
frequent.
Figure 6.3. An illustration of the Apriori principle. If {c, d, e} is frequent, then all subsets of this itemset are frequent.
Figure 6.4. An illustration of support-based pruning. If {a, b} is infrequent, then all supersets of {a, b} are infrequent.
Apriori is the first association rule mining algorithm that pioneered the use
of support-based pruning to systematically control the exponential growth of
candidate itemsets. Figure 6.5 provides a high-level illustration of the frequent
itemset generation part of the Apriori algorithm for the transactions shown in
Table 6.1.
Figure 6.5. Illustration of frequent itemset generation using the Apriori algorithm (minimum support count = 3). Candidate itemsets with low support are removed; the only surviving candidate 3-itemset is {Bread, Diapers, Milk}, with a support count of 3.
With the Apriori principle, we only need to generate

    (6 choose 1) + (4 choose 2) + 1 = 6 + 6 + 1 = 13

candidates, which represents a 68% reduction in the number of candidate
itemsets even in this simple example.
The pseudocode for the frequent itemset generation part of the Apriori
algorithm is shown in Algorithm 6.1. Let C_k denote the set of candidate
k-itemsets and F_k denote the set of frequent k-itemsets:
• The algorithm initially makes a single pass over the data set to determine
the support of each item. Upon completion of this step, the set of all
frequent 1-itemsets, F_1, will be known (steps 1 and 2).
• The algorithm terminates when there are no new frequent itemsets generated,
i.e., F_k is empty (step 13).
The frequent itemset generation part of the Apriori algorithm has two important
characteristics. First, it is a level-wise algorithm; i.e., it traverses the
itemset lattice one level at a time, from frequent 1-itemsets to the maximum
size of frequent itemsets. Second, it employs a generate-and-test strategy
for finding frequent itemsets. At each iteration, new candidate itemsets are
generated from the frequent itemsets found in the previous iteration. The
support for each candidate is then counted and tested against the minsup
threshold. The total number of iterations needed by the algorithm is k_max + 1,
where k_max is the maximum size of the frequent itemsets.
In principle, there are many ways to generate candidate itemsets. The following
is a list of requirements for an effective candidate generation procedure:
1. It should avoid generating too many unnecessary candidates. A candidate
itemset is unnecessary if at least one of its subsets is infrequent.
2. It must ensure that the candidate set is complete, i.e., no frequent itemsets
are left out by the candidate generation procedure. To ensure completeness,
the set of candidate itemsets must subsume the set of all frequent
itemsets, i.e., for all k, F_k is a subset of C_k.
3. It should not generate the same candidate itemset more than once. For
example, the candidate itemset {a, b, c, d} can be generated in many
ways, e.g., by merging {a, b, c} with {d}, {b, d} with {a, c}, {c} with {a, b, d},
etc. Generation of duplicate candidates leads to wasted computations
and thus should be avoided for efficiency reasons.
Figure 6.6. A brute-force method for generating all candidate 3-itemsets.
Figure 6.7. Generating and pruning candidate k-itemsets by merging a frequent (k-1)-itemset with a frequent item. Note that some of the candidates are unnecessary because their subsets are infrequent.
This approach, however, does not prevent the same candidate itemset from
being generated more than once. For instance, {Bread, Diapers, Milk} can
be generated by merging {Bread, Diapers} with {Milk}, {Bread, Milk} with
{Diapers}, or {Diapers, Milk} with {Bread}. One way to avoid generating
duplicate candidates is by ensuring that the items in each frequent itemset are
kept sorted in their lexicographic order. Each frequent (k-1)-itemset X is then
extended with frequent items that are lexicographically larger than the items in
X. For example, the itemset {Bread, Diapers} can be augmented with {Milk}
since Milk is lexicographically larger than Bread and Diapers. However, we
should not augment {Diapers, Milk} with {Bread} nor {Bread, Milk} with
{Diapers} because they violate the lexicographic ordering condition.
While this procedure is a substantial improvement over the brute-force
method, it can still produce a large number of unnecessary candidates. For
example, the candidate itemset obtained by merging {Beer, Diapers} with
{Milk} is unnecessary because one of its subsets, {Beer, Milk}, is infrequent.
There are several heuristics available to reduce the number of unnecessary
candidates. For example, note that, for every candidate k-itemset that survives
the pruning step, every item in the candidate must be contained in at least
k - 1 of the frequent (k-1)-itemsets. Otherwise, the candidate is guaranteed
to be infrequent. For example, {Beer, Diapers, Milk} is a viable candidate
3-itemset only if every item in the candidate, including Beer, is contained in
at least two frequent 2-itemsets. Since there is only one frequent 2-itemset
containing Beer, all candidate itemsets involving Beer must be infrequent.
    a_i = b_i (for i = 1, 2, ..., k-2) and a_(k-1) != b_(k-1).
In Figure 6.8, the frequent itemsets {Bread, Diapers} and {Bread, Milk} are
merged to form a candidate 3-itemset {Bread, Diapers, Milk}. The algorithm
does not have to merge {Beer, Diapers} with {Diapers, Milk} because the
first item in both itemsets is different. Indeed, if {Beer, Diapers, Milk} is a
viable candidate, it would have been obtained by merging {Beer, Diapers}
with {Beer, Milk} instead. This example illustrates both the completeness of
the candidate generation procedure and the advantages of using lexicographic
ordering to prevent duplicate candidates. However, because each candidate is
obtained by merging a pair of frequent (k-1)-itemsets, an additional candidate
pruning step is needed to ensure that the remaining k - 2 subsets of the
candidate are frequent.
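A compact Python sketch of this F_(k-1) x F_(k-1) merging strategy follows; it is not the book's pseudocode, and the frequent 2-itemsets at the bottom are the ones used in the example above. Itemsets are represented as lexicographically sorted tuples, two itemsets are merged only when their first k-2 items agree, and a candidate is pruned unless all of its (k-1)-subsets are frequent.

    from itertools import combinations

    def candidate_gen(freq_k_minus_1):
        freq = set(freq_k_minus_1)                 # frequent (k-1)-itemsets as sorted tuples
        items = sorted(freq)
        candidates = []
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                a, b = items[i], items[j]
                if a[:-1] == b[:-1] and a[-1] < b[-1]:   # merge condition
                    cand = a + (b[-1],)
                    # candidate pruning: every (k-1)-subset must be frequent
                    if all(sub in freq for sub in combinations(cand, len(cand) - 1)):
                        candidates.append(cand)
        return candidates

    f2 = [('Beer', 'Diapers'), ('Bread', 'Diapers'),
          ('Bread', 'Milk'), ('Diapers', 'Milk')]
    print(candidate_gen(f2))    # [('Bread', 'Diapers', 'Milk')]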
Figure 6.8. Generating and pruning candidate k-itemsets by merging pairs of frequent (k-1)-itemsets.
Figure 6.9. Enumerating subsets of three items from a transaction t = {1, 2, 3, 5, 6}.
items in t whose labels are greater than or equal to 5. The number of ways to
specify the first item of a 3-itemset contained in t is illustrated by the Level
1 prefix structures depicted in Figure 6.9. For instance, 1 [2 3 5 6] represents
a 3-itemset that begins with item 1, followed by two more items chosen from
the set {2, 3, 5, 6}.
After fixing the first item, the prefix structures at Level 2 represent the
number of ways to select the second item. For example, 1 2 [3 5 6] corresponds
to itemsets that begin with prefix (1 2) and are followed by items 3, 5, or 6.
Finally, the prefix structures at Level 3 represent the complete set of 3-itemsets
contained in t. For example, the 3-itemsets that begin with prefix {1 2} are
{1,2,3}, {1,2,5}, and {1,2,6}, while those that begin with prefix {2 3} are
{2,3,5} and {2,3,6}.
The prefix structures shown in Figure 6.9 demonstrate how itemsets contained
in a transaction can be systematically enumerated, i.e., by specifying
their items one by one, from the leftmost item to the rightmost item. We
still have to determine whether each enumerated 3-itemset corresponds to an
existing candidate itemset. If it matches one of the candidates, then the support
count of the corresponding candidate is incremented. In the next section,
we illustrate how this matching operation can be performed efficiently using a
hash tree structure.
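The following is a simplified Python sketch of the support-counting step just described: the k-subsets of each transaction are enumerated in lexicographic order, as in Figure 6.9, and matched against the candidates. The book stores the candidates in a hash tree; a plain dictionary is used here purely for brevity, and the transactions and candidates shown are hypothetical.

    from itertools import combinations

    def count_support(transactions, candidates, k):
        counts = {c: 0 for c in candidates}              # candidates are sorted k-tuples
        for t in transactions:
            for subset in combinations(sorted(t), k):    # enumerate the k-subsets of t
                if subset in counts:
                    counts[subset] += 1                  # increment the matching candidate
        return counts

    transactions = [{1, 2, 3, 5, 6}, {1, 2, 5}, {2, 3, 5, 6}, {1, 3, 5, 6}]
    candidates = [(1, 2, 3), (1, 2, 5), (2, 3, 6), (3, 5, 6)]
    print(count_support(transactions, candidates, 3))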
Figure 6.10. Counting the support of itemsets using hash structure (the leaf nodes of the hash tree contain the candidate itemsets).
item listed in the Level 2 structures shown in Figure 6.9. For example, after
hashing on item 1 at the root node, items 2, 3, and 5 of the transaction are
hashed. Items 2 and 5 are hashed to the middle child, while item 3 is hashed
to the right child, as shown in Figure 6.12. This process continues until the
leaf nodes of the hash tree are reached. The candidate itemsets stored at the
visited leaf nodes are compared against the transaction. If a candidate is a
subset of the transaction, its support count is incremented. In this example, 5
out of the 9 leaf nodes are visited and 9 out of the 15 itemsets are compared
against the transaction.
Figure 6.12. Subset operation on the leftmost subtree of the root of a candidate hash tree.
Figure 6.14. Effect of average transaction width on the number of candidate and frequent itemsets (plots of the number of itemsets versus the size of the itemset).
are contained in the transaction. This will increase the number of hash tree
traversals performed during support counting.
A detailed analysis of the time complexity for the Apriori algorithm is
presented next.
    Sum over k >= 2 of (k - 2)|C_k|  <  cost of merging  <  Sum over k >= 2 of (k - 2)|F_(k-1)|^2.

A hash tree is also constructed during candidate generation to store the candidate
itemsets. Because the maximum depth of the tree is k, the cost for
populating the hash tree with candidate itemsets is O(Sum over k of k|C_k|). During
candidate pruning, we need to verify that the k - 2 subsets of every candidate
k-itemset are frequent. Since the cost for looking up a candidate in a hash
tree is O(k), the candidate pruning step requires O(Sum over k of k(k - 2)|C_k|) time.
Example 6.2. Let X = {1, 2, 3} be a frequent itemset. There are six candidate
association rules that can be generated from X: {1,2} -> {3}, {1,3} -> {2},
{2,3} -> {1}, {1} -> {2,3}, {2} -> {1,3}, and {3} -> {1,2}. As
the support of each rule is identical to the support for X, the rules must satisfy
the support threshold.
Computing the confidence of an association rule does not require additional
scans of the transaction data set. Consider the rule {1,2} -> {3}, which is
generated from the frequent itemset X = {1, 2, 3}. The confidence for this rule
is sigma({1,2,3}) / sigma({1,2}). Because {1, 2, 3} is frequent, the anti-monotone property
of support ensures that {1, 2} must be frequent, too. Since the support
counts for both itemsets were already found during frequent itemset generation,
there is no need to read the entire data set again.
Unlike the support measure, confidence does not have any monotone property.
For example, the confidence for X -> Y can be larger, smaller, or equal to the
confidence for another rule X~ -> Y~, where X~ is a subset of X and Y~ is a subset of Y (see Exercise
3 on page 405). Nevertheless, if we compare rules generated from the same
frequent itemset Y, the following theorem holds for the confidence measure.
To prove this theorem, consider the following two rules: X' -> Y - X' and
X -> Y - X, where X' is a proper subset of X. The confidences of the rules are sigma(Y)/sigma(X') and
sigma(Y)/sigma(X), respectively. Since X' is a subset of X, sigma(X') >= sigma(X). Therefore,
the former rule cannot have a higher confidence than the latter rule.
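A small Python sketch of confidence-based rule generation from a single frequent itemset follows (not the book's pseudocode). Every non-empty proper subset X of the itemset yields a candidate rule X -> (itemset - X); the support counts are looked up rather than recomputed from the data, as noted above, and the counts shown are hypothetical.

    from itertools import combinations

    def rules_from_itemset(itemset, support_counts, minconf):
        itemset = frozenset(itemset)
        rules = []
        for r in range(1, len(itemset)):
            for lhs in combinations(sorted(itemset), r):
                lhs = frozenset(lhs)
                conf = support_counts[itemset] / support_counts[lhs]
                if conf >= minconf:
                    rules.append((set(lhs), set(itemset - lhs), conf))
        return rules

    # Hypothetical support counts for {1,2,3} and all of its subsets.
    support_counts = {
        frozenset({1}): 5, frozenset({2}): 4, frozenset({3}): 3,
        frozenset({1, 2}): 4, frozenset({1, 3}): 3, frozenset({2, 3}): 3,
        frozenset({1, 2, 3}): 3,
    }
    for lhs, rhs, conf in rules_from_itemset({1, 2, 3}, support_counts, minconf=0.8):
        print(lhs, "->", rhs, round(conf, 2))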
Figure 6.15. Pruning of association rules using the confidence measure.
Table 6.3. List of binary attributes from the 1984 United States Congressional Voting Records. Source: The UCI machine learning repository.
Table 6.4. Association rules extracted from the 1984 United States Congressional Voting Records.

Association Rule                                                                     Confidence
{budget resolution = no, MX-missile = no, aid to El Salvador = yes} -> {Republican}  91.0%
{budget resolution = yes, MX-missile = yes, aid to El Salvador = no} -> {Democrat}   97.5%
{crime = yes, right-to-sue = yes, physician fee freeze = yes} -> {Republican}        93.5%
{crime = no, right-to-sue = no, physician fee freeze = no} -> {Democrat}             100%
Figure 6.16. Maximal frequent itemset.
which all frequent itemsets can be derived. For example, the frequent itemsets
shown in Figure 6.16 can be divided into two groups:
• Frequent itemsets that begin with item a and that may contain items c,
d, or e. This group includes itemsets such as {a}, {a,c}, {a,d}, {a,e},
and {a,c,e}.
• Frequent itemsets that begin with item b, c, d, or e.
Frequent itemsets that belong in the first group are subsets of either {a,c,e}
or {a,d}, while those that belong in the second group are subsets of {b,c,d,e}.
Hence, the maximal frequent itemsets {a,c,e}, {a,d}, and {b,c,d,e} provide
a compact representation of the frequent itemsets shown in Figure 6.16.
Maximal frequent itemsets provide a valuable representation for data sets
that can produce very long, frequent itemsets, as there are exponentially many
frequent itemsets in such data. Nevertheless, this approach is practical only
if an efficient algorithm exists to explicitly find the maximal frequent itemsets
without having to enumerate all their subsets. We briefly describe one such
approach in Section 6.5.
Despite providing a compact representation, maximal frequent itemsets do
not contain the support information of their subsets. For example, the support
of the maximal frequent itemsets {a,c,e}, {a,d}, and {b,c,d,e} do not provide
any hint about the support of their subsets. An additional pass over the data
set is therefore needed to determine the support counts of the non-maximal
frequent itemsets. In some cases, it might be desirable to have a minimal
representation of frequent itemsets that preserves the support information.
We illustrate such a representation in the next section.
Figure 6.17. An example of the closed frequent itemsets (with minimum support equal to 40%).
transaction IDs. For example, since the node {b, c} is associated with transaction
IDs 1, 2, and 3, its support count is equal to three. From the transactions
given in this diagram, notice that every transaction that contains b also contains
c. Consequently, the support for {b} is identical to {b, c} and {b} should
not be considered a closed itemset. Similarly, since c occurs in every transaction
that contains both a and d, the itemset {a, d} is not closed. On the other
hand, {b, c} is a closed itemset because it does not have the same support
count as any of its supersets.
Definition 6.5 (Closed Flequent Itemset). An itemset is a closed fre-
quent itemset if it is closedand its support is greater than or equal to m'insup.
In the previous example, assumingthat the support threshold is 40%, {b,c}
is a closed frequent itemset becauseits support is 60%. The rest of the closed
frequent itemsets are indicated by the shaded nodes.
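As an illustration of the definition, the following Python sketch checks closedness by brute force. The five transactions are an assumption reconstructed from the surrounding discussion of Figure 6.17 and should be treated as illustrative.

    from itertools import combinations

    # Assumed transactions in the spirit of Figure 6.17.
    transactions = [{'a', 'b', 'c'}, {'a', 'b', 'c', 'd'}, {'b', 'c', 'e'},
                    {'a', 'c', 'd', 'e'}, {'d', 'e'}]
    minsup_count = 2  # 40% of 5 transactions

    def support_count(itemset):
        return sum(1 for t in transactions if itemset <= t)

    items = sorted(set().union(*transactions))
    all_itemsets = [frozenset(c) for r in range(1, len(items) + 1)
                    for c in combinations(items, r)]

    # Closed: no immediate superset has the same support count.
    for x in all_itemsets:
        sup = support_count(x)
        if sup < minsup_count:
            continue
        supersets = [x | {i} for i in items if i not in x]
        if all(support_count(y) < sup for y in supersets):
            print(sorted(x), sup)   # e.g. ['b', 'c'] 3, but not ['b'] or ['a', 'd']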
Algorithms are available to explicitly extract closed frequent itemsets from a given data set. Interested readers may refer to the bibliographic notes at the end of this chapter for further discussions of these algorithms. We can use the closed frequent itemsets to determine the support counts for the non-closed frequent itemsets. For example, consider the frequent itemset {a, d} shown in Figure 6.17. Because the itemset is not closed, its support count must be identical to one of its immediate supersets. The key is to determine which superset (among {a, b, d}, {a, c, d}, or {a, d, e}) has exactly the same support count as {a, d}. The Apriori principle states that any transaction that contains the superset of {a, d} must also contain {a, d}. However, any transaction that contains {a, d} does not have to contain the supersets of {a, d}. For this reason, the support for {a, d} must be equal to the largest support among its supersets. Since {a, c, d} has a larger support than both {a, b, d} and {a, d, e}, the support for {a, d} must be identical to the support for {a, c, d}. Using this methodology, an algorithm can be developed to compute the support for the non-closed frequent itemsets. The pseudocode for this algorithm is shown in Algorithm 6.4. The algorithm proceeds in a specific-to-general fashion, i.e., from the largest to the smallest frequent itemsets. This is because, in order to find the support for a non-closed frequent itemset, the support for all of its supersets must be known.
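The following Python sketch conveys the same specific-to-general idea under simplifying assumptions; the function name and the small example collection are illustrative, not the book's Algorithm 6.4.

    def propagate_supports(frequent, closed_support):
        """frequent: list of frozensets (frequent itemsets).
        closed_support: dict mapping closed frequent itemsets to their counts.
        Assumes every non-closed itemset in `frequent` has at least one
        immediate frequent superset in `frequent`."""
        support = dict(closed_support)
        # Specific-to-general: largest itemsets first.
        for x in sorted(frequent, key=len, reverse=True):
            if x in support:
                continue  # support already known (closed itemset)
            supersets = [y for y in frequent if len(y) == len(x) + 1 and x < y]
            # A non-closed itemset inherits the largest support among
            # its immediate frequent supersets.
            support[x] = max(support[y] for y in supersets)
        return support

    # Tiny illustrative slice built around the {a, d} example of Figure 6.17.
    freq = [frozenset(s) for s in ({'a'}, {'d'}, {'a', 'c'}, {'c', 'd'},
                                   {'a', 'd'}, {'a', 'c', 'd'})]
    closed = {frozenset({'d'}): 3, frozenset({'a', 'c'}): 3,
              frozenset({'a', 'c', 'd'}): 2}
    sup = propagate_supports(freq, closed)
    print(sup[frozenset({'a', 'd'})], sup[frozenset({'a'})])  # -> 2 3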
To illustrate the advantage of using closed frequent itemsets, consider the data set shown in Table 6.5, which contains ten transactions and fifteen items. The items can be divided into three groups: (1) Group A, which contains items a1 through a5; (2) Group B, which contains items b1 through b5; and (3) Group C, which contains items c1 through c5. Note that items within each group are perfectly associated with each other and they do not appear with items from another group. Assuming the support threshold is 20%, the total number of frequent itemsets is 3 × (2^5 − 1) = 93. However, there are only three closed frequent itemsets in the data: {a1, a2, a3, a4, a5}, {b1, b2, b3, b4, b5}, and {c1, c2, c3, c4, c5}. It is often sufficient to present only the closed frequent itemsets to the analysts instead of the entire set of frequent itemsets.
Table 6.5. A transaction data set for mining closed itemsets.

TID   a1 a2 a3 a4 a5   b1 b2 b3 b4 b5   c1 c2 c3 c4 c5
 1     1  1  1  1  1    0  0  0  0  0    0  0  0  0  0
 2     1  1  1  1  1    0  0  0  0  0    0  0  0  0  0
 3     1  1  1  1  1    0  0  0  0  0    0  0  0  0  0
 4     0  0  0  0  0    1  1  1  1  1    0  0  0  0  0
 5     0  0  0  0  0    1  1  1  1  1    0  0  0  0  0
 6     0  0  0  0  0    1  1  1  1  1    0  0  0  0  0
 7     0  0  0  0  0    0  0  0  0  0    1  1  1  1  1
 8     0  0  0  0  0    0  0  0  0  0    1  1  1  1  1
 9     0  0  0  0  0    0  0  0  0  0    1  1  1  1  1
10     0  0  0  0  0    0  0  0  0  0    1  1  1  1  1
Figure 6.18. Relationships among frequent, maximal frequent, and closed frequent itemsets.
Closed frequent itemsets are useful for removing some of the redundant association rules. An association rule X → Y is redundant if there exists another rule X′ → Y′, where X is a subset of X′ and Y is a subset of Y′, such that the support and confidence for both rules are identical. In the example shown in Figure 6.17, {b} is not a closed frequent itemset while {b, c} is closed. The association rule {b} → {d, e} is therefore redundant because it has the same support and confidence as {b, c} → {d, e}. Such redundant rules are not generated if closed frequent itemsets are used for rule generation.
Finally, note that all maximal frequent itemsets are closed because none of the maximal frequent itemsets can have the same support count as their immediate supersets. The relationships among frequent, maximal frequent, and closed frequent itemsets are shown in Figure 6.18.
6.5 Alternative Methods for Generating Frequent Itemsets
Figure 6.19. General-to-specific, specific-to-general, and bidirectional search.
store the candidate itemsets, but it can help to rapidly identify the frequent itemset border, given the configuration shown in Figure 6.19(c).
Figure 6.20. Equivalence classes based on the prefix and suffix labels of itemsets. (a) Prefix tree.
Figure 6.21. Breadth-first and depth-first traversals.
Figure 6.22. Generating candidate itemsets using the depth-first approach.
performed on its subsets. For example, if the node bcde shown in Figure 6.22 is maximal frequent, then the algorithm does not have to visit the subtrees rooted at bd, be, c, d, and e because they will not contain any maximal frequent itemsets. However, if abc is maximal frequent, only the nodes such as ac and bc are not maximal frequent (but the subtrees of ac and bc may still contain maximal frequent itemsets). The depth-first approach also allows a different kind of pruning based on the support of itemsets. For example, suppose the support for {a, b, c} is identical to the support for {a, b}. The subtrees rooted at abd and abe can be skipped because they are guaranteed not to have any maximal frequent itemsets. The proof of this is left as an exercise to the readers.
Figure 6.23. Horizontal and vertical data format.
sized itemsets. However, one problem with this approach is that the initial set of TID-lists may be too large to fit into main memory, thus requiring more sophisticated techniques to compress the TID-lists. We describe another effective approach to represent the data in the next section.
Figure 6.24. Construction of an FP-tree.
Figure 6.24 shows a data set that contains ten transactions and five items.
The structures of the FP-tree after reading the first three transactions are also depicted in the diagram. Each node in the tree contains the label of an item along with a counter that shows the number of transactions mapped onto the given path. Initially, the FP-tree contains only the root node represented by the null symbol. The FP-tree is subsequently extended in the following way:
1. The data set is scanned once to determine the support count of each item. Infrequent items are discarded, while the frequent items are sorted in decreasing support counts. For the data set shown in Figure 6.24, a is the most frequent item, followed by b, c, d, and e.
2. The algorithm makes a second pass over the data to construct the FP-tree. After reading the first transaction, {a, b}, the nodes labeled as a and b are created. A path is then formed from null → a → b to encode the transaction. Every node along the path has a frequency count of 1.
3. After reading the second transaction, {b, c, d}, a new set of nodes is created for items b, c, and d. A path is then formed to represent the transaction by connecting the nodes null → b → c → d. Every node along this path also has a frequency count equal to one. Although the first two transactions have an item in common, which is b, their paths are disjoint because the transactions do not share a common prefix.
5. This process continues until every transaction has been mapped onto one of the paths given in the FP-tree. The resulting FP-tree after reading all the transactions is shown at the bottom of Figure 6.24.
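To make the insertion step concrete, here is a compact Python sketch of an FP-tree insert. It assumes each transaction's items have already been filtered and sorted by decreasing support, and it omits the node-link pointers discussed below; the third transaction in the demo is an assumption, not taken from the figure.

    class FPNode:
        def __init__(self, item=None, parent=None):
            self.item = item          # item label (None for the root)
            self.count = 0            # number of transactions mapped onto this node
            self.parent = parent
            self.children = {}        # item -> child FPNode

    def insert_transaction(root, transaction):
        """Insert one transaction whose items are sorted by decreasing support;
        shared prefixes reuse existing nodes and only increment their counts."""
        node = root
        for item in transaction:
            child = node.children.get(item)
            if child is None:
                child = FPNode(item, parent=node)
                node.children[item] = child
            child.count += 1
            node = child

    root = FPNode()
    # {a, b} and {b, c, d} are the first two transactions from the text;
    # {a, c, d, e} is an assumed third transaction for illustration.
    for t in (['a', 'b'], ['b', 'c', 'd'], ['a', 'c', 'd', 'e']):
        insert_transaction(root, t)
    print(root.children['a'].count)  # the a-prefix path now covers 2 transactions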
The size of an FP-tree is typically smaller than the size of the uncompressed data because many transactions in market basket data often share a few items in common. In the best-case scenario, where all the transactions have the same set of items, the FP-tree contains only a single branch of nodes. The worst-case scenario happens when every transaction has a unique set of items.
As none of the transactions have any items in common, the size of the FP-tree
is effectively the same as the size of the original data. However, the physical
storage requirement for the FP-tree is higher because it requires additional
space to store pointers between nodes and counters for each item.
The size of an FP-tree also depends on how the items are ordered. If
the ordering scheme in the preceding example is reversed, i.e., from lowest
to highest support item, the resulting FP-tree is shown in Figure 6.25. The
tree appears to be denser because the branching factor at the root node has
increased from 2 to 5 and the number of nodes containing the high support
items such as a and b has increased from 3 to 12. Nevertheless, ordering
by decreasing support counts does not always lead to the smallest tree. For example, suppose we augment the data set given in Figure 6.24 with 100 transactions that contain {e}, 80 transactions that contain {d}, 60 transactions
Figure 6.25. An FP-tree representation for the data set shown in Figure 6.24 with a different item ordering scheme.
that contain {"}, and 40 transactions that contain {b}. Item e is now most
frequent, followed by d, c, b, and a. With the augmented transactions, ordering
by decreasingsupport counts will result in an FP-tree similar to Figure 6.25,
while a schemebased on increasing support counts produces a smaller FP-tree
similar to Figure 6.2a$v).
An FP-tree also contains a list of pointers connecting between nodes that
have the same items. These pointers, represented as dashed lines in Figures
6.24 and 6.25, help to facilitate the rapid accessof individual items in the tree.
We explain how to use the FP-tree and its corresponding pointers for frequent
itemset generation in the next section.
Figure 6.26. Decomposing the frequent itemset generation problem into multiple subproblems, where each subproblem involves finding frequent itemsets ending in e, d, c, b, and a.
Table 6.6. The list of frequent itemsets ordered by their corresponding suffixes.

Suffix    Frequent Itemsets
e         {e}, {d,e}, {a,d,e}, {c,e}, {a,e}
d         {d}, {c,d}, {b,c,d}, {a,c,d}, {b,d}, {a,b,d}, {a,d}
c         {c}, {b,c}, {a,b,c}, {a,c}
b         {b}, {a,b}
a         {a}
Figure 6.27. Example of applying the FP-growth algorithm to find frequent itemsets ending in e.
1. The first step is to gather all the paths containing node e. These initial
paths are called prefix paths and are shown in Figure 6.27(a).
2. From the prefix paths shown in Figure 6.27(a), the support count for e is
obtained by adding the support counts associatedwith node e. Assuming
that the minimum support count is 2, {e} is declared a frequent itemset
becauseits support count is 3.
(a) First, the support counts along the prefix paths must be updated because some of the counts include transactions that do not contain item e. For example, the rightmost path shown in Figure 6.27(a), null → b:2 → c:2 → e:1, includes a transaction {b, c} that does not contain item e. The counts along the prefix path must therefore be adjusted to 1 to reflect the actual number of transactions containing {b, c, e}.
(b) The prefix paths are truncated by removing the nodes for e. These nodes can be removed because the support counts along the prefix paths have been updated to reflect only transactions that contain e and the subproblems of finding frequent itemsets ending in de, ce, be, and ae no longer need information about node e.
(c) After updating the support counts along the prefix paths, some
of the items may no longer be frequent. For example, the node b
appears only once and has a support count equal to 1, which means
that there is only one transaction that contains both b and e. Item b
can be safely ignored from subsequent analysis because all itemsets
ending in be must be infrequent.
The conditional FP-tree for e is shown in Figure 6.27(b). The tree looks
different than the original prefix paths becausethe frequency counts have
been updated and the nodes b and e have been eliminated.
4. FP-growth uses the conditional FP-tree for e to solve the subproblems of
finding frequent itemsets ending in de, ce, and ae. To find the frequent
itemsets ending in de, the prefix paths for d are gathered from the con-
ditional FP-tree for e (Figure 6.27(c)). By adding the frequency counts
associatedwith node d, we obtain the support count for {d,e}. Since
the support count is equal to 2, {d,e} is declared a frequent itemset.
Next, the algorithm constructs the conditional FP-tree for de using the
approach described in step 3. After updating the support counts and
removing the infrequent item c, the conditional FP-tree for de is shown
in Figure 6.27(d). Since the conditional FP-tree contains only one item,
Table 6.7. A 2-way contingency table for variables A and B.

         B       B̄
A       f11     f10     f1+
Ā       f01     f00     f0+
        f+1     f+0     N
Table 6.8. Beverage preferences among a group of 1000 people.

            Coffee    No Coffee    Total
Tea            150           50      200
No Tea         650          150      800
Total          800          200     1000
The information given in this table can be used to evaluate the association rule {Tea} → {Coffee}. At first glance, it may appear that people who drink tea also tend to drink coffee because the rule's support (15%) and confidence (75%) values are reasonably high. This argument would have been acceptable except that the fraction of people who drink coffee, regardless of whether they drink tea, is 80%, while the fraction of tea drinkers who drink coffee is only 75%. Thus knowing that a person is a tea drinker actually decreases her probability of being a coffee drinker from 80% to 75%! The rule {Tea} → {Coffee} is therefore misleading despite its high confidence value.
The pitfall of confidence can be traced to the fact that the measure ignores
the support of the itemset in the rule consequent. Indeed, if the support of
coffee drinkers is taken into account, we would not be surprised to find that
many of the people who drink tea also drink coffee. What is more surprising is
that the fraction of tea drinkers who drink coffee is actually less than the overall
fraction of people who drink coffee, which points to an inverse relationship
between tea drinkers and coffee drinkers.
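For concreteness, the quantities discussed above follow directly from the counts in Table 6.8. The following Python fragment (an illustrative sketch, not code from the text) makes the computation explicit; the ratio of confidence to the consequent's support is introduced formally below as the lift/interest factor.

    # Counts from Table 6.8: rows = Tea / No Tea, columns = Coffee / No Coffee.
    f11, f10 = 150, 50     # tea & coffee, tea & no coffee
    f01, f00 = 650, 150    # no tea & coffee, no tea & no coffee
    N = f11 + f10 + f01 + f00

    support = f11 / N                    # s(Tea, Coffee) = 0.15
    confidence = f11 / (f11 + f10)       # c(Tea -> Coffee) = 0.75
    coffee_rate = (f11 + f01) / N        # s(Coffee) = 0.80
    ratio = confidence / coffee_rate     # 0.75 / 0.80 = 0.9375

    print(support, confidence, coffee_rate, ratio)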
Because of the limitations in the support-confidence framework, various
objective measures have been used to evaluate the quality of association pat-
terns. Below, we provide a brief description of these measures and explain
some of their strengths and limitations.
One such measure is the lift of a rule A → B, given by

\mathrm{Lift}(A \to B) = \frac{c(A \to B)}{s(B)},   (6.4)

which computes the ratio between the rule's confidence and the support of the itemset in the rule consequent. For binary variables, lift is equivalent to another objective measure called interest factor, which is defined as follows:

I(A, B) = \frac{s(A, B)}{s(A) \times s(B)} = \frac{N f_{11}}{f_{1+} f_{+1}}.   (6.5)

Interest factor compares the frequency of a pattern against a baseline frequency computed under the statistical independence assumption. The baseline frequency for a pair of mutually independent variables is

\frac{\hat{f}_{11}}{N} = \frac{f_{1+}}{N} \times \frac{f_{+1}}{N}, \quad \text{or equivalently,} \quad \hat{f}_{11} = \frac{f_{1+} f_{+1}}{N}.   (6.6)
Table 6.9. Contingency tables for the word pairs {p, q} and {r, s}.

        q      q̄                  s      s̄
p      880     50     930    r    20     50      70
p̄       50     20      70    r̄    50    880     930
       930     70    1000         70    930    1000
This equation follows from the standard approach of using simple fractions as estimates for probabilities. The fraction f11/N is an estimate for the joint probability P(A, B), while f1+/N and f+1/N are the estimates for P(A) and P(B), respectively. If A and B are statistically independent, then P(A, B) = P(A) × P(B), thus leading to the formula shown in Equation 6.6. Using Equations 6.5 and 6.6, we can interpret the measure as follows:
For the tea-coffee example shown in Table 6.8, I = 0.15/(0.2 × 0.8) = 0.9375, thus suggesting a slight negative correlation between tea drinkers and coffee drinkers.
IS Measure  IS is an alternative measure that has been proposed for handling asymmetric binary variables. The measure is defined as follows:

IS(A, B) = \sqrt{I(A, B) \times s(A, B)} = \frac{s(A, B)}{\sqrt{s(A) \times s(B)}}.   (6.9)

Note that IS is large when the interest factor and support of the pattern are large. For example, the values of IS for the word pairs {p, q} and {r, s} shown in Table 6.9 are 0.946 and 0.286, respectively. Contrary to the results given by interest factor and the φ-coefficient, the IS measure suggests that the association between {p, q} is stronger than that between {r, s}, which agrees with what we expect from word associations in documents.
It is possible to show that IS is mathematically equivalent to the cosine measure for binary variables (see Equation 2.7 on page 75). In this regard, we
can regard A and B as a pair of bit vectors, so that

IS(A, B) = \frac{f_{11}}{\sqrt{f_{1+} f_{+1}}} = \frac{A \cdot B}{|A| \times |B|} = \mathrm{cosine}(A, B).   (6.10)
The IS measure can also be expressed as the geometric mean between the confidences of the association rules extracted from a pair of binary variables:

IS(A, B) = \sqrt{c(A \to B) \times c(B \to A)}.   (6.11)

Because the geometric mean between any two numbers is always closer to the smaller number, the IS value of an itemset {p, q} is low whenever one of its rules, p → q or q → p, has low confidence.
If A and B are statistically independent, then s(A, B) = s(A) × s(B), and the IS value of the pattern reduces to

IS_{\mathrm{indep}}(A, B) = \frac{s(A) \times s(B)}{\sqrt{s(A) \times s(B)}} = \sqrt{s(A) \times s(B)}.

Since the value depends on s(A) and s(B), IS shares a similar problem as the confidence measure: the value of the measure can be quite large, even for uncorrelated and negatively correlated patterns. For example, despite the large IS value between items p and q given in Table 6.10 (0.889), it is still less than the expected value when the items are statistically independent (IS_indep = 0.9).
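For concreteness, a short Python fragment (an illustrative sketch, not code from the text) reproduces the IS values quoted above for the word pairs in Table 6.9.

    from math import sqrt

    def is_measure(f11, f10, f01, f00):
        """Cosine / IS measure: s(A, B) / sqrt(s(A) * s(B))."""
        n = f11 + f10 + f01 + f00
        s_ab, s_a, s_b = f11 / n, (f11 + f10) / n, (f11 + f01) / n
        return s_ab / sqrt(s_a * s_b)

    # Word pairs from Table 6.9.
    print(round(is_measure(880, 50, 50, 20), 3))   # {p, q} -> 0.946
    print(round(is_measure(20, 50, 50, 880), 3))   # {r, s} -> 0.286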
Besides the measures we have described so far, there are other alternative measures proposed for analyzing relationships between pairs of binary variables. These measures can be divided into two categories, symmetric and asymmetric measures. A measure M is symmetric if M(A → B) = M(B → A). For example, interest factor is a symmetric measure because its value is identical for the rules A → B and B → A. In contrast, confidence is an asymmetric measure since the confidence for A → B and B → A may not be the same. Symmetric measures are generally used for evaluating itemsets, while asymmetric measures are more suitable for analyzing association rules. Tables 6.11 and 6.12 provide the definitions for some of these measures in terms of the frequency counts of a 2 × 2 contingency table.
Table 6.11. Examples of symmetric objective measures for the itemset {A, B}.

Measure (Symbol)          Definition
Correlation (φ)           (N f11 − f1+ f+1) / sqrt(f1+ f+1 f0+ f+0)
Odds ratio (α)            (f11 f00) / (f10 f01)
Kappa (κ)                 (N f11 + N f00 − f1+ f+1 − f0+ f+0) / (N² − f1+ f+1 − f0+ f+0)
Interest (I)              (N f11) / (f1+ f+1)
Cosine (IS)               f11 / sqrt(f1+ f+1)
Piatetsky-Shapiro (PS)    f11/N − (f1+ f+1)/N²
Collective strength (S)   [(f11 + f00) / (f1+ f+1 + f0+ f+0)] × [(N² − f1+ f+1 − f0+ f+0) / (N − f11 − f00)]
Jaccard (ζ)               f11 / (f1+ + f+1 − f11)
All-confidence (h)        min[ f11/f1+ , f11/f+1 ]
Table 6.12. Examples of asymmetric objective measures for the rule A → B.

Measure (Symbol)    Definition
Table 6.13. Example of contingency tables.

Example     f11     f10     f01     f00
E1         8123      83     424    1370
E2         8330       2     622    1046
E3         3954    3080       5    2961
E4         2886    1363    1320    4431
E5         1500    2000     500    6000
E6         4000    2000    1000    3000
E7         9481     298     127      94
E8         4000    2000    2000    2000
E9         7450    2483       4      63
E10          61    2483       4    7452
Table 6.14. Rankings of contingency tables using the symmetric measures given in Table 6.11.
Table 6.15. Rankings of contingency tables using the asymmetric measures given in Table 6.12.
factor and odds ratio. Furthermore, a contingency table such as E10 is ranked lowest according to the φ-coefficient, but highest according to interest factor. The results shown in Table 6.14 suggest that a significant number of the measures provide conflicting information about the quality of a pattern. To understand their differences, we need to examine the properties of these measures.
Inversion Property  Consider the bit vectors shown in Figure 6.28. The 0/1 bit in each column vector indicates whether a transaction (row) contains a particular item (column). For example, the vector A indicates that item a belongs to the first and last transactions, whereas the vector B indicates that item b is contained only in the fifth transaction. The vectors C and E are in fact related to the vector A: their bits have been inverted from 0's (absence) to 1's (presence), and vice versa. Similarly, D is related to vectors B and F by inverting their bits. The process of flipping a bit vector is called inversion.

Figure 6.28. Effect of the inversion operation. The vectors C and E are inversions of vector A, while the vector D is an inversion of vectors B and F.
If a measure is invariant under the inversion operation, then its value for the
vector pair (C, D) should be identical to its value for (A, B). The inversion
property of a measure can be tested as follows.
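As a rough illustration (a sketch, not the formal test), the following Python fragment compares the φ-coefficient and the cosine (IS) measure on a vector pair and on its inverted counterpart; the specific ten-transaction bit vectors are assumptions patterned after the description of Figure 6.28.

    from math import sqrt

    def counts(x, y):
        """2x2 contingency counts for two binary vectors."""
        f11 = sum(a and b for a, b in zip(x, y))
        f10 = sum(a and not b for a, b in zip(x, y))
        f01 = sum((not a) and b for a, b in zip(x, y))
        f00 = sum((not a) and (not b) for a, b in zip(x, y))
        return f11, f10, f01, f00

    def phi(x, y):
        f11, f10, f01, f00 = counts(x, y)
        n = f11 + f10 + f01 + f00
        return (n * f11 - (f11 + f10) * (f11 + f01)) / sqrt(
            (f11 + f10) * (f11 + f01) * (f01 + f00) * (f10 + f00))

    def cosine(x, y):
        f11, f10, f01, f00 = counts(x, y)
        return f11 / sqrt((f11 + f10) * (f11 + f01))

    A = [1, 0, 0, 0, 0, 0, 0, 0, 0, 1]   # item a in first and last transactions
    B = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # item b only in the fifth transaction
    C = [1 - v for v in A]               # inversion of A
    D = [1 - v for v in B]               # inversion of B

    print(phi(A, B), phi(C, D))          # identical: phi is invariant under inversion
    print(cosine(A, B), cosine(C, D))    # different: IS is not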
Scaling Property Table 6.16 shows the contingency tables for gender and
the grades achieved by students enrolled in a particular course in 1993 and
2004. The data in these tables showed that the number of male students has
doubled since 1993, while the number of female students has increased by a
factor of 3. However, the male students in 2004 are not performing any better
than those in 1993 because the ratio of male students who achieve a high
grade to those who achieve a low grade is still the same, i.e., 3:4. Similarly,
the female students in 2004 are performing no better than those in 1993. The
associationbetween grade and gender is expected to remain unchangeddespite
changesin the sampling distribution.
Table 6.16. The grade-gender example.

(a) Sample data from 1993.

(b) Sample data from 2004:

          Male    Female    Total
High        60        60      120
Low         80        30      110
Total      140        90      230
Table 6.17. Properties of symmetric measures.

Symbol   Measure                Inversion   Null Addition   Scaling
φ        φ-coefficient          Yes         No              No
α        odds ratio             Yes         No              Yes
κ        Cohen's                Yes         No              No
I        Interest               No          No              No
IS       Cosine                 No          Yes             No
PS       Piatetsky-Shapiro's    Yes         No              No
S        Collective strength    Yes         No              No
ζ        Jaccard                No          Yes             No
h        All-confidence         No          No              No
s        Support                No          No              No
From Table 6.17, notice that only the odds ratio (α) is invariant under the row and column scaling operations. All other measures such as the φ-coefficient, κ, IS, interest factor, and collective strength (S) change their values when the rows and columns of the contingency table are rescaled. Although we do not discuss the properties of asymmetric measures (such as confidence, J-measure, Gini index, and conviction), it is clear that such measures do not preserve their values under inversion and row/column scaling operations, but are invariant under the null addition operation.
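To see the scaling property concretely, here is a small Python sketch: it scales one column of a 2 × 2 table by 2 and the other by 3 and shows that the odds ratio is unchanged while the φ-coefficient is not. The counts are illustrative assumptions patterned after the grade-gender example, not values taken from Table 6.16.

    from math import sqrt

    def odds_ratio(f11, f10, f01, f00):
        return (f11 * f00) / (f10 * f01)

    def phi(f11, f10, f01, f00):
        n = f11 + f10 + f01 + f00
        return (n * f11 - (f11 + f10) * (f11 + f01)) / sqrt(
            (f11 + f10) * (f11 + f01) * (f01 + f00) * (f10 + f00))

    base = (30, 20, 40, 10)                       # (f11, f10, f01, f00), assumed
    scaled = (30 * 2, 20 * 3, 40 * 2, 10 * 3)     # column 1 doubled, column 2 tripled

    print(odds_ratio(*base), odds_ratio(*scaled))  # unchanged under scaling
    print(phi(*base), phi(*scaled))                # changes under scaling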
Table 6.18. Example of a three-dimensional contingency table.

\hat{f}_{i_1 i_2 \cdots i_k} = \frac{f_{i_1 + \cdots +} \times f_{+ i_2 \cdots +} \times \cdots \times f_{+ + \cdots i_k}}{N^{k-1}}.   (6.12)

With this definition, we can extend objective measures such as interest factor and PS, which are based on deviations from statistical independence, to more than two variables:

I = \frac{N^{k-1} \times f_{1 1 \cdots 1}}{f_{1 + \cdots +} \times f_{+ 1 \cdots +} \times \cdots \times f_{+ \cdots + 1}},
\qquad
PS = \frac{f_{1 1 \cdots 1}}{N} - \frac{f_{1 + \cdots +} \times f_{+ 1 \cdots +} \times \cdots \times f_{+ \cdots + 1}}{N^{k}}.
Table 6.19. A two-way contingency table between the sale of high-definition television and exercise machine.

                  Buy Exercise Machine
Buy HDTV         Yes      No     Total
Yes               99      81       180
No                54      66       120
Total            153     147       300
Table 6.20. Example of a three-way contingency table.

                                    Buy Exercise Machine
Customer Group       Buy HDTV        Yes      No     Total
College Students     Yes               1       9        10
                     No                4      30        34
Working Adult        Yes              98      72       170
                     No               50      36        86
than college students who buy these items. For college students:

c({HDTV = Yes} → {Exercise machine = Yes}) = 1/10 = 10%,
c({HDTV = No} → {Exercise machine = Yes}) = 4/34 = 11.8%,

while for working adults:

c({HDTV = Yes} → {Exercise machine = Yes}) = 98/170 = 57.6%,
c({HDTV = No} → {Exercise machine = Yes}) = 50/86 = 58.1%.
The rules suggest that, for each group, customers who do not buy high-definition televisions are more likely to buy exercise machines, which contradicts the previous conclusion when data from the two customer groups are pooled together. Even if alternative measures such as correlation, odds ratio, or
interest are applied, we still find that the sale of HDTV and exercise machine
is positively correlated in the combined data but is negatively correlated in
the stratified data (seeExercise 20 on page 414). The reversal in the direction
of association is known as Simpson's paradox.
The paradox can be explained in the following way. Notice that most
customers who buy HDTVs are working adults. Working adults are also the
largest group of customers who buy exercise machines. Because nearly 85% of
the customers are working adults, the observed relationship between HDTV
and exercise machine turns out to be stronger in the combined data than
what it would have been if the data is stratified. This can also be illustrated
mathematically as follows. Suppose

\frac{a}{b} < \frac{c}{d} \quad \text{and} \quad \frac{p}{q} < \frac{r}{s},

where a/b and p/q may represent the confidence of the rule A → B in two different strata, while c/d and r/s may represent the confidence of the rule Ā → B in the two strata. When the data is pooled together, the confidence values of the rules in the combined data are (a + p)/(b + q) and (c + r)/(d + s), respectively. Simpson's paradox occurs when

\frac{a+p}{b+q} > \frac{c+r}{d+s},
thus leading to the wrong conclusion about the relationship between the vari-
ables. The lesson here is that proper stratification is needed to avoid generat-
ing spurious patterns resulting from Simpson's paradox. For example, market basket data gathered from a chain of stores may need to be stratified by store location, and medical records may need to be stratified by confounding factors such as age and gender.
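For concreteness, the following Python fragment (an illustrative sketch, not code from the text) recomputes the confidences from Table 6.20 per stratum and for the pooled data, exhibiting the reversal.

    # Confidence of {HDTV = Yes} -> {Exercise machine = Yes} versus
    # {HDTV = No} -> {Exercise machine = Yes}, per stratum and pooled
    # (counts taken from Table 6.20).
    def conf(machine_buyers, group_size):
        return machine_buyers / group_size

    students = {'Yes': (1, 10), 'No': (4, 34)}      # keyed by HDTV = Yes / No
    adults = {'Yes': (98, 170), 'No': (50, 86)}
    pooled = {k: (students[k][0] + adults[k][0],
                  students[k][1] + adults[k][1]) for k in ('Yes', 'No')}

    for name, table in (('students', students), ('adults', adults), ('pooled', pooled)):
        print(name, round(conf(*table['Yes']), 3), round(conf(*table['No']), 3))
    # students 0.1   0.118   (Yes < No)
    # adults   0.576 0.581   (Yes < No)
    # pooled   0.55  0.45    (Yes > No: the reversal)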
Figure 6.29. Support distribution of items in the census data set.
Table 6.21. Grouping the items in the census data set based on their support values.

Group              G1        G2         G3
Support            < 1%      1%–90%     > 90%
Number of Items    1735      358        20
\frac{\min[0.7,\, 0.1,\, 0.0004]}{\max[0.7,\, 0.1,\, 0.0004]} = \frac{0.0004}{0.7} = 0.00058 < 0.01.   (6.13)
Figure 6.30. A transaction data set containing three items, p, q, and r, where p is a high support item and q and r are low support items.
h\text{-confidence}(X) = \min\Big[c(\{i_1\} \to \{i_2, \ldots, i_k\}),\ \ldots,\ c(\{i_j\} \to \{i_1, \ldots, i_{j-1}, i_{j+1}, \ldots, i_k\}),\ \ldots,\ c(\{i_k\} \to \{i_1, \ldots, i_{k-1}\})\Big]
= \frac{s(\{i_1, i_2, \ldots, i_k\})}{\max\big[s(i_1), s(i_2), \ldots, s(i_k)\big]}.

Because s(\{i_1, \ldots, i_k\}) \le \min[s(i_1), \ldots, s(i_k)], it follows that

h\text{-confidence}(X) \le \frac{\min\big[s(i_1), \ldots, s(i_k)\big]}{\max\big[s(i_1), \ldots, s(i_k)\big]} = r(X),

so a minimum h-confidence threshold also eliminates low cross-support patterns. The h-confidence measure is anti-monotone,
and thus can be incorporated directly into the mining algorithm. Furthermore, h-confidence ensures that the items contained in an itemset are strongly associated with each other. For example, suppose the h-confidence of an itemset X is 80%. If one of the items in X is present in a transaction, there is at least an 80% chance that the rest of the items in X also belong to the same transaction. Such strongly associated patterns are called hyperclique patterns.
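A minimal Python sketch of the measure, assuming transactions are represented as Python sets; the toy data set is an assumption in the spirit of Figure 6.30, not the book's example.

    def h_confidence(itemset, transactions):
        """h-confidence (all-confidence): s(X) divided by the largest support
        of any single item in X."""
        n = len(transactions)
        s_x = sum(1 for t in transactions if itemset <= t) / n
        max_item = max(sum(1 for t in transactions if i in t) / n for i in itemset)
        return s_x / max_item

    # p is a high-support item; q and r are low-support but strongly associated.
    transactions = [{'p'}] * 16 + [{'p', 'q', 'r'}] * 2 + [{'q', 'r'}] * 2
    print(h_confidence({'p', 'q'}, transactions))  # low: a cross-support pattern
    print(h_confidence({'q', 'r'}, transactions))  # high: a hyperclique-like pattern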
Conceptual Issues
al. [315] to extend the traditional notion of support to more general patterns
and attribute types.
Implementation Issues
Researchactivities in this area revolve around (1) integrating the mining ca-
pability into existing databasetechnology, (2) developing efficient and scalable
mining algorithms, (3) handling user-specifiedor domain-specific constraints,
and (4) post-processing the extracted patterns.
There are several advantages to integrating association analysis into ex-
isting database technology. First, it can make use of the indexing and query
processingcapabilities of the database system. Second,it can also exploit the
DBMS support for scalability, check-pointing, and parallelization [301]. The
SETM algorithm developed by Houtsma et al. [265] was one of the earliest
algorithms to support association rule discovery via SQL queries. Since then,
numerous methods have been developedto provide capabilities for mining as-
sociation rules in databasesystems. For example, the DMQL [258]and M-SQL
[267] query languagesextend the basic SQL with new operators for mining as-
sociation rules. The Mine Rule operator [283] is an expressiveSQL operator
that can handle both clustered attributes and item hierarchies. Tsur et al.
[322] developed a generate-and-testapproach called query flocks for mining
associationrules. A distributed OLAP-based infrastructure was developed by
Chen et al. [241] for mining multilevel association rules.
Dunkel and Soparkar [248] investigated the time and storage complexity of the Apriori algorithm. The FP-growth algorithm was developed by Han et al. in [259]. Other algorithms for mining frequent itemsets include the DHP (dynamic hashing and pruning) algorithm proposed by Park et al. [292] and the Partition algorithm developed by Savasere et al. [303]. A sampling-based frequent itemset generation algorithm was proposed by Toivonen [320]. The algorithm requires only a single pass over the data, but it can produce more candidate itemsets than necessary. The Dynamic Itemset Counting (DIC) algorithm [239] makes only 1.5 passes over the data and generates fewer candidate itemsets than the sampling-based algorithm. Other notable algorithms include the tree-projection algorithm [223] and H-Mine [295]. Survey articles on frequent itemset generation algorithms can be found in [226, 262]. A repository of data sets and algorithms is available at the Frequent Itemset Mining Implementations (FIMI) repository (http://fimi.cs.helsinki.fi). Parallel algorithms for mining association patterns have been developed by various authors [224, 256, 287, 306, 337]. A survey of such algorithms can be found in [333]. Online and incremental versions of association rule mining algorithms have also been proposed by Hidber [260] and Cheung et al. [242].
Srikant et al. [313] have considered the problem of mining association rules in the presence of boolean constraints such as the following:
Given such a constraint, the algorithm looks for rules that contain both cookies and milk, or rules that contain the descendant items of cookies but not ancestor items of wheat bread. Singh et al. [310] and Ng et al. [288] have also developed alternative techniques for constraint-based association rule mining. Constraints can also be imposed on the support for different itemsets. This problem was investigated by Wang et al. [324], Liu et al. in [279], and Seno et al. [305].
One potential problem with association analysis is the large number of
patterns that can be generated by current algorithms. To overcome this problem, methods to rank, summarize, and filter patterns have been developed. Toivonen et al. [321] proposed the idea of eliminating redundant rules using structural rule covers and grouping the remaining rules using clustering. Liu et al. [280] applied the statistical chi-square test to prune spurious patterns and summarized the remaining patterns using a subset of the patterns called direction setting rules. The use of objective measures to filter patterns has been investigated by many authors, including Brin et al. [238], Bayardo and Agrawal [235], Aggarwal and Yu [227], and DuMouchel and Pregibon [247]. The properties for many of these measures were analyzed by Piatetsky-Shapiro
[297], Kamber and Singhal [270], Hilderman and Hamilton [261], and Tan et
al. [318]. The grade-genderexample used to highlight the importance of the
row and column scaling invariance property was heavily influenced by the
discussion given in [286] by Mosteller. Meanwhile, the tea-coffee example illustrating the limitation of confidence was motivated by an example given in [238] by Brin et al. Because of the limitation of confidence, Brin et al. [238] had proposed the idea of using interest factor as a measure of interestingness. The all-confidence measure was proposed by Omiecinski [289]. Xiong et al. [330] introduced the cross-support property and showed that the all-confidence measure can be used to eliminate cross-support patterns. A key difficulty in using alternative objective measures besides support is their lack of a monotonicity property, which makes it difficult to incorporate the measures directly into the mining algorithms. Xiong et al. [328] have proposed an efficient method for mining correlations by introducing an upper bound function to the φ-coefficient. Although the measure is non-monotone, it has an upper bound expression that can be exploited for the efficient mining of strongly correlated item pairs.
Fabris and Freitas [249] have proposed a method for discovering interesting associations by detecting the occurrences of Simpson's paradox [309].
Megiddo and Srikant [282] described an approach for validating the extracted
396 Chapter 6 Association Analysis
Application Issues
Bibliography
[223] R. C. Agarwal, C. C. Aggarwal, and V. V. V. Prasad. A Ttee Projection Algorithm
for Generation of Flequent ltemsets. Journal of Parallel and Distri,buted Computing
(Speci,alIssue on Hi,gh Performance Data Mining),61(3):350-371, 2001.
12241 R. C. Agarwal and J. C. Shafer. Parallel Mining of Association Rules. IEEE Transac-
t'ions on Knowledge and, Data Eng'ineering,8(6):962-969, March 1998.
12251C. C. Aggarwal, Z. Sun, and P. S. Yu. Online Generation of Profile Association Rules.
In Proc. of the lth IntI. Conf. on Knowled,ge D'iscouerg and, Data Mining, pages 129-
133, New York, NY, August 1996.
[226] C. C. Aggarwal and P. S. Yu. Mining Large Itemsets for Association Rules. Dafa
Engineering B ullet'in, 2l (l) :23-31, March 1998.
12271 C. C. Aggarwal and P. S. Yu. Mining Associations with the Collective Strength
Approach. IEEE Tfans. on Knowled,ge and Data Eng,ineer'ing,13(6):863-873, Jan-
uary/February 2001.
[228] R. Agrawal, T. Imielinski, and A. Swami. Database mining: A performance perspec-
tive. IEEE Transactions on Knowledge and Data Eng'ineering, S:9L4 925, 1993.
1229] R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of
items in large databases. In Proc. ACM SIGMOD IntI. Conf. Management of Data,
pages 207-216, Washington, DC, 1993.
f230] R. Agrawal and R. Srikant. Mining Sequential Patterns. ln Proc. of Intl. Conf. on
Data Engineering, pages 3-14, Taipei, Taiwan, 1995.
1231] K. Ali, S. Manganaris, and R. Srikant. Partial Classification using Association Rules.
In Proc. of the ?rd Intl. Conf. on Knowledge Discouery and, Data M'ining, pages 115
118, Newport Beach, CA, August 1997.
[232] D. Barbará, J. Couto, S. Jajodia, and N. Wu. ADAM: A Testbed for Exploring the
Use of Data Mining in Intrusion Detection. SIGMOD Record,,30(4):15 24,2001.
[233] S. D. Bay and M. Pazzani. Detecting Group Differences: Mining Contrast Sets. Dota
Min'ing and Know ledge Dis couery, 5 (3) :2L3-246, 200I.
[234] R. Bayardo. Efficiently Mining Long Patterns from Databases. In Proc. of 1998 ACM-
SIGMOD Intl. Conf. on Management of Data, pages 85-93, Seattle, WA, June 1998.
[235] R. Bayardo and R. Agrawal. Mining the Most Interesting Rules. In Proc. of the Sth
Intl. Conf. on Knowledge Discouerg and Data Min'ing, pages 145-153, San Diego, CA,
August 1999.
[236] Y. Benjamini and Y. Hochberg. Controlling the False Discovery Rate: A Practical
and Powerful Approach to Multiple Testing. Journal Rogal Statistical Society 8,57
(1):289-300, 1995.
1237] R. J. Bolton, D. J. Hand, and N. M. Adams. Determining Hit Rate in Pattern Search.
In Proc. of the ESF Etploratory Workshop on Pattern Detect'i,on and Discouery in
Data Mini,ng, pages 36-48, London, UK, September 2002.
[238] S. Brin, R. Motwani, and C. Silverstein. Beyond market baskets: Generalizing associ-
ation rules to correlations. In Proc. ACM SIGMOD IntI. Conf. Management of Data,
pages265-276, Tucson, AZ, 1997.
[239] S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic Itemset Counting and Impli-
cation Rules for market basket data. In Proc. of 1997 ACM-SIGMOD IntI. Conf. on
Management of Data, pages 255-264, Tucson, AZ, June 1997.
1240] C. H. Cai, A. tr\r, C. H. Cheng, and W. W. Kwong. Mining Association Rules with
Weighted Items. In Proc. of IEEE Intl. Database Eng'ineering and Appli,cations Sgmp.,
pages 68-77, Cardiff, Wales, 1998.
398 Chapter 6 Association Analysis
[241] Q. Chen, U. Dayal, and M. Hsu. A Distributed OLAP infrastructure for E-Commerce.
In Proc. of the lth IFCIS IntI. Conf. on Cooperatiue Information Systems, pages 209-
220, Edinburgh, Scotland, 1999.
12421 D. C. Cheung, S. D. Lee, and B. Kao. A General Incremental Technique for Maintaining
Discovered Association Rules. In Proc. of the Sth IntI. Conf. on Database Systems for
Aduanced Appl'ications, pages 185-194, Melbourne, Australia, 1997.
[243] R. Cooley, P. N. Tan, and J. Srivastava. Discovery of Interesting Usage Patterns
from Web Data. In M. Spiliopoulou and B. Masand, editors, Aduances in Web Usage
AnalEsis and User ProfiIing, volume 1836, pages 163-182. Lecture Notes in Computer
Science, 2000.
12441P. Dokas, L.Ertoz, V. Kumar, A. Lazarevic, J. Srivastava, and P. N. Tan. Data Mining
for Network Intrusion Detection. In Proc. NSF Workshop on Nert Generation Data
M'ini,ng, Baltimore, MD, 2002.
1245] G. Dong and J. Li. Interestingness of discovered association rules in terms of
neighborhood-based unexpectedness. In Proc. of the 2nd, Paci,fi,c-Asia Conf. on Knowl-
ed,geDiscouery and Data Min'i,ng, pages 72-86, Melbourne, Australia, April 1998.
[246] G. Dong and J. Li. Efficient Mining of Emerging Patterns: Discovering Tbends and
Differences. In Proc. of the 5th Intl. Conf. on Knowledge Discouery and Data M'ining,
pages 43-52, San Diego, CA, August 1999.
12471 W. DuMouchel and D. Pregibon. Empirical Bayes Screening for Multi-Item Associa-
tions. In Proc. of the 7th IntI. Conf. on Knowledge D'iscouerg and, Data Mining, pages
67-76, San Flancisco, CA, August 2001.
[248] B. Dunkel and N. Soparkar. Data Organization and Access for Efficient Data Mining.
In Proc. of the 15th Intl. Conf. on Data Engineering, pages 522-529, Sydney, Australia,
March 1999.
[249] C. C. Fabris and A. A. Freitas. Discovering surprising patterns by detecting occurrences of Simpson's paradox. In Proc. of the 19th SGES Intl. Conf. on Knowledge-Based Systems and Applied Artificial Intelligence, pages 148-160, Cambridge, UK, December 1999.
[250] L. Feng, H. J. Lu, J. X. Yu, and J. Han. Mining inter-transaction associations with
templates. In Proc, of the 8th IntI. Conf. on Inforrnation and Knowled,ge Managemept,
pages 225-233, Kansas City Missouri, Nov 1999.
[251] A. A. Freitas. Understanding the crucial differences between classification and discov-
ery of association rules a position paper. SIGKDD Erplorations,2(l):6549, 2000.
12521 J. H. Friedman and N. L Fisher. Bump hunting in high-dimensional data. Statisti.cs
aniL Computing, 9(2):123-143, April 1999.
[253] T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Mining Optimized Asso-
ciation Rules for Numeric Attributes. In Proc. of the 15th Symp. on Principles of
Database Systems, pages 182 191, Montreal, Canada, June 1996.
12541D. Gunopulos, R. Khardon, H. Mannila, and H. Toivonen. Data Mining, Hypergraph
TYansversals, and Machine Learning. In Proc. of the 16th Sgmp. on Princi,ples of
Database Systems, pages 209-216, Tucson, AZ, May 1997.
[255] E.-H. Han, G. Karypis, and V. Kumar. Min-Apriori: An Algorithm for Finding As-
sociation Rules in Data with Continuous Attributes. http://www.cs.umn.edu/-han,
L997.
[256] E.-H. Han, G. Karypis, and V. Kumar. Scalable Parallel Data Mining for Association
Rules. In Proc. of 1997 ACM-SIGMOD IntI. Conf. on Management of Data, pages
277-288, Tucson, AZ, May 1997.
Bibliography 399
1274] M. Kuramochi and G. Karypis. Frequent Subgraph Discovery. In Proc. of the 2001
IEEE Intl. Conf. on Data Mi,ning, pages 313-320, San Jose, CA, November 2001.
1275] W. Lee, S. J. Stolfo, and K. W. Mok. Adaptive Intrusion Detection: A Data Mining
Approach. Artificial Intelligence Reu'iew,14(6) :533-567, 2000.
1276] W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on
Multiple Class-association Rules. In Proc. of the 2001 IEEE IntI. Conf. on Data
M'ining, pages 369 376, San Jose, CA, 2001.
12771B. Liu, W. Hsu, and S. Chen. Using General Impressions to Analyze Discovered
Classification Rules. In Proc. of the Srd Intl. Conf. on Knowledge Discouery and Data
Mining, pages 31-36, Newport Beach, CA, August 1997.
12781B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining.
In Proc. of the lth IntI. Conf. on Knowledge D'iscouery and, Data M'ini,ng, pages 80-86,
New York, NY, August 1998.
1279] B. Liu, W. Hsu, and Y. Ma. Mining association rules with multiple minimum supports.
In Proc. of the Sth Intl. Conf. on Knowledge Discouerg and Data Mining, pages 125
134, San Diego, CA, August 1999.
1280] B. Liu, W. Hsu, and Y. Ma. Pruning and Summarizing the Discovered Associations. In
Proc. of theSthIntI. Conf. onKnowledgeDiscoueryandDataMining, pages125 134,
San Diego, CA, August 1999.
1281] A. Marcus, J. L Maletic, and K.-I. Lin. Ordinal association rules for error identifi-
cation in data sets. In Proc. of the 10th Intl. Conf. on Inforrnation and, Knowledge
Management, pages 589-591, Atlanta, GA, October 2001.
[282] N. Megiddo and R. Srikant. Discovering Predictive Association Rules. In Proc. of the
Ith Intl. Conf. on Knowled,ge Discouery and Data Min'ing, pages 274-278, New York,
August 1998.
[283] R. Meo, G. Psaila, and S. Ceri. A New SQL-like Operator for Mining Association
Rules. In Proc. of the 22nd VLDB Conf., pages I22 133, Bombay, India, 1-996.
[284j R. J. Miller and Y. Yang. Association Rules over Interval Data. In Proc. of 1997
ACM-SIGMOD Intl. Conf. on Management of Data, pages 452-461, Tucson, AZ, May
1997.
[285] Y. Morimoto, T. Fukuda, H. Matsuzawa, T. Tokuyama, and K. Yoda. Algorithms for
mining association rules for binary segmentations of huge categorical databases. In
Proc. of the 2lth VLDB Conf., pages 380-391, New York, August 1998.
[286] F. Mosteller. Association and Estimation in Contingency Tables. Journal of the Amer-
ican Statistical Association, 63:1-28, 1968.
12871 A. Mueller. Fast sequential and parallel algorithms for association rule mining: A
comparison. Technical Report CS-TR-3515, University of Maryland, August 1995.
[288] R. T. Ng, L. V. S. Lakshmanan, J. Han, and A. Pang. Exploratory Mining and Pruning
Optimizations of Constrained Association Rules. In Proc. of 1998 ACM-SIGMOD IntI.
Conf. on Management of Data, pages 13-24, Seattle, WA, June 1998.
1289] E. Omiecinski. Alternative Interest Measures for Mining Associations in Databases.
IEEE Ttans. on Knowledge and" Data Engineering, 15(1):57-69, January/February
2003.
[290] B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic Association Rules. In Proc. of
the llth IntI. Conf. on Data Eng., pages 412 42I, Orlando, FL, February 1998.
[291] A. Ozgur, P. N. Tan, and V. Kumar. RBA: An Integrated Framework for Regression
based on Association Rules. In Proc. of the SIAM IntI. Conf. on Data M'ining, pages
2 M 2 7 , O r l a n d o ,F L , A p r i l 2 0 0 4 .
Bibliography 4OL
1292] J. S. Park, M.-S. Chen, and P. S. Yu. An efiective hash-based algorithm for mining
association rrles. SIGMOD Record,25(2):175 186, 1995.
[293] S. Parthasarathy and M. Coatney. Efficient Discovery of Common Substructures in
Macromolecules. In Proc. of the 2002 IEEE IntI. Conf. on Data M'ining, pages 362-369,
Maebashi City, Japan, December 2002.
[294] N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets
for association rules. In Proc. of the 7th Intl. Conf. on Database Theory 0CDT'99),
pages 398 416, Jerusalem, Israel, January 1999.
[295] J. Pei, J. Han, H. J. Lu, S. Nishio, and S. Tang. H-Mine: Hyper-Structure Mining of
Frequent Patterns in Large Databases. In Proc. of the 2001 IEEE Intl. Conf. on Data
M'ining, pages 441-448, San Jose, CA, November 2001.
[296] J. Pei, J. Han, B. Mortazavi-Asl, and H. Zhtt. Mining Access Patterns Efficiently from
Web Logs. In Proc. of the lth Pacific-Asia Conf. on Knowledge Discouery and" Data
Mining, pages 396-407, Kyoto, Japan, April 2000.
1297] G. Piatetsky-Shapiro. Discovery, Analysis and Presentation of Strong Rules. In
G. Piatetsky-Shapiro and W. Frawley, editors, Knowledge Discouery in Databases,
pages 229-248. MIT Press, Cambridge, MA, 1991.
[298] C. Potter, S. Klooster, M. Steinbach, P. N. Tan, V. Kumar, S. Shekhar, and C. Car-
valho. Understanding Global Teleconnections of Climate to Regional Model Estimates
of Amazon Ecosystem Carbon Fluxes. Global Change Biology, 70(5):693-703, 20A4.
[299] C. Potter, S. Klooster, M. Steinbach, P. N. Tan, V. Kumar, S. Shekhar, R. Myneni,
and R. Nemani. Global Teleconnections of Ocean Climate to Terrestrial Carbon Flux.
J. Geophysical Research, 108(D17), 2003.
1300] G. D. Ramkumar, S. Ranka, and S. Tsur. Weighted Association Rules: Model and
Algorithm. http: //www.cs.ucla.edu/ "c zdemof tsw f , 1997.
13o1lS . Sarawagi, S. Thomas, and R. Agrawal. Integrating Mining with Relational Database
Systems: Alternatives and Implications. In Proc. of 1998 ACM-SIGMOD IntI. Conf.
on Management of Data, pages 343-354, Seattle, WA, 1998.
[302] K. Satou, G. Shibayama, T, Ono, Y. Yamamura, E. Furuichi, S. Kuhara, and T. Takagi.
Finding Association Rules on Heterogeneous Genome Data. In Proc. of the Pacific
Symp. on Biocomputing, pages 397-408, Hawaii, January 1997.
[303] A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining associ-
ation rules in large databases. In Proc. of the 21st Int. Conf. on Very Large Databases
(VLDB'gs), pages 432-444, Z:uicln, Switzerland, September 1995.
1304] A. Savasere, E. Omiecinski, and S. Navathe. Mining for Strong Negative Associations
in a Large Database of Customer Tbansactions. In Proc. of the llth Intl. Conf. on
Data Engineering, pages 494 502, Orlando, Florida, February 1998.
[305] M. Seno and G. Karypis. LPMiner: An Algorithm for Finding FYequent Itemsets Using
Length-Decreasing Support Constraint. In Proc. of the 2001 IEEE Intl. Conf. on Data
Min'ing, pages 505-512, San Jose, CA, November 2001.
[306] T. Shintani and M. Kitsuregawa. Hash based parallel algorithms for mining association
rules. In Proc of the lth IntI. Conf . on Parallel and, Di;stributed, Info. Sgstems, pages
19-30, Miami Beach, FL, December 1996.
[307] A. Silberschatz and A. Tirzhilin. What makes patterns interesting in knowledge discov-
ery systems. IEEE Trans. on Knowledge and Data Engineering,8(6):970-974, 1996.
[308] C. Silverstein, S. Brin, and R. Motwani. Beyond market baskets: Generalizing associ-
ation rules to dependence rules. Data Mining and Knowledge D'iscouery, 2(1):39-68,
1998.
4O2 Chapter 6 Association Analysis
[325] K. Wang, S. H. Tay, and B. Liu. Interestingness-Based Interval Merger for Numeric
Association Rules. In Proc. of the lth IntL Conf. on Knowledge Discouerg and Data
Min'ing, pages 121-128, New York, NY, August 1998.
[326] G. I. Webb. Preliminary investigations into statistically valid exploratory rule dis-
covery. In Proc. of the Australasian Data Mi,ni,ng Workshop (AusDM1?), Canberra,
Australia, December 2003.
[327] H. Xiong, X. He, C. Ding, Y.Zhang, V. Kumar, and S. R. Holbrook. Identification
of Functional Modules in Protein Complexes via Hyperclique Pattern Discovery. In
Proc. of the Pacif,c Sgmposium on Biocomputing, (PSB 2005),Mad, January 2005.
13281H. Xiong, S. Shekhar, P. N. Tan, and V. Kumar. Exploiting a Support-based Upper
Bound of Pearson's Correlation Coefficient for Efficiently Identifying Strongly Corre-
lated Pairs. In Proc. of the 10th IntI. Conf. on Knowled,ge Di,scouery and Data Mining,
pages 334 343, Seattle, WA, August 2004.
[329] H. Xiong, M. Steinbach, P. N. Tan, and V. Kumar. HICAP: Hierarchial Clustering
with Pattern Preservation. In Proc. of the SIAM Intl. Conf. on Data Mining, pages
279 290, Orlando, FL, April 2004.
[330] H. Xiong, P. N. Tan, and V. Kumar. Mining Strong Affinity Association Patterns in
Data Sets with Skewed Support Distribution. In Proc. of the 2003 IEEE IntI. Conf.
on Data M'in'ing, pages 387 394, Melbourne, FL,2OO3.
[331] X. Yan and J. Han. gSpan: Graph-based Substructure Pattern Mining. ln Proc. of
the 2002 IEEE Intl. Conf. on Data Mining, pages 72I 724, Maebashi City, Japan,
December 2002.
1332] C. Yang, U. M. Fayyad, and P. S. Bradley. Efficient discovery of error-tolerant frequent
itemsets in high dimensions. In Proc. of the 7th Intl. Conf. on Knowled,ge Discouery
and, Data M'in'ing, pages 194 203, San Francisco, CA, August 2001.
f333] M. J. Zaki. Parallel and Distributed Association Mining: A Survey. IEEE Concurrency,
special issue on Parallel Mechanisrns for Data Min'ing,7(4):14-25, December 1999.
[334] M. J. Zaki. Generating Non-Redundant Association Rules. In Proc. of the 6th IntI.
Conf. on Knowledge Discouery and Data Min'ing, pages 34-43, Boston, MA, August
2000.
[335] M. J. Zaki. Efficiently mining frequent trees in a forest. In Proc. of the 8th IntI.
Conf. on Knowledge Di,scouery and Data Mining, pages 71-80, Edmonton, Canada,
July 2002.
f336] M. J. Zaki and M. Orihara. Theoretical foundations of association rules. In Proc. of
the 1998 ACM SIGMOD Workshop on Research Issues in Data Mining and, Knowledge
Discouery, Seattle, WA, June 1998.
[337] M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. New Algorithms for Fast
Discovery of Association Rules. In Proc. of the Srd Intl. Conf . on Knowled,ge Discouery
and"Data Mining, pages 283-286, Newport Beach, CA, August 1997.
[338] H. Zhang, B. Padmanabhan, and A. T\rzhilin. On the Discovery of Significant Statis-
tical Quantitative Rules. In Proc. of the 10th IntL Conf. on Knowled,ge D'iscouery and
Data Mi.ning, pages 374-383, Seattle, WA, August 2004.
13391 Z. Zhang, Y. Lu, and B. Zhang. An Effective Partioning-Combining Algorithm for
Discovering Quantitative Association Rules. In Proc. of the 1st Paci,fic-Asia Conf. on
Knowled"geDiscouery and, Data Mi,ning, Singapore, 1997.
[340] N. Zhong, Y. Y. Yao, and S. Ohsuga. Peculiarity Oriented Multi-database Mining. In
Proc. of the ?rd European Conf. of Principles and Practice of Knowledge Discouery in
Databases,pages 136-146, Prague, Czech Republic, 1999.
4O4 Chapter 6 Association Analysis
6.10 Exercises
1. For each of the following questions, provide an example of an association rule
from the market basket domain that satisfies the following conditions. Also,
describe whether such rules are subjectively interesting.
Table 6.22. Example of market basket transactions.

Customer ID    Transaction ID    Items Bought
1              0001              {a, d, e}
1              0024              {a, b, c, e}
2              0012              {a, b, d, e}
2              0031              {a, c, d, e}
3              0015              {b, c, e}
3              0022              {b, d, e}
4              0029              {c, d}
4              0040              {a, b, c}
5              0033              {a, d, e}
5              0038              {a, b, e}
(a) Compute the support for itemsets {e}, {b, d}, and {b, d, e} by treating each transaction ID as a market basket.
(b) Use the results in part (a) to compute the confidence for the association rules {b, d} → {e} and {e} → {b, d}. Is confidence a symmetric measure?
(c) Repeat part (a) by treating each customer ID as a market basket. Each item should be treated as a binary variable (1 if an item appears in at least one transaction bought by the customer, and 0 otherwise).
(d) Use the results in part (c) to compute the confidence for the association rules {b, d} → {e} and {e} → {b, d}.
(e) Suppose s1 and c1 are the support and confidence values of an association rule r when treating each transaction ID as a market basket. Also, let s2 and c2 be the support and confidence values of r when treating each customer ID as a market basket. Discuss whether there are any relationships between s1 and s2 or c1 and c2.
3. (a) What is the confidence for the rules ∅ → A and A → ∅?
(b) Let c1, c2, and c3 be the confidence values of the rules {p} → {q}, {p} → {q, r}, and {p, r} → {q}, respectively. If we assume that c1, c2, and c3 have different values, what are the possible relationships that may exist among c1, c2, and c3? Which rule has the lowest confidence?
(c) Repeat the analysis in part (b) assuming that the rules have identical support. Which rule has the highest confidence?
(d) Transitivity: Suppose the confidence of the rules A → B and B → C are larger than some threshold, minconf. Is it possible that A → C has a confidence less than minconf?
(a) A characteristic rule is a rule of the form {p} → {q1, q2, ..., qn}, where the rule antecedent contains only a single item. An itemset of size k can produce up to k characteristic rules. Let ζ be the minimum confidence of all characteristic rules generated from a given itemset:

ζ({p1, p2, ..., pk}) = min[ c({p1} → {p2, p3, ..., pk}), ..., c({pk} → {p1, p2, ..., pk−1}) ]
5. Prove Equation 6.3. (Hint: First, count the number of ways to create an itemset that forms the left-hand side of the rule. Next, for each size-k itemset selected for the left-hand side, count the number of ways to choose the remaining d − k items to form the right-hand side of the rule.)
Table 6.23. Market basket transactions.

Transaction ID    Items Bought
1                 {Milk, Beer, Diapers}
2                 {Bread, Butter, Milk}
3                 {Milk, Diapers, Cookies}
4                 {Bread, Butter, Cookies}
5                 {Beer, Cookies, Diapers}
6                 {Milk, Diapers, Bread, Butter}
7                 {Bread, Butter, Diapers}
8                 {Beer, Diapers}
9                 {Milk, Diapers, Bread, Butter}
10                {Beer, Cookies}
(a) What is the maximum number of association rules that can be extracted
from this data (including rules that have zero support)?
(b) What is the maximum size of frequent itemsets that can be extracted
(assuming minsup > 0)?
(c) Write an expression for the maximum number of size-3 itemsets that can
be derived from this data set.
(d) Find an itemset (of size 2 or larger) that has the largest support.
(e) Find a pair of items, a and b, such that the rules {a} → {b} and {b} → {a} have the same confidence.
{1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4}, {2, 3, 5}, {3, 4, 5}.
Assume that there are only five items in the data set.
Table 6.24. Example of market basket transactions.

Transaction ID    Items Bought
1                 {a, b, d, e}
2                 {b, c, d}
3                 {a, b, d, e}
4                 {a, c, d, e}
5                 {b, c, d, e}
6                 {b, d, e}
7                 {c, d}
8                 {a, b, c}
9                 {a, d, e}
10                {b, d}
data set shown in Table 6.24 with minsup = 30%, i.e., any itemset occurring in fewer than 3 transactions is considered to be infrequent.
(a) Draw an itemset lattice representing the data set given in Table 6.24. Label each node in the lattice with the following letter(s):
o N: If the itemset is not considered to be a candidate itemset by the Apriori algorithm. There are two reasons for an itemset not to
be considered as a candidate itemset: (1) it is not generated at all
during the candidate generation step, or (2) it is generated during
the candidate generation step but is subsequently removed during
the candidate pruning step becauseone of its subsets is found to be
infrequent.
o F: If the candidate itemset is found to be frequent by the Apri'ori
algorithm.
o I: If the candidate itemset is found to be infrequent after support
counting.
(b) What is the percentage of frequent itemsets (with respect to all itemsets
in the lattice)?
(c) What is the pruning ratio of the Apri,ori algorithm on this data set?
(Pruning ratio is defined as the percentage of itemsets not considered
to be a candidate because (1) they are not generated during candidate
generation or (2) they are pruned during the candidate pruning step.)
(d) What is the false alarm rate (i.e, percentage of candidate itemsets that
are found to be infrequent after performing support counting)?
9. The Apriori algorithm uses a hash tree data structure to efficiently count the
support of candidate itemsets. Consider the hash tree for candidate 3-itemsets
shown in Figure 6.32.
Figure 6.32. An example of a hash tree structure.
(a) Given a transaction that contains items {1, 3, 4, 5, 8}, which of the hash tree leaf nodes will be visited when finding the candidates of the transaction?
(b) Use the visited leaf nodes in part (a) to determine the candidate itemsets that are contained in the transaction {1, 3, 4, 5, 8}.
{1, 2, 3}, {1, 2, 6}, {1, 3, 4}, {2, 3, 4}, {2, 4, 5}, {3, 4, 6}, {4, 5, 6}
(a) Construct a hash tree for the above candidate 3-itemsets. Assume the
tree uses a hash function where all odd-numbered items are hashed to
the left child of a node, while the even-numbered items are hashed to the
right child. A candidate k-itemset is inserted into the tree by hashing on
each successive item in the candidate and then following the appropriate
branch of the tree according to the hash value. Once a leaf node is reached,
the candidate is inserted based on one of the following conditions:
Condition 1: If the depth of the leaf node is equal to k (the root is
assumed to be at depth 0), then the candidate is inserted regardless
of the number of itemsets already stored at the node.
Condition 2: If the depth of the leaf node is less than k, then the candi-
date can be inserted as long as the number of itemsets stored at the
node is less than maxsize. Assume maxsize = 2 for this question.
Condition 3: If the depth of the leaf node is less than k and the number
of itemsets stored at the node is equal to maxsize, then the leaf
node is converted into an internal node. New leaf nodes are created
as children of the old leaf node. Candidate itemsets previously stored
in the old leaf node are distributed to the children based on their hash
values. The new candidate is also hashed to its appropriate leaf node.
(b) How many leaf nodes are there in the candidate hash tree? How many
internal nodes are there?
(c) Consider a transaction that contains the following items: {1,2,3,5,6}.
Using the hash tree constructed in part (a), which leaf nodes will be
checked against the transaction? What are the candidate 3-itemsets con-
tained in the transaction?
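For concreteness, the following is a minimal sketch (not from the text) of the insertion procedure described in part (a): candidates are routed by hashing on successive items (odd items to the left child, even items to the right), leaves hold at most maxsize = 2 itemsets unless they sit at depth k, and full leaves are split as in Condition 3. Class and function names are invented.

```python
# Not from the text: a sketch of hash tree construction for Exercise 10(a),
# with odd items hashed to the left child (0), even items to the right (1),
# and maxsize = 2 itemsets per leaf below depth k.
MAXSIZE = 2

class Node:
    def __init__(self):
        self.children = {}   # hash value -> child Node (internal nodes)
        self.itemsets = []   # stored candidates (leaf nodes)

    def is_leaf(self):
        return not self.children

def hash_value(item):
    return 0 if item % 2 == 1 else 1   # odd -> left, even -> right

def insert(node, itemset, depth=0):
    k = len(itemset)
    if node.is_leaf():
        # Condition 1: a leaf at depth k absorbs the candidate regardless.
        # Condition 2: a shallower leaf absorbs it while below MAXSIZE.
        if depth == k or len(node.itemsets) < MAXSIZE:
            node.itemsets.append(itemset)
            return
        # Condition 3: convert the full leaf into an internal node and
        # redistribute its itemsets (plus the new one) among its children.
        stored, node.itemsets = node.itemsets + [itemset], []
        for c in stored:
            route(node, c, depth)
    else:
        route(node, itemset, depth)

def route(node, itemset, depth):
    # Hash on the (depth+1)-th item and descend one level.
    child = node.children.setdefault(hash_value(itemset[depth]), Node())
    insert(child, itemset, depth + 1)

def count_nodes(node):
    if node.is_leaf():
        return 1, 0
    leaves = internals = 0
    for child in node.children.values():
        l, i = count_nodes(child)
        leaves, internals = leaves + l, internals + i
    return leaves, internals + 1

root = Node()
for cand in [(1, 2, 3), (1, 2, 6), (1, 3, 4), (2, 3, 4),
             (2, 4, 5), (3, 4, 6), (4, 5, 6)]:
    insert(root, cand, 0)
print(count_nodes(root))   # (leaf nodes, internal nodes) -- see part (b)
```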
Figure 6.33. An itemset lattice.

11. Given the lattice structure shown in Figure 6.33 and the transactions given in
Table 6.24, label each node with the following letter(s): M if the node is a maximal
frequent itemset, C if it is a closed frequent itemset, N if it is frequent but neither
maximal nor closed, and I if the itemset is infrequent. Assume that the support
threshold is equal to 30%.
12. The original association rule mining formulation uses the support and confi-
dence measures to prune uninteresting rules.
(a) Draw a contingency table for each of the following rules using the trans-
actions shown in Table 6.25.
Table 6.25. Example of market basket transactions.

Transaction ID   Items Bought
1                {a, b, d, e}
2                {b, c, d}
3                {a, b, d, e}
4                {a, c, d, e}
5                {b, c, d, e}
6                {b, d, e}
7                {c, d}
8                {a, b, c}
9                {a, d, e}
10               {b, d}
Rules: {b} → {c}, {a} → {d}, {b} → {d}, {e} → {c}, {c} → {a}.
(b) Use the contingency tables in part (a) to compute and rank the rules in
decreasing order according to the following measures.
i. Support.
ii. Confidence.
iii. Interest(X → Y) = P(X, Y) / (P(X)P(Y)).
iv. IS(X → Y) = P(X, Y) / √(P(X)P(Y)).
v. Klosgen(X → Y) = √P(X, Y) × (P(Y|X) − P(Y)), where P(Y|X) = P(X, Y)/P(X).
vi. Odds ratio(X → Y) = P(X, Y)P(X̄, Ȳ) / (P(X, Ȳ)P(X̄, Y)).
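A small helper (not from the text) can compute these measures from a 2 × 2 contingency table with cells f11 (both X and Y), f10 (X only), f01 (Y only), and f00 (neither). The counts in the example call are one hand-derived reading of Table 6.25 for the rule {b} → {c}; the function names are invented.

```python
# Not from the text: computes the Exercise 12(b) measures for a rule X -> Y
# from a 2x2 contingency table (f11, f10, f01, f00).
import math

def rule_measures(f11, f10, f01, f00):
    n = f11 + f10 + f01 + f00
    p_xy = f11 / n
    p_x, p_y = (f11 + f10) / n, (f11 + f01) / n
    return {
        "support":    p_xy,
        "confidence": p_xy / p_x,
        "interest":   p_xy / (p_x * p_y),
        "IS":         p_xy / math.sqrt(p_x * p_y),
        "Klosgen":    math.sqrt(p_xy) * (p_xy / p_x - p_y),
        "odds ratio": (f11 * f00) / (f10 * f01) if f10 * f01 else float("inf"),
    }

# Hand-derived contingency table for {b} -> {c} in Table 6.25:
# 3 transactions contain both b and c, 4 contain b only, 2 contain c only,
# and 1 contains neither.
print(rule_measures(f11=3, f10=4, f01=2, f00=1))
```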
13. Given the rankings you had obtained in Exercise 12, compute the correlation
between the rankings of confidence and the other five measures. Which measure
is most highly correlated with confidence? Which measure is least correlated
with confidence?
14. Answer the following questions using the data sets shown in Figure 6.34. Note
that each data set contains 1000 items and 10,000 transactions. Dark cells
indicate the presence of items and white cells indicate the absence of items. We
will apply the Apriori algorithm to extract frequent itemsets with minsup =
10% (i.e., itemsets must be contained in at least 1000 transactions).
(a) Which data set(s) will produce the most number of frequent itemsets?
(b) Which data set(s) will produce the fewest number of frequent itemsets?
(c) Which data set(s) will produce the longest frequent itemset?
(d) Which data set(s) will produce frequent itemsets with highest maximum
support?
(e) Which data set(s) will produce frequent itemsets containing items with
widely varying support levels (i.e., items with mixed support, ranging from
less than 20% to more than 70%)?
15. (a) Prove that the φ coefficient is equal to 1 if and only if f11 = f1+ = f+1.
(b) Show that if A and B are independent, then P(A, B) × P(Ā, B̄) = P(A, B̄) ×
P(Ā, B).
(") Showthat Yule's Q and Y coefficients
-
n
\4 : f/rrloo /ro/otl
L/,r/00T /./rl
r, lrfitrf*- tfit;I'1
t'ff 'J* + 'If.of")
are normalized versions of the odds ratio.
(d) Write a simplified expression for the value of each measure shown in Tables
6.11 and 6.12 when the variables are statistically independent.
16. Consider the interestingness measure M = (P(B|A) − P(B)) / (1 − P(B)) for an
association rule A → B.
(a) What is the range of this measure? When does the measure attain its
maximum and minimum values?
(b) How does M behave when P(A, B) is increased while P(A) and P(B)
remain unchanged?
(c) How does M behave when P(A) is increased while P(A, B) and P(B)
remain unchanged?
(d) How does M behave when P(B) is increased while P(A, B) and P(A)
remain unchanged?
(e) Is the measure symmetric under variable permutation?
(f) What is the value of the measure when A and B are statistically indepen-
dent?
(g) Is the measure null-invariant?
(h) Does the measure remain invariant under row or column scaling opera-
tions?
(i) How does the measure behave under the inversion operation?
Figure 6.34. Figures for Exercise 14. (Each panel plots 1000 items against 10,000
transactions; dark cells indicate the presence of items. One of the data sets contains
10% ones and 90% zeros, uniformly distributed.)
17. Suppose we have market basket data consisting of 100 transactions and 20
items. The support for item a is 25%, the support for item b is 90%, and the
support for itemset {a, b} is 20%. Let the support and confidence thresholds
be 10% and 60%, respectively.
(a) Compute the confidence of the association rule {a} → {b}. Is the rule
interesting according to the confidence measure?
(b) Compute the interest measure for the association pattern {a, b}. Describe
the nature of the relationship between item a and item b in terms of the
interest measure.
(c) What conclusions can you draw from the results of parts (a) and (b)?
(d) Prove that if the confidence of the rule {a} → {b} is less than the support
of {b}, then:
i. c({ā} → {b}) > c({a} → {b}),
ii. c({ā} → {b}) > s({b}),
where c(·) denotes the rule confidence and s(·) denotes the support of an
itemset.
18. Table 6.26 shows a 2 x 2 x 2 contingency table for the binary variables A and
B at different values of the control variable C.
Table 6.26. A contingency table.

                 A = 1   A = 0
C = 0   B = 1        0      15
        B = 0       15      30
C = 1   B = 1        5       0
        B = 0        0      15
19. Consider the contingency tables shown in Table 6.27.
(a) For table I, compute the support, the interest measure, and the φ correla-
tion coefficient for the association pattern {A, B}. Also, compute the
confidence of the rules A → B and B → A.
Table 6.27. Contingency tables for Exercise 19.
(b) For table II, compute the support, the interest measure, and the φ correla-
tion coefficient for the association pattern {A, B}. Also, compute the
confidence of the rules A → B and B → A.
(c) What conclusions can you draw from the results of (a) and (b)?
20. Consider the relationship between customers who buy high-definition televisions
and exercise machines as shown in Tables 6.19 and 6.20.
(a) Compute the odds ratios for both tables.
(b) Compute the φ-coefficients for both tables.
(c) Compute the interest factor for both tables.
For each of the measures given above, describe how the direction of association
changes when data is pooled together instead of being stratified.
Association Analysis: Advanced Concepts
The association rule mining formulation described in the previous chapter
assumes that the input data consists of binary attributes called items. The
presence of an item in a transaction is also assumed to be more important than
its absence. As a result, an item is treated as an asymmetric binary attribute
and only frequent patterns are considered interesting.
This chapter extends the formulation to data sets with symmetric binary,
categorical, and continuous attributes. The formulation will also be extended
to incorporate more complex entities such as sequences and graphs. Although
the overall structure of association analysis algorithms remains unchanged, cer-
tain aspects of the algorithms must be modified to handle the non-traditional
entities.
For example, applying association analysis to the Internet survey data shown in
Table 7.1 may produce the rule

{Shop Online = Yes} → {Privacy Concerns = Yes}.

This rule suggests that most Internet users who shop online are concerned
about their personal privacy.
Table 7.1. Internet survey data with categorical attributes.

Gender   Level of      State        Computer   Chat     Shop     Privacy
         Education                  at Home    Online   Online   Concerns
Female   Graduate      Illinois     Yes        Yes      Yes      Yes
Male     College       California   No         No       No       No
Male     Graduate      Michigan     Yes        Yes      Yes      Yes
Female   College       Virginia     No         No       Yes      Yes
Female   Graduate      California   Yes        No       No       Yes
Male     College       Minnesota    Yes        Yes      Yes      Yes
Male     College       Alaska       Yes        Yes      Yes      No
Male     High School   Oregon       Yes        No       No       No
Female   Graduate      Texas        ...        ...      ...      No
...      ...           ...          ...        ...      ...      ...
Table 7.2. Internet survey data after binarizing categorical and symmetric binary attributes.

Male   Female   Education    Education   ...   Privacy   Privacy
                = Graduate   = College         = Yes     = No
0      1        1            0           ...   1         0
1      0        0            1           ...   0         1
1      0        1            0           ...   1         0
0      1        0            1           ...   1         0
0      1        1            0           ...   1         0
1      0        0            1           ...   1         0
1      0        0            1           ...   0         1
1      0        0            0           ...   0         1
0      1        ...          ...         ...   ...       ...
...    ...      ...          ...         ...   ...       ...
Table 7.3. Internet survey data with continuous attributes.

Gender   Age   Annual   No. of Hours Spent   No. of Email   Privacy
               Income   Online per Week      Accounts       Concern
Female   26    90K      20                   4              Yes
Male     51    135K     10                   2              No
Male     29    80K      10                   3              Yes
Female   45    120K     15                   3              Yes
Female   31    95K      20                   5              Yes
Male     25    55K      25                   5              Yes
Male     37    100K     10                   1              No
Male     47    ...      8                    2              No
Female   26    65K      12                   1              ...
...      ...   ...      ...                  ...            ...
For example, the Age attribute can be divided into the following intervals:

Age ∈ [12, 16), Age ∈ [16, 20), Age ∈ [20, 24), ..., Age ∈ [56, 60).
Table 7.4. Internet survey data after binarizing categorical and continuous attributes.

Male   Female   Age    Age         Age         ...   Privacy   Privacy
                < 13   ∈ [13,21)   ∈ [21,30)         = Yes     = No
0      1        0      0           1           ...   1         0
1      0        0      0           0           ...   0         1
1      0        0      0           1           ...   1         0
0      1        0      0           0           ...   1         0
0      1        0      0           0           ...   1         0
1      0        0      0           1           ...   1         0
1      0        0      0           0           ...   0         1
1      0        0      0           0           ...   0         1
0      1        0      0           1           ...   ...       ...
...    ...      ...    ...         ...         ...   ...       ...
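The transformation behind Tables 7.2 and 7.4 can be sketched in a few lines of pandas (not from the text): each categorical attribute becomes one asymmetric binary item per attribute–value pair, and each continuous attribute is first discretized into intervals and then binarized the same way. The three sample rows are taken from Table 7.3; everything else is illustrative.

```python
# A minimal sketch (not from the text) using pandas; the first three rows of
# Table 7.3 are used for Gender, Age, and Privacy, and the interval edges
# follow the 4-year-wide Age intervals mentioned above.
import pandas as pd

survey = pd.DataFrame({
    "Gender":  ["Female", "Male", "Male"],
    "Age":     [26, 51, 29],
    "Privacy": ["Yes", "No", "Yes"],
})

# Discretize the continuous attribute into intervals [12,16), [16,20), ...
survey["Age"] = pd.cut(survey["Age"], bins=range(12, 61, 4), right=False)

# One asymmetric 0/1 item per attribute-value pair,
# e.g. Gender_Female, Age_[24, 28), Privacy_Yes.
items = pd.get_dummies(survey.astype(str))
print(items)
```

In practice the interval edges would be chosen with the width trade-offs discussed below in mind.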
These rules suggest that most of the users from the age group 16–24 often
participate in online chatting, while those from the age group 44–60 are less
likely to chat online. In this example, we consider a rule to be interesting only
if its support (s) exceeds 5% and its confidence (c) exceeds 65%. One of the
problems encountered when discretizing the Age attribute is how to determine
the interval width.
1. If the interval is too wide, then we may lose some patterns because of
their lack of confidence. For example, when the interval width is 24
years, R1 and R2 are replaced by the following rules:

R′1: Age ∈ [12, 36) → Chat Online = Yes  (s = 30%, c = 57.7%).
R′2: Age ∈ [36, 60) → Chat Online = No  (s = 28%, c = 58.3%).
Despite their higher supports, the wider intervals have caused the con-
fidence for both rules to drop below the minimum confidence threshold.
As a result, both patterns are lost after discretization.
2. If the interval is too narrow, then we may lose some patterns because of
their lack of support. For example, if the interval width is 4 years, then
R1 is broken up into the following two subrules:
Since the supports for the subrules are less than the minimum support
threshold, R1 is lost after discretization. Similarly, the rule R2, which
is broken up into four subrules, will also be lost because the support of
each subrule is less than the minimum support threshold.
3. If the interval width is 8 years, then the rule R2 is broken up into the
following two subrules:

R2(1): Age ∈ [44, 52) → Chat Online = No  (s = 8.4%, c = 70%).
R2(2): Age ∈ [52, 60) → Chat Online = No  (s = 8.4%, c = 70%).

Since R2(1) and R2(2) have sufficient support and confidence, R2 can be
recovered by aggregating both subrules. Meanwhile, R1 is broken up
into the following two subrules:
{Annual Income > $100K, Shop Online = Yes} → Age: μ = 38.

The rule states that the average age of Internet users whose annual income
exceeds $100K and who shop online regularly is 38 years old.
Rule Generation
Rule Validation
Z = (μ − μ′ − Δ) / √(s1²/n1 + s2²/n2),                    (7.1)

where μ and μ′ are the means of the target attribute t among the n1 transactions
that support the rule antecedent A and the n2 transactions that do not, Δ is the
user-specified minimum difference between the two means, s1 is the standard
deviation for t among transactions that support A, and s2 is the standard
deviation for t among transactions that
do not support A. Under the null hypothesis, Z has a standard normal distri-
bution with mean 0 and variance 1. The value of Z computed using Equation
7.1 is then compared against a critical value, Zα, which is a threshold that
depends on the desired confidence level. If Z > Zα, then the null hypothesis
is rejected and we may conclude that the quantitative association rule is in-
teresting. Otherwise, there is not enough evidence in the data to show that
the difference in mean is statistically significant.
Example 7.1. Consider the quantitative association rule

{Income > 100K, Shop Online = Yes} → Age: μ = 38.

Suppose there are 50 Internet users who supported the rule antecedent. The
standard deviation of their ages is 3.5. On the other hand, the average age of
the 200 users who do not support the rule antecedent is 30 and their standard
deviation is 6.5. Assume that a quantitative association rule is considered
interesting only if the difference between μ and μ′ is more than 5 years. Using
Equation 7.1 we obtain

Z = (38 − 30 − 5) / √(3.5²/50 + 6.5²/200) = 4.4414.

For a one-sided hypothesis test at a 95% confidence level, the critical value
for rejecting the null hypothesis is 1.64. Since Z > 1.64, the null hypothesis
can be rejected. We therefore conclude that the quantitative association rule
is interesting because the difference between the average ages of users who
support and do not support the rule antecedent is more than 5 years.
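The test in Equation 7.1 is easy to reproduce; the following sketch (not from the text, parameter names invented) recomputes the Z value of Example 7.1.

```python
import math

def z_statistic(mu, s1, n1, mu_prime, s2, n2, delta):
    # mu, s1, n1: mean, standard deviation, and count of the target
    # attribute among transactions that support the rule antecedent A;
    # mu_prime, s2, n2: the same quantities for transactions that do not
    # support A; delta: the minimum interesting difference in means.
    return (mu - mu_prime - delta) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Numbers from Example 7.1; the result exceeds the one-sided 95% critical
# value of 1.64, so the rule is judged interesting.
print(round(z_statistic(38, 3.5, 50, 30, 6.5, 200, 5), 4))  # 4.4414
```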
important reason is to ensure that the data is on the same scale so that sets
of words that vary in the same way have similar support values.
In text mining, analysts are more interested in finding associations between
words (e.g., data and mining) instead of associations between ranges of word
frequencies (e.g., data ∈ [1, 4] and mining ∈ [2, 3]). One way to do this is
to transform the data into a 0/1 matrix, where the entry is 1 if the normal-
ized frequency count exceeds some threshold t, and 0 otherwise. While this
approach allows analysts to apply existing frequent itemset generation algo-
rithms to the binarized data set, finding the right threshold for binarization
can be quite tricky. If the threshold is set too high, it is possible to miss some
interesting associations. Conversely, if the threshold is set too low, there is a
potential for generating a large number of spurious associations.
This section presents another methodology for finding word associations
known as min-Apriori. Analogous to traditional association analysis, an item-
set is considered to be a collection of words, while its support measures the
degree of association among the words. The support of an itemset can be
computed based on the normalized frequency of its corresponding words. For
example, consider the document d1 shown in Table 7.6. The normalized fre-
quencies for word1 and word2 in this document are 0.3 and 0.6, respectively.
One might think that a reasonable approach to compute the association be-
tween both words is to take the average value of their normalized frequencies,
i.e., (0.3 + 0.6)/2 = 0.45. The support of an itemset can then be computed by
summing up the averaged normalized frequencies across all the documents:

s({word1, word2}) = (0.3 + 0.6)/2 + ··· = 1.
This result is by no means an accident. Because every word frequency is
normalized to 1, averaging the normalized frequencies makes the support for
every itemset equal to 1. All itemsets are therefore frequent using this ap-
proach, making it useless for identifying interesting patterns.
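min-Apriori avoids this degeneracy by taking, within each document, the minimum of the normalized frequencies of the words in the itemset and summing these minima across documents. A sketch of that support measure is shown below; the first document uses the word1/word2 values quoted above, while the remaining rows are made up for illustration.

```python
# Not from the text: a sketch of the min-based support used by min-Apriori.
# Each document is a dict of normalized word frequencies; each word's
# frequencies sum to 1 across the collection, as described above.
def min_support(itemset, documents):
    """Sum, over documents, of the minimum normalized frequency among
    the words in the itemset."""
    return sum(min(doc.get(w, 0.0) for w in itemset) for doc in documents)

docs = [
    {"word1": 0.3, "word2": 0.6},   # d1, values quoted in the text
    {"word1": 0.4, "word2": 0.2},   # remaining rows are invented
    {"word1": 0.3, "word2": 0.2},
]
print(min_support({"word1", "word2"}, docs))  # 0.3 + 0.2 + 0.2 = 0.7
```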
The support measure defined in min-Apriori has the following desired prop-
erties, which make it suitable for finding word associations in documents:
Figure 7.2. Example of an item taxonomy (with top-level categories Food and
Electronics).
milk is the parent of skim milk because there is a directed edge from the
node milk to the node skim milk. X̂ is called an ancestor of X (and X a
descendant of X̂) if there is a path from node X̂ to node X in the directed
acyclic graph. In the diagram shown in Figure 7.2, food is an ancestor of skim
milk and AC adaptor is a descendant of electronics.
The main advantages of incorporating concept hierarchies into association
analysis are as follows:
1. Items at the lower levels of a hierarchy may not have enough support to
appear in any frequent itemsets. For example, although the sale of AC
adaptors and docking stations may be low, the sale of laptop accessories,
which is their parent node in the concept hierarchy, may be high. Unless
the concept hierarchy is used, there is a potential to miss interesting
patterns involving the laptop accessories.
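One common way to exploit a concept hierarchy, consistent with the point above, is to extend each transaction with the ancestors of its items and then mine the extended transactions with a standard frequent itemset algorithm. A minimal sketch follows; the toy hierarchy is invented, loosely mirroring Figure 7.2.

```python
# Not from the text: extend each transaction with the ancestors of its
# items so that higher-level patterns (e.g., laptop accessories) can become
# frequent. The parent map below is a made-up fragment of a taxonomy.
PARENT = {
    "skim milk": "milk", "2% milk": "milk", "milk": "food",
    "AC adaptor": "laptop accessory", "docking station": "laptop accessory",
    "laptop accessory": "electronics",
}

def ancestors(item):
    out = []
    while item in PARENT:
        item = PARENT[item]
        out.append(item)
    return out

def extend(transaction):
    extended = set(transaction)
    for item in transaction:
        extended.update(ancestors(item))
    return extended

print(extend({"skim milk", "AC adaptor"}))
```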
Figure 7.3. Example of a sequence database (timelines of events for objects A, B,
and C, together with the corresponding sequences).
for object A. By sorting all the events associated with object A in increasing
order of their timestamps, a sequence for object A is obtained, as shown on
the right-hand side of Figure 7.3.
Generally speaking, a sequence is an ordered list of elements. A sequence
can be denoted as s = ⟨e1 e2 e3 ... en⟩, where each element ej is a collection of
one or more events, i.e., ej = {i1, i2, ..., ik}. The following is a list of examples
of sequences:
Figure 7.4. Examples of elements and events in sequence data sets.

Figure 7.5. Sequential patterns derived from a data set that contains five data sequences.
1-sequences: ⟨i1⟩, ⟨i2⟩, ..., ⟨in⟩
2-sequences: ⟨{i1, i2}⟩, ⟨{i1, i3}⟩, ..., ⟨{in−1, in}⟩,
             ⟨{i1}{i1}⟩, ⟨{i1}{i2}⟩, ..., ⟨{in−1}{in}⟩
3-sequences: ⟨{i1, i2, i3}⟩, ⟨{i1, i2, i4}⟩, ..., ⟨{i1, i2}{i1}⟩, ...,
             ⟨{i1}{i1, i2}⟩, ..., ⟨{i1}{i1}{i1}⟩, ..., ⟨{in}{in}{in}⟩
1. An item can appear at most once in an itemset, but an event can appear
more than once in a sequence. Given a pair of items, i1 and i2, only one
candidate 2-itemset, {i1, i2}, can be generated. On the other hand, there
are many candidate 2-sequences, such as ⟨{i1, i2}⟩, ⟨{i1}{i2}⟩,
⟨{i2}{i1}⟩, and ⟨{i1}{i1}⟩, that can be generated.
2. Order matters in sequences, but not for itemsets. For example, {1, 2} and
{2, 1} refer to the same itemset, whereas ⟨{i1}{i2}⟩ and ⟨{i2}{i1}⟩
correspond to different sequences, and thus must be generated separately.
The Apriori principle holds for sequential data because any data sequence
that contains a particular k-sequence must also contain all of its (k − 1)-
subsequences. An Apriori-like algorithm can be developed to extract sequen-
tial patterns from a sequence data set. The basic structure of the algorithm
is shown in Algorithm 7.1.
Notice that the structure of the algorithm is almost identical to Algorithm
6.1 presented in the previous chapter. The algorithm iteratively generates
new candidate k-sequences, prunes candidates whose (k − 1)-sequences
are infrequent, and then counts the supports of the remaining candidates to
identify the sequential patterns. The detailed aspects of these steps are given
next.
for sequences.The criteria for merging sequencesare stated in the form of the
following procedure.
A sequence s(1) is merged with another sequence s(2) only if the subsequence
obtained by dropping the first event in s(1) is identical to the subsequence
obtained by dropping the last event in s(2). The resulting candidate is the
sequence5(1), concatenated with the last event from s(2). The last event from
s(2) can either be merged into the same element as the last event in s(1) or
different elements depending on the following conditions:
1. If the last two events in s(2) belong to the same element, then the last event
in s(2) is part of the last element in s(1) in the merged sequence.
2. If the last two events in s(2) belong to different elements, then the last event
in s(2) becomes a separate element appended to the end of s(1) in the
merged sequence.
Figure 7.6. Example of the candidate generation and pruning steps of a sequential
pattern mining algorithm.
since events 3 and 4 belong to the same element of the second sequence, they
are combined into the same element in the merged sequence. Finally, the se-
quences ⟨{1}{2}{3}⟩ and ⟨{1}{2,5}⟩ do not have to be merged because remov-
ing the first event from the first sequence does not give the same subsequence
as removing the last event from the second sequence. Although ⟨{1}{2,5}{3}⟩
is a viable candidate, it is generated by merging a different pair of sequences,
⟨{1}{2,5}⟩ and ⟨{2,5}{3}⟩. This example shows that the sequence merging
procedure is complete; i.e., it will not miss any viable candidate, while at the
same time, it avoids generating duplicate candidate sequences.
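A compact sketch of this merge rule (not the book's code; sequences are represented as tuples of tuples, and all names are invented) is shown below. The two example calls mirror the two cases described above: the last event of s(2) is appended either as a new element or into the last element of s(1).

```python
# Not from the text: a sketch of the (k-1)-sequence merge rule stated above.
# A sequence is a tuple of elements; each element is a tuple of events.
def drop_first_event(seq):
    head, *rest = seq
    return tuple(rest) if len(head) == 1 else (head[1:],) + tuple(rest)

def drop_last_event(seq):
    *rest, tail = seq
    return tuple(rest) if len(tail) == 1 else tuple(rest) + (tail[:-1],)

def merge(s1, s2):
    if drop_first_event(s1) != drop_last_event(s2):
        return None                      # this pair cannot be merged
    last = s2[-1][-1]                    # last event of s2
    if len(s2[-1]) > 1:
        # Last two events of s2 share an element: append the event to
        # the last element of s1.
        return s1[:-1] + (s1[-1] + (last,),)
    # Otherwise the event becomes a new element at the end of s1.
    return s1 + ((last,),)

# <{1}{2}{3}> + <{2}{3}{4}> -> <{1}{2}{3}{4}>
print(merge(((1,), (2,), (3,)), ((2,), (3,), (4,))))
# <{1}{2}{3}> + <{2}{3,4}>  -> <{1}{2}{3,4}>
print(merge(((1,), (2,), (3,)), ((2,), (3, 4))))
```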
Figure 7.7. Timing constraints of a sequential pattern: u(si+1) − l(si) ≤ maxgap,
l(si+1) − u(si) > mingap, and window size ws, where l(si) and u(si) denote the
earliest and latest timestamps of the events in element si.
The maxspan constraint specifies the maximum allowed time difference be-
tween the latest and the earliest occurrences of events in the entire sequence.
For example, suppose the following data sequences contain events that oc-
cur at consecutive time stamps (1, 2, 3, ...). Assuming that maxspan = 3,
the following table contains sequential patterns that are supported and not
supported by a given data sequence.
In general, the longer the maxspan, the more likely it is to detect a pattern
in a data sequence. However, a longer maxspan can also capture spurious pat-
terns because it increases the chance for two unrelated events to be temporally
related. In addition, the pattern may involve events that are already obsolete.
The maxspan constraint affects the support counting step of sequential
pattern discovery algorithms. As shown in the preceding examples, some data
sequences no longer support a candidate pattern when the maxspan constraint
is imposed. If we simply apply Algorithm 7.1, the support counts for some
patterns may be overestimated. To avoid this problem, the algorithm must be
modified to ignore cases where the interval between the first and last occur-
rences of events in a given pattern is greater than maxspan.
Timing constraints can also be specified to restrict the time difference be-
tween two consecutive elements of a sequence. If the maximum time difference
(maxgap) is one week, then events in one element must occur within a week's
time of the events occurring in the previous element. If the minimum time dif-
ference (mingap) is zero, then events in one element must occur immediately
after the events occurring in the previous element. The following table shows
examples of patterns that pass or fail the maxgap and mingap constraints,
assuming that maxgap = 3 and mingap = 1.
As with maxspan, these constraints will affect the support counting step
of sequential pattern discovery algorithms because some data sequences no
longer support a candidate pattern when mingap and maxgap constraints are
present. These algorithms must be modified to ensure that the timing con-
straints are not violated when counting the support of a pattern. Otherwise,
some infrequent sequences may mistakenly be declared as frequent patterns.
A side effect of using the maxgap constraint is that the Apriori principle
might be violated. To illustrate this, consider the data set shown in Figure
7.5. Without mingap or maxgap constraints, the support for ⟨{2}{5}⟩ and
⟨{2}{3}{5}⟩ are both equal to 60%. However, if mingap = 0 and maxgap = 1,
then the support for ⟨{2}{5}⟩ reduces to 40%, while the support for ⟨{2}{3}{5}⟩
is still 60%. In other words, support has increased when the number of events
in a sequence increases, which contradicts the Apriori principle. The viola-
tion occurs because object D does not support the pattern ⟨{2}{5}⟩ since
the time gap between events 2 and 5 is greater than maxgap. This problem
can be avoided by using the concept of a contiguous subsequence.
Definition 7.2 (Contiguous Subsequence). A sequence s is a contiguous
subsequence of w = ⟨e1 e2 ... ek⟩ if any one of the following conditions hold:
Finally, events within an element sj do not have to occur at the same time. A
window size threshold (ws) can be defined to specify the maximum allowed
time difference between the latest and earliest occurrences of events in any
element of a sequential pattern. A window size of 0 means all events in the
same element of a pattern must occur simultaneously.
The following example uses ws = 2 to determine whether a data se-
quence supports a given sequence (assuming mingap = 0, maxgap = 3, and
maxspan = ∞).
In the last example, although the pattern ⟨{1,3,4}{6,7,8}⟩ satisfies the win-
dow size constraint, it violates the maxgap constraint because the maximum
time difference between events in the two elements is 5 units. The window
size constraint also affects the support counting step of sequential pattern dis-
covery algorithms. If Algorithm 7.1 is applied without imposing the window
size constraint, the support counts for some of the candidate patterns might
be underestimated, and thus some interesting patterns may be lost.
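The following sketch (not the book's algorithm) pulls the four constraints together: a data sequence is represented as a dictionary mapping timestamps to sets of events, and a backtracking search looks for element occurrence windows that satisfy ws, mingap, maxgap, and maxspan as defined in Figure 7.7. The example data sequence is invented; it merely reproduces the kind of maxgap-induced Apriori violation discussed earlier.

```python
# Not the book's algorithm: a backtracking containment check under the
# mingap, maxgap, window size (ws), and maxspan constraints. A data
# sequence is a dict {timestamp: set of events}; a pattern is a list of
# event sets. All names are invented.
def element_windows(data, element, ws):
    # All (l, u) timestamp windows of width <= ws whose events cover
    # every event of the pattern element.
    times = sorted(data)
    windows = []
    for l in times:
        for u in (t for t in times if l <= t <= l + ws):
            covered = set().union(*(data[t] for t in times if l <= t <= u))
            if element <= covered:
                windows.append((l, u))
    return windows

def supports(data, pattern, mingap=0, maxgap=float("inf"),
             ws=0, maxspan=float("inf")):
    def extend(i, prev, first_l):
        if i == len(pattern):
            return True
        for l, u in element_windows(data, pattern[i], ws):
            if prev is not None:
                prev_l, prev_u = prev
                if l - prev_u <= mingap or u - prev_l > maxgap:
                    continue                      # mingap/maxgap violated
            start = first_l if first_l is not None else l
            if u - start > maxspan:
                continue                          # maxspan violated
            if extend(i + 1, (l, u), start):
                return True
        return False
    return extend(0, None, None)

# An invented data sequence with events at consecutive timestamps.
# With maxgap = 1, <{2}{5}> is not supported but <{2}{3}{5}> is -- the
# kind of Apriori-principle violation discussed earlier.
data = {1: {2}, 2: {3}, 3: {5}}
print(supports(data, [{2}, {5}], maxgap=1))        # False
print(supports(data, [{2}, {3}, {5}], maxgap=1))   # True
```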
There are several methods available for counting the support of a candidate
k-sequence from a database of sequences. For illustrative purposes, consider
the problem of counting the support for the sequence ⟨{p}{q}⟩, as shown in Figure
7.8. Assume that ws = 0, mingap = 0, maxgap = 1, and maxspan = 2.
Figure 7.8. Comparing different support counting methods for the sequence ⟨{p}{q}⟩
on one object's timeline (COBJ, CWIN, CMINWIN, CDIST_O = 8, and CDIST = 5).
CMINWIN: the number of minimal windows of occurrence. A minimal
window of occurrence is the time interval such that the sequence occurs in that time
interval, but it does not occur in any of the proper subintervals of it. This
definition can be considered as a restrictive version of CWIN, because
its effect is to shrink and collapse some of the windows that are counted
by CWIN. For example, the sequence ⟨{p}{q}⟩ has four minimal window
occurrences: (1) the pair (p: t = 2, q: t = 3), (2) the pair (p: t = 3, q:
t = 4), (3) the pair (p: t = 5, q: t = 6), and (4) the pair (p: t = 6, q:
t = 7). The occurrence of event p at t = 1 and event q at t = 3 is not a
minimal window occurrence because it contains a smaller window with
(p: t = 2, q: t = 3), which is indeed a minimal window of occurrence.
One final point regarding the counting methods is the need to determine the
baseline for computing the support measure. For frequent itemset mining, the
baseline is given by the total number of transactions. For sequential pattern
mining, the baseline depends on the counting method used. For the COBJ
method, the total number of objects in the input data can be used as the
baseline. For the CWIN and CMINWIN methods, the baseline is given by the
sum of the number of time windows possible in all objects. For methods such
as CDIST and CDIST_O, the baseline is given by the sum of the number of
distinct timestamps present in the input data of each object.
Table 7.7. Graph representation of entities in various application domains.
Figure 7.9. Example of a subgraph.

Figure 7.10. Computing the support of a subgraph from a set of graphs (subgraph g1
has support 80% and subgraph g2 has support 60%).
Example 7.2. Consider the five graphs, G1 through G5, shown in Figure
7.10. The graph g1 shown in the top right-hand diagram is a subgraph of G1,
G3, G4, and G5. Therefore, s(g1) = 4/5 = 80%. Similarly, we can show that
s(g2) = 60% because g2 is a subgraph of G1, G2, and G3, while s(g3) = 40%
because g3 is a subgraph of G1 and G3.
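Support counting of this kind can be prototyped with a brute-force subgraph test. The sketch below (not from the text) represents a labeled, undirected graph as a dictionary of vertex labels and edge labels and tries every injective vertex mapping; the toy graphs are invented rather than those of Figure 7.10, and the approach is exponential in the subgraph size, so it is for illustration only.

```python
# Not from the text: brute-force support of a labeled subgraph over a set of
# labeled, undirected graphs. Exponential in the subgraph size; toy data only.
from itertools import permutations

def is_subgraph(g, G):
    # True if g maps injectively into G with matching vertex and edge labels.
    gv = list(g["vertices"])
    for mapping in permutations(G["vertices"], len(gv)):
        m = dict(zip(gv, mapping))
        if any(g["vertices"][v] != G["vertices"][m[v]] for v in gv):
            continue
        ok = True
        for edge, label in g["edges"].items():
            u, v = tuple(edge)
            if G["edges"].get(frozenset((m[u], m[v]))) != label:
                ok = False
                break
        if ok:
            return True
    return False

def subgraph_support(g, graphs):
    return sum(is_subgraph(g, G) for G in graphs) / len(graphs)

# g is a single a--b edge labeled p; it occurs in G1 but not in G2.
g  = {"vertices": {1: "a", 2: "b"}, "edges": {frozenset((1, 2)): "p"}}
G1 = {"vertices": {1: "a", 2: "b", 3: "c"},
      "edges": {frozenset((1, 2)): "p", frozenset((2, 3)): "q"}}
G2 = {"vertices": {1: "a", 2: "c"}, "edges": {frozenset((1, 2)): "p"}}
print(subgraph_support(g, [G1, G2]))   # 0.5
```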
This section presents a formal definition of the frequent subgraph mining prob-
lem and illustrates the complexity of this task.
Definition 7.6 (Frequent Subgraph Mining). Given a set of graphs G
and a support threshold, minsup, the goal of frequent subgraph mining is to
find all subgraphs g such that s(g) ≥ minsup.
While this formulation is generally applicable to any type of graph, the
discussion presented in this chapter focuses primarily on undirected, con-
nected graphs. The definitions of these graphs are given below:
1. A graph is connected if there exists a path between every pair of vertices
in the graph, in which a path is a sequence of vertices ⟨v1 v2 ... vk⟩
Σ_{i=1}^{d} C(d, i) × 2^(i(i−1)/2),

where C(d, i) is the number of ways to choose i vertices to form a subgraph and
2^(i(i−1)/2) is the maximum number of edges between vertices. Table 7.8 compares
the number of itemsets and subgraphs for different values of d.
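The gap between the two search spaces is easy to tabulate (a sketch, not the actual contents of Table 7.8; the values of d below are arbitrary): 2^d − 1 itemsets versus the subgraph bound given above.

```python
# Not the actual contents of Table 7.8: tabulates 2^d - 1 itemsets against
# the subgraph bound above for a few illustrative values of d.
from math import comb

def num_itemsets(d):
    return 2**d - 1

def num_subgraphs(d):
    return sum(comb(d, i) * 2**(i * (i - 1) // 2) for i in range(1, d + 1))

for d in (4, 6, 8):
    print(d, num_itemsets(d), num_subgraphs(d))
```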
Figure 7.11. Brute-force method for mining frequent subgraphs.
2. The same pair of vertex labels can have multiple choices of edge labels.