DM Unit-1-1

The document discusses the necessity and significance of data mining in the context of the explosive growth of data across various sectors. It defines data mining as the extraction of useful patterns from large datasets and outlines the knowledge discovery process, including data selection, cleaning, and integration. Additionally, it covers data objects, attribute types, statistical descriptions, and visualization techniques to analyze and interpret data effectively.

Data Mining: Concepts and Techniques

— Chapters 1, 2 —

1
Why Data Mining?
 The explosive growth of data: from terabytes to petabytes
  Data collection and data availability: automated data collection tools, database systems, the Web, a computerized society
  Major sources of abundant data
   Business: Web, e-commerce, transactions, stocks, …
   Science: remote sensing, bioinformatics, scientific simulation, …
   Society and everyone: news, digital cameras, YouTube
 We are drowning in data, but starving for knowledge!
 "Necessity is the mother of invention": data mining, the automated analysis of massive data sets
2
What Is Data Mining?
 Data mining (knowledge discovery from data)
  Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) patterns or knowledge from huge amounts of data
 Data mining: a misnomer?
 Alternative names
  Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, data dredging, information harvesting, business intelligence, etc.
 Watch out: Is everything "data mining"?
  Simple search and query processing
  (Deductive) expert systems
3
Knowledge Discovery (KDD) Process
 This is a view from the typical database systems and data warehousing communities
 Data mining plays an essential role in the knowledge discovery process
 The process (figure): Databases → Data Cleaning → Data Integration → Data Warehouse → Data Selection → Task-relevant Data → Data Mining → Pattern Evaluation
4
Chapter 2: Getting to Know Your
Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

5
Types of Data Sets
 Record
  Relational records
  Data matrix
  Document data: text documents, represented as term-frequency vectors, e.g.:

             team  coach  play  ball  score  game  win  lost  timeout  season
  Document 1   3     0     5     0      2      6    0     2      0       2
  Document 2   0     7     0     2      1      0    0     3      0       0
  Document 3   0     1     0     0      1      2    2     0      3       0

  Transaction data, e.g.:

  TID  Items
  1    Bread, Coke, Milk
  2    Beer, Bread
  3    Beer, Coke, Diaper, Milk
  4    Beer, Bread, Diaper, Milk
  5    Coke, Diaper, Milk

 Graph and network
  World Wide Web
  Social or information networks
 Ordered
  Video data: sequence of images
  Temporal data: time-series
 Spatial, image and multimedia
  Spatial data: maps
  Image and video data
6
Data Objects
 Data sets are made up of data objects.
 A data object represents an entity.
 Examples:
  sales database: customers, store items, sales
  medical database: patients, treatments
  university database: students, professors, courses
 Also called samples, examples, instances, data points, objects, tuples.
 Data objects are described by attributes.
 Database rows → data objects; columns → attributes.
7
Attributes
 Attribute (or dimension, feature, variable): a data field representing a characteristic or feature of a data object.
  E.g., customer_ID, name, address
 Types:
  Nominal
  Binary
  Ordinal
  Numeric: quantitative
   Interval-scaled
   Ratio-scaled
8
Attribute Types
 Nominal: categories, states, or "names of things"
  Hair_color = {auburn, black, blond, brown, grey, red, white}
  marital status, occupation, ID numbers, zip codes
 Binary
  Nominal attribute with only 2 states (0 and 1)
  Symmetric binary: both outcomes equally important
   e.g., gender
  Asymmetric binary: outcomes not equally important
   e.g., medical test (positive vs. negative)
   Convention: assign 1 to the more important outcome (e.g., HIV positive)
 Ordinal
  Values have a meaningful order (ranking), but the magnitude between successive values is not known
  Size = {small, medium, large}, grades, army rankings
9
Numeric Attribute Types
 Quantity (integer or real-valued)
 Interval
  Measured on a scale of equal-sized units
  Values have order
  E.g., temperature in C˚ or F˚, calendar dates
 Ratio
  Inherent zero-point
  We can speak of values as being an order of magnitude larger than the unit of measurement (10 K˚ is twice as high as 5 K˚)
  E.g., temperature in Kelvin, length, counts, monetary quantities
10
Discrete vs. Continuous Attributes
 Discrete Attribute
  Has only a finite or countably infinite set of values
  E.g., zip codes, profession, or the set of words in a collection of documents
  Sometimes represented as integer variables
  Note: binary attributes are a special case of discrete attributes
 Continuous Attribute
  Has real numbers as attribute values
  E.g., temperature, height, or weight
  Practically, real values can only be measured and represented using a finite number of digits
  Continuous attributes are typically represented as floating-point variables
11
Chapter 2: Getting to Know Your
Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

12
Basic Statistical Descriptions of
Data
 Motivation
 To better understand the data: central tendency,
variation and spread
 Data dispersion characteristics
 median, max, min, quantiles, outliers, variance, etc.
 Numerical dimensions correspond to sorted intervals
 Data dispersion: analyzed with multiple granularities
of precision
 Boxplot or quantile analysis on sorted intervals
 Dispersion analysis on computed measures
 Folding measures into numerical dimensions
 Boxplot or quantile analysis on the transformed
cube
13
Measuring the Central Tendency
 Mean (algebraic measure) (sample vs. population):
  Sample mean: $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$; population mean: $\mu = \frac{\sum x}{N}$
  Note: n is the sample size and N is the population size
 Weighted arithmetic mean: $\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$
 Trimmed mean: chopping extreme values
 Median:
  Middle value if odd number of values, or average of the middle two values otherwise
 Mode
  Value that occurs most frequently in the data
  Unimodal, bimodal, trimodal
  Empirical formula: $\text{mean} - \text{mode} \approx 3 \times (\text{mean} - \text{median})$
14
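The measures above can be sketched in a few lines of Python; the helper names and the sample data here are illustrative, not from the slides.

```python
from statistics import mean, median, mode

def weighted_mean(values, weights):
    # Weighted arithmetic mean: sum(w_i * x_i) / sum(w_i)
    return sum(w * x for w, x in zip(weights, values)) / sum(weights)

def trimmed_mean(values, k):
    # Trimmed mean: chop the k smallest and k largest values, then average
    trimmed = sorted(values)[k:len(values) - k]
    return mean(trimmed)

data = [1, 2, 2, 3, 4, 7, 9]
print(mean(data), median(data), mode(data))  # mean 4, median 3, mode 2
print(trimmed_mean(data, 1))                 # mean of [2, 2, 3, 4, 7] = 3.6
```

Note that for this sample the mean (4) exceeds the median (3), which exceeds the mode (2), the ordering typical of positively skewed data.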
Symmetric vs. Skewed Data
 Median, mean and mode of symmetric, positively skewed, and negatively skewed data (figures omitted)
15
Measuring the Dispersion of Data
 Quartiles, outliers and boxplots
  Quartiles: Q1 (25th percentile), Q3 (75th percentile)
  Inter-quartile range: IQR = Q3 − Q1
  Five-number summary: min, Q1, median, Q3, max
  Boxplot: ends of the box are the quartiles; median is marked; add whiskers, and plot outliers individually
  Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
 Variance and standard deviation
  Variance (algebraic, scalable computation): $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{N} x_i^2 - \mu^2$
  Standard deviation σ is the square root of the variance σ²
16
Boxplot Example
Example 1: Draw a box-and-whisker plot for the data set {3, 7, 8, 5, 12, 14, 21, 13, 18}.
Sorted data set = [3, 5, 7, 8, 12, 13, 14, 18, 21]

 Minimum: 3
 Q1: 6
 Median: 12
 Q3: 16
 Maximum: 21
17
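A small Python sketch reproduces this example. The quartile convention used here (Q1/Q3 are the medians of the lower/upper halves, excluding the overall median for odd-length data) matches the numbers above, though other conventions exist.

```python
def five_number_summary(values):
    # Five-number summary: (min, Q1, median, Q3, max)
    s = sorted(values)
    n = len(s)

    def med(xs):
        # Median: middle value, or average of the two middle values
        m = len(xs)
        mid = m // 2
        return xs[mid] if m % 2 else (xs[mid - 1] + xs[mid]) / 2

    lower = s[: n // 2]        # half below the median
    upper = s[(n + 1) // 2:]   # half above the median
    return s[0], med(lower), med(s), med(upper), s[-1]

data = [3, 7, 8, 5, 12, 14, 21, 13, 18]
print(five_number_summary(data))  # (3, 6.0, 12, 16.0, 21)
```

From this summary, IQR = 16 − 6 = 10, so the usual outlier fences would sit at 6 − 15 = −9 and 16 + 15 = 31; no point in this data set is an outlier.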
Properties of Normal Distribution Curve
 The normal (distribution) curve
  From μ−σ to μ+σ: contains about 68% of the measurements (μ: mean, σ: standard deviation)
  From μ−2σ to μ+2σ: contains about 95% of them
  From μ−3σ to μ+3σ: contains about 99.7% of them
18
Graphic Displays of Basic Statistical Descriptions
 Boxplot: graphic display of the five-number summary
 Histogram: x-axis shows values, y-axis represents frequencies
 Scatter plot: each pair of values gives a pair of coordinates, plotted as a point in the plane
19
Histogram Analysis
 Histogram: graph display of tabulated frequencies, shown as bars
 It shows what proportion of cases fall into each of several categories
 Differs from a bar chart in that it is the area of the bar that denotes the value, not the height as in bar charts; a crucial distinction when the categories are not of uniform width
 The categories are usually specified as non-overlapping intervals of some variable; the categories (bars) must be adjacent
20
Histograms Often Tell More than Boxplots
 The two histograms shown on the left may have the same boxplot representation
  The same values for: min, Q1, median, Q3, max
 But they have rather different data distributions
21
Positively and Negatively Correlated Data
 Scatter plots showing positively correlated, negatively correlated, and uncorrelated data (figures omitted)
22
Uncorrelated Data

23
Chapter 2: Getting to Know Your
Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

24
Data Visualization
 Why data visualization?
 Gain insight into an information space by mapping data onto
graphical primitives
 Provide qualitative overview of large data sets
 Search for patterns, trends, structure, irregularities, relationships
among data
 Help find interesting regions and suitable parameters for further
quantitative analysis
 Provide a visual proof of computer representations derived
 Categorization of visualization methods:
 Pixel-oriented visualization techniques
 Geometric projection visualization techniques
 Icon-based visualization techniques
 Hierarchical visualization techniques
 Visualizing complex data and relations
25
Pixel-Oriented Visualization
Techniques
 For a data set of m dimensions, create m windows on the
screen, one for each dimension
 The m dimension values of a record are mapped to m pixels
at the corresponding positions in the windows
 The colors of the pixels reflect the corresponding values

(a) Income  (b) Credit Limit  (c) Transaction Volume  (d) Age
26
Geometric Projection Visualization
Techniques

 Visualization of geometric transformations and


projections of the data

 Methods
 Scatterplot and scatterplot matrices
 Parallel coordinates

27
Scatterplot Matrices

Used by permission of M. Ward, Worcester Polytechnic Institute

Matrix of scatterplots (x-y-diagrams) of the k-dim. data

28
Parallel Coordinates
 n equidistant axes, parallel to one of the screen axes, correspond to the attributes
 The axes are scaled to the [minimum, maximum] range of the corresponding attribute
 Every data item corresponds to a polygonal line which intersects each of the axes at the point which corresponds to the value for the attribute

(Axes: Attr. 1, Attr. 2, Attr. 3, …, Attr. k)
29
Icon-Based Visualization
Techniques

 Visualization of the data values as features of icons


 Typical visualization methods
 Chernoff Faces

 General techniques
 Shape coding: Use shape to represent certain
information encoding
 Color icons: Use color icons to encode more
information
 Tile bars: Use small icons to represent the
relevant feature vectors in document retrieval
30
Chernoff Faces
 A way to display variables on a two-dimensional surface, e.g., let x be eyebrow slant, y be eye size, z be nose length, etc.
 The figure shows faces produced using 10 characteristics (head eccentricity, eye size, eye spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size, and mouth opening), each assigned one of 10 possible values; generated using Mathematica (S. Dickson)
31
Hierarchical Visualization
Techniques

 Visualization of the data using a


hierarchical partitioning into subspaces

 Methods
 Dimensional Stacking
 Worlds-within-Worlds
 Tree-Map

32
Dimensional Stacking
 Partitioning of the n-dimensional attribute space into 2-D subspaces, which are 'stacked' into each other
 Partitioning of the attribute value ranges into classes; the important attributes should be used on the outer levels
 Adequate for data with ordinal attributes of low cardinality
 But difficult to display more than nine dimensions
 Important to map dimensions appropriately
33
Dimensional Stacking
Used by permission of M. Ward, Worcester Polytechnic Institute
 Visualization of oil mining data with longitude and latitude mapped to the outer x-, y-axes, and ore grade and depth mapped to the inner x-, y-axes
34
Worlds-within-Worlds
"Worlds-within-Worlds," also known as n-Vision, is a representative hierarchical visualization method. Suppose we want to visualize 6-D data (F, X1, ..., X5) to observe how dimension F changes with respect to the other dimensions:

 First: fix the values of dimensions X3, X4, X5 to some selected values c3, c4, c5.
 Second: visualize F, X1, X2 using a 3-D plot; the origin of the inner world is located at the point (c3, c4, c5).
35
Tree-Map
 Screen-filling method which uses a hierarchical partitioning of the screen into regions depending on the attribute values
 The x- and y-dimensions of the screen are partitioned alternately according to the attribute values (classes)

(Figure: MSR Netscan image, used with acknowledgment)
36
Visualizing Complex Data and Relations
 Visualizing non-numerical data: text and social networks
 Tag cloud: visualizing user-generated tags
  The importance of a tag is represented by font size/color
 Besides text data, there are also methods to visualize relationships, such as visualizing social networks

(Figure: Newsmap, Google News stories)

Chapter 2: Getting to Know Your
Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

38
Similarity and Dissimilarity
 Similarity
  Numerical measure of how alike two data objects are
  Value is higher when objects are more alike
  Often falls in the range [0, 1]
 Dissimilarity (e.g., distance)
  Numerical measure of how different two data objects are
  Lower when objects are more alike
  Minimum dissimilarity is often 0
  Upper limit varies
 Proximity refers to a similarity or dissimilarity
39
Data Matrix and Dissimilarity Matrix
 Data matrix
  n data points with p dimensions:
$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$
 Dissimilarity matrix
  n data points, but registers only the distances
  A triangular matrix:
$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
40
Proximity Measure for Nominal Attributes
 Can take 2 or more states, e.g., red, yellow, blue, green:
$$d(i, j) = \frac{p - m}{p}$$
 where p is the number of attributes and m is the number of matches

Example:

Object  Attr. 1  Attr. 2  Attr. 3
1       Red      A        P1
2       Green    B        P2
3       Blue     B        P2
4       Red      A        P3

d(1, 4) = (3 − 2)/3 = 0.33, where p = 3 and m = 2
41
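The simple-matching formula above can be sketched in Python; the function name and the object encoding are illustrative.

```python
def nominal_dissimilarity(obj_i, obj_j):
    # d(i, j) = (p - m) / p, where p is the number of nominal
    # attributes and m is the number of attributes that match
    p = len(obj_i)
    m = sum(a == b for a, b in zip(obj_i, obj_j))
    return (p - m) / p

objects = {
    1: ("Red", "A", "P1"),
    2: ("Green", "B", "P2"),
    3: ("Blue", "B", "P2"),
    4: ("Red", "A", "P3"),
}
print(round(nominal_dissimilarity(objects[1], objects[4]), 2))  # 0.33
```

Objects 2 and 3 also differ in only one of three attributes, so d(2, 3) = 0.33 as well.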
Proximity Measure for Binary Attributes
The dissimilarity (distance) of two objects represented by binary attributes can be measured as follows:

 Distance measure for symmetric binary variables: $d(i, j) = \frac{r + s}{q + r + s + t}$
 Distance measure for asymmetric binary variables: $d(i, j) = \frac{r + s}{q + r + s}$

where
 q is the number of attributes that equal 1 for both objects i and j,
 r is the number of attributes that equal 1 for object i and 0 for object j,
 s is the number of attributes that equal 0 for object i and 1 for object j,
 t is the number of attributes that equal 0 for both objects i and j.
42
Dissimilarity between Binary Variables
 Example

Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
Jack  M       Y      N      P       N       N       N
Mary  F       Y      N      P       N       P       N
Jim   M       Y      P      N       N       N       N

 Gender is a symmetric attribute
 The remaining attributes are asymmetric binary
 Let the values Y and P be 1, and the value N be 0

$$d(\text{jack}, \text{mary}) = \frac{0 + 1}{2 + 0 + 1} = 0.33$$
$$d(\text{jack}, \text{jim}) = \frac{1 + 1}{1 + 1 + 1} = 0.67$$
$$d(\text{jim}, \text{mary}) = \frac{1 + 2}{1 + 1 + 2} = 0.75$$
43
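The asymmetric-binary distance can be checked in Python. The encoding below follows the slide (Y and P as 1, N as 0), with the symmetric gender attribute excluded; the function name is illustrative.

```python
def asymmetric_binary_d(i, j):
    # d = (r + s) / (q + r + s); t (the 0-0 matches) is ignored
    q = sum(a == 1 and b == 1 for a, b in zip(i, j))  # 1-1 matches
    r = sum(a == 1 and b == 0 for a, b in zip(i, j))  # 1 in i, 0 in j
    s = sum(a == 0 and b == 1 for a, b in zip(i, j))  # 0 in i, 1 in j
    return (r + s) / (q + r + s)

# Fever, Cough, Test-1, Test-2, Test-3, Test-4
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asymmetric_binary_d(jack, mary), 2))  # 0.33
print(round(asymmetric_binary_d(jack, jim), 2))   # 0.67
print(round(asymmetric_binary_d(jim, mary), 2))   # 0.75
```

The results suggest Jack and Mary are most likely to have a similar disease, while Jim and Mary are least alike.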
Example:
Data Matrix and Dissimilarity Matrix
Data matrix:

point  attribute1  attribute2
x1     1           2
x2     3           5
x3     2           0
x4     4           5

Dissimilarity matrix (with Euclidean distance):

      x1    x2    x3    x4
x1    0
x2    3.61  0
x3    2.24  5.1   0
x4    4.24  1     5.39  0
44
Distance on Numeric Data: Minkowski Distance
 Minkowski distance: a popular distance measure
$$d(i, j) = \left( |x_{i1} - x_{j1}|^h + |x_{i2} - x_{j2}|^h + \cdots + |x_{ip} - x_{jp}|^h \right)^{1/h}$$
 where i = (x_{i1}, x_{i2}, …, x_{ip}) and j = (x_{j1}, x_{j2}, …, x_{jp}) are two p-dimensional data objects, and h is the order (the distance so defined is also called the L-h norm)
 A distance that satisfies these properties is a metric:
  d(i, j) > 0 if i ≠ j, and d(i, i) = 0 (positive definiteness)
  d(i, j) = d(j, i) (symmetry)
  d(i, j) ≤ d(i, k) + d(k, j) (triangle inequality)
45
Special Cases of Minkowski Distance
 h = 1: Manhattan (city block, L1 norm) distance
  E.g., the Hamming distance: the number of bits that are different between two binary vectors
$$d(i, j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$$
 h = 2: (L2 norm) Euclidean distance
$$d(i, j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$$
46
Example: Minkowski Distance
Dissimilarity matrices:

point  attribute 1  attribute 2
x1     1            2
x2     3            5
x3     2            0
x4     4            5

Manhattan (L1):

L1    x1   x2   x3   x4
x1    0
x2    5    0
x3    3    6    0
x4    6    1    7    0

Euclidean (L2):

L2    x1    x2    x3    x4
x1    0
x2    3.61  0
x3    2.24  5.1   0
x4    4.24  1     5.39  0
47
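A generic Minkowski function reproduces both matrices; the function name is illustrative.

```python
def minkowski(x, y, h):
    # L_h norm: (sum_k |x_k - y_k|^h)^(1/h)
    return sum(abs(a - b) ** h for a, b in zip(x, y)) ** (1 / h)

x1, x2, x3, x4 = (1, 2), (3, 5), (2, 0), (4, 5)
print(minkowski(x1, x2, 1))            # 5.0  (Manhattan)
print(round(minkowski(x1, x2, 2), 2))  # 3.61 (Euclidean)
print(round(minkowski(x3, x4, 2), 2))  # 5.39
```

Setting h = 1 or h = 2 recovers the Manhattan and Euclidean special cases above; larger h weights the largest coordinate difference more heavily.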
Ordinal Variables
 An ordinal variable can be discrete or continuous
 Order is important, e.g., rank
 Can be treated like interval-scaled:
  replace each $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$
  map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
$$z_{if} = \frac{r_{if} - 1}{M_f - 1}$$
  compute the dissimilarity using methods for interval-scaled variables
48
Ordinal Variables Example

Object id  Test-1 (ordinal)  Rank r_if  Normalized z_if
1          excellent         3          1.0
2          fair              1          0.0
3          good              2          0.5
4          excellent         3          1.0

There are three states for test-1: fair, good, and excellent, that is, M_f = 3.
 Step 1: replace each value for test-1 by its rank.
 Step 2: normalize the ranking by mapping rank 1 to 0.0, rank 2 to 0.5, and rank 3 to 1.0.
 Use the Manhattan distance to measure the dissimilarity.

Dissimilarity matrix:
0
1     0
0.5   0.5   0
0     1     0.5   0
49
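The two steps above can be sketched in Python; the function name is illustrative.

```python
def ordinal_to_interval(values, order):
    # Step 1: map each ordinal value to its rank r in {1..M}
    # Step 2: normalize to [0, 1] via z = (r - 1) / (M - 1)
    rank = {v: i + 1 for i, v in enumerate(order)}
    M = len(order)
    return [(rank[v] - 1) / (M - 1) for v in values]

test1 = ["excellent", "fair", "good", "excellent"]
z = ordinal_to_interval(test1, ["fair", "good", "excellent"])
print(z)  # [1.0, 0.0, 0.5, 1.0]
print(abs(z[2] - z[0]))  # d(3, 1) = 0.5 with Manhattan distance
```

The resulting z values can then be fed to any interval-scaled distance, reproducing the dissimilarity matrix above.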
Attributes of Mixed Type
 A database may contain all attribute types: nominal, symmetric binary, asymmetric binary, numeric, ordinal. Suppose that the data set contains p attributes of mixed type. The dissimilarity d(i, j) between objects i and j is defined as
$$d(i, j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} \, d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$$
 where the indicator $\delta_{ij}^{(f)} = 0$ if (1) $x_{if}$ or $x_{jf}$ is missing, or (2) $x_{if} = x_{jf} = 0$ and attribute f is asymmetric binary; otherwise $\delta_{ij}^{(f)} = 1$
 If f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, or $d_{ij}^{(f)} = 1$ otherwise
 If f is numeric: use the normalized distance $d_{ij}^{(f)} = \frac{|x_{if} - x_{jf}|}{\max_h x_{hf} - \min_h x_{hf}}$
 If f is ordinal: compute the ranks $r_{if}$, set $z_{if} = \frac{r_{if} - 1}{M_f - 1}$, and treat $z_{if}$ as interval-scaled
50
Attributes of Mixed Type: Example

Id  Test-1 (nominal)  Test-2 (ordinal)  Test-3 (numeric)
1   Code A            excellent         45
2   Code B            fair              22
3   Code C            good              64
4   Code A            excellent         28

 For the third attribute, test-3 (which is numeric), max_h x_h = 64 and min_h x_h = 22
 The indicator δ_ij^(f) = 1 for each of the three attributes
51
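A hedged sketch of the mixed-type formula applied to this table: the ordinal values are first mapped to [0, 1] as in the earlier ordinal example, all indicators δ equal 1, and the function and variable names are illustrative.

```python
def mixed_dissimilarity(i, j, data, num_range):
    # data maps id -> (nominal value, ordinal z in [0,1], numeric value)
    nom_i, ord_i, num_i = data[i]
    nom_j, ord_j, num_j = data[j]
    d_nom = 0 if nom_i == nom_j else 1        # nominal: match / mismatch
    d_ord = abs(ord_i - ord_j)                # ordinal: treated as interval
    d_num = abs(num_i - num_j) / num_range    # numeric: |x_if - x_jf| / (max - min)
    # All three indicators delta = 1 here, so d(i, j) is a plain average
    return (d_nom + d_ord + d_num) / 3

z = {"fair": 0.0, "good": 0.5, "excellent": 1.0}
data = {
    1: ("Code A", z["excellent"], 45),
    2: ("Code B", z["fair"], 22),
    3: ("Code C", z["good"], 64),
    4: ("Code A", z["excellent"], 28),
}
rng = 64 - 22  # max - min of test-3
print(round(mixed_dissimilarity(2, 1, data, rng), 2))  # 0.85
```

Objects 1 and 4, which agree on both the nominal and ordinal tests, come out most similar, while objects 1 and 2 are most dissimilar.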
Cosine Similarity
 A document can be represented by thousands of attributes, each recording the frequency of a particular word (such as a keyword) or phrase in the document.
 Other vector objects: gene features in micro-arrays, …
 Applications: information retrieval, biologic taxonomy, gene feature mapping, …
 Cosine measure: if d1 and d2 are two vectors (e.g., term-frequency vectors), then
  cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||),
  where • indicates the vector dot product and ||d|| is the length of vector d
52
Example: Cosine Similarity
 cos(d1, d2) = (d1 • d2) / (||d1|| ||d2||), where • indicates the vector dot product and ||d|| is the length of vector d
 Ex: Find the similarity between documents 1 and 2.

d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)

d1 • d2 = 5×3 + 0×0 + 3×2 + 0×0 + 2×1 + 0×1 + 0×0 + 2×1 + 0×0 + 0×1 = 25
||d1|| = (5² + 0² + 3² + 0² + 2² + 0² + 0² + 2² + 0² + 0²)^0.5 = 42^0.5 ≈ 6.481
||d2|| = (3² + 0² + 2² + 0² + 1² + 1² + 0² + 1² + 0² + 1²)^0.5 = 17^0.5 ≈ 4.123
cos(d1, d2) ≈ 0.94
53
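The worked example above can be verified with a few lines of Python; the function name is illustrative.

```python
import math

def cosine_similarity(d1, d2):
    # cos(d1, d2) = (d1 . d2) / (||d1|| * ||d2||)
    dot = sum(a * b for a, b in zip(d1, d2))
    norm1 = math.sqrt(sum(a * a for a in d1))
    norm2 = math.sqrt(sum(b * b for b in d2))
    return dot / (norm1 * norm2)

d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0)
d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)
print(round(cosine_similarity(d1, d2), 2))  # 0.94
```

Because term-frequency vectors are non-negative, the result lies in [0, 1]; a value near 1 means the two documents use words in similar proportions regardless of document length.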
Chapter 2: Getting to Know Your
Data

 Data Objects and Attribute Types

 Basic Statistical Descriptions of Data

 Data Visualization

 Measuring Data Similarity and Dissimilarity

 Summary

54
Summary
 Data attribute types: nominal, binary, ordinal, interval-scaled,
ratio-scaled
 Many types of data sets, e.g., numerical, text, graph, Web,
image.
 Gain insight into the data by:
 Basic statistical data description: central tendency,
dispersion, graphical displays
 Data visualization: map data onto graphical primitives
 Measure data similarity
 Above steps are the beginning of data preprocessing.
 Many methods have been developed but still an active area of
research.

55
Reference
 Han, Jiawei & Kamber, Micheline. (2012). Data Mining: Concepts and Techniques, 3rd edition, Morgan Kaufmann.

56
