Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Why Data Preprocessing?
Data in the real world is dirty
incomplete: lacking attribute values, lacking certain
attributes of interest, or containing only aggregate data
noisy: containing errors or outliers
inconsistent: containing discrepancies in codes or
names
No quality data, no quality mining results!
Quality decisions must be based on quality data
Data warehouse needs consistent integration of quality
data
Required for both OLAP and Data Mining!
Why can Data be Incomplete?
Attributes of interest are not available (e.g.,
customer information for sales transaction data)
Data were not considered important at the time
of transactions, so they were not recorded!
Data not recorded because of misunderstanding
or malfunctions
Data may have been recorded and later deleted!
Missing/unknown values for some data
Why can Data be Noisy/Inconsistent?
Faulty instruments for data collection
Human or computer errors
Errors in data transmission
Technology limitations (e.g., sensor data come at
a faster rate than they can be processed)
Inconsistencies in naming conventions or data
codes (e.g., 2/5/2002 could be 2 May 2002 or 5
Feb 2002)
Duplicate tuples (e.g., the same record received twice), which should also be removed
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove
outliers (outliers = exceptions!), and resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains reduced representation in volume but produces the
same or similar analytical results
Data discretization
Part of data reduction but with particular importance,
especially for numerical data
Forms of data preprocessing
(figure: binning examples; equi-width binning with intervals 0-10, 10-20, 20-30, 30-40, 40-50, 50-60, 60-70, 70-80, and binning with unequal-width intervals 0-22, 22-31, 32-38, 38-44, 44-48, 48-55, 55-62, 62-80)
Smoothing using Binning Methods
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25,
26, 28, 29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries: [4,15],[21,25],[26,34]
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
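The bin computations above can be reproduced with a short plain-Python sketch; the bin depth of 4 is taken from the example, and rounding the bin means to integers is an assumption made to match the values shown.

```python
# Equi-depth binning with smoothing by bin means and by bin boundaries.
# Minimal sketch reproducing the price example above (bin depth = 4).
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted
depth = 4                                                # values per bin

bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# Smoothing by bin means: every value is replaced by its bin's (rounded) mean
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: every value is replaced by the closer boundary
by_boundaries = [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b]
                 for b in bins]

print(by_means)       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_boundaries)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```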
Cluster Analysis
(figure: salary vs. age scatter plot showing clusters and an outlier)
Regression
(figure: example of linear regression fitting y = x + 1, with y = salary and x = age)
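As a small illustration of smoothing by regression, the sketch below fits a straight line with numpy.polyfit and replaces each noisy value by its fitted value; the age/salary numbers are made up for the example.

```python
import numpy as np

# Smoothing noisy data with a linear regression fit (y = w*x + b).
# The age/salary values are illustrative only.
age    = np.array([20, 25, 30, 35, 40, 45, 50], dtype=float)
salary = np.array([22, 28, 29, 37, 40, 47, 50], dtype=float)  # noisy y

w, b = np.polyfit(age, salary, 1)   # least-squares line fit
smoothed = w * age + b              # replace each y by its fitted value

print(f"y = {w:.2f}*x + {b:.2f}")
print(smoothed.round(1))
```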
Inconsistent Data
Inconsistent data are handled by:
Manual correction (expensive and tedious)
Use routines designed to detect inconsistencies and
correct them manually. E.g., a routine may check
global constraints (such as age > 10) or
functional dependencies
Other inconsistencies (e.g., between names of
the same attribute) can be corrected during the
data integration process
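A minimal sketch of such a detection routine, assuming records are plain Python dicts; the field names (age, zip, city) and the zip -> city dependency are hypothetical examples, not part of the slides.

```python
# Detect records that violate simple rules so they can be corrected manually.
records = [
    {"id": 1, "age": 34, "zip": "10115", "city": "Berlin"},
    {"id": 2, "age": 7,  "zip": "10115", "city": "Munich"},   # violates age > 10
    {"id": 3, "age": 51, "zip": "10115", "city": "Berlin"},
]

# Global constraint: age must be greater than 10
constraint_violations = [r for r in records if not r["age"] > 10]

# Functional dependency check: zip -> city (same zip should imply same city)
seen, fd_violations = {}, []
for r in records:
    if r["zip"] in seen and seen[r["zip"]] != r["city"]:
        fd_violations.append(r)
    seen.setdefault(r["zip"], r["city"])

print(constraint_violations)   # flagged for manual correction
print(fd_violations)
```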
Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Integration
Data integration:
combines data from multiple sources into a coherent store
Schema integration
integrate metadata from different sources
metadata: data about the data (i.e., data descriptors)
Entity identification problem: identify real-world entities
from multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
for the same real world entity, attribute values from
different sources are different (e.g., J.D. Smith and John
Smith may refer to the same person)
possible reasons: different representations, different
scales, e.g., metric vs. British units (inches vs. cm)
Handling Redundant Data in Data Integration
Redundant data occur often when integrating multiple
databases
The same attribute may have different names in different
databases
One attribute may be a derived attribute in another
table, e.g., annual revenue
Redundant attributes may be detected by
correlation analysis
Careful integration of the data from multiple
sources may help reduce/avoid redundancies and
inconsistencies and improve mining speed and
quality
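As a sketch of how correlation analysis can flag redundancy, the snippet below computes the Pearson correlation of two numeric attributes with numpy; the attribute values and the 0.9 threshold are assumptions made for the example.

```python
import numpy as np

# Flag a pair of numeric attributes as potentially redundant when their
# Pearson correlation coefficient is close to +/-1.
rng = np.random.default_rng(0)
monthly_revenue = np.array([10.0, 12.5, 9.0, 15.0, 11.0, 14.0])
annual_revenue  = monthly_revenue * 12 + rng.normal(0, 0.5, size=6)  # derived

r = np.corrcoef(monthly_revenue, annual_revenue)[0, 1]
if abs(r) > 0.9:   # assumed redundancy threshold
    print(f"correlation {r:.3f}: attributes look redundant, consider dropping one")
```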
Data Transformation
Smoothing: remove noise from data
Aggregation: summarization, data cube construction
Generalization: concept hierarchy climbing
Normalization: scaled to fall within a small, specified
range
min-max normalization
z-score normalization
normalization by decimal scaling
Attribute/feature construction
New attributes constructed from the given ones
Normalization: why normalization?
Speeds up some learning techniques (e.g.,
neural networks)
Helps prevent attributes with large ranges from
outweighing ones with small ranges
Example:
income has range 3000-200000
age has range 10-80
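A minimal numpy sketch of the three normalization schemes listed earlier, applied to an illustrative income column; the [0, 1] target range for min-max normalization is an assumption.

```python
import numpy as np

# Income values illustrating the 3000-200000 range from the example.
income = np.array([3000, 12000, 47000, 88000, 200000], dtype=float)

# Min-max normalization to an assumed target range [0, 1]
min_max = (income - income.min()) / (income.max() - income.min())

# Z-score normalization: zero mean, unit standard deviation
z_score = (income - income.mean()) / income.std()

# Decimal scaling: divide by 10^j, with j the smallest integer
# such that all scaled absolute values are below 1
j = int(np.ceil(np.log10(np.abs(income).max())))
decimal_scaled = income / 10 ** j      # j = 6 here

print(min_max.round(3))
print(z_score.round(3))
print(decimal_scaled)
```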
Principal Component Analysis or Karhunen-Loève (K-L) method
Given N data vectors in k dimensions, find c ≤ k orthogonal vectors
that can best be used to represent the data
Parametric methods
Assume the data fits some model, estimate the model parameters,
store only the parameters, and discard the data (except possible outliers)
Log-linear models: obtain the value at a point in m-D
space as the product on appropriate marginal
subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling
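To make the PCA step above concrete, here is a minimal numpy sketch that keeps the top c principal components via the singular value decomposition; the small data matrix and the choice c = 1 are illustrative assumptions.

```python
import numpy as np

# Reduce k-dimensional data vectors to c principal components (c <= k).
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])          # N = 5 vectors, k = 2 dimensions
c = 1                               # keep one principal component

X_centered = X - X.mean(axis=0)     # PCA works on mean-centered data
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

components = Vt[:c]                 # c orthogonal direction vectors
scores = X_centered @ components.T  # reduced representation (N x c)
approx = scores @ components + X.mean(axis=0)   # reconstruction from c dims

print(scores.round(3))
print(approx.round(3))
```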
Histograms
A popular data
reduction technique
Divide data into
buckets and store
average (or sum) for
each bucket
Can be constructed
optimally in one
dimension using
dynamic programming
Related to
quantization problems.
Histogram types
Equal-width histograms:
It divides the range into N intervals of equal size
Equal-depth (frequency) partitioning:
It divides the range into N intervals, each containing
approximately the same number of samples
V-optimal:
It considers all histogram types for a given number of
buckets and chooses the one with the least variance.
MaxDiff:
After sorting the data to be approximated, it defines the
borders of the buckets at points where the adjacent
values have the maximum difference
Example: to split 1, 1, 4, 5, 5, 7, 9, 14, 16, 18, 27, 30, 30, 32 into three
buckets, MaxDiff places the borders at the two largest gaps: 18-27 and 9-14
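A minimal plain-Python sketch of MaxDiff border selection, reproducing the example above.

```python
# MaxDiff histogram: with B buckets, place the B-1 borders at the largest
# gaps between adjacent sorted values.
values = sorted([1, 1, 4, 5, 5, 7, 9, 14, 16, 18, 27, 30, 30, 32])
num_buckets = 3

# Gaps between adjacent values, remembering where each gap occurs
gaps = [(values[i + 1] - values[i], i + 1) for i in range(len(values) - 1)]
borders = sorted(pos for _, pos in sorted(gaps, reverse=True)[:num_buckets - 1])

buckets, start = [], 0
for pos in borders + [len(values)]:
    buckets.append(values[start:pos])
    start = pos

print(buckets)   # [[1, 1, 4, 5, 5, 7, 9], [14, 16, 18], [27, 30, 30, 32]]
```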
Clustering
Partitions the data set into clusters and models it by
one representative from each cluster
Can be very effective if data is clustered but not
if data is smeared
There are many choices of clustering definitions
and clustering algorithms, further detailed in
Chapter 7
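As a sketch of clustering-based reduction, assuming scikit-learn is available, the snippet below replaces a data set by one representative (centroid) per cluster; the age/salary values and k = 3 are made up for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Represent the data set by one centroid per cluster.
data = np.array([[25, 30], [27, 32], [26, 31],
                 [45, 80], [47, 85], [46, 82],
                 [60, 55], [62, 58], [61, 57]], dtype=float)  # [age, salary]

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
representatives = kmeans.cluster_centers_   # reduced representation: 3 points

print(representatives.round(1))
print(kmeans.labels_)                       # cluster membership of each point
```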
Cluster Analysis
(figure: salary vs. age scatter plot showing clusters and an outlier)
The distance between points in the same cluster should be small
The distance between points in different clusters should be large
Hierarchical Reduction
Use multi-resolution structure with different
degrees of reduction
Hierarchical clustering is often performed but tends
to define partitions of data sets rather than
clusters
Parametric methods are usually not amenable to
hierarchical representation
Hierarchical aggregation
An index tree hierarchically divides a data set into
partitions by value range of some attributes
Each partition can be considered as a bucket
Thus an index tree with aggregates stored at each node is
a hierarchical histogram
Data Preprocessing
Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Discretization
Three types of attributes:
Nominal: values from an unordered set
Ordinal: values from an ordered set
Continuous: real numbers
Discretization:
divide the range of a continuous attribute into
intervals
why?
Some classification algorithms only accept
categorical attributes.
Reduce data size by discretization
Prepare for further analysis
Discretization and Concept Hierarchy
Discretization
reduce the number of values for a given continuous
attribute by dividing the range of the attribute into
intervals. Interval labels can then be used to replace
actual data values.
Concept hierarchies
reduce the data by collecting and replacing low level
concepts (such as numeric values for the attribute age)
by higher level concepts (such as young, middle-aged,
or senior).
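A minimal sketch of a concept hierarchy for age, mapping numeric values to the labels mentioned above; the cut-offs 30 and 60 are assumptions chosen for illustration.

```python
# Map numeric ages to higher-level concepts; cut-offs are illustrative.
def age_concept(age: int) -> str:
    if age < 30:
        return "young"
    elif age < 60:
        return "middle-aged"
    return "senior"

ages = [22, 35, 48, 63, 71]
print([age_concept(a) for a in ages])
# ['young', 'middle-aged', 'middle-aged', 'senior', 'senior']
```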
Discretization and concept hierarchy generation for numeric data
Binning/Smoothing (see sections before)
Histogram analysis (see sections before)
Entropy-based discretization
Entropy-Based Discretization
Given a set of samples S, if S is partitioned into two
intervals S1 and S2 using boundary T, the information
gain I(S,T) after partitioning is
I(S, T) = \frac{|S_1|}{|S|} \mathrm{Ent}(S_1) + \frac{|S_2|}{|S|} \mathrm{Ent}(S_2)
The boundary T that minimizes I(S, T) (equivalently, maximizes the
information gain Ent(S) - I(S, T)) over all possible boundaries is
selected as the binary discretization.
The process is recursively applied to the partitions
obtained until some stopping criterion is met, e.g.,
\mathrm{Ent}(S) - I(S, T) < \delta
Experiments show that it may reduce data size and
improve classification accuracy
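A minimal plain-Python sketch of one step of entropy-based discretization: it evaluates every candidate boundary between adjacent values and keeps the one with the smallest weighted entropy I(S, T); the labelled samples are made up for the example.

```python
import math
from collections import Counter

# (value, class label) samples; illustrative only
samples = [(1, "low"), (3, "low"), (4, "low"), (7, "high"),
           (8, "high"), (10, "high"), (12, "low")]

def entropy(labels):
    counts, n = Counter(labels), len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def weighted_entropy(samples, t):           # I(S, T) for boundary t
    s1 = [label for v, label in samples if v <= t]
    s2 = [label for v, label in samples if v > t]
    n = len(samples)
    return len(s1) / n * entropy(s1) + len(s2) / n * entropy(s2)

values = sorted({v for v, _ in samples})
candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]  # midpoints
best = min(candidates, key=lambda t: weighted_entropy(samples, t))

print(best, round(weighted_entropy(samples, best), 3))   # 5.5 0.464
```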
Segmentation by natural partitioning
Users often like to see numerical ranges partitioned into
relatively uniform, easy-to-read intervals that appear intuitive
or natural. E.g., [50-60] better than [51.223-60.812]
The 3-4-5 rule can be used to segment numerical data into
relatively uniform, natural intervals.
* If an interval covers 3, 6, 7 or 9 distinct values at the most
significant digit, partition the range into 3 equi-width intervals
(for 3, 6, 9) or into 2-3-2 intervals (for 7)
* If it covers 2, 4, or 8 distinct values at the most significant
digit, partition the range into 4 equiwidth intervals
* If it covers 1, 5, or 10 distinct values at the most significant
digit, partition the range into 5 equiwidth intervals
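A simplified sketch of the 3-4-5 rule for a single interval; it picks the number of equi-width partitions from the count of distinct values at the most significant digit and ignores the rounding/outlier refinements of the full rule, so treat it as an illustration only.

```python
import math

def three_four_five(low, high):
    span = high - low
    msd_unit = 10 ** math.floor(math.log10(span))   # most significant digit position
    distinct = round(span / msd_unit)                # distinct values at that digit
    if distinct == 7:                                # special case: 2-3-2 split
        widths = [2 * msd_unit, 3 * msd_unit, 2 * msd_unit]
    elif distinct in (3, 6, 9):
        widths = [span / 3] * 3
    elif distinct in (2, 4, 8):
        widths = [span / 4] * 4
    else:                                            # 1, 5, or 10
        widths = [span / 5] * 5
    intervals, start = [], low
    for w in widths:
        intervals.append((start, start + w))
        start += w
    return intervals

print(three_four_five(0, 90))    # three intervals of width 30
print(three_four_five(0, 700))   # 2-3-2 split with borders at 200 and 500
```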