
Data Mining

What is Data Mining?

 Data Mining is:
(1) The efficient discovery of previously unknown, valid, potentially useful, understandable patterns in large datasets
(2) The analysis of (often large) observational data sets to find unsuspected relationships and to summarize the data in novel ways that are both understandable and useful to the data owner
Overview of terms

 Data: a set of facts (items) D, usually stored in a database
 Pattern: an expression E in a language L that describes a subset of the facts
 Attribute: a field in an item i in D
 Interestingness: a function I_{D,L} that maps an expression E in L into a measure space M
Overview of terms

 The Data Mining Task:
For a given dataset D, language of facts L, interestingness function I_{D,L}, and threshold c, efficiently find the expressions E such that I_{D,L}(E) > c.
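As an illustration only (not from the slides), this task can be sketched in Python as filtering candidate patterns by an interestingness score; the support-style measure and the toy market-basket dataset below are assumptions made for the example.

from itertools import combinations

def interestingness(pattern, dataset):
    # Hypothetical measure I_{D,L}: fraction of facts in D that contain the pattern
    return sum(pattern.issubset(fact) for fact in dataset) / len(dataset)

def mine(dataset, candidates, c):
    # The task: return every expression E whose interestingness exceeds the threshold c
    return [E for E in candidates if interestingness(E, dataset) > c]

# Toy dataset D of market-basket facts
D = [{"bread", "milk"}, {"bread", "beer"}, {"milk", "coke"}, {"bread", "milk", "coke"}]
items = sorted({i for fact in D for i in fact})
candidates = [frozenset(p) for r in (1, 2) for p in combinations(items, r)]
print(mine(D, candidates, c=0.5))   # e.g., frequent single items and pairs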
Knowledge Discovery
Examples of Large Datasets

 Government: IRS, NGA, …
 Large corporations
  WALMART: 20M transactions per day
  MOBIL: 100 TB geological databases
  AT&T: 300M calls per day
  Credit card companies
 Scientific
  NASA, EOS project: 50 GB per hour
  Environmental datasets
Examples of Data Mining Applications

1. Fraud detection: credit cards, phone cards


2. Marketing: customer targeting
3. Data Warehousing: Walmart
4. Astronomy
5. Molecular biology
How Data Mining is used

1. Identify the problem


2. Use data mining techniques to
transform the data into information
3. Act on the information
4. Measure the results
The Data Mining Process

1. Understand the domain


2. Create a dataset:
 Select the interesting attributes
 Data cleaning and preprocessing
3. Choose the data mining task and the
specific algorithm
4. Interpret the results, and possibly return
to 2
Origins of Data Mining

 Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems
 Must address:
  Enormity of data
  High dimensionality of data
  Heterogeneous, distributed nature of data
(Figure: data mining at the intersection of AI / machine learning, statistics, and database systems.)
Data Mining Tasks

1. Classification: learning a function that maps an item into one of a set of predefined classes
2. Regression: learning a function that maps an item to a real value
3. Clustering: identify a set of groups of similar items
Data Mining Tasks

4. Dependencies and associations: identify significant dependencies between data attributes
5. Summarization: find a compact description of the dataset or a subset of the dataset
Data Mining Methods

1. Decision Tree Classifiers: used for modeling and classification
2. Association Rules: used to find associations between sets of attributes
3. Sequential Patterns: used to find temporal associations in time series
4. Hierarchical Clustering: used to group customers, web users, etc.
Why Data Preprocessing?

 Data in the real world is dirty


 incomplete: lacking attribute values, lacking certain
attributes of interest, or containing only aggregate data
 noisy: containing errors or outliers
 inconsistent: containing discrepancies in codes or names
 No quality data, no quality mining results!
 Quality decisions must be based on quality data
 Data warehouse needs consistent integration of quality
data
 Required for both OLAP and Data Mining!
Why can Data be
Incomplete?

 Attributes of interest are not available (e.g., customer information for sales transaction data)
 Data were not considered important at the time of the transaction, so they were not recorded!
 Data were not recorded because of misunderstandings or malfunctions
 Data may have been recorded and later deleted!
 Missing/unknown values for some data
Data Cleaning
 Data cleaning tasks
 Fill in missing values
 Identify outliers and smooth out noisy data
 Correct inconsistent data
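As a hedged illustration (not part of the slides), the first two cleaning tasks might look like the following with pandas; the column name and the mean/binning choices are assumptions made for the example.

import numpy as np
import pandas as pd

# Toy table with a noisy, partly missing numeric attribute
df = pd.DataFrame({"income": [125, 100, np.nan, 120, 95, 60, np.nan, 85, 75, 90]})

# Fill in missing values with the attribute mean
df["income"] = df["income"].fillna(df["income"].mean())

# Identify outliers as values far (> 2 standard deviations) from the mean
z = (df["income"] - df["income"].mean()) / df["income"].std()
df["outlier"] = z.abs() > 2

# Smooth out noisy data by equal-width binning and replacing each value with its bin mean
bins = pd.cut(df["income"], bins=3)
df["income_smoothed"] = df.groupby(bins, observed=True)["income"].transform("mean")

print(df)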
Classification: Definition

 Given a collection of records (training set)
  Each record contains a set of attributes; one of the attributes is the class.
 Find a model for the class attribute as a function of the values of the other attributes.
 Goal: previously unseen records should be assigned a class as accurately as possible.
  A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
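A minimal sketch of this setup (illustrative, not from the slides), using scikit-learn and toy records that mirror the training table on the next slide; the attribute encoding (marital status as 0=Single, 1=Married, 2=Divorced) is an assumption.

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# (home_owner, marital_status, taxable_income_in_K) -> Default (the class attribute)
X = [[1, 0, 125], [0, 1, 100], [0, 0, 70], [1, 1, 120], [0, 2, 95],
     [0, 1, 60], [1, 2, 220], [0, 0, 85], [0, 1, 75], [0, 0, 90]]
y = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]

# Divide the given data set into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Build the model on the training set...
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# ...and use the test set to estimate accuracy on previously unseen records
print(model.score(X_test, y_test))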
Classification Example
(Attribute types: Home Owner and Marital Status are categorical, Taxable Income is continuous, Default is the class.)

Training Set (used to learn the model / classifier):
Tid  Home Owner  Marital Status  Taxable Income  Default
1    Yes         Single          125K            No
2    No          Married         100K            No
3    No          Single          70K             No
4    Yes         Married         120K            No
5    No          Divorced        95K             Yes
6    No          Married         60K             No
7    Yes         Divorced        220K            No
8    No          Single          85K             Yes
9    No          Married         75K             No
10   No          Single          90K             Yes

Test Set (class label to be predicted):
Home Owner  Marital Status  Taxable Income  Default
No          Single          75K             ?
Yes         Married         50K             ?
No          Married         150K            ?
Yes         Divorced        90K             ?
No          Single          40K             ?
No          Married         80K             ?
Example of a Decision Tree
The training data (Home Owner, Marital Status, Taxable Income, Default) can be modeled by the following decision tree. The splitting attributes are Home Owner (HO), Marital Status (MarSt), and Taxable Income (TaxInc):

HO = Yes: NO
HO = No: split on MarSt
  MarSt = Married: NO
  MarSt = Single or Divorced: split on TaxInc
    TaxInc < 80K: NO
    TaxInc > 80K: YES


Another Example of Decision
Tree
The same training data is also fit by a different tree, which splits on Marital Status first:

MarSt = Married: NO
MarSt = Single or Divorced: split on HO
  HO = Yes: NO
  HO = No: split on TaxInc
    TaxInc < 80K: NO
    TaxInc > 80K: YES

There could be more than one tree that fits the same data!
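As an illustration (an assumption of this write-up, not taken from the slides), the first tree can be written directly as a rule-based classifier:

def classify(home_owner: bool, marital_status: str, taxable_income_k: float) -> str:
    # Decision tree from the example: split on Home Owner, then Marital Status, then Taxable Income
    if home_owner:
        return "NO"
    if marital_status == "Married":
        return "NO"
    # Marital status is Single or Divorced: split on Taxable Income (in thousands)
    return "NO" if taxable_income_k < 80 else "YES"

# Applying the model to the first test record (Home Owner = No, Single, 75K)
print(classify(False, "Single", 75))   # -> "NO"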
Classification: Application 1

 Direct Marketing

Goal: Reduce cost of mailing by targeting a set of consumers
likely to buy a new cell-phone product.

Approach:

Use the data for a similar product introduced before.

We know which customers decided to buy and which decided
otherwise. This {buy, don’t buy} decision forms the class
attribute.

Collect various demographic, lifestyle, and company-interaction
related information about all such customers.

Type of business, where they stay, how much they earn, etc.

Use this information as input attributes to learn a classifier
model.
From [Berry & Linoff] Data Mining Techniques, 1997
Classification: Application 2

 Fraud Detection
 Goal: Predict fraudulent cases in credit card transactions.
 Approach:
  Use credit card transactions and the information on the account holder as attributes.
   When does the customer buy, what does he buy, how often does he pay on time, etc.
  Label past transactions as fraud or fair transactions. This forms the class attribute.
  Learn a model for the class of the transactions.
  Use this model to detect fraud by observing credit card transactions on an account.
Clustering Definition

 Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that:
  Data points in one cluster are more similar to one another.
  Data points in separate clusters are less similar to one another.
 Similarity measures:
  Euclidean distance if attributes are continuous.
  Other problem-specific measures.
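A brief sketch of this idea (illustrative, not from the slides), assuming scikit-learn, Euclidean distance, and two invented continuous attributes:

import numpy as np
from sklearn.cluster import KMeans

# Toy data points with two continuous attributes
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
                   [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])

# k-means groups the points so that intra-cluster Euclidean distances are small
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(km.labels_)           # cluster assignment for each point
print(km.cluster_centers_)  # one representative (centroid) per cluster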
Illustrating Clustering
Euclidean Distance Based Clustering in 3-D space.

(Figure: intracluster distances are minimized; intercluster distances are maximized.)
Clustering: Application 1

 Market Segmentation:
 Goal: subdivide a market into distinct subsets of
customers where any subset may conceivably be selected
as a market target to be reached with a distinct marketing
mix.
 Approach:

Collect different attributes of customers based on their
geographical and lifestyle related information.

Find clusters of similar customers.

Measure the clustering quality by observing buying patterns
of customers in same cluster vs. those from different clusters.
Clustering: Application 2

 Document Clustering:
 Goal: To find groups of documents that are
similar to each other based on the important
terms appearing in them.
 Approach: To identify frequently occurring terms
in each document. Form a similarity measure
based on the frequencies of different terms. Use
it to cluster.
 Gain: Information Retrieval can utilize the
clusters to relate a new document or search term
to clustered documents.
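A hedged sketch of this approach with scikit-learn; the documents, the TF-IDF weighting, and the number of clusters are assumptions made for the illustration.

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "stocks fell as markets reacted to interest rates",
    "the central bank raised interest rates again",
    "the home team won the championship game",
    "the striker scored twice in the final game",
]

# Identify frequently occurring terms in each document and weight them (TF-IDF)
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Cluster documents using the term-based similarity
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)   # e.g., a financial group and a sports group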
Illustrating Document
Clustering
 Clustering Points: 3204 Articles of Los Angeles Times.
 Similarity Measure: How many words are common in
these documents (after some word filtering).

Category       Total Articles   Correctly Placed
Financial      555              364
Foreign        341              260
National       273              36
Metro          943              746
Sports         738              573
Entertainment  354              278


Association Rule
Discovery: Definition
 Given a set of records each of which contain some
number of items from a given collection;
 Produce dependency rules which will predict occurrence
of an item based on occurrences of other items.

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Rules Discovered:
{Milk} --> {Coke}
{Diaper, Milk} --> {Beer}
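As an illustration, the discovered rules can be checked against this table by computing their support and confidence (standard association-rule measures); the printout below is just for inspection.

transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]

def support(itemset):
    # Fraction of transactions that contain every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # How often the rule antecedent --> consequent holds when the antecedent occurs
    return support(antecedent | consequent) / support(antecedent)

print(confidence({"Milk"}, {"Coke"}))            # {Milk} --> {Coke}: 0.75
print(confidence({"Diaper", "Milk"}, {"Beer"}))  # {Diaper, Milk} --> {Beer}: ~0.67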
Association Rule Discovery:
Application 1

 Marketing and Sales Promotion:


 Let the rule discovered be
{Bagels, … } --> {Potato Chips}
 Potato Chips as consequent => Can be used to
determine what should be done to boost its sales.
 Bagels in the antecedent => Can be used to see which
products would be affected if the store discontinues
selling bagels.
 Bagels in antecedent and Potato chips in consequent =>
Can be used to see what products should be sold with
Bagels to promote sale of Potato chips!
Data Compression

(Figure: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data.)
Numerosity Reduction:
Reduce the volume of data

 Parametric methods
 Assume the data fits some model, estimate model
parameters, store only the parameters, and discard the
data (except possible outliers)

 Non-parametric methods
 Do not assume models
 Major families: histograms, clustering, sampling
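A small sketch of the parametric idea (illustrative, not from the slides), assuming the data roughly follows a linear model; the synthetic data and the 3-sigma outlier rule are assumptions.

import numpy as np

# Synthetic data that roughly fits a linear model y = 3x + 5 + noise
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 1_000)
y = 3 * x + 5 + rng.normal(scale=0.5, size=x.size)

# Parametric reduction: estimate the model parameters and store only those
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)   # two numbers stand in for 1,000 (x, y) pairs

# Optionally keep the few points the model explains poorly (possible outliers)
residuals = y - (slope * x + intercept)
outliers = np.argwhere(np.abs(residuals) > 3 * residuals.std()).ravel()
print(len(outliers))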
Clustering

 Partitions the data set into clusters, and models it by one representative from each cluster
 Can be very effective if the data is clustered, but not if the data is “smeared”
 There are many choices of clustering definitions and clustering algorithms; more later!
Sampling
 Allow a mining algorithm to run in complexity that
is potentially sub-linear to the size of the data
 Choose a representative subset of the data
 Simple random sampling may have very poor
performance in the presence of skew
 Develop adaptive sampling methods
 Stratified sampling:

Approximate the percentage of each class (or
subpopulation of interest) in the overall database

Used in conjunction with skewed data
 Sampling may not reduce database I/Os (page at a
time).
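A hedged sketch of stratified sampling with pandas; the column names, class skew, and sampling fraction are assumptions made for the illustration.

import pandas as pd

# Skewed toy data: 95% of records in class "A", 5% in class "B"
df = pd.DataFrame({"class": ["A"] * 950 + ["B"] * 50, "value": range(1000)})

# Simple random sample: the rare class may be missed or badly under-represented
srs = df.sample(frac=0.1, random_state=0)

# Stratified sample: draw the same fraction from each class, preserving class percentages
stratified = df.groupby("class", group_keys=False).sample(frac=0.1, random_state=0)

print(srs["class"].value_counts())
print(stratified["class"].value_counts())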
Sampling

(Figure: from the raw data one can draw a simple random sample without replacement (SRSWOR) or a simple random sample with replacement (SRSWR).)
Sampling
(Figure: raw data partitioned into a cluster/stratified sample.)

• The number of samples drawn from each cluster/stratum is proportional to its size
• Thus, the samples represent the data better and outliers are avoided
