DM Unit I.1-Introduction
UNIT I
◦ Introduction
Basics of Data Mining
Related Concepts
Data Mining Techniques
Basics of DM
Goal: Provide an overview of data mining.
Define data mining
Data mining vs. databases
Basic data mining tasks
Data mining development
Data mining issues
Data is growing at a phenomenal rate
Users expect more sophisticated information
How?
Finding hidden information in a database
Fit data to a model
Similar terms
◦ Exploratory data analysis
◦ Data driven discovery
◦ Deductive learning
              Database                      Data Mining
Query         Well defined; SQL             Poorly defined; no precise query language
Data          Operational data              Not operational data
Output        Precise; subset of database   Fuzzy; not a subset of database
Objective: Fit Data to a Model
◦ Descriptive
◦ Predictive
Preference – Technique to choose the best model
Search – Technique to search the data
◦ “Query”
Database
• Find all credit applicants with last name of Smith.
• Identify customers who have purchased more than $10,000 in the last month.
• Find all customers who have purchased milk.
Data Mining
• Find all credit applicants who are poor credit risks. (classification)
• Identify customers with similar buying habits. (clustering)
• Find all items which are frequently purchased with milk. (association rules)
Classification maps data into predefined groups or classes.
◦ Supervised learning
Classes are determined before examining the data.
Classes are defined based on data attribute values.
E.g. identifying credit risk
◦ Pattern recognition
E.g. airport security screening station
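As a sketch of the idea (not from the slides; the attribute names and thresholds below are invented for illustration), a classifier maps each tuple into one of the predefined classes based on its attribute values:

```python
# Hypothetical rule-based classifier for the credit-risk example.
# The attributes (income, debt) and cutoffs are illustrative only.

def classify_credit(income, debt):
    """Map an applicant into one of two predefined classes."""
    if income > 50000 and debt < 10000:
        return "good risk"
    return "poor risk"

print(classify_credit(60000, 5000))   # good risk
print(classify_credit(20000, 15000))  # poor risk
```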
Time series analysis examines the value of an attribute as it varies over time.
◦ Values are usually obtained at evenly spaced time points (day, week, month, ...)
◦ Three basic functions:
Distance measure: similarity between different time series.
Structure of the line: its behaviour.
Historical time series: predict future values.
◦ E.g. investing in stocks
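The distance-measure function can be sketched as follows (a minimal illustration assuming two equally spaced series of the same length):

```python
import math

def ts_distance(ts1, ts2):
    # Euclidean distance between two equally spaced time series:
    # a smaller distance means the series behave more similarly.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ts1, ts2)))

print(ts_distance([0, 0], [3, 4]))  # 5.0
```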
Prediction is predicting a future state rather than a current state.
◦ Can be viewed as a type of classification.
◦ E.g. flooding
Summarization maps data into subsets with associated simple descriptions.
◦ Characterization
◦ Generalization
Association Rules / Link Analysis uncovers relationships among data.
◦ An association rule is a model that identifies specific types of data associations.
◦ E.g. supermarket basket analysis.
◦ Assists retail store management in advertising, marketing and inventory control.
Sequence discovery / sequential analysis is used to determine sequential patterns in data.
◦ Based on a time sequence of actions.
◦ Items are purchased over time in some order, rather than at the same time as in supermarket basket analysis.
◦ E.g. a person buying a CD player might purchase CDs within a week.
Knowledge Discovery in Databases
(KDD): process of finding useful information and
patterns in data.
Data Mining: use of algorithms to extract the information and patterns derived by the KDD process.
Modified from [FPSS96C]
•Bayes Theorem
•Regression Analysis
•EM Algorithm
•K-Means Clustering
•Time Series Analysis
Human interaction
Overfitting
Outliers
Interpretation of results
Visualization of results
Large datasets
High dimensionality
Multimedia data
Missing data
Irrelevant data
Noisy data
Changing data
Integration
Application
Human Interaction
◦ Problem is not precisely stated.
◦ Need technical and domain experts to interpret the results.
Overfitting
◦ Occurs when the model does not fit future data.
◦ Caused by assumptions about the data or by the small size of the database.
◦ E.g. classifying an employee database to label employees as short, medium and tall.
Outliers
◦ Data entries that do not fit the derived model.
Interpretation of Results
◦ Requires an expert to correctly interpret the results.
Visualization
◦ Easily view and understand the output.
Large Datasets
◦ Massive datasets create problems when applying algorithms designed for small databases.
High Dimensionality (dimensionality curse)
◦ Not all attributes are needed for the DM problem.
◦ These attributes might interfere with the correct completion of the DM task.
◦ Solution: dimensionality reduction
Multimedia Data
◦ Most DM algorithms are targeted at traditional data types.
Missing Data
◦ During the preprocessing step of KDD, missing data are replaced with estimates.
◦ This can lead to invalid results.
Irrelevant Data
◦ Some attributes are unwanted.
Noisy Data
◦ Attribute values might be invalid or incorrect.
Changing Data
◦ Databases are not static.
◦ DM algorithms assume a static database.
◦ The algorithm should be re-run when the database changes.
Integration
◦ The KDD process is not integrated with normal data processing.
◦ A KDD request may be an unusual, special, one-time need.
◦ Goal: integration of DM functions with traditional DBMS functions.
Application
◦ Challenge: the intended use of the information produced by the DM function.
Privacy
Profiling
Unauthorized use
Related Concepts
Goal: Examine some areas which are
related to data mining.
Database/OLTP Systems
Fuzzy Sets and Fuzzy Logic
Information Retrieval
DSS
Dimensional Modeling
Data Warehousing
OLAP
Web Search Engines
Statistics
Machine Learning
Pattern Matching
What is a Database?
Schema
◦ (ID,Name,Address,Salary,JobNo)
Data Model
◦ ER
◦ Relational
Query:
SELECT Name
FROM T
WHERE Salary > 100000
A set is a collection of objects.
Fuzzy Set: a set, F, in which the set membership function f is a real-valued function with output in the range [0,1].
◦ f(x): probability that x is in F.
◦ 1-f(x): probability that x is not in F.
◦ This is not a true probability, but rather the degree of truth associated with the statement that x is in the set.
◦ With a traditional database, the set membership function is boolean.
◦ E.g.
T = {x | x is a person and x is tall}
Let f(x) be the probability that x is tall.
Here f is the membership function and is not boolean.
The result for this query is fuzzy.
◦ E.g. searching on the web is also fuzzy.
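A minimal sketch of a fuzzy membership function for the set T of tall people (the cutoff heights are invented for illustration):

```python
def tall_membership(height_cm):
    # Degree of truth in [0, 1] that a person of this height is "tall".
    # A traditional (boolean) membership function would return only 0 or 1.
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / 40.0  # linear ramp between the two cutoffs

print(tall_membership(170))  # 0.5
```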
[Figure: loan acceptance (accept/reject) as a function of income and loan amount — a simple boolean boundary vs. a fuzzy boundary]
Information Retrieval (IR): retrieving desired information from textual data.
◦ Library Science
◦ Digital Libraries
◦ Web Search Engines
Traditionally keyword based.
Sample query:
Find all documents about “data mining”.
◦ This is a classification task, because the set of documents in the library is divided into classes based on keywords.
The retrieval of a document is based on the calculation of a similarity measure.
DM: Similarity measures; mine text/Web data.
Similarity: measure of how close a query is to a
document.
Documents which are “close enough” are retrieved.
D = {D1, …, Dn} (set of documents)
The input is a query q, a list of keywords.
The similarity between the query and a document is calculated as sim(q, Di).
Metrics:
◦ Precision = |Relevant and Retrieved| / |Retrieved|
◦ Recall = |Relevant and Retrieved| / |Relevant|
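The two metrics can be computed directly from the definitions (a small sketch with made-up document IDs):

```python
def precision_recall(relevant, retrieved):
    relevant, retrieved = set(relevant), set(retrieved)
    hits = relevant & retrieved              # relevant AND retrieved
    return len(hits) / len(retrieved), len(hits) / len(relevant)

# 2 of the 3 retrieved documents are relevant; 2 of the 4 relevant were found
p, r = precision_recall(relevant={1, 2, 3, 4}, retrieved={3, 4, 5})
print(p, r)  # 0.6666666666666666 0.5
```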
Precision: “are all retrieved documents ones that I am interested in?”
Recall: “have all relevant documents been retrieved?”
Two of the four outcomes represent desired results:
◦ Relevant and retrieved
◦ Not relevant and not retrieved
Two of them represent error situations:
◦ Relevant but not retrieved
◦ Not relevant but retrieved
Concept hierarchies
DSS are comprehensive computer systems and related tools that assist managers in making decisions and solving problems.
They improve decision making.
Executive information systems (EIS) and executive support systems (ESS) have evolved from DSS.
Dimensional modeling is a different way to view and interrogate data in a database.
Useful in decision support systems and mining.
Decision support applications require that information be obtained along many dimensions.
E.g. a sales manager's report.
Dimension: collection of logically related attributes; axis for modeling data.
Facts: data stored, usually numeric.
E.g.: Dimensions – products, locations, date
Facts – quantity, unit price
DSS queries may access the facts from many different dimensions and levels.
DM: May view data as dimensional.
ProdID   LocID        Date     Quantity   UnitPrice
123      Dallas       022900   5          25
123      Houston      020100   10         20
150      Dallas       031500   1          100
150      Dallas       031500   5          95
150      Fort Worth   021000   5          80
150      Chicago      012000   20         75
200      Seattle      030100   5          50
300      Rochester    021500   200        5
500      Bradenton    022000   15         20
500      Chicago      012000   10         25
Roll Up: move to a more general level of a dimension.
Drill Down: move to a more specific level of a dimension.
See the cube diagram: 8*7*5 = 280 cells.
Dimension (Aggregation) Hierarchy
◦ Use < to represent the order relationship.
◦ When the levels of a dimension satisfy this structure, the facts along the dimension are said to be additive.
◦ Product < Type < Company.
◦ When this order relationship is satisfied between two levels, there is an aggregate type of relationship among the facts.
E.g. the quantity of product sold of a particular type is the sum of the quantities for all products within that type.
◦ Aggregates include min, max, sum, avg, ...
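The additive roll-up from the Product level to the Type level can be sketched as a simple aggregation (the products and the product-to-type mapping are hypothetical):

```python
# Quantity fact stored at the Product level.
sales = {"TV-1": 5, "TV-2": 10, "Radio-1": 3}
product_type = {"TV-1": "TV", "TV-2": "TV", "Radio-1": "Radio"}

# Roll up: sum the additive fact over all products within each type.
rollup = {}
for product, qty in sales.items():
    t = product_type[product]
    rollup[t] = rollup.get(t, 0) + qty

print(rollup)  # {'TV': 15, 'Radio': 3}
```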
Multidimensional schemas:
◦ Fact table (major table)
◦ Dimension tables (minor tables)
E.g. access all locations in Dallas:

SELECT quantity, price
FROM Facts, Location
WHERE Facts.LocationID = Location.LocationID
AND Location.City = 'Dallas'

The primary key for the fact table is the collection of foreign keys from all the dimension tables.
“Subject-oriented, integrated, time-variant, nonvolatile” – William Inmon
Corporate-wide data (current and historical) are merged into a single repository.
Operational Data: data used in the day-to-day needs of the company (traditional database).
◦ Billing, inventory, payroll, ...
Informational Data: supports other functions such as planning and forecasting (data warehouse).
E.g. a company database.
◦ Motive: to find the reason why so many employees have left the company recently.
View of the data warehouse (refer to the diagram in the textbook):
◦ Since the warehouse is stored as a database, it can be accessed with traditional queries.
Data mining tools often access data warehouses rather than operational data.
               Operational Data      Data Warehouse
Application    OLTP                  OLAP
Use            Precise queries       Ad hoc
Temporal       Snapshot              Historical
Modification   Dynamic               Static
Orientation    Application           Business
Data           Operational values    Integrated
Size           Gigabytes             Terabytes
Level          Detailed              Summarized
Access         Often                 Less often
Response       Few seconds           Minutes
Data Schema    Relational            Star/Snowflake
Several ways to improve performance of data
warehouse:
◦ Summarization
◦ Denormalization
◦ Partitioning
OnLine Analytic Processing (OLAP): supports more complex queries than OLTP.
OnLine Transaction Processing (OLTP): traditional database/transaction processing.
An OLAP application involves the analysis of actual data.
OLAP tools may be used in DSS.
The goal of OLAP is to support the ad hoc querying needed by DSS.
OLAP is an application view, not a data structure or schema.
OLAP requires a multidimensional view of data (cube view).
Types of OLAP operations:
◦ Slice: examine a sub-cube obtained by fixing one dimension.
◦ Dice: rotate the cube to look at other dimensions; fixes two or more dimensions.
◦ Roll Up: dimension reduction, aggregation.
Instead of looking at one single fact, look at all facts.
E.g. overall sales of the company.
◦ Drill Down:
More detailed fact information.
E.g. quantities sold within a specific area of each city.
◦ Visualization
Simple descriptive models
Statistical inference: generalizing a model
created from a sample of the data to the entire
dataset.
Exploratory Data Analysis:
◦ Data can actually drive the creation of the model
◦ Opposite of traditional statistical view.
Data mining is targeted at the business user.
Machine Learning: area of AI that examines how to write programs that can learn.
Often used in classification and prediction.
◦ The computer makes a prediction and, based on feedback as to whether it was correct, “learns” from this feedback.
◦ When a similar situation arises in the future, this feedback is used to make the same prediction or a completely different one.
Supervised Learning: learns by example.
Unsupervised Learning: learns without knowledge of correct answers.
Machine learning often deals with small static datasets.
Applications:
◦ Speech recognition
◦ Training moving robots
◦ Classification of astronomical structures
◦ Game playing
DM: Uses many machine learning techniques.
Pattern Matching/Pattern Recognition: finds occurrences of a predefined pattern in the data.
Applications:
◦ speech recognition
◦ information retrieval and web search engines
◦ time series analysis (examining patterns of behavior in data from different time series)
Data Mining Techniques
Goal: Provide an overview of basic data
mining techniques
Statistical
◦ Point Estimation
◦ Models Based on Summarization
◦ Bayes Theorem
◦ Hypothesis Testing
◦ Regression and Correlation
Similarity Measures
Decision Trees
Neural Networks
◦ Activation Functions
Genetic Algorithms
Point Estimate: estimate a population parameter.
May be made by calculating the parameter for a sample.
May be used to predict a value for missing data.
Ex:
◦ R contains 100 employees
◦ 99 have salary information
◦ The mean salary of these is $50,000
◦ Use $50,000 as the value of the remaining employee's salary.
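The example can be sketched directly (the individual salaries below are made up so that their mean is $50,000):

```python
# 99 known salaries; their mean is used as a point estimate for the
# one employee whose salary is missing.
salaries = [52000, 48000, 50000] * 33          # 99 values
mean_estimate = sum(salaries) / len(salaries)  # sample mean

missing_salary = mean_estimate  # impute the missing value
print(missing_salary)  # 50000.0
```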
In statistics, an estimator is a function of the observable sample data that is used to estimate an unknown population parameter; an estimate is the result of actually applying that function to a particular sample of data.
Bias: difference between the expected value of an estimator and the actual parameter value.
Why square the errors? So that positive and negative errors do not cancel, and larger errors are penalized more heavily.
Root Mean Square Error (RMSE): the square root of the mean squared error.
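A minimal sketch of the RMSE computation (the estimate/actual pairs are invented):

```python
import math

def rmse(estimates, actuals):
    # Square the errors so positive and negative deviations do not
    # cancel, average them, then take the square root.
    n = len(estimates)
    return math.sqrt(sum((e - a) ** 2
                         for e, a in zip(estimates, actuals)) / n)

print(rmse([50000, 50000], [48000, 52000]))  # 2000.0
```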
Maximum likelihood estimation (MLE) is
a popular statistical method used to make
inferences about parameters of the underlying
probability distribution from a given data set
Obtain parameter estimates that maximize
the probability that the sample data occurs
for the specific model.
The likelihood L is the joint probability of observing the sample data.
Maximize L.
Coin toss five times: {H,H,H,H,T}
Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is L = (1/2)^5 = 0.03125.
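The coin-toss likelihood can be evaluated and maximized numerically; in this sketch a grid search stands in for the usual calculus-based maximization:

```python
# Likelihood of observing the sequence {H,H,H,H,T} when P(heads) = p.
def likelihood(p, heads=4, tails=1):
    return p ** heads * (1 - p) ** tails

print(likelihood(0.5))  # 0.03125 -- the fair-coin value

# MLE: the p that maximizes the likelihood of the observed sample.
candidates = [i / 100 for i in range(1, 100)]
mle = max(candidates, key=likelihood)
print(mle)  # 0.8, i.e. 4 heads out of 5 tosses
```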
General likelihood formula: L(Θ | x1, …, xn) = f(x1 | Θ) · f(x2 | Θ) ··· f(xn | Θ), the product of the probabilities of the individual observations.
In descriptive statistics, a box plot (also known as
a box-and-whisker diagram or plot or
candlestick chart) is a convenient way of
graphically depicting the five-number summary,
which consists of the smallest observation, lower
quartile (Q1), median, upper quartile (Q3), and
largest observation.
Visualization: Frequency distribution, mean, variance,
median, mode, etc.
Box Plot:
The probability of an event A conditional on
another event B is generally different from the
probability of B conditional on A. However, there
is a definite relationship between the two, and
Bayes' theorem is the statement of that
relationship.
Posterior Probability: P(h1|xi)
Prior Probability: P(h1)
Bayes Theorem: P(h1|xi) = P(xi|h1) P(h1) / P(xi)
Credit authorizations (hypotheses): h1 = authorize purchase, h2 = authorize after further identification, h3 = do not authorize, h4 = do not authorize but contact police.
Assign twelve data values for all combinations of credit and income:

Credit \ Income   1    2    3    4
Excellent         x1   x2   x3   x4
Good              x5   x6   x7   x8
Bad               x9   x10  x11  x12
Training Data:

ID   Income   Credit      Class   xi
1    4        Excellent   h1      x4
2    3        Good        h1      x7
3    2        Excellent   h1      x2
4    3        Good        h1      x7
5    4        Good        h1      x8
6    2        Excellent   h1      x2
7    3        Bad         h2      x11
8    2        Bad         h2      x10
9    3        Bad         h3      x11
10   1        Bad         h4      x9
Calculate P(xi|hj) and P(xi).
Ex: P(x7|h1) = 2/6; P(x4|h1) = 1/6; P(x2|h1) = 2/6; P(x8|h1) = 1/6; P(xi|h1) = 0 for all other xi.
Predict the class for x4:
◦ Calculate P(hj|x4) for all hj.
◦ Place x4 in the class with the largest value.
◦ Ex:
P(h1|x4) = P(x4|h1) P(h1) / P(x4)
         = (1/6)(0.6)/0.1 = 1.
So x4 is placed in class h1.
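The posterior computation can be checked with exact arithmetic; the values come from the training data (6 of the 10 tuples are class h1, and one of those six is x4):

```python
from fractions import Fraction

p_x4_given_h1 = Fraction(1, 6)   # one of the six h1 tuples is x4
p_h1 = Fraction(6, 10)           # prior: six of ten tuples are h1
p_x4 = Fraction(1, 10)           # one of the ten tuples is x4

posterior = p_x4_given_h1 * p_h1 / p_x4  # Bayes theorem
print(posterior)  # 1 -> x4 is placed in class h1
```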
Regression analysis is the process used to estimate the parameter values of a function, in which the function predicts the value of a response variable in terms of the values of other variables.
Predict future values based on past values.
Linear regression assumes that a linear relationship exists:
y = c0 + c1x1 + … + cnxn
Find the coefficient values that best fit the data.
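For a single predictor, the best-fit coefficients have a closed form; a minimal sketch (the data points are invented to lie exactly on y = 1 + 2x):

```python
# Ordinary least squares fit of y = c0 + c1*x.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    c0 = my - c1 * mx
    return c0, c1

c0, c1 = linear_fit([1, 2, 3, 4], [3, 5, 7, 9])
print(c0, c1)  # 1.0 2.0
```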
Correlation, also called the correlation coefficient, indicates the strength and direction of a linear relationship between two random variables.
Examine the degree to which the values for two
variables behave similarly.
Correlation coefficient r:
• 1 = perfect correlation
• -1 = perfect but opposite correlation
• 0 = no correlation
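A sketch of the Pearson correlation coefficient, checked against two of the extreme cases listed above (the sample data are invented):

```python
import math

def correlation(xs, ys):
    # Pearson correlation coefficient r in [-1, 1].
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ssx = sum((x - mx) ** 2 for x in xs)
    ssy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(ssx * ssy)

print(correlation([1, 2, 3], [2, 4, 6]))  # 1.0  (perfect correlation)
print(correlation([1, 2, 3], [6, 4, 2]))  # -1.0 (perfect but opposite)
```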
Determine similarity between two objects.
Similarity characteristics:
Measure dissimilarity between objects
Decision Tree (DT):
◦ Tree where the root and each internal node is labeled with
a question.
◦ The arcs represent each possible answer to the associated
question.
◦ Each leaf node represents a prediction of a solution to the
problem.
Popular technique for classification; Leaf node
indicates class to which the corresponding tuple
belongs.
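Applying a decision tree is just a walk from the root to a leaf; a toy sketch for a height example (the split points are invented):

```python
# Each internal node asks a question; each leaf predicts a class.
def classify_height(height_m):
    if height_m < 1.6:        # root question
        return "short"
    if height_m > 1.8:        # second internal node
        return "tall"
    return "medium"

print([classify_height(h) for h in (1.5, 1.7, 1.95)])
# ['short', 'medium', 'tall']
```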
A Decision Tree Model is a computational
model consisting of three parts:
◦ Decision Tree
◦ Algorithm to create the tree
◦ Algorithm that applies the tree to data
Creation of the tree is the most difficult part.
Processing (applying the tree to a tuple) is basically a search similar to that in a binary search tree, although a decision tree need not be binary.
Advantages:
◦ Easy to understand.
◦ Easy to generate rules
Disadvantages:
◦ May suffer from overfitting.
◦ Classifies by rectangular partitioning.
◦ Does not easily handle nonnumeric data.
◦ Can be quite large – pruning is necessary.
Based on the observed functioning of the human brain (Artificial Neural Networks, ANN).
Our view of neural networks is very simplistic: we view a neural network (NN) from a graphical viewpoint.
Used in pattern recognition, speech recognition,
computer vision, and classification.
A Neural Network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i,j> | 1 <= i, j <= n}, with the following restrictions:
◦ V is partitioned into a set of input nodes, VI, hidden nodes, VH, and output nodes, VO.
◦ The vertices are also partitioned into layers.
◦ Any arc <i,j> must have node i in layer h-1 and node j in layer h.
◦ Arc <i,j> is labeled with a numeric value wij.
◦ Node i is labeled with a function fi.
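Propagation through such a graph can be sketched as follows (the layer sizes, the weights wij and the sigmoid choice for the node functions fi are all illustrative assumptions):

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

def feedforward(inputs, layers):
    """layers: one weight matrix per non-input layer; entry w[i][j]
    labels the arc from node i (previous layer) to node j."""
    activations = inputs
    for weights in layers:
        activations = [sigmoid(sum(w * x for w, x in zip(col, activations)))
                       for col in zip(*weights)]  # one weighted sum per node j
    return activations

hidden = [[0.5, -0.5], [0.5, 0.5]]  # 2 inputs -> 2 hidden nodes
output = [[1.0], [-1.0]]            # 2 hidden -> 1 output node
print(feedforward([1.0, 0.0], [hidden, output]))  # single value in (0, 1)
```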
Activation functions are associated with the nodes in the graph.
The output may be in the range [-1,1] or [0,1].
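Two common activation functions, one for each of the two output ranges:

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))   # output in (0, 1)

def tanh_activation(s):
    return math.tanh(s)             # output in (-1, 1)

print(sigmoid(0), tanh_activation(0))  # 0.5 0.0
```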
A Neural Network Model is a computational
model consisting of three parts:
◦ Neural Network graph
◦ Learning algorithm that indicates how learning takes
place.
◦ Recall techniques that determine how information is
obtained from the network.
Advantages:
◦ Learning
◦ Can continue learning even after the training set has been applied.
Disadvantages:
◦ Difficult to understand.
◦ May suffer from overfitting.
◦ Input values must be numeric.
◦ Verification is difficult.
Genetic algorithms are optimization search algorithms.
They create an initial feasible solution and iteratively create new, “better” solutions.
Based on human evolution and survival of the fittest:
◦ Chromosomes, which are DNA strings, provide the abstract model for a living organism.
◦ Subsections of the chromosomes, called genes, define different traits of the individual.
A solution must be represented as an individual.
Individual: string I = I1, I2, …, In where Ij is in a given alphabet A.
Each character Ij is called a gene.
Population: set of individuals.
Example individuals represented as bit strings: 000 000 000 111 000 and 000 00 000 111 00
A Genetic Algorithm (GA) is a
computational model consisting of five parts:
◦ A starting set of individuals, P.
◦ Crossover: technique to combine two parents to
create offspring.
◦ Mutation: randomly change an individual.
◦ Fitness: determine the best individuals.
◦ Algorithm which applies the crossover and
mutation techniques to P iteratively using the fitness
function to determine the best individuals in P to
keep.
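A toy sketch of the five parts working together (everything here — the bit-string length, population size, fitness function and rates — is an illustrative assumption; the fitness simply counts 1-bits):

```python
import random

random.seed(0)  # deterministic run for illustration

def fitness(ind):
    return sum(ind)                      # best individuals have the most 1s

def crossover(p1, p2):
    point = random.randrange(1, len(p1)) # single-point crossover
    return p1[:point] + p2[point:]

def mutate(ind, rate=0.05):
    return [1 - g if random.random() < rate else g for g in ind]

# Starting set of individuals P: random 10-gene bit strings.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]

for _ in range(50):                      # iterative improvement
    population.sort(key=fitness, reverse=True)
    parents = population[:10]            # keep the fittest individuals
    offspring = [mutate(crossover(random.choice(parents),
                                  random.choice(parents)))
                 for _ in range(10)]
    population = parents + offspring

print(max(fitness(ind) for ind in population))  # close to 10 (all ones)
```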
Advantages
◦ Easily parallelized
Disadvantages
◦ Difficult to understand and explain to end users.
◦ Abstraction of the problem and method to represent
individuals is quite difficult.
◦ Determining fitness function is difficult.
◦ Determining how to perform crossover and mutation is
difficult.