Instructor: Junghye Lee
School of Management Engineering
junghyelee@unist.ac.kr
• Each independent variable X (Standard 1):
– Discrete variable: nominal and ordinal
– Continuous variable: continuous
• Dependent variable y (Standard 2):
– Numeric variable: continuous and ordinal
– Nominal variable: nominal
* Nominal = Categorical
• The types of the independent variables X and the dependent variable y determine the problem type:
X           | y          | Problem
Nominal*    | Nominal    | Classification
Nominal*    | Continuous | Regression
Continuous  | Nominal    | Classification
Continuous  | Continuous | Regression
* Nominal X is transformed into dummy variables.
• Split the data (X, y) into a training set (80%) and a test set (20%).
– Training set: used to learn the model (a regression model or a classifier).
– Test set: used to evaluate the learned model.
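A minimal sketch of this 80/20 split, assuming scikit-learn and its bundled iris data as a stand-in for (X, y); the slides do not name a dataset or a library.

```python
# Minimal sketch of an 80/20 train/test split (assumes scikit-learn).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)       # placeholder data for illustration

# 80% training set, 20% test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier()        # the classifier to be learned
model.fit(X_train, y_train)             # learn from the training set
print(model.score(X_test, y_test))      # evaluate on the held-out test set
```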
• Predicting commute time
– If we leave at 10 AM and there are no cars stalled on the road, what will our commute time be?
– Q1: What is the D.V.?
– Q2: How many I.V.s are there?
– Q3: What is the type of each variable?
[Decision tree: Leave At = 8 AM → Long; Leave At = 9 AM → Accident? (No → Medium, Yes → Long); Leave At = 10 AM → Stall? (No → Short, Yes → Long)]
Rule-based prediction model

Training Data:
Tid | Refund | Marital Status | Taxable Income | Cheat
 1  | Yes    | Single         | 125K           | No
 2  | No     | Married        | 100K           | No
 3  | No     | Single         | 70K            | No
 4  | Yes    | Married        | 120K           | No
 5  | No     | Divorced       | 95K            | Yes
 6  | No     | Married        | 60K            | No
 7  | Yes    | Divorced       | 220K           | No
 8  | No     | Single         | 85K            | Yes
 9  | No     | Married        | 75K            | No
 10 | No     | Single         | 90K            | Yes

Decision Tree (splitting attributes):
Refund = Yes → NO
Refund = No  → MarSt
  MarSt = Married → NO
  MarSt = Single or Divorced → TaxInc
    TaxInc < 80K → NO
    TaxInc > 80K → YES
Applying the decision tree learned from the training data to test data:

Test Data:
Refund | Marital Status | Taxable Income | Cheat
No     | Married        | 80K            | ?

Refund = No → MarSt = Married → NO, so assign Cheat = "No".
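A sketch of fitting such a tree and classifying the test record, assuming scikit-learn and pandas; the dummy-variable encoding and the library choice are mine, not from the slides.

```python
# Sketch: fit a decision tree to the ten-record Cheat table above and
# classify the test record (Refund = No, Married, 80K).
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

train = pd.DataFrame({
    "Refund": ["Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No", "No"],
    "MarSt":  ["Single", "Married", "Single", "Married", "Divorced",
               "Married", "Divorced", "Single", "Married", "Single"],
    "TaxInc": [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],
    "Cheat":  ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"],
})

# Nominal attributes are turned into dummy variables, as the earlier slide notes
X = pd.get_dummies(train[["Refund", "MarSt"]]).join(train["TaxInc"])
y = train["Cheat"]
tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)

test = pd.DataFrame({"Refund": ["No"], "MarSt": ["Married"], "TaxInc": [80]})
X_test = pd.get_dummies(test[["Refund", "MarSt"]]).join(test["TaxInc"])
X_test = X_test.reindex(columns=X.columns, fill_value=0)  # align dummy columns
print(tree.predict(X_test))  # expected ['No']: the record matches only non-cheaters
```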
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion.
• Issues
– Determine how to split the records
How to specify the attribute test condition?
How to determine the best split?
– Determine when to stop splitting
• Depends on attribute types
– Nominal
– Ordinal
– Continuous
• Depends on number of ways to split
– 2-way split
– Multi-way split
• Multi-way split
– Use as many partitions as distinct values
  e.g., CarType: Family / Sports / Luxury
• Binary split
– Divides values into two subsets
– Need to find optimal partitioning
  e.g., CarType: {Sports, Luxury} vs {Family}, or {Family, Luxury} vs {Sports}
• Multi-way split
– Use as many partitions as distinct values.
  e.g., Size: Small / Medium / Large
• Binary split
– Divides values into two subsets.
– Need to find optimal partitioning.
  e.g., Size: {Medium, Large} vs {Small}
  What about {Small, Large} vs {Medium}? This grouping separates Medium from its neighbors and so does not respect the ordering of the ordinal attribute.
• Different ways of handling
– Discretization to form an ordinal categorical attribute
  Static: discretize once at the beginning (pre-processing), one attribute at a time (independently)
  Dynamic: discretize while the model is being built, when the learner requires it; can take interdependencies among attributes into account
– Binary decision: (𝐴 < 𝑣) or (𝐴 ≥ 𝑣)
  Consider all possible splits and find the best cut
  Can be more computation-intensive
[Examples: (i) binary split: Taxable Income > 80K? Yes / No; (ii) multi-way split: Taxable Income in {< 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K}]
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion.
• Issues
– Determine how to split the records
How to specify the attribute test condition?
How to determine the best split?
– Determine when to stop splitting
Before splitting: 10 records of class C0 and 10 records of class C1.

Candidate test conditions and the resulting class counts:
• Own Car?  Yes: C0=6, C1=4   No: C0=4, C1=6
• Car Type?  Family: C0=1, C1=3   Sports: C0=8, C1=0   Luxury: C0=1, C1=7
• Student ID?  c1..c10: C0=1, C1=0 each   c11..c20: C0=0, C1=1 each

Which test condition is the best?
• Greedy approach:
– Nodes with homogeneous class distribution are preferred
• Need a measure of node impurity:
– e.g., C0=5, C1=5: non-homogeneous, high degree of impurity; C0=9, C1=1: homogeneous, low degree of impurity
• Measures of node impurity
– Gini Index
– Entropy
– Misclassification error
How to find the best split: before splitting, the node has class counts C0 = N00 and C1 = N01 and impurity M0. A candidate split on attribute A produces children N1 (C0 = N10, C1 = N11) and N2 (C0 = N20, C1 = N21) with impurities M1 and M2, combined into the weighted impurity M12; a candidate split on B produces N3 and N4 with impurities M3 and M4, combined into M34. Compare Gain = M0 − M12 vs. M0 − M34 and choose the split with the larger gain.
• Gini Index for a given node t:
  $\mathrm{GINI}(t) = 1 - \sum_{j} [\,p(j \mid t)\,]^2$
  (NOTE: $p(j \mid t)$ is the relative frequency of class $j$ at node $t$.)
– Maximum when records are equally distributed among all classes, implying least interesting information
– Minimum when all records belong to one class, implying most interesting information
Examples (two classes, 6 records):
• C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1; Gini = 1 − 0² − 1² = 0.000
• C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6; Gini = 1 − (1/6)² − (5/6)² = 0.278
• C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6; Gini = 1 − (2/6)² − (4/6)² = 0.444
• C1 = 3, C2 = 3: Gini = 1 − (3/6)² − (3/6)² = 0.500
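A minimal sketch of the node Gini computation, reproducing the four examples above:

```python
# Sketch: Gini index of a single node from its class counts,
# GINI(t) = 1 - sum_j p(j|t)^2.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

for c1, c2 in [(0, 6), (1, 5), (2, 4), (3, 3)]:
    print(f"C1={c1}, C2={c2}: Gini = {gini([c1, c2]):.3f}")
# prints 0.000, 0.278, 0.444, 0.500
```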
• Used in CART, SLIQ, SPRINT.
• When a node $p$ is split into $k$ partitions (children), the quality of the split is computed as
  $\mathrm{GINI}_{split} = \sum_{i=1}^{k} \frac{n_i}{n}\, \mathrm{GINI}(i)$
  where $n_i$ = number of records at child $i$ and $n$ = number of records at node $p$.
• Splits into two partitions
• Effect of weighting partitions:
– Larger and purer partitions are sought.

Example (split on B?):
Parent: C1 = 6, C2 = 6; Gini = 0.500
Node N1: C1 = 5, C2 = 2; Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Node N2: C1 = 1, C2 = 4; Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
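A sketch of the weighted split quality, reproducing the N1/N2 example above:

```python
# Sketch: quality of a split as the weighted Gini of the children,
# GINI_split = sum_i (n_i / n) * GINI(i).
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """children: one class-count list per child node."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

n1, n2 = [5, 2], [1, 4]                # N1: C1=5, C2=2   N2: C1=1, C2=4
print(round(gini(n1), 3))              # 0.408
print(round(gini(n2), 3))              # 0.320
print(round(gini_split([n1, n2]), 3))  # 0.371
```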
• For each distinct value, gather the counts for each class in the dataset
• Use the count matrix to make decisions

Multi-way split (CarType):
     Family | Sports | Luxury
C1        1 |      2 |      1
C2        4 |      1 |      1
Gini = 0.393

Two-way splits (find the best partition of values):
     {Sports, Luxury} | {Family}
C1                  3 |        1
C2                  2 |        4
Gini = 0.400

     {Sports} | {Family, Luxury}
C1          2 |                2
C2          1 |                5
Gini = 0.419
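As a check on these numbers, a small sketch that evaluates the multi-way split and every two-way grouping of the three CarType values from the count matrix above; the third grouping, {Luxury} vs {Family, Sports}, is not shown on the slide.

```python
# Sketch: evaluate CarType splits from the count matrix above.
# Reproduces Gini = 0.393 (multi-way), 0.400 and 0.419 (two-way groupings).
counts = {"Family": [1, 4], "Sports": [2, 1], "Luxury": [1, 1]}   # [C1, C2]

def gini(c):
    n = sum(c)
    return 1.0 - sum((x / n) ** 2 for x in c)

def gini_split(children):
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

# Multi-way split: one child per distinct value
print("multi-way:", round(gini_split(list(counts.values())), 3))   # 0.393

# Two-way groupings: with three values, each grouping isolates one value
for v in counts:
    rest = [u for u in counts if u != v]
    left = counts[v]
    right = [sum(counts[u][j] for u in rest) for j in range(2)]
    print({v}, "vs", set(rest), round(gini_split([left, right]), 3))
```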
• Use binary decisions based on one value
• Several choices for the splitting value
– Number of possible splitting values = number of distinct values
• Each splitting value $v$ has a count matrix associated with it
– Class counts in each of the partitions, 𝐴 < 𝑣 and 𝐴 ≥ 𝑣
• Simple method to choose the best $v$
– For each $v$, scan the database to gather the count matrix and compute its Gini index
– Computationally inefficient! Repetition of work
[Example: the binary test Taxable Income > 80K? with branches Yes / No, applied to the same Tid / Refund / Marital Status / Taxable Income / Cheat training table as before]
• For efficient computation: for each attribute,
– Sort the attribute on its values
– Linearly scan these values, each time updating the count matrix and computing the Gini index
– Choose the split position that has the least Gini index

Cheat:          No   No   No   Yes  Yes  Yes  No   No   No   No
Sorted values:  60   70   75   85   90   95   100  120  125  220

Split position | Yes (<=, >) | No (<=, >) | Gini
 55            | 0, 3        | 0, 7       | 0.420
 65            | 0, 3        | 1, 6       | 0.400
 72            | 0, 3        | 2, 5       | 0.375
 80            | 0, 3        | 3, 4       | 0.343
 87            | 1, 2        | 3, 4       | 0.417
 92            | 2, 1        | 3, 4       | 0.400
 97            | 3, 0        | 3, 4       | 0.300
 110           | 3, 0        | 4, 3       | 0.343
 122           | 3, 0        | 5, 2       | 0.375
 172           | 3, 0        | 6, 1       | 0.400
 230           | 3, 0        | 7, 0       | 0.420

The best split position is 97, with the smallest Gini index of 0.300.
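A sketch of this linear scan over the sorted values (plain Python). The slide's split positions are the midpoints between consecutive values, shown rounded down (e.g., 97 for 97.5), plus one extra position below and one above the range, both of which give the unsplit Gini of 0.420.

```python
# Sketch of the linear scan: walk the records sorted by Taxable Income,
# updating the class counts on each side of the candidate cut.
values = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]             # sorted
labels = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts) if n else 0.0

total = {"Yes": labels.count("Yes"), "No": labels.count("No")}
left = {"Yes": 0, "No": 0}
n = len(labels)

best = None
for i in range(1, n):
    left[labels[i - 1]] += 1                  # move one record to the left side
    cut = (values[i - 1] + values[i]) / 2     # candidate split position (midpoint)
    right = {k: total[k] - left[k] for k in total}
    g = i / n * gini(list(left.values())) + (n - i) / n * gini(list(right.values()))
    print(cut, round(g, 3))
    if best is None or g < best[1]:
        best = (cut, g)
print("best cut:", best)                      # (97.5, 0.3); listed as 97 on the slide
```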
• Entropy at a given node t:
  $\mathrm{Entropy}(t) = -\sum_{j} p(j \mid t)\, \log p(j \mid t)$
  (NOTE: $p(j \mid t)$ is the relative frequency of class $j$ at node $t$.)
– Measures the homogeneity of a node.
  Maximum when records are equally distributed among all classes, implying least information
  Minimum when all records belong to one class, implying most information
– Entropy-based computations are similar to the Gini index computations
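A minimal sketch of the node entropy, using base-2 logarithms (the slides do not state the base, but it matches the 0.8813 parent entropy used later in the missing-value example):

```python
# Sketch: entropy of a node from its class counts,
# Entropy(t) = -sum_j p(j|t) log2 p(j|t), with 0 log 0 taken as 0.
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

print(entropy([0, 6]))            # 0.0 : all records in one class
print(entropy([3, 3]))            # 1.0 : records equally distributed
print(round(entropy([3, 7]), 4))  # 0.8813, reused later in the slides
```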
• Information Gain:
  $\mathrm{GAIN}_{split} = \mathrm{Entropy}(p) - \sum_{i=1}^{k} \frac{n_i}{n}\, \mathrm{Entropy}(i)$
  where parent node $p$ is split into $k$ partitions and $n_i$ is the number of records in partition $i$.
– Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
– Used in ID3 and C4.5
– Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
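A sketch of the information gain computation, applied to the "Own Car?" split from the earlier figure (parent 10/10, children 6/4 and 4/6):

```python
# Sketch: information gain of a split,
# GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(child_i).
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def information_gain(parent, children):
    n = sum(parent)
    weighted = sum(sum(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

# "Own Car?" split: parent [10, 10], children [6, 4] and [4, 6]
print(round(information_gain([10, 10], [[6, 4], [4, 6]]), 3))   # ~0.029
```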
• Gain Ratio:
  $\mathrm{GainRATIO}_{split} = \dfrac{\mathrm{GAIN}_{split}}{\mathrm{SplitINFO}}, \qquad \mathrm{SplitINFO} = -\sum_{i=1}^{k} \frac{n_i}{n} \log \frac{n_i}{n}$
  where parent node $p$ is split into $k$ partitions and $n_i$ is the number of records in partition $i$.
– Adjusts the Information Gain by the entropy of the partitioning (SplitINFO)
– Higher-entropy partitioning (a large number of small partitions) is penalized!
– Used in C4.5
– Designed to overcome the disadvantage of Information Gain
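A sketch comparing raw information gain with gain ratio on the "Car Type?" and "Student ID?" splits from the earlier figure: Student ID has the larger raw gain but the smaller gain ratio, because its 20 tiny partitions are penalized by SplitINFO.

```python
# Sketch: gain ratio = GAIN_split / SplitINFO, where SplitINFO depends only on
# the partition sizes.
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

def gain_ratio(parent, children):
    n = sum(parent)
    split_info = -sum(sum(c) / n * log2(sum(c) / n) for c in children)
    return info_gain(parent, children) / split_info

parent     = [10, 10]                       # 10 records of C0, 10 of C1
car_type   = [[1, 3], [8, 0], [1, 7]]       # Family, Sports, Luxury
student_id = [[1, 0]] * 10 + [[0, 1]] * 10  # one pure partition per ID

for name, kids in [("CarType", car_type), ("StudentID", student_id)]:
    print(name, round(info_gain(parent, kids), 3), round(gain_ratio(parent, kids), 3))
# CarType   0.620 0.408
# StudentID 1.000 0.231
```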
• Classification error at a node t:
  $\mathrm{Error}(t) = 1 - \max_{i} P(i \mid t)$
• Measures the misclassification error made by a node
– Maximum when records are equally distributed among all classes, implying least interesting information
– Minimum when all records belong to one class, implying most interesting information
Examples (two classes, 6 records):
• C1 = 0, C2 = 6: Error = 0.000
• C1 = 1, C2 = 5: Error = 1/6
• C1 = 2, C2 = 4: Error = 2/6
• C1 = 3, C2 = 3: Error = 1/2
• For a two-class problem, all three impurity measures (Gini index, entropy, misclassification error) are zero for a pure node and largest when the two classes are equally represented.
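A quick sketch that evaluates all three impurity measures for a two-class node with class fraction p; the p values chosen match the count examples above.

```python
# Sketch: compare Gini, entropy, and misclassification error for a
# two-class node whose first class has relative frequency p.
from math import log2

def gini(p):    return 1 - p ** 2 - (1 - p) ** 2
def entropy(p): return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))
def error(p):   return 1 - max(p, 1 - p)

for p in [0.0, 1 / 6, 2 / 6, 0.5]:
    print(f"p={p:.3f}  gini={gini(p):.3f}  entropy={entropy(p):.3f}  error={error(p):.3f}")
# All three are 0 for a pure node and peak at p = 0.5.
```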
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion.
• Issues
– Determine how to split the records
How to specify the attribute test condition?
How to determine the best split?
– Determine when to stop splitting
• Stop expanding a node when all the records belong to the
same class
• Stop expanding a node when all the records have similar
attribute values
• Early termination (to be discussed later)
• Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other classification techniques for many
simple data sets
• Example: C4.5
– Simple depth-first construction
– Uses Information Gain
– Sorts continuous attributes at each node
• SLIQ (EDBT’96 — Mehta et al.)
– Builds an index for each attribute; only the class list and the current attribute list reside in memory
• SPRINT (VLDB’96 — J. Shafer et al.)
– Constructs an attribute list data structure
• PUBLIC (VLDB’98 — Rastogi & Shim)
– Integrates tree splitting and tree pruning: stops growing the tree earlier
• RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
– Builds an AVC-list (attribute, value, class label)
• BOAT (PODS’99 — Gehrke, Ganti, Ramakrishnan & Loh)
– Uses bootstrapping to create several small samples
• Underfitting and overfitting
• Classification error
• Missing values
• Underfitting: when the model is too simple, both training and test errors are large
• Overfitting: the decision boundary is distorted by noise points
• A lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region
• An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task
• Re-substitution errors: error on training data (Σe(t))
• Generalization errors: error on testing data (Σe'(t))
• Methods for estimating generalization errors:
– Optimistic approach: e'(t) = e(t)
– Pessimistic approach:
  For each leaf node: e'(t) = e(t) + 0.5
  Total errors: e'(T) = e(T) + N × 0.5 (N: number of leaf nodes)
  For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances):
    Training error = 10/1000 = 1%
    Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
– Reduced error pruning (REP):
  uses a validation data set to estimate the generalization error
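A minimal sketch of the pessimistic estimate, reproducing the 30-leaf example above:

```python
# Sketch: pessimistic generalization error = (training errors + 0.5 per leaf) / instances.
def pessimistic_error(train_errors, n_leaves, n_instances, penalty=0.5):
    return (train_errors + n_leaves * penalty) / n_instances

print(10 / 1000)                        # training error = 1%
print(pessimistic_error(10, 30, 1000))  # (10 + 30 * 0.5) / 1000 = 2.5%
```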
• Overfitting results in decision trees that are more complex
than necessary.
• Training error no longer provides a good estimate of how well
the tree will perform on previously unseen records.
• Occam's Razor
– Given two models with similar generalization errors, one should prefer the simpler model over the more complex model
– For a complex model, there is a greater chance that it was fitted accidentally by errors in the data
– Therefore, one should include model complexity when evaluating a
model
• Pre-Pruning (Early Stopping Rule)
– Stop the algorithm before it becomes a fully-grown tree
– Typical stopping conditions for a node:
Stop if all instances belong to the same class
Stop if all the attribute values are the same
– More restrictive conditions:
Stop if the number of instances is less than some user-specified threshold
Stop if the class distribution of the instances is independent of the available features (e.g., using the χ² test)
Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain)
• Post-pruning
– Grow decision tree to its entirety
– Trim the nodes of the decision tree in a bottom-up fashion
– If generalization error improves after trimming, replace sub-tree by a
leaf node.
– Class label of leaf node is determined from majority class of
instances in the sub-tree
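Both pruning styles have rough counterparts in scikit-learn: pre-pruning corresponds to the early-stopping parameters, and post-pruning to cost-complexity pruning. A sketch under that analogy (the dataset and parameter values are arbitrary choices, not from the slides):

```python
# Sketch: pre-pruning vs. post-pruning as exposed by scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Pre-pruning: early-stopping conditions supplied before the tree is grown
pre = DecisionTreeClassifier(min_samples_leaf=10, max_depth=4, random_state=0)
pre.fit(X_tr, y_tr)

# Post-pruning: grow fully, then trim via cost-complexity pruning (ccp_alpha);
# here a fixed alpha is used, while in practice a validation set can pick it
post = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0)
post.fit(X_tr, y_tr)

print("pre-pruned  validation accuracy:", round(pre.score(X_val, y_val), 3))
print("post-pruned validation accuracy:", round(post.score(X_val, y_val), 3))
```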
Example of post-pruning:
Node before splitting: Class = Yes: 20, Class = No: 10
  Training error (before splitting) = 10/30
  Pessimistic error = (10 + 0.5)/30 = 10.5/30
Split on A? into four children A1, A2, A3, A4:
  A1: Class = Yes: 8, Class = No: 4
  A2: Class = Yes: 3, Class = No: 4
  A3: Class = Yes: 4, Class = No: 1
  A4: Class = Yes: 5, Class = No: 1
  Training error (after splitting) = 9/30
  Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30
Since 11/30 > 10.5/30, PRUNE the sub-tree!
Parent node: C0 = 13, C1 = 7; Error = 7/20
  Case 1: children (C0: 11, C1: 3) and (C0: 2, C1: 4)
  Case 2: children (C0: 9, C1: 3) and (C0: 4, C1: 4)
• Optimistic error? Don't prune in either case
• Pessimistic error? Don't prune case 1, prune case 2
• Reduced error pruning? Depends on the validation set
• Missing values affect decision tree construction in three
different ways:
– Affects how impurity measures are computed
– Affects how an instance with a missing value is distributed to the child nodes
– Affects how a test instance with a missing value is classified
Example: computing the impurity measure with a missing value (split on Refund; record 10 has Refund = ?)

Tid | Refund | Marital Status | Taxable Income | Class
 1  | Yes    | Single         | 125K           | No
 2  | No     | Married        | 100K           | No
 3  | No     | Single         | 70K            | No
 4  | Yes    | Married        | 120K           | No
 5  | No     | Divorced       | 95K            | Yes
 6  | No     | Married        | 60K            | No
 7  | Yes    | Divorced       | 220K           | No
 8  | No     | Single         | 85K            | Yes
 9  | No     | Married        | 75K            | No
 10 | ?      | Single         | 90K            | Yes

Before splitting: Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

Count matrix:
              Class = Yes | Class = No
Refund = Yes            0 |          3
Refund = No             2 |          4
Refund = ?              1 |          0

Split on Refund:
Entropy(Refund = Yes) = 0
Entropy(Refund = No)  = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
Entropy(Children)     = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.9 × (0.8813 − 0.551) = 0.9 × 0.3303 = 0.2973
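A sketch that reproduces this computation, using base-2 logarithms (which match the 0.8813 parent entropy above); the record with the missing value is left out of the child counts, and the gain is scaled by the fraction of records (9/10) for which Refund is known.

```python
# Sketch: entropy gain of the Refund split when one record has Refund = ?.
from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

parent     = [3, 7]   # [Yes, No] over all 10 records
refund_yes = [0, 3]   # records with known Refund = Yes
refund_no  = [2, 4]   # records with known Refund = No

parent_h   = entropy(parent)                                        # 0.8813
children_h = 0.3 * entropy(refund_yes) + 0.6 * entropy(refund_no)   # 0.551
gain       = 0.9 * (parent_h - children_h)                          # 0.2973
print(round(parent_h, 4), round(children_h, 4), round(gain, 4))
```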
Example: distributing an instance with a missing value (record 10: Refund = ?, Single, 90K, Class = Yes)

Among the nine records with a known Refund value:
Refund = Yes: Class = Yes: 0, Class = No: 3   →  Probability(Refund = Yes) = 3/9
Refund = No:  Class = Yes: 2, Class = No: 4   →  Probability(Refund = No)  = 6/9

Assign record 10 to the Refund = Yes child with weight 3/9 and to the Refund = No child with weight 6/9. The class counts at the children become:
Refund = Yes: Class = Yes: 0 + 3/9, Class = No: 3
Refund = No:  Class = Yes: 2 + 6/9, Class = No: 4
Example: classifying a test instance with a missing value

New record: Tid = 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Following the decision tree (Refund → MarSt → TaxInc), the record takes the Refund = No branch and reaches the MarSt test with a missing value. The (fractional) class counts accumulated at that node are:

            Married | Single | Divorced | Total
Class = No        3 |      1 |        0 |     4
Class = Yes     6/9 |      1 |        1 |  2.67
Total          3.67 |      2 |        1 |  6.67

Probability(Marital Status = Married) = 3.67/6.67
Probability(Marital Status = {Single, Divorced}) = 3/6.67
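A sketch reproducing the two probabilities above from the fractional count matrix:

```python
# Sketch: classify a test record whose Marital Status is missing, using the
# fractional class counts accumulated at the MarSt node.
counts = {                      # rows: class, columns: Marital Status
    "No":  {"Married": 3,     "Single": 1, "Divorced": 0},
    "Yes": {"Married": 6 / 9, "Single": 1, "Divorced": 1},
}

total = sum(sum(row.values()) for row in counts.values())     # 6.67
married = sum(row["Married"] for row in counts.values())      # 3.67
single_divorced = total - married                             # 3.00

print(round(married / total, 3))           # P(MarSt = Married)             ~ 0.55
print(round(single_divorced / total, 3))   # P(MarSt = Single or Divorced)  ~ 0.45
```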