Decision Tree
• Two common standards for variable types:
– Standard 1: Discrete variable (nominal and ordinal); Continuous variable (continuous)
– Standard 2: Numeric variable (continuous and ordinal); Nominal variable (nominal)
* Nominal = Categorical
• The types of X (an n × p matrix) and y (an n × 1 vector) determine the problem (see the sketch below for the dummy-variable transformation):
– X Nominal (transformed into dummy variables), y Nominal → Classification
– X Nominal (transformed into dummy variables), y Continuous → Regression
– X Continuous, y Nominal → Classification
– X Continuous, y Continuous → Regression
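The table above mentions transforming a nominal X into dummy variables before fitting a model. A minimal sketch of that one-hot transformation, assuming pandas is available; the column name car_type is made up for illustration:

```python
import pandas as pd

# Hypothetical nominal feature; the column name "car_type" is illustrative.
X = pd.DataFrame({"car_type": ["Family", "Sports", "Luxury", "Sports"]})

# Transform the nominal variable into dummy (one-hot) columns so that a
# numeric learner can use it.
X_dummies = pd.get_dummies(X, columns=["car_type"])
print(X_dummies)
```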
[Figure: workflow. A Training Set is used to learn a model (regression model or classifier), which is then applied to a Test Set.]
• Predicting commute time
If we leave at 10 AM and there are no cars stalled on the road,
what will our commute time be?
[Figure: decision tree for the commute-time example, with internal nodes such as “Stall?” and “Accident?” and leaves such as “Long”.]
Applying the model to test data:
[Figure: the test record is routed down the tree (e.g., TaxInc < 80K → NO, > 80K → YES); Cheat is assigned “No”, a label that originates from the training data.]
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion.
• Issues
– Determine how to split the records
How to specify the attribute test condition?
How to determine the best split?
– Determine when to stop splitting
• Depends on attribute types
– Nominal
– Ordinal
– Continuous
• Depends on number of ways to split
– 2-way split
– Multi-way split
• Multi-way split
– Use as many partitions as distinct values
[Figure: CarType split three ways into Family, Sports, and Luxury.]
• Binary split
– Divides values into two subsets
– Need to find optimal partitioning (see the enumeration sketch below)
[Figure: candidate two-way partitions of the CarType values.]
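Finding the optimal two-way partitioning of a nominal attribute means searching over all 2^(k−1) − 1 groupings of its k distinct values. A small enumeration sketch using the CarType values from the figure above; the helper name binary_partitions is illustrative:

```python
from itertools import combinations

def binary_partitions(values):
    """Enumerate all two-way partitions of a nominal attribute's values.

    For k distinct values there are 2**(k-1) - 1 nontrivial partitions,
    each of which is a candidate binary split.
    """
    values = list(values)
    rest = values[1:]
    # Fix the first value on the left side to avoid listing mirror-image splits twice.
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            left = {values[0], *combo}
            right = set(values) - left
            if right:  # skip the trivial split that puts everything on one side
                yield left, right

for left, right in binary_partitions(["Family", "Sports", "Luxury"]):
    print(left, "vs", right)
```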
• Binary split (ordinal attributes, e.g., Size)
– Divides values into two subsets.
– Need to find optimal partitioning.
[Figure: candidate two-way partitions of the Size values Small, Medium, Large.]
• Splitting on continuous attributes (e.g., Taxable Income)
[Figure: a binary split “Taxable Income > 80K?” with Yes/No branches, or a multi-way split into ranges such as < 10K, …, > 80K.]
• Issues
– Determine how to split the records
How to specify the attribute test condition?
How to determine the best split?
– Determine when to stop splitting
Before splitting: 10 records of class 0 and 10 records of class 1.
– A node with class counts C0: 5, C1: 5 is non-homogeneous, with a high degree of impurity; a node with C0: 9, C1: 1 is homogeneous, with a low degree of impurity.
– To compare candidate splits A? and B?: let M0 be the impurity before splitting, M1/M2 and M3/M4 the impurities of the children of A? and B?, and M12 and M34 their weighted combinations. Choose the split with the larger gain:
Gain = M0 – M12 vs. M0 – M34
• Gini index for a given node t:
GINI(t) = 1 − Σ_j [p(j | t)]²
(NOTE: p(j | t) is the relative frequency of class j at node t.)
Examples:
– C1 = 0, C2 = 6: Gini = 0.000
– C1 = 1, C2 = 5: Gini = 0.278
– C1 = 2, C2 = 4: Gini = 0.444
– C1 = 3, C2 = 3: Gini = 0.500
• When a node with n records is split into k partitions (children), the quality of the split is
GINI_split = Σ_{i=1..k} (n_i / n) × GINI(i)
where n_i is the number of records at child i.
Example: binary split on B?
– Parent: C1 = 6, C2 = 6, Gini = 0.500
– Child N1 (Yes): C1 = 5, C2 = 2; child N2 (No): C1 = 1, C2 = 4
Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.320
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
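A minimal Python sketch that reproduces this worked example; the function names gini and gini_split are illustrative, not from any particular library:

```python
def gini(counts):
    """Gini impurity of a node given its class counts: 1 - sum_j p(j|t)**2."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children_counts):
    """Weighted Gini of a split: sum over children of (n_i / n) * GINI(i)."""
    n = sum(sum(c) for c in children_counts)
    return sum(sum(c) / n * gini(c) for c in children_counts)

# Worked example: parent with C1 = 6, C2 = 6 split on B? into N1 and N2.
parent = [6, 6]
n1, n2 = [5, 2], [1, 4]          # N1: C1=5, C2=2   N2: C1=1, C2=4
print(gini(parent))              # 0.500
print(gini(n1), gini(n2))        # 0.408..., 0.320
print(gini_split([n1, n2]))      # 7/12 * 0.408 + 5/12 * 0.320 = 0.371...
```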
• For each distinct value, gather counts for each class in the dataset
• Use the count matrix to make decisions
– Two-way split (find best partition of values)
– Multi-way split
• Use binary decisions based on one value
• Several choices for the splitting value
[Table: for the Tid / Refund / Marital Status / Taxable Income / Cheat training data, the class “No” counts on the two sides of each candidate Taxable Income split are 0|7, 1|6, 2|5, 3|4, 3|4, 3|4, 3|4, 4|3, 5|2, 6|1, 7|0, with Gini values 0.420, 0.400, 0.375, 0.343, 0.417, 0.400, 0.300, 0.343, 0.375, 0.400, 0.420; the lowest Gini (0.300) identifies the best split.]
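One common way to evaluate these candidate splits efficiently is to sort the attribute values once and scan them, updating the class counts on each side and computing the weighted Gini at every position between two distinct values. A sketch of that idea using the Taxable Income and Cheat values from the example data; the function names are illustrative:

```python
from collections import Counter

def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)**2, given a Counter of class counts."""
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

def best_threshold(values, labels):
    """Sort once, then scan candidate midpoints while moving records from the
    '>' side to the '<=' side, computing the weighted Gini at each split."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left, right = Counter(), Counter(lbl for _, lbl in pairs)
    best = (float("inf"), None)
    for i, (v, lbl) in enumerate(pairs[:-1]):
        left[lbl] += 1                 # move one record to the "<=" side
        right[lbl] -= 1
        if pairs[i + 1][0] == v:
            continue                   # only split between distinct attribute values
        threshold = (v + pairs[i + 1][0]) / 2
        weighted = (i + 1) / n * gini(left) + (n - i - 1) / n * gini(right)
        best = min(best, (weighted, threshold))
    return best                        # (weighted Gini index, splitting value)

# Taxable Income values and Cheat labels from the example training data.
income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_threshold(income, cheat))   # (0.3, 97.5): lowest Gini = 0.300
```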
• Entropy at a given node t:
Entropy(t) = − Σ_j p(j | t) log₂ p(j | t)
• Classification error at a given node t:
Error(t) = 1 − max_j p(j | t)
Examples:
– C1 = 0, C2 = 6: Error = 0.000
– C1 = 1, C2 = 5: Error = 1/6
– C1 = 2, C2 = 4: Error = 2/6
– C1 = 3, C2 = 3: Error = 1/2
• For a two-class problem, all three measures (Gini, entropy, misclassification error) are zero for a pure node and largest when both classes are equally frequent (p = 0.5).
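A short sketch comparing the three impurity measures for a two-class node as a function of the class-1 fraction p; it reproduces the Gini and error example values above:

```python
import math

def gini(p):
    """Gini index for a two-class node with fraction p in class 1."""
    return 1.0 - p ** 2 - (1.0 - p) ** 2

def entropy(p):
    """Entropy (base 2) for a two-class node."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def error(p):
    """Misclassification error: 1 - max_j p(j|t)."""
    return 1.0 - max(p, 1.0 - p)

# All three measures are 0 for a pure node and largest at p = 0.5.
for p in (0.0, 1 / 6, 2 / 6, 0.5):
    print(f"p={p:.3f}  gini={gini(p):.3f}  entropy={entropy(p):.3f}  error={error(p):.3f}")
```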
• Greedy strategy
– Split the records based on an attribute test that optimizes a certain criterion.
• Issues
– Determine how to split the records
How to specify the attribute test condition?
How to determine the best split?
– Determine when to stop splitting
• Stop expanding a node when all the records belong to the
same class
• Stop expanding a node when all the records have similar
attribute values
• Early termination (to be discussed later)
• Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other classification techniques for many
simple data sets
• Example: C4.5
– Simple depth-first construction
– Uses Information Gain
– Sorts continuous attributes at each node
• SLIQ (EDBT’96 — Mehta et al.)
– Builds an index for each attribute; only the class list and the current attribute list reside in memory
• SPRINT (VLDB’96 — J. Shafer et al.)
– Constructs an attribute list data structure
• PUBLIC (VLDB’98 — Rastogi & Shim)
– Integrates tree splitting and tree pruning: stop growing the tree
earlier
• RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti)
– Builds an AVC-list (attribute, value, class label)
• BOAT (PODS’99 — Gehrke, Ganti, Ramakrishnan & Loh)
– Uses bootstrapping to create several small samples
• Underfitting and overfitting
• Classification error
• Missing values
• Underfitting: when model is too simple, both training and
test errors are large
• Overfitting: the decision boundary is distorted by noise points
• Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels in that region
• Insufficient number of training records in the region causes
the decision tree to predict the test examples using other
training records that are irrelevant to the classification task
• Re-substitution errors: error on training (Σ𝑒(𝑡))
• Generalization errors: error on testing (Σ𝑒’(𝑡))
• Methods for estimating generalization errors:
– Optimistic approach: 𝑒’(𝑡) = 𝑒(𝑡)
– Pessimistic approach:
For each leaf node: e’(t) = e(t) + 0.5
Total errors: e’(T) = e(T) + N × 0.5 (N: number of leaf nodes)
For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances):
Training error = 10/1000 = 1%
Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
– Reduced error pruning (REP):
uses validation data set to estimate generalization error
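A one-line sketch of the pessimistic estimate, reproducing the 30-leaf example above; the function name is illustrative:

```python
def pessimistic_error(train_errors, n_leaves, n_instances, penalty=0.5):
    """Pessimistic estimate: e'(T) = (e(T) + N * penalty) / n_instances."""
    return (train_errors + n_leaves * penalty) / n_instances

# Example: 30 leaf nodes, 10 training errors out of 1000 instances.
print(pessimistic_error(10, 30, 1000))   # 0.025 -> 2.5%
```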
• Overfitting results in decision trees that are more complex
than necessary.
• Training error no longer provides a good estimate of how well
the tree will perform on previously unseen records.
Occam’s Razor
– Given two models of similar generalization errors, one should
prefer the simpler model over the more complex model
– For complex models, there is a greater chance that the model was fitted accidentally to errors in the data
– Therefore, one should include model complexity when evaluating a
model
• Pre-Pruning (Early Stopping Rule)
– Stop the algorithm before it becomes a fully-grown tree
– Typical stopping conditions for a node:
Stop if all instances belong to the same class
Stop if all the attribute values are the same
– More restrictive conditions:
Stop if number of instances is less than some user-specified threshold
Stop if the class distribution of instances is independent of the available features (e.g., using a χ² test; see the sketch after this list)
Stop if expanding the current node does not improve impurity measures
(e.g., Gini or information gain).
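One way to implement the χ²-based stopping condition mentioned above is a standard test of independence on the node's class-by-value contingency table. A sketch assuming SciPy is available; the counts are made up for illustration:

```python
from scipy.stats import chi2_contingency

# Contingency table of class counts per attribute value at the current node
# (illustrative numbers only).
table = [[20, 18],   # attribute value A: class 0 / class 1 counts
         [22, 19]]   # attribute value B: class 0 / class 1 counts

chi2, p_value, dof, expected = chi2_contingency(table)
if p_value > 0.05:
    print("Class distribution looks independent of this feature: stop splitting")
```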
• Post-pruning
– Grow decision tree to its entirety
– Trim the nodes of the decision tree in a bottom-up fashion
– If generalization error improves after trimming, replace sub-tree by a
leaf node.
– Class label of leaf node is determined from majority class of
instances in the sub-tree
Example of post-pruning: training error (before splitting) = 10/30
[Figure: a candidate subtree splitting on attributes A1–A4, with leaf class counts such as C0: 9, C1: 3 and C0: 4, C1: 4.]
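A sketch of the pruning decision at a single node, assuming the pessimistic error estimate from the previous slides is used to compare the subtree against a single leaf; names and counts are illustrative:

```python
def should_prune(subtree_errors, n_leaves, leaf_errors, penalty=0.5):
    """Prune (replace the subtree by a leaf) if the leaf's pessimistic error
    estimate is no worse than the subtree's, working bottom-up."""
    subtree_estimate = subtree_errors + n_leaves * penalty
    leaf_estimate = leaf_errors + penalty
    return leaf_estimate <= subtree_estimate

# Hypothetical node: its subtree has 4 leaves and makes 9 training errors;
# collapsing it to a majority-class leaf would make 10 errors.
print(should_prune(subtree_errors=9, n_leaves=4, leaf_errors=10))   # True -> prune
```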
• Missing values affect decision tree construction in three
different ways:
– Affects how impurity measures are computed
– Affects how instances with missing values are distributed to child nodes
– Affects how a test instance with missing values is classified
Computing impurity measures with a missing value. Training data (Tid, Refund, Marital Status, Taxable Income, Class):
1: Yes, Single, 125K, No
2: No, Married, 100K, No
3: No, Single, 70K, No
4: Yes, Married, 120K, No
5: No, Divorced, 95K, Yes
6: No, Married, 60K, No
7: Yes, Divorced, 220K, No
8: No, Single, 85K, Yes
9: No, Married, 75K, No
10: ?, Single, 90K, Yes (missing Refund value)

Before splitting: Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

Class counts by Refund value:
– Refund = Yes: Class = Yes 0, Class = No 3
– Refund = No: Class = Yes 2, Class = No 4
– Refund = ?: Class = Yes 1, Class = No 0

Split on Refund:
Entropy(Refund = Yes) = 0
Entropy(Refund = No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.9 × (0.8813 − 0.551) = 0.297
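A sketch reproducing the entropy and gain computation above in plain Python; the 0.9 factor is the fraction of records whose Refund value is observed:

```python
import math

def entropy(counts):
    """Entropy (base 2) of a list of class counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

parent = [3, 7]        # Class=Yes: 3, Class=No: 7 (all 10 records)
refund_yes = [0, 3]    # class counts among Refund=Yes records
refund_no = [2, 4]     # class counts among Refund=No records

parent_entropy = entropy(parent)                                           # ~0.8813
children = (3 / 10) * entropy(refund_yes) + (6 / 10) * entropy(refund_no)  # ~0.551
gain = (9 / 10) * (parent_entropy - children)                              # ~0.297
print(round(parent_entropy, 4), round(children, 4), round(gain, 4))
```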
Distributing instances with a missing value: split the nine records with a known Refund value as usual, then send record 10 (Refund = ?) to both children with fractional weights.

Class counts before adding record 10:
– Refund = Yes: Class = Yes 0, Class = No 3
– Refund = No: Class = Yes 2, Class = No 4

Probability(Refund = Yes) = 3/9
Probability(Refund = No) = 6/9

Assign record 10 to the left child (Refund = Yes) with weight 3/9 and to the right child (Refund = No) with weight 6/9, giving:
– Refund = Yes: Class = Yes 0 + 3/9, Class = No 3
– Refund = No: Class = Yes 2 + 6/9, Class = No 4
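A sketch of distributing a missing-value record to every child with a fractional weight, as in the Refund example above; the helper distribute and the record format are illustrative:

```python
from collections import defaultdict

def distribute(records, attribute):
    """Send records down a split on `attribute`; a record whose value is
    missing (None) goes to every child with a weight equal to the fraction
    of known records that have that child's value."""
    children = defaultdict(list)
    known = [r for r in records if r[attribute] is not None]
    value_count = defaultdict(int)
    for r in known:
        children[r[attribute]].append((r, 1.0))     # known value: full weight
        value_count[r[attribute]] += 1
    for r in records:
        if r[attribute] is None:                    # missing value: fractional weights
            for value, count in value_count.items():
                children[value].append((r, count / len(known)))
    return children

# Refund example: 3 of the 9 known records have Refund=Yes and 6 have Refund=No,
# so the record with a missing Refund gets weight 3/9 left and 6/9 right.
records = [{"Refund": "Yes"}] * 3 + [{"Refund": "No"}] * 6 + [{"Refund": None}]
split = distribute(records, "Refund")
print([round(w, 3) for _, w in split["Yes"]])   # [1.0, 1.0, 1.0, 0.333]
print([round(w, 3) for _, w in split["No"]])    # [1.0, ..., 1.0, 0.667]
```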
Classifying an instance with a missing value. New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?
Count matrix at the Marital Status node (Class = No row): Married 3, Single 1, Divorced 0, Total 4
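A sketch of one way to classify a test record whose attribute at the current node is missing: follow every branch, weight each branch by the fraction of training records that took it, and combine the class distributions found at the leaves. The probabilities below are made up for illustration:

```python
# Illustrative branch probabilities and leaf class distributions for a node
# that tests Marital Status (numbers are hypothetical, not from the slides).
branch_probability = {"Married": 4 / 7, "Single": 2 / 7, "Divorced": 1 / 7}
leaf_distribution = {
    "Married":  {"Yes": 0.0, "No": 1.0},
    "Single":   {"Yes": 0.7, "No": 0.3},
    "Divorced": {"Yes": 1.0, "No": 0.0},
}

# Weight each leaf's class distribution by the probability of reaching it.
combined = {"Yes": 0.0, "No": 0.0}
for value, p in branch_probability.items():
    for cls, q in leaf_distribution[value].items():
        combined[cls] += p * q

print(max(combined, key=combined.get), combined)   # predicted class and scores
```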