
Decision Trees

Course Title: Machine Learning

Dept. of Computer Science


Faculty of Science and Technology

Lecturer No: Week No: Semester: Summer 2021-22


Lecturer: Dr. M M Manjurul Islam
Overview

 What is a Decision Tree

 Sample Decision Trees

 How to Construct a Decision Tree

 Problems with Decision Trees

 Summary
Classification: Definition

 Given a collection of records (training set)
  Each record contains a set of attributes; one of the attributes is the class.

 Find a model for the class attribute as a function of the values of the other attributes.

 Goal: previously unseen records should be assigned a class as accurately as possible.

 A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
Illustrating Classification Task

[Diagram: the training set is fed to a learning algorithm, which induces a model; the model is then applied to the test set (deduction).]

Training Set
Tid   Attrib1   Attrib2   Attrib3   Class
1     Yes       Large     125K      No
2     No        Medium    100K      No
3     No        Small     70K       No
4     Yes       Medium    120K      No
5     No        Large     95K       Yes
6     No        Medium    60K       No
7     Yes       Large     220K      No
8     No        Small     85K       Yes
9     No        Medium    75K       No
10    No        Small     90K       Yes

Test Set
Tid   Attrib1   Attrib2   Attrib3   Class
11    No        Small     55K       ?
12    Yes       Medium    80K       ?
13    Yes       Large     110K      ?
14    No        Small     95K       ?
15    No        Large     67K       ?
What is a Decision Tree?

 An inductive learning task
  Use particular facts to make more generalized conclusions

 A predictive model based on a branching series of Boolean tests
  These smaller Boolean tests are less complex than a one-stage classifier

 Let’s look at a sample decision tree…


Predicting Commute Time

Leave At?
    8 AM  -> Long
    9 AM  -> Accident?
                No  -> Medium
                Yes -> Long
    10 AM -> Stall?
                No  -> Short
                Yes -> Long

If we leave at 10 AM and there are no cars stalled on the road, what will our commute time be?
Inductive Learning

 In this decision tree, we made a series of Boolean decisions and followed the corresponding branch:
  Did we leave at 10 AM?
  Did a car stall on the road?
  Is there an accident on the road?

 By answering each of these yes/no questions, we then came to a conclusion on how long our commute might take.
Decision Trees as Rules

 We did not have to represent this tree graphically.

 We could have represented it as a set of rules. However, the rules may be much harder to read…
Decision Tree as a Rule Set
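Reading the commute-time tree above as a set of if-then rules gives something like the following. A minimal sketch in Python (the function name and value spellings are illustrative, not from the original slide):

def predict_commute(leave_at, stall, accident):
    """Rule-set form of the commute-time decision tree."""
    if leave_at == "8 AM":
        return "Long"
    if leave_at == "9 AM":
        return "Long" if accident else "Medium"
    if leave_at == "10 AM":
        return "Long" if stall else "Short"
    raise ValueError("unknown leave time: " + leave_at)

# Leaving at 10 AM with no stalled cars and no accident -> "Short"
print(predict_commute("10 AM", stall=False, accident=False))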
How to Create a Decision Tree

 We first make a list of attributes that we can measure
  These attributes (for now) must be discrete

 We then choose a target attribute that we want to predict

 Then we create an experience table that lists what we have seen in the past
Sample Experience Table

Example   Hour   Weather   Accident   Stall   Commute (target)
D1 8 AM Sunny No No Long
D2 8 AM Cloudy No Yes Long
D3 10 AM Sunny No No Short
D4 9 AM Rainy Yes No Long
D5 9 AM Sunny Yes Yes Long
D6 10 AM Sunny No No Short
D7 10 AM Cloudy No No Short
D8 9 AM Rainy No No Medium
D9 9 AM Sunny Yes No Long
D10 10 AM Cloudy Yes Yes Long
D11 10 AM Rainy No No Short
D12 8 AM Cloudy Yes No Long
D13 9 AM Sunny No No Medium
Example of a Decision Tree

Training Data (Tid, Refund, Marital Status, Taxable Income, Cheat):

Tid   Refund   Marital Status   Taxable Income   Cheat
1     Yes      Single           125K             No
2     No       Married          100K             No
3     No       Single           70K              No
4     Yes      Married          120K             No
5     No       Divorced         95K              Yes
6     No       Married          60K              No
7     Yes      Divorced         220K             No
8     No       Single           85K              Yes
9     No       Married          75K              No
10    No       Single           90K              Yes

Model: Decision Tree (the splitting attributes appear at the internal nodes)

Refund?
    Yes -> NO
    No  -> MarSt?
              Single, Divorced -> TaxInc?
                                     < 80K -> NO
                                     > 80K -> YES
              Married -> NO
Another Example of Decision Tree
The training data is the same as on the previous slide; this tree also fits it:

MarSt?
    Married -> NO
    Single, Divorced -> Refund?
                           Yes -> NO
                           No  -> TaxInc?
                                     < 80K -> NO
                                     > 80K -> YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

[Diagram: the same workflow as before, now with a tree induction algorithm. The training set (Tids 1-10) is fed to the tree induction algorithm, which learns a decision tree model (induction); the decision tree is then applied to the test set (Tids 11-15) to classify its records (deduction).]
Apply Model to Test Data
Test Data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree and, at each node, follow the branch that matches the test record:

 Refund? The record has Refund = No, so take the "No" branch to the MarSt node.
 MarSt? The record has Marital Status = Married, so take the "Married" branch.
 The "Married" branch leads to a leaf labelled NO.

Assign Cheat to “No”
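A minimal sketch of this traversal in Python, with the cheat/no-cheat tree hard-coded as nested conditionals (the attribute spellings and dict-based record format are illustrative):

def classify(record):
    """Walk the decision tree from the root for a single record."""
    if record["Refund"] == "Yes":
        return "No"                                   # leaf: NO
    # Refund = No -> test marital status
    if record["MaritalStatus"] == "Married":
        return "No"                                   # leaf: NO
    # Single or Divorced -> test taxable income
    return "No" if record["TaxableIncome"] < 80_000 else "Yes"

test_record = {"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80_000}
print(classify(test_record))                          # -> "No", i.e. Cheat = "No"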
Decision Tree Classification Task

[Diagram repeated: training set -> tree induction algorithm -> decision tree model (induction) -> apply the model to the test set (deduction).]
Choosing Attributes

 The previous experience table showed 4 attributes: hour, weather, accident and stall

 But the decision tree only showed 3 attributes: hour, accident and stall
Choosing Attributes

 Methods for selecting attributes show that weather is not a discriminating attribute
Choosing Attributes

 The basic structure of creating a decision tree is the same for most decision tree algorithms

 The difference lies in how we select the attributes for the tree

 We will focus on the ID3 algorithm developed by Ross Quinlan
Decision Tree Algorithms
 The basic idea behind any decision tree algorithm is as follows:
  Choose the best attribute(s) to split the remaining instances and make that attribute a decision node
  Repeat this process recursively for each child (a minimal sketch of this loop follows after the list)
  Stop when:
   All the instances have the same target attribute value
   There are no more attributes
   There are no more instances
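A minimal sketch of this recursion in Python, assuming discrete attributes and a pluggable choose_best_attribute function (for example the ID3 entropy heuristic introduced below); all names are illustrative:

from collections import Counter

def build_tree(instances, attributes, target, choose_best_attribute):
    """Generic top-down decision-tree induction over discrete attributes.

    Returns a class label (leaf) or a nested dict
    {attribute: {value: subtree, ...}} (decision node).
    """
    labels = [row[target] for row in instances]
    # Stop: all remaining instances have the same target value.
    if len(set(labels)) == 1:
        return labels[0]
    # Stop: no attributes left to split on -> predict the majority label.
    if not attributes:
        return Counter(labels).most_common(1)[0][0]
    best = choose_best_attribute(instances, attributes, target)
    node = {best: {}}
    remaining = [a for a in attributes if a != best]
    for value in sorted({row[best] for row in instances}):
        subset = [row for row in instances if row[best] == value]
        node[best][value] = build_tree(subset, remaining, target, choose_best_attribute)
    return node

# Toy usage with a naive chooser (always split on the first remaining attribute):
toy = [{"Hour": "8 AM", "Accident": "No", "Commute": "Long"},
       {"Hour": "10 AM", "Accident": "No", "Commute": "Short"}]
print(build_tree(toy, ["Hour", "Accident"], "Commute",
                 lambda rows, attrs, target: attrs[0]))
# -> {'Hour': {'10 AM': 'Short', '8 AM': 'Long'}}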
Identifying the Best Attributes
 Refer back to our original decision tree:

Leave At?
    8 AM  -> Long
    9 AM  -> Accident?
                No  -> Medium
                Yes -> Long
    10 AM -> Stall?
                No  -> Short
                Yes -> Long

◼ How did we know to split on “leave at” first, then on “stall” and “accident”, and not on weather?
ID3 Heuristic

 To determine the best attribute, we look at the ID3 heuristic

 ID3 chooses which attribute to split on based on entropy

 Entropy is a measure of uncertainty (disorder) in the data…


Entropy

 Entropy is minimized when all values of the target attribute are the same
  If we know that commute time will always be short, then entropy = 0

 Entropy is maximized when there is an equal chance of all values for the target attribute (i.e. the result is random)
  If commute time = short in 3 instances, medium in 3 instances and long in 3 instances, entropy is maximized
Entropy

 Calculation of entropy:
  Entropy(S) = ∑(i=1 to l) −(|Si|/|S|) · log2(|Si|/|S|)
  S = the set of examples
  Si = the subset of S in which the target attribute has value vi
  l = the number of distinct values of the target attribute
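A minimal sketch of this formula in Python (the function name is illustrative):

from collections import Counter
from math import log2

def entropy(labels):
    """Entropy(S) = sum over classes i of -(|Si|/|S|) * log2(|Si|/|S|)."""
    total = len(labels)
    return sum(-(count / total) * log2(count / total)
               for count in Counter(labels).values())

# Commute time is always Short -> entropy is 0 (no uncertainty).
print(entropy(["Short"] * 9))                                  # 0.0
# 3 Short, 3 Medium, 3 Long -> entropy is maximized: log2(3) = 1.58...
print(entropy(["Short"] * 3 + ["Medium"] * 3 + ["Long"] * 3))  # 1.584...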
ID3

 ID3 splits on the attribute with the lowest expected (weighted) entropy, i.e. the highest information gain

 We calculate the expected entropy of an attribute as the weighted sum of the entropies of the subsets it creates:
  ∑(i=1 to k) (|Si|/|S|) · Entropy(Si), where k is the number of distinct values of the attribute we are testing

 We can also measure information gain, which is largest when the expected entropy is smallest:
  Gain(S) = Entropy(S) − ∑(i=1 to k) (|Si|/|S|) · Entropy(Si)
ID3

 Given our commute time sample set, we can calculate the entropy of each attribute at the root node:

Attribute Expected Entropy Information Gain

Hour 0.6511 0.768449

Weather 1.28884 0.130719

Accident 0.92307 0.496479

Stall 1.17071 0.248842
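A minimal sketch that reproduces these numbers from the experience table above (it repeats the entropy helper so the snippet is self-contained; attribute and value spellings are illustrative):

from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return sum(-(c / total) * log2(c / total) for c in Counter(labels).values())

# Experience table rows: (Hour, Weather, Accident, Stall) -> Commute
data = [
    ("8 AM", "Sunny", "No", "No", "Long"),    ("8 AM", "Cloudy", "No", "Yes", "Long"),
    ("10 AM", "Sunny", "No", "No", "Short"),  ("9 AM", "Rainy", "Yes", "No", "Long"),
    ("9 AM", "Sunny", "Yes", "Yes", "Long"),  ("10 AM", "Sunny", "No", "No", "Short"),
    ("10 AM", "Cloudy", "No", "No", "Short"), ("9 AM", "Rainy", "No", "No", "Medium"),
    ("9 AM", "Sunny", "Yes", "No", "Long"),   ("10 AM", "Cloudy", "Yes", "Yes", "Long"),
    ("10 AM", "Rainy", "No", "No", "Short"),  ("8 AM", "Cloudy", "Yes", "No", "Long"),
    ("9 AM", "Sunny", "No", "No", "Medium"),
]
attributes = ["Hour", "Weather", "Accident", "Stall"]
target_labels = [row[-1] for row in data]

for i, name in enumerate(attributes):
    # Expected (weighted) entropy after splitting on this attribute.
    expected = 0.0
    for value in {row[i] for row in data}:
        subset = [row for row in data if row[i] == value]
        expected += len(subset) / len(data) * entropy([row[-1] for row in subset])
    gain = entropy(target_labels) - expected
    print(f"{name:8s}  expected entropy = {expected:.4f}  gain = {gain:.4f}")

# Hour has the lowest expected entropy (~0.65) and the highest gain (~0.77),
# matching the table above, so ID3 splits on Hour at the root.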


Pruning Trees

 There is another technique for reducing the number of attributes used in a tree - pruning

 Two types of pruning:
  Pre-pruning (forward pruning)
  Post-pruning (backward pruning)
Prepruning
 In prepruning, we decide during the building process when to stop adding attributes (possibly based on their information gain)

 However, this may be problematic:
  Sometimes attributes individually do not contribute much to a decision, but combined, they may have a significant impact
Postpruning

 Postpruning waits until the full decision tree has been built and then prunes the attributes

 Two techniques:
 Subtree Replacement
 Subtree Raising
Subtree Replacement
 Entire subtree is replaced by a single leaf node

[Figure: a subtree rooted at node C with children 1, 2 and 3, shown alongside leaf nodes 4 and 5.]
Subtree Replacement
 Node 6 has replaced the subtree rooted at C
  This generalizes the tree a little more, but may increase accuracy

[Figure: the pruned tree, with the single leaf 6 in place of the subtree, alongside leaves 4 and 5.]
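A minimal sketch of subtree replacement on the nested-dict tree shape used in the induction sketch above (the example tree, row data, and the majority-label choice are illustrative assumptions, not from the slides):

from collections import Counter

def to_leaf(rows, target):
    """Collapse a subtree into a single leaf: predict the majority target label
    of the training examples that reach this node."""
    labels = [row[target] for row in rows]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical tree in the {attribute: {value: subtree}} format:
tree = {"Accident": {"Yes": "Long",
                     "No": {"Stall": {"Yes": "Long", "No": "Medium"}}}}
rows_reaching_no_branch = [{"Commute": "Medium"}, {"Commute": "Medium"}, {"Commute": "Long"}]

# Replace the whole {"Stall": ...} subtree with one leaf.
tree["Accident"]["No"] = to_leaf(rows_reaching_no_branch, target="Commute")
print(tree)   # the Stall subtree is now the single leaf 'Medium'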
Subtree Raising
 Entire subtree is raised onto another node

[Figure: the subtree rooted at node C (children 1, 2 and 3) is raised onto another node in place of the original subtree.]
Issues with ID3

 ID3 is not optimal
  It uses expected entropy reduction, not actual reduction

 ID3 must use discrete (or discretized) attributes
  What if we left for work at 9:30 AM?
  We could break down the attributes into smaller values…
Issues with Decision Trees

 While decision trees classify quickly, the time for building a tree may be higher than for another type of classifier

 Decision trees suffer from a problem of errors propagating throughout the tree
  This becomes a very serious problem as the number of classes increases
Error Propagation

 Since decision trees work by a series of local decisions, what happens when one of these local decisions is wrong?
  Every decision from that point on may be wrong
  We may never return to the correct path of the tree
Error Propagation Example
Issues with ID3
 If we broke down leave time to the minute, we might get something like this:

   8:02 AM -> Long      8:03 AM -> Medium     9:05 AM -> Short
   9:07 AM -> Long      9:09 AM -> Long       10:02 AM -> Short

 Since entropy is very low for each branch, we end up with n branches and n leaves. This would not be helpful for predictive modeling.
Issues with ID3
 We can use a technique known as discretization
  We choose cut points, such as 9 AM, for splitting continuous attributes
  These cut points generally lie in a subset of boundary points: a boundary point is a point where two adjacent instances in a sorted list have different target attribute values
Issues with ID3
 Consider the leave-time attribute, with the commute-time label (L/M/S) of each instance in parentheses:

 8:00 (L), 8:02 (L), 8:07 (M), 9:00 (S), 9:20 (S), 9:25 (S), 10:00 (S), 10:02 (M)

 When we split only at the boundary points, the entropy of each branch increases slightly, but we avoid a decision tree with as many leaves as there are distinct values
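A minimal sketch of finding the boundary points in that sorted list in Python (the times are converted to minutes since midnight; variable names are illustrative):

# Sorted (leave time in minutes, commute-time label) pairs from the slide:
# 8:00 (L), 8:02 (L), 8:07 (M), 9:00 (S), 9:20 (S), 9:25 (S), 10:00 (S), 10:02 (M)
examples = [(480, "L"), (482, "L"), (487, "M"), (540, "S"),
            (560, "S"), (565, "S"), (600, "S"), (602, "M")]

# A boundary point lies between two adjacent instances whose target labels differ;
# a candidate cut point is placed at the midpoint between them.
cut_points = [
    (t1 + t2) / 2
    for (t1, label1), (t2, label2) in zip(examples, examples[1:])
    if label1 != label2
]
print(cut_points)   # [484.5, 513.5, 601.0] -> roughly 8:04, 8:33 and 10:01
# Only 3 candidate cut points instead of one branch per distinct time value.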
Decision Tree Induction

 Common Algorithms:
 CART
 ID3, C4.5
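For reference (not from the slides), scikit-learn's DecisionTreeClassifier implements an optimized version of CART. A minimal usage sketch on the cheat/no-cheat data; the numeric encoding of the categorical attributes below is an assumption for illustration:

from sklearn.tree import DecisionTreeClassifier

# Refund (1 = Yes, 0 = No), Marital Status (0 = Single, 1 = Married, 2 = Divorced),
# Taxable Income in thousands; target: Cheat (0 = No, 1 = Yes).
X = [[1, 0, 125], [0, 1, 100], [0, 0, 70], [1, 1, 120], [0, 2, 95],
     [0, 1, 60], [1, 2, 220], [0, 0, 85], [0, 1, 75], [0, 0, 90]]
y = [0, 0, 0, 0, 1, 0, 0, 1, 0, 1]

clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
clf.fit(X, y)
# Classify the test record from earlier (Refund = No, Married, 80K).
print(clf.predict([[0, 1, 80]]))   # -> [0], i.e. Cheat = No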
Summary

 Decision trees can be used to help predict the future

 The trees are easy to understand

 Decision trees work more efficiently with discrete attributes

 The trees may suffer from error propagation
