Decision Tree Classification Algorithm
A Decision Tree is a supervised learning technique that can be used for both classification and regression problems, though it is most often preferred for classification.
It is a tree-structured classifier, where internal nodes represent the features of a dataset,
branches represent the decision rules, and each leaf node represents the outcome.
The algorithm then performs a greedy search: it goes over all input features and their unique values, calculates the information gain for every combination, and saves the best split feature and threshold for every node.
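As a concrete illustration, here is a minimal sketch of that greedy search in Python, assuming a NumPy feature matrix X and a label vector y (all names are illustrative, not a fixed implementation):

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_split(X, y):
    """Return the (feature index, threshold) pair with the highest information gain."""
    best_gain, best_feature, best_threshold = 0.0, None, None
    parent_entropy = entropy(y)
    for feature in range(X.shape[1]):           # go over all input features
        for threshold in np.unique(X[:, feature]):  # and their unique values
            left = y[X[:, feature] <= threshold]
            right = y[X[:, feature] > threshold]
            if len(left) == 0 or len(right) == 0:
                continue
            # Information gain = parent entropy - weighted child entropy
            child = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
            gain = parent_entropy - child
            if gain > best_gain:
                best_gain, best_feature, best_threshold = gain, feature, threshold
    return best_feature, best_threshold
```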
Why Study Decision Trees?
Decision trees also provide the foundation for more advanced ensemble methods such as
bagging, random forests and gradient boosting.
• Feature Importance: ranks features by importance using Information Gain.
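For example, scikit-learn's DecisionTreeClassifier exposes a feature_importances_ attribute; the sketch below uses it to rank the iris features (the dataset and criterion="entropy" are illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
# Print each feature with its importance score (scores sum to 1).
for name, score in zip(load_iris().feature_names, clf.feature_importances_):
    print(f"{name}: {score:.3f}")
```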
In a decision tree, there are two types of nodes: Decision Nodes and Leaf Nodes. Decision nodes are used to make decisions and have multiple branches. Leaf nodes are the outputs of those decisions and do not contain any further branches.
A decision tree simply asks a question and, based on the Yes/No answer, further splits the tree into subtrees.
Note: A decision tree can handle categorical data (YES/NO) as well as numeric data.
Decision Tree Terminologies
Root Node:
1. The root node is where the decision tree starts.
2. It represents the entire dataset, which further gets divided into two or more homogeneous sets.
Leaf Node:
Leaf nodes are the final output nodes; the tree cannot be segregated further after reaching a leaf node.
Splitting:
Splitting is the process of dividing a decision node/root node into sub-nodes according to the given conditions.
Branch/Sub Tree: A subtree formed by splitting the tree.
How Does the Decision Tree Algorithm Work?
In a decision tree, to predict the class of a given record, the algorithm starts from the root node of the tree. It compares the value of the root attribute with the record's (real dataset) attribute and, based on the comparison, follows the corresponding branch and jumps to the next node.
At the next node, the algorithm again compares the attribute value with those of the sub-nodes and moves further. It continues this process until it reaches a leaf node of the tree.
The complete process can be better understood with the example below:
Example
Suppose a candidate has a job offer and wants to decide whether to accept it or not. To solve this problem, the decision tree starts with the root node (the Salary attribute, chosen by ASM). The root node splits further into the next decision node (Distance from the office) and one leaf node based on the corresponding labels. The next decision node further splits into one decision node (Cab facility) and one leaf node. Finally, that decision node splits into two leaf nodes (Accepted offer and Declined offer).
[Figure: decision tree for the job-offer example]
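To make the traversal concrete, here is a minimal prediction sketch, assuming the tree is stored as nested dicts (an illustrative representation, not a fixed format), hand-built to mirror the job-offer example above:

```python
def predict(node, x):
    # A leaf node stores the final class label directly.
    if "label" in node:
        return node["label"]
    # Compare the record's attribute value and follow the matching branch.
    if x[node["feature"]] <= node["threshold"]:
        return predict(node["left"], x)
    return predict(node["right"], x)

# Tiny hand-built tree: accept the offer if salary > 50 and distance <= 20.
tree = {
    "feature": 0, "threshold": 50,      # feature 0: salary (in $1000s, hypothetical)
    "left": {"label": "Declined"},
    "right": {
        "feature": 1, "threshold": 20,  # feature 1: distance from office (km, hypothetical)
        "left": {"label": "Accepted"},
        "right": {"label": "Declined"},
    },
}
print(predict(tree, [60, 10]))   # -> Accepted
```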
Decision Tree Creation Algorithm
Step-1: Begin the tree with the root node, say S, which contains the complete dataset.
Step-2: Find the best attribute in the dataset using an Attribute Selection Measure (ASM).
Step-3: Divide S into subsets that contain the possible values of the best attribute.
Step-4: Generate the decision tree node that contains the best attribute.
Step-5: Recursively make new decision trees using the subsets of the dataset created in Step-3. Continue this process until a stage is reached where the nodes cannot be classified any further; call the final nodes leaf nodes.
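Steps 1-5 map naturally onto a recursive function. The sketch below assumes the entropy and best_split helpers from the earlier greedy-search sketch are in scope; max_depth is an illustrative stopping criterion:

```python
import numpy as np

def build_tree(X, y, depth=0, max_depth=3):
    # Stop when the node is pure or the depth limit is reached: make a leaf
    # holding the majority class (Step-5's final/leaf node).
    if len(np.unique(y)) == 1 or depth == max_depth:
        values, counts = np.unique(y, return_counts=True)
        return {"label": values[np.argmax(counts)]}
    # Step-2: pick the best attribute/threshold with the ASM (information gain).
    feature, threshold = best_split(X, y)
    if feature is None:   # no split improves purity: make a leaf
        values, counts = np.unique(y, return_counts=True)
        return {"label": values[np.argmax(counts)]}
    # Step-3: divide the dataset into subsets on the chosen attribute.
    mask = X[:, feature] <= threshold
    # Step-4/5: create the decision node and recurse on each subset.
    return {
        "feature": feature, "threshold": threshold,
        "left": build_tree(X[mask], y[mask], depth + 1, max_depth),
        "right": build_tree(X[~mask], y[~mask], depth + 1, max_depth),
    }
```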
How It Works
1. Splitting: divides a dataset into subsets based on conditions.
2. Feature Selection: identifies the best feature for splitting, using Entropy and Information Gain.
3. Stopping Criteria: stops splitting when:
• Maximum tree depth is reached.
• Minimum samples per node are reached.
• A pure leaf (all data of the same class) is formed.
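In scikit-learn, for instance, these stopping criteria correspond to constructor parameters of DecisionTreeClassifier; the values below are illustrative, not recommendations (the pure-leaf rule is built in: splitting stops automatically when a node is pure):

```python
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier(
    max_depth=4,            # maximum tree depth
    min_samples_split=10,   # minimum samples required to split a node
    min_samples_leaf=5,     # minimum samples allowed in a leaf
)
```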
Attribute Selection Measures (ASM)
While implementing a decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve this problem, there is a technique called the Attribute Selection Measure (ASM). With this measure, we can easily select the best attribute for the nodes of the tree. There are two popular ASM techniques:
• Information Gain
• Gini Index
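In scikit-learn, for example, the choice between the two measures is made through the criterion parameter:

```python
from sklearn.tree import DecisionTreeClassifier

ig_tree = DecisionTreeClassifier(criterion="entropy")  # Information Gain (entropy)
gini_tree = DecisionTreeClassifier(criterion="gini")   # Gini Index (the default)
```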
Information Gain
It calculates how much information a feature provides us about a class.
According to the value of information gain, we split the node and build the decision
tree.
A decision tree algorithm always tries to maximize the value of information gain, and a
node/attribute having the highest information gain is split first.
It can be calculated using the formula below:
Information Gain = Entropy(S) - [Weighted Avg * Entropy(each feature)]
Entropy
1. Entropy is a metric that measures the impurity in a given attribute.
2. It specifies the randomness in data.
Entropy Formula:
Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)
Where:
• S = the total number of samples
• P(yes) = the probability of yes
• P(no) = the probability of no
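As a quick numeric check of both formulas, the sketch below computes the entropy of a node with 9 "yes" and 5 "no" samples, then the information gain of a candidate split that divides it into (6 yes, 2 no) and (3 yes, 3 no); all counts are illustrative:

```python
from math import log2

def entropy(n_yes, n_no):
    """Entropy(S) = -P(yes) log2 P(yes) - P(no) log2 P(no)."""
    total = n_yes + n_no
    result = 0.0
    for count in (n_yes, n_no):
        if count:                 # treat 0 * log2(0) as 0
            p = count / total
            result -= p * log2(p)
    return result

e_parent = entropy(9, 5)                           # ~0.940
e_left, e_right = entropy(6, 2), entropy(3, 3)     # ~0.811 and 1.000
weighted = (8 / 14) * e_left + (6 / 14) * e_right  # weighted avg child entropy
gain = e_parent - weighted                         # Entropy(S) - weighted avg
print(f"Information Gain = {gain:.3f}")            # ~0.048
```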