MODULE-4
Classification: Basic Concepts, Decision Trees, and Model Evaluation.
Classification: Definition
✓ Classification is the task of assigning objects to one of several predefined categories.
✓ The input data for a classification task is a collection of records (the training set). Each record contains a set of attributes, one of which is the class attribute.
✓ The goal is to find a model for the class attribute as a function of the values of the other attributes.
Most classification algorithms seek models that attain the highest accuracy, or equivalently, the
lowest error rate when applied to the test set.
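For example, with a hypothetical 2x2 confusion matrix these two quantities can be computed directly (a small illustrative snippet, not part of the notes):

# f_ab = number of test records of actual class a predicted as class b.
f_11, f_10, f_01, f_00 = 50, 5, 10, 35      # hypothetical counts

total = f_11 + f_10 + f_01 + f_00
accuracy = (f_11 + f_00) / total            # fraction of correct predictions
error_rate = (f_10 + f_01) / total          # fraction of wrong predictions
print(accuracy, error_rate)                 # 0.85 0.15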
Classification Techniques:
• Decision Tree based Methods
• Rule-based Methods
• Memory based reasoning
• Neural Networks
• Naïve Bayes and Bayesian Belief Networks
• Support Vector Machines
Decision Tree
The tree has three types of nodes:
• A root node, which has no incoming edges and zero or more outgoing edges.
• Internal nodes, each of which has exactly one incoming edge and two or more outgoing edges.
• Leaf or terminal nodes, each of which has exactly one incoming edge and no outgoing edges.
In a decision tree, each leaf node is assigned a class label. The non-terminal nodes, which include the root and the other internal nodes, contain attribute test conditions to separate records that have different characteristics.
In principle, there are exponentially many decision trees that can be constructed from a given set
of attributes.
Hunt's Algorithm
In Hunt's algorithm, a decision tree is grown in a recursive fashion by partitioning the training records into successively purer subsets. Let Dt be the set of training records associated with node t and y = {y1, y2, ..., yc} be the set of class labels. The following is a recursive definition of Hunt's algorithm.
Step 1: If all the records in Dt belong to the same class yt, then t is a leaf node labeled as yt.
Step 2: If Dt contains records that belong to more than one class, an attribute test condition is selected to partition the records into smaller subsets. A child node is created for each outcome of the test condition, and the records in Dt are distributed to the children based on the outcomes. The algorithm is then recursively applied to each child node.
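The recursion can be sketched in a few lines of Python. This sketch is not from the notes: the choice of splitting attribute (simply the first available one) and the record representation (a dictionary per record) are simplifying assumptions made for illustration.

from collections import Counter

class Node:
    def __init__(self, label=None, attr=None):
        self.label = label      # class label (set only for leaf nodes)
        self.attr = attr        # attribute tested at this node
        self.children = {}      # test outcome -> child Node

def hunts(records, labels, attributes):
    """Grow a decision tree by recursively partitioning (records, labels)."""
    # Step 1: if all records in Dt belong to the same class, make a leaf.
    if len(set(labels)) == 1:
        return Node(label=labels[0])
    # No attributes left to test: label the leaf with the majority class.
    if not attributes:
        return Node(label=Counter(labels).most_common(1)[0][0])

    # Step 2: select an attribute test condition (a real implementation
    # would choose the attribute with the best impurity-based gain).
    attr = attributes[0]
    node = Node(attr=attr)
    remaining = [a for a in attributes if a != attr]

    # Distribute the records to child nodes based on the test outcomes.
    partitions = {}
    for rec, lab in zip(records, labels):
        recs, labs = partitions.setdefault(rec[attr], ([], []))
        recs.append(rec)
        labs.append(lab)
    for outcome, (recs, labs) in partitions.items():
        node.children[outcome] = hunts(recs, labs, remaining)
    return node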
To illustrate how the algorithm works, consider the problem of predicting whether a loan
applicant will repay her loan obligations or become delinquent, subsequently defaulting on her
loan.
The initial tree for the classification problem contains a single node with class label Defaulted =
No (see Figure 4.7a), which means that most of the borrowers successfully repaid their loans.
The tree, however, needs to be refined since the root node contains records from both classes.
The records are subsequently divided into smaller subsets based on the outcomes of the Home
Owner test condition as shown in Figure 4.7(b). The justification for choosing this attribute test
condition will be discussed later. For now, we will assume that this is the best criterion for
splitting the data at this point.
Hunt's algorithm is then applied recursively to each child of the root node. From the training set
given in Figure 4.6, notice that all borrowers who are home owners successfully repaid their
loans. The left child of the root is therefore a leaf node labeled Defaulted = No (see Figure
4.7(b)).
For the right child, we need to continue applying the recursive step of Hunt's algorithm until all
the records belong to the same class. The trees resulting from each recursive step are shown in
Figures 4.7(c) and (d).
A learning algorithm for inducing decision trees must address the following two issues.
1) How should the training records be split?
Each recursive step of the tree-growing process must select an attribute test condition to divide the records into smaller subsets. To implement this step, the algorithm must provide a method for specifying the test condition for different attribute types as well as an objective measure for evaluating the goodness of each test condition.
2) How should the splitting procedure stop?
A stopping condition is needed to terminate the tree-growing process. A possible strategy is to continue expanding a node until either all the records belong to the same class or all the records have identical attribute values.
Nominal Attributes: Since a nominal attribute can have many values, its test condition can be expressed in two ways, as shown in Figure 4.9. For a multiway split (Figure 4.9(a)), the number of outcomes depends on the number of distinct values of the corresponding attribute. For example, if an attribute such as Marital Status has three distinct values (Single, Married, or Divorced), its test condition will produce a three-way split.
Figure 4.9(b) illustrates three different ways of grouping the attribute values for Marital Status into two subsets (binary splits).
Ordinal Attributes: Ordinal attributes can also produce binary or multiway splits. Ordinal
attribute values can be grouped as long as the grouping does not violate the order property of the
attribute values. Figure 4.10 illustrates various ways of splitting training records based on the
Shirt Size attribute.
The groupings shown in Figures 4.10(a) and (b) preserve the order among the attribute values,
whereas the grouping shown in Figure 4.10(c) violates this property because it combines the
attribute values Small and Large into the same partition while Medium and Extra Large are
combined into another partition.
Continuous Attributes: For continuous attributes, the test condition can be expressed as a comparison test (A < v) or (A >= v) with binary outcomes, or as a range query with outcomes of the form vi <= A < vi+1, for i = 1, 2, ..., k. The difference between these approaches is shown in Figure 4.11.
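Both forms can be illustrated with a short snippet; the attribute, threshold, and bin boundaries below are arbitrary values chosen for illustration, not values from the notes.

annual_income = 95          # hypothetical value of a continuous attribute A (in $K)

# Comparison test with binary outcomes: (A < v) versus (A >= v)
v = 80
comparison_outcome = "A < v" if annual_income < v else "A >= v"

# Range query with outcomes of the form v_i <= A < v_(i+1)
bins = [0, 25, 50, 80, 120, float("inf")]
for lo, hi in zip(bins, bins[1:]):
    if lo <= annual_income < hi:
        range_outcome = f"{lo} <= A < {hi}"
        break

print(comparison_outcome)   # A >= v
print(range_outcome)        # 80 <= A < 120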
Greedy approach:
- Nodes with a homogeneous class distribution are preferred.
- Hence we need a measure of node impurity. For example, a node with class distribution (C0: 5, C1: 5) is non-homogeneous and has a high degree of impurity, while a node with class distribution (C0: 9, C1: 1) is homogeneous and has a low degree of impurity.
Common measures of node impurity are:
• Gini Index
• Entropy
• Misclassification error
These impurity measures are defined as follows:

Entropy(t) = - Σi p(i|t) log2 p(i|t)
Gini(t) = 1 - Σi [p(i|t)]^2
Classification error(t) = 1 - max_i [p(i|t)]

where p(i|t) denotes the fraction of records belonging to class i at a given node t, c is the number of classes, and the sums run over i = 0, 1, ..., c-1.
The measures developed for selecting the best split are often based on the degree of impurity of the child nodes. The smaller the degree of impurity, the more skewed the class distribution.
To determine how well a test condition performs, we need to compare the degree of impurity of the parent node (before splitting) with the degree of impurity of the child nodes (after splitting). The larger their difference, the better the test condition. The gain, Δ, is a criterion that can be used to determine the goodness of a split:

Δ = I(parent) - Σj [ N(vj) / N ] · I(vj)

where I(·) is the impurity measure of a given node, N is the total number of records at the parent node, and N(vj) is the number of records associated with the child node vj.
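A small Python sketch (not from the notes) of these measures and of the gain computation, using the two class distributions shown above:

import math

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def classification_error(counts):
    n = sum(counts)
    return 1.0 - max(counts) / n

def gain(parent_counts, children_counts, impurity=gini):
    """Delta = I(parent) - sum_j N(v_j)/N * I(v_j)."""
    n = sum(parent_counts)
    weighted = sum(sum(ch) / n * impurity(ch) for ch in children_counts)
    return impurity(parent_counts) - weighted

print(gini([5, 5]))    # 0.5  -> non-homogeneous, high impurity
print(gini([9, 1]))    # 0.18 -> homogeneous, low impurity

# Gain of a hypothetical split of a (10, 10) parent into (9, 1) and (1, 9)
print(gain([10, 10], [[9, 1], [1, 9]]))   # 0.32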
✓ The presence of redundant attributes does not adversely affect the accuracy of decision
trees.
Disadvantages:
✓ Since most decision tree algorithms employ a top-down, recursive partitioning
approach,
the number of records becomes smaller as we traverse down the tree. At the leaf nodes,
the number of records may be too small to make a statistically significant decision about
the class representation of the nodes.
✓ A subtree can be replicated multiple times in a decision tree, as illustrated in Figure 4.19.
This makes the decision tree more complex than necessary and perhaps more difficult to
interpret. Such a situation can arise from decision tree implementations that rely on a
single attribute test condition at each internal node.
Exercises:
A good model must have low training error as well as low generalization error.
Underfitting: The training and test error rates of the model are both large when the size of the tree is very small. This situation is known as model underfitting. Underfitting occurs because the model has yet to learn the true structure of the data. As a result, it performs poorly on both the training and the test sets.
Overfitting: As the number of nodes in the decision tree increases, the tree will have lower training and test error rates. However, once the tree becomes too large, its test error rate begins to increase even though its training error rate continues to decrease. This phenomenon is known as model overfitting.
Reasons for overfitting:
• Presence of noise
• Lack of representative samples
The figure shows the training and test error rates of the decision tree.
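The effect can be reproduced with a small experiment, assuming scikit-learn is available; the synthetic data set and the parameter values below are arbitrary choices made for illustration.

# Train trees of increasing size and compare training vs. test error rates.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, flip_y=0.2,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for leaves in (2, 5, 10, 50, 200, 400):
    tree = DecisionTreeClassifier(max_leaf_nodes=leaves, random_state=0)
    tree.fit(X_tr, y_tr)
    train_err = 1 - tree.score(X_tr, y_tr)   # keeps decreasing as the tree grows
    test_err = 1 - tree.score(X_te, y_te)    # decreases, then rises again
    print(f"{leaves:4d} leaves  train error {train_err:.3f}  test error {test_err:.3f}")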
Exercises:
Rule-Based Classifier
A rule-based classifier is a technique for classifying records using a collection of "if ... then ..." rules.
The rules for the model are represented in a disjunctive normal form, R = (r1 ∨ r2 ∨ ... ∨ rk), where R is known as the rule set and the ri's are the classification rules or disjuncts. Each rule has the form ri: (Condition i) → yi.
The left-hand side of the rule is called the rule antecedent or precondition.
The right-hand side of the rule is called the rule consequent, which contains the predicted class yi.
Name           Blood Type   Give Birth   Can Fly   Live in Water   Class
hawk           warm         no           yes       no              ?
grizzly bear   warm         yes          no        no              ?
lemur          warm         yes          no        no              ?
turtle         cold         no           no        sometimes       ?
dogfish shark  cold         yes          no        yes             ?
Page 28
Tid Refund AMarital I
Taxable A EHOUSING (15CS651) VI SEM CSE
Status Income Class
Mutually Exclusive Rules: The rules in a rule set R are mutually exclusive if no two rules in R are triggered by the same record. This property ensures that every record is covered by at most one rule in R.
Exhaustive Rules: A rule set R has exhaustive coverage if there is a rule for each combination of attribute values. This property ensures that every record is covered by at least one rule in R.
Ordered Rules: In this approach, the rules in a rule set are ordered in decreasing order of their priority, which can be defined in many ways (e.g., based on accuracy, coverage, total description length, or the order in which the rules are generated). An ordered rule set is also known as a decision list. When a test record is presented, it is classified by the highest-ranked rule that covers the record. This avoids the problem of having conflicting classes predicted by multiple classification rules.
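A minimal sketch of an ordered rule set (decision list) in Python is shown below. The rules are simplified, illustrative versions of the vertebrate rules suggested by the table above, not an exact rule set from the notes.

# Each rule is (condition, predicted class). Rules are tried in priority
# order, and the first rule whose condition covers the record fires.
rules = [
    (lambda r: r["Give Birth"] == "no" and r["Can Fly"] == "yes", "Bird"),
    (lambda r: r["Give Birth"] == "no" and r["Live in Water"] == "yes", "Fish"),
    (lambda r: r["Give Birth"] == "yes" and r["Blood Type"] == "warm", "Mammal"),
    (lambda r: r["Give Birth"] == "no" and r["Can Fly"] == "no", "Reptile"),
    (lambda r: r["Live in Water"] == "sometimes", "Amphibian"),
]
default_class = "Amphibian"      # default rule, used when no rule covers the record

def classify(record):
    for condition, label in rules:
        if condition(record):
            return label
    return default_class

hawk = {"Blood Type": "warm", "Give Birth": "no",
        "Can Fly": "yes", "Live in Water": "no"}
print(classify(hawk))            # Bird

Note that a record such as the turtle is covered by both the Reptile and the Amphibian rules; because the rule set is ordered, only the higher-priority Reptile rule fires, which is exactly how a decision list resolves such conflicts.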
Rule-Ordering Schemes
Rule-based ordering
▪ This approach orders the individual rules by some rule quality measure.
▪ This ordering scheme ensures that every test record is classified by the
"best" rule covering it.
Class-based ordering
▪ In this approach, rules that belong to the same class appear together in the rule set. The rules are then collectively sorted on the basis of their class information.
▪ The relative ordering among rules of the same class does not matter, as long as one of them fires.
Rule Evaluation:
✓ Rule-based classifiers are generally used to produce descriptive models that are easier to interpret, but they give comparable performance to the decision tree classifier.
Nearest-Neighbor Classifiers
Requires three things
- The set of stored records
- Distance Metric to compute distance between records
- The value of k, the number of nearest neighbors to retrieve
To classify an unknown record:
- Compute distance to other training records
- Identify k nearest neighbors
- Use class labels of nearest neighbors to determine the class label of unknown
record
- (e.g., by taking majority vote)
The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.
- Euclidean distance: d(p, q) = sqrt( Σi (pi - qi)^2 )
Determine the class from nearest neighbor list
- take the majority vote of class labels among the k-nearest neighbors
Choosing the value of k:
- If k is too small, sensitive to noise points
- If k is too large, neighborhood may include points from other classes
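A minimal k-nearest-neighbor classifier following the steps above; this sketch and its tiny data set are illustrative, not from the notes.

import math
from collections import Counter

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def knn_classify(x, train_points, train_labels, k=3):
    """Classify x by majority vote among its k nearest training records."""
    # Compute the distance from x to every training record, then sort.
    dists = sorted(
        (euclidean(x, p), label) for p, label in zip(train_points, train_labels)
    )
    # Take the class labels of the k nearest neighbors and vote.
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Tiny illustrative training set: two numeric attributes, two classes.
points = [(1, 1), (1, 2), (2, 1), (6, 6), (7, 7), (6, 7)]
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_classify((2, 2), points, labels, k=3))   # A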
Characteristics of Nearest-Neighbor Classifiers:
Bayes’ Theorem:
Bayes’ theorem is a way to figure out conditional probability. Conditional probability is the
probability of an event happening, given that it has some relationship to one or more other
events.
P(C | A) = P(A | C) P(C) / P(A)
In a particular pain clinic, 10% of patients are prescribed narcotic pain killers. Overall, five
percent of the clinic’s patients are addicted to narcotics (including pain killers and illegal
substances). Out of all the people prescribed pain pills, 8% are addicts. If a patient is an addict,
what is the probability that they will be prescribed pain pills?
Step 1: Figure out what your event “A” is from the question. That information is in the
italicized part of this particular question. The event that happens first (A) is being prescribed pain
pills. That’s given as 10%.
Step 2: Figure out what your event “B” is from the question. That information is also in the italicized part of this particular question. Event B is being an addict. That’s given as 5%.
Step 3: Figure out the probability of event B (Step 2) given event A (Step 1). In other words, find P(B|A). We want to know “Given that people are prescribed pain pills, what’s the probability they are an addict?” That is given in the question as 8%, or 0.08.
Step 4: Insert your answers from Steps 1, 2 and 3 into the formula and solve. P(A|B)
= P(B|A) * P(A) / P(B) = (0.08 * 0.1)/0.05 = 0.16
The probability of an addict being prescribed pain pills is 0.16 (16%).
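The same arithmetic, written out as a quick Python check:

p_a = 0.10            # P(A): prescribed pain killers
p_b = 0.05            # P(B): addicted to narcotics
p_b_given_a = 0.08    # P(B|A): addicts among those prescribed pain pills

p_a_given_b = p_b_given_a * p_a / p_b     # Bayes' theorem
print(p_a_given_b)                        # 0.16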
During the training phase, we need to learn the posterior probabilities P(Y|X) for every combination of X and Y based on information gathered from the training data. Knowing these probabilities, a test record X' can be classified by finding the class Y' that maximizes the posterior probability P(Y'|X').
To illustrate this approach, consider the task of predicting whether a loan borrower will default on their payments.
Figure 5.9 shows a training set with the following attributes: Home Owner, Marital Status, and Annual Income. Loan borrowers who defaulted on their payments are classified as Yes, while those who repaid their loans are classified as No.
Suppose we are given a test record with the following attribute set:
X = (Home Owner = No, Marital Status = Married, Annual Income = $120K).
To classify the record, we need to compute the posterior probabilities P(Yes|X) and P(No|X) based on the information available in the training data.
If P(Yes|X) > P(No|X), then the record is classified as Yes; otherwise, it is classified as No.
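A sketch of that comparison under the naïve Bayes assumption is shown below. The prior and class-conditional probabilities are placeholder values chosen only to illustrate the computation; in practice they are estimated from the training records in Figure 5.9.

# Test record: X = (Home Owner = No, Marital Status = Married, Income = $120K)
prior = {"Yes": 0.3, "No": 0.7}
likelihood = {                    # P(attribute value | class), placeholder values
    "Yes": [0.9, 0.1, 1e-3],      # P(HomeOwner=No|Yes), P(Married|Yes), P(Income=120K|Yes)
    "No":  [0.6, 0.6, 7e-3],      # P(HomeOwner=No|No),  P(Married|No),  P(Income=120K|No)
}

score = {}
for cls in prior:
    p = prior[cls]
    for term in likelihood[cls]:
        p *= term                 # conditional independence assumption
    score[cls] = p                # proportional to P(cls | X)

prediction = max(score, key=score.get)
print(score, "->", prediction)    # the class with the larger score wins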
A more robust alternative is the m-estimate of the conditional probability:

P(xi | yj) = (nc + m·p) / (n + m)

where n is the total number of instances from class yj, nc is the number of training examples from class yj that take on the value xi, m is a parameter known as the equivalent sample size, and p is a user-specified parameter (a prior estimate of the probability).
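A quick illustration of the effect, with made-up counts:

def m_estimate(nc, n, m, p):
    """(nc + m*p) / (n + m): shrinks the raw fraction nc/n toward the prior p."""
    return (nc + m * p) / (n + m)

# Hypothetical case: none of the 3 training records of a class take the value.
print(0 / 3)                        # raw estimate: 0.0, wipes out the whole posterior
print(m_estimate(0, 3, 3, 1 / 2))   # m-estimate with m = 3, p = 1/2: 0.25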
(b) Use the estimate of conditional probabilities given in the previous question (a) to predict the class label for a test sample (A = 0, B = 1, C = 0) using the naïve Bayes approach.
➢ Naive Bayes classifiers can also handle missing values by ignoring the example during
model building and classification.
➢ They are robust to irrelevant attributes.
➢ Correlated attributes can degrade the performance of naive Bayes classifiers because the
conditional independence assumption no longer holds for such attributes.
Consider three random variables, A, B, and C, in which A and B are independent variables and
each has a direct influence on a third variable, C.
The relationships among the variables can be summarized into the directed acyclic graph shown
in Figure 5.12(a).
Each node in the graph represents a variable, and each arc asserts the dependence relationship between the pair of variables. If there is a directed arc from X to Y, then X is the parent of Y and Y is the child of X. Furthermore, if there is a directed path in the network from X to Z, then X is an ancestor of Z, while Z is a descendant of X.
For example, in the diagram shown in Figure 5.12(b), A is a descendant of D and D is an ancestor of B. Both B and D are also non-descendants of A.
In the diagram shown in Figure 5.12(b), A is conditionally independent of both B and D given C
because the nodes for B and D are non-descendants of node A.
The conditional independence assumption made by a naive Bayes classifier can also be represented using a Bayesian network, as shown in Figure 5.12(c), where y is the target class and {X1, X2, ..., Xd} is the attribute set.
Besides the conditional independence conditions imposed by the network topology, each node is also associated with a probability table.
1. If a node X does not have any parents, then the table contains only the prior probability P(X).
2. If a node X has only one parent, Y, then the table contains the conditional probability P(X|Y).
3. If a node X has multiple parents, {Y1, Y2, ..., Yn}, then the table contains the conditional probability P(X|Y1, Y2, ..., Yn).
EXAMPLE:
You have a new burglar alarm installed at home.
✓ It is fairly reliable at detecting burglary, but also sometimes responds to minor
earthquakes.
✓ You have two neighbors, Ali and Veli, who promised to call you at work when they
hear
the alarm.
✓ Ali always calls when he hears the alarm, but sometimes confuses telephone ringing with
the alarm and calls too.
✓ Veli likes loud music and sometimes misses the alarm.
✓ Given the evidence of who has or has not called, we would like to estimate the
probability of a burglary.
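A small sketch of that estimate, obtained by enumerating the hidden variables of the network. The conditional probability tables below are assumed values used only for demonstration; the notes do not specify them.

from itertools import product

P_B = 0.001                          # P(Burglary)
P_E = 0.002                          # P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm | Burglary, Earthquake)
P_Ali = {True: 0.90, False: 0.05}    # P(Ali calls | Alarm)
P_Veli = {True: 0.70, False: 0.01}   # P(Veli calls | Alarm)

def joint(b, e, a, ali, veli):
    """Joint probability factorised along the network structure."""
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    p *= P_A[(b, e)] if a else 1 - P_A[(b, e)]
    p *= P_Ali[a] if ali else 1 - P_Ali[a]
    p *= P_Veli[a] if veli else 1 - P_Veli[a]
    return p

# P(Burglary | Ali calls, Veli calls), summing out Earthquake and Alarm.
num = sum(joint(True, e, a, True, True) for e, a in product([True, False], repeat=2))
den = sum(joint(b, e, a, True, True) for b, e, a in product([True, False], repeat=3))
print(num / den)                     # about 0.28 with these example numbers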
Characteristics of BBN
Following are some of the general characteristics of the BBN method:
✓ BBN provides an approach for capturing the prior knowledge of a particular domain
using a graphical model. The network can also be used to encode causal dependencies
among variables.
✓ Constructing the network can be time consuming and requires a large amount of effort.
However, once the structure of the network has been determined, adding a new variable
is quite straightforward.
✓ Bayesian networks are well suited to dealing with incomplete data. Instances with
missing attributes can be handled by summing or integrating the probabilities over all
possible values of the attribute.
✓ Because the data is combined probabilistically with prior knowledge, the method is quite robust to model overfitting.
QUESTIONS:
1. What is classification? Explain the general approach for solving a classification problem with an example.
2. How are decision trees used for classification? Explain the decision tree induction algorithm for classification.
3. Write Hunt's algorithm and illustrate its working.
4. Explain the Methods for Expressing Attribute Test Conditions.
5. Explain various measures for selecting the best split with an example.
6. Explain the importance of evaluation criterion for classification methods.
7. Explain the characteristics of decision tree Induction.
8. Explain model overfitting. What are the reasons for overfitting? How can overfitting problems be addressed?
9. Explain how to estimate generalization errors.
10. List characteristics of decision tree induction.
11. Give the difference between rule-based ordering and class-based ordering scheme.
12. Explain rule-based classifier and its characteristics.
13. Explain the characteristics of rule based classifier
14. How can the accuracy of classification be improved? Explain.
15. Explain k-nearest neighbor classification algorithm.
16. Explain any characteristics of the nearest neighbor classifier.
17. What is Bayes' theorem? Show how it is used for classification.
18. Explain with an example how the naïve Bayes algorithm is used for classification.
20. Discuss the two common strategies for growing a classification rule.
21. Explain sequential covering algorithm for rule extraction.
22. Explain model building in Bayesian networks.