2. Decision Tree Induction
• A decision tree is a flowchart-like structure in which each
internal node (non-leaf node) denotes a test on an attribute.
• Each branch represents an outcome of the test.
• Each leaf node (terminal node) holds a class label.
• The topmost node in a tree is the root node.
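To make the structure concrete, here is a minimal sketch (not from the original slides) of how such a flowchart-like tree could be represented; the class name and fields are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class TreeNode:
    """One node of a decision tree (illustrative sketch)."""
    attribute: Optional[str] = None   # test attribute at an internal node
    branches: Dict[str, "TreeNode"] = field(default_factory=dict)  # outcome -> child node
    label: Optional[str] = None       # class label if this is a leaf

# Root node tests Age; each branch outcome leads to a child (here, a leaf) node.
root = TreeNode(attribute="Age", branches={
    "youth":  TreeNode(label="No"),
    "middle": TreeNode(label="Yes"),
    "senior": TreeNode(label="No"),
})
```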
4. Why are decision tree classifiers so
popular?
• They do not require any domain knowledge.
• Decision trees can handle multi-dimensional data.
• They are easy to comprehend.
• The learning and classification steps of a decision tree are
simple and fast.
Applications:
Applications of decision tree induction include
astronomy, financial analysis, medical diagnosis,
manufacturing and production, molecular biology.
5. Decision Tree Algorithms
• CART (Classification And Regression Trees)
• ID3 (Iterative Dichotomiser)
In the late 1970s and early 1980s, J. Ross Quinlan, a researcher
in machine learning, developed the ID3 decision tree algorithm.
Later, he presented C4.5, the successor of ID3.
ID3, C4.5, and CART adopt a greedy (non-backtracking)
approach in which decision trees are constructed in a top-down,
recursive, divide-and-conquer manner.
6. Decision Tree Algorithm
The strategy for the algorithm is as follows:
(1) The algorithm is called with three parameters: the attribute list, the
attribute selection method, and the data partition.
(2) Initially, the data partition is the complete set of training tuples and their
associated class labels. The attribute list describes the attributes of the
training tuples.
RID  Age     Student  Credit_rating  Buys (class label)
1    Youth   Yes      Fair           Yes
2    Youth   Yes      Fair           Yes
3    Youth   Yes      Fair           No
4    Youth   No       Fair           No
5    Middle  No       Excellent      Yes
6    Senior  Yes      Fair           No
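For reference in the later sketches, the training tuples above can be written out as a small data partition in Python (the names D and attribute_list are illustrative, not from the slides):

```python
# Data partition D: the complete set of training tuples with their class labels.
# "Buys" is the class label attribute.
D = [
    {"RID": 1, "Age": "Youth",  "Student": "Yes", "Credit_rating": "Fair",      "Buys": "Yes"},
    {"RID": 2, "Age": "Youth",  "Student": "Yes", "Credit_rating": "Fair",      "Buys": "Yes"},
    {"RID": 3, "Age": "Youth",  "Student": "Yes", "Credit_rating": "Fair",      "Buys": "No"},
    {"RID": 4, "Age": "Youth",  "Student": "No",  "Credit_rating": "Fair",      "Buys": "No"},
    {"RID": 5, "Age": "Middle", "Student": "No",  "Credit_rating": "Excellent", "Buys": "Yes"},
    {"RID": 6, "Age": "Senior", "Student": "Yes", "Credit_rating": "Fair",      "Buys": "No"},
]
attribute_list = ["Age", "Student", "Credit_rating"]
```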
7. Decision Tree Algorithm
(3) The attribute selection method specifies a heuristic for selecting the
attribute that best discriminates among the tuples. Commonly used attribute
selection measures are Information Gain and the Gini Index (a sketch of
Information Gain follows the figure below). The attribute selection measure
also determines whether the resulting tree is binary or non-binary.
(4) The tree starts as a single node representing the training tuples in data
partition.
[Figure: the root node is split on the Age attribute]
Age = youth  -> partition {RID 1: Yes, RID 2: Yes, RID 3: No, RID 4: No}
Age = middle -> partition {RID 5: Yes}
Age = senior -> partition {RID 6: No}
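As a rough sketch of one attribute selection measure (Information Gain, based on entropy), the following code scores each attribute of the data partition D defined earlier; on these six tuples Age gets the highest gain, which is why the figure above splits on Age first. This is an illustration, not the textbook's own code.

```python
from collections import Counter
from math import log2

def entropy(tuples, class_attr="Buys"):
    """Entropy of the class label distribution in a partition."""
    counts = Counter(t[class_attr] for t in tuples)
    total = len(tuples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def information_gain(tuples, attr, class_attr="Buys"):
    """Expected reduction in entropy obtained by splitting on attr."""
    total = len(tuples)
    split_entropy = 0.0
    for value in {t[attr] for t in tuples}:
        subset = [t for t in tuples if t[attr] == value]
        split_entropy += len(subset) / total * entropy(subset, class_attr)
    return entropy(tuples, class_attr) - split_entropy

for attr in attribute_list:
    print(attr, round(information_gain(D, attr), 3))
# Age has the largest gain (about 0.33), so it is chosen as the splitting attribute.
```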
8. Decision Tree Induction
(5) If the tuples in the data partition are all of the same class, then the node
becomes a leaf and is labeled with that class (terminating condition).
(6) Otherwise, the attribute selection method is called to determine the
splitting criterion.
(7) The algorithm uses the same process recursively to form a decision tree
for the tuples at each resulting partition.
(8) The recursive partitioning stops only when any one of the following
terminating conditions is true:
9. Decision Tree Induction
(i) All the tuples in the partition belong to the same class.
(ii) There are no remaining attributes on which the tuples
may be further partitioned. In this case, majority voting is
employed: the node is converted into a leaf and labeled
with the most common class in the partition.
(iii) There are no tuples for a given branch; in this case too,
a leaf is created with the majority class in the partition.
(9) The resulting decision tree is returned.
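Putting steps (1) through (9) together, a minimal sketch of the top-down recursive divide-and-conquer procedure might look like this. It reuses TreeNode, D, attribute_list, and information_gain from the earlier sketches and is an assumption-laden illustration, not the exact pseudocode of ID3, C4.5, or CART.

```python
from collections import Counter

def majority_class(tuples, class_attr="Buys"):
    """Most common class label in a partition (used for majority voting)."""
    return Counter(t[class_attr] for t in tuples).most_common(1)[0][0]

def generate_tree(tuples, attrs, class_attr="Buys"):
    classes = {t[class_attr] for t in tuples}
    # (5)/(i): all tuples belong to the same class -> leaf labeled with that class
    if len(classes) == 1:
        return TreeNode(label=classes.pop())
    # (ii): no remaining attributes -> leaf labeled with the majority class
    if not attrs:
        return TreeNode(label=majority_class(tuples, class_attr))
    # (6): the attribute selection method picks the splitting attribute
    best = max(attrs, key=lambda a: information_gain(tuples, a, class_attr))
    node = TreeNode(attribute=best)
    remaining = [a for a in attrs if a != best]
    for value in {t[best] for t in tuples}:
        subset = [t for t in tuples if t[best] == value]
        # (7): recurse on each resulting partition.
        # (iii) (an empty branch gets a majority-class leaf) cannot occur here,
        # since we only iterate over values actually present in the partition.
        node.branches[value] = generate_tree(subset, remaining, class_attr)
    # (9): the (sub)tree rooted at this node is returned
    return node

tree = generate_tree(D, attribute_list)
```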
11. Tree Pruning
• An attempt to improve accuracy.
• Tree pruning is performed in order to remove branches
that reflect anomalies (noise or outliers) in the training
data. Removing such unwanted branches reduces the
complexity of the tree and helps in effective predictive
analysis. It also reduces overfitting, since unimportant
branches are removed from the tree.
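As one concrete way to prune in practice (an illustration, not the method described on the slide), scikit-learn's cost-complexity pruning removes branches whose contribution does not justify their complexity; the dataset and the ccp_alpha value below are arbitrary assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Unpruned tree vs. a tree pruned with cost-complexity pruning (ccp_alpha > 0).
full = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_train, y_train)

# The pruned tree has fewer nodes and often generalizes better (less overfitting).
print(full.tree_.node_count, full.score(X_test, y_test))
print(pruned.tree_.node_count, pruned.score(X_test, y_test))
```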
12. Bayesian Classification
• Bayesian classifiers are statistical classifiers.
• They can predict class membership probabilities such as the
probability that a given tuple belongs to a particular class.
• Bayesian classification is based on Bayes’ Theorem.
• Bayesian classifiers have also exhibited high accuracy and
speed when applied to large databases.
13. Bayes’ Theorem
• Bayes' Theorem is named after Thomas Bayes, who did early work in probability
and decision theory during the 18th century.
• Let X be a data tuple. In Bayesian terms, X is considered "evidence". Let H
be the hypothesis that the data tuple X belongs to a specified class C.
• P(H|X) is the posterior probability that the hypothesis H holds given the evidence
(the data tuple X), i.e. the probability that X belongs to the specified class C.
e.g. the data tuples comprise the attributes age and income. X is 35 years old with an
income of $40,000.
H is the hypothesis that X will buy a computer.
P(H|X) is the probability that X will buy a computer given his age and income.
• P(H) is the prior probability.
e.g. the probability that a customer will buy a computer, regardless of age and income;
i.e., P(H) is independent of X.
14. Bayes’ Theorem
• P(X|H) is the likelihood: the probability that the customer X is 35 years old and earns
$40,000, given that we know X will buy a computer.
• P(X) is the prior (marginal) probability of the evidence X.
e.g. the probability that a customer is 35 years old and earns $40,000, regardless of
whether he will buy a computer or not.
Bayes' Theorem is given by
P(H|X) = P(X|H) P(H) / P(X)
e.g. P(Queen|Face) = P(Face|Queen) P(Queen) / P(Face)
                   = (1 * 4/52) / (12/52)
                   = 1/3
                   ≈ 33.33%
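A quick numeric check of the card example in Python, assuming a standard 52-card deck with 12 face cards (jacks, queens, kings) and 4 queens:

```python
from fractions import Fraction

p_queen = Fraction(4, 52)          # prior P(Queen)
p_face = Fraction(12, 52)          # evidence P(Face): jacks, queens, kings
p_face_given_queen = Fraction(1)   # likelihood P(Face|Queen): every queen is a face card

# Bayes' Theorem: P(Queen|Face) = P(Face|Queen) * P(Queen) / P(Face)
p_queen_given_face = p_face_given_queen * p_queen / p_face
print(p_queen_given_face)                    # 1/3
print(float(p_queen_given_face) * 100, "%")  # about 33.33 %
```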