- The decision tree algorithm falls under the category of supervised
learning. Decision trees can be used to solve both regression and
classification problems.
- A decision tree uses a tree representation to solve the problem:
each leaf node corresponds to a class label, and attributes are
tested at the internal nodes of the tree.
- We can represent any boolean function over discrete attributes
using a decision tree.
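As a minimal sketch of this claim (the tree structure here is illustrative, not from the original text), the boolean function XOR can be represented as a small hand-built decision tree of nested dictionaries, where each internal node tests one attribute and each leaf holds a class label:

```python
# Hand-built decision tree for XOR(a, b): internal nodes are dicts that
# name the attribute they test; leaves are plain class labels (0 or 1).
def make_xor_tree():
    return {
        "attr": "a",
        0: {"attr": "b", 0: 0, 1: 1},  # a=0: output equals b
        1: {"attr": "b", 0: 1, 1: 0},  # a=1: output equals NOT b
    }

def predict(node, example):
    # Descend from the root until we reach a leaf (a non-dict value).
    while isinstance(node, dict):
        node = node[example[node["attr"]]]
    return node

tree = make_xor_tree()
assert all(predict(tree, {"a": a, "b": b}) == (a ^ b)
           for a in (0, 1) for b in (0, 1))
```

XOR is a useful check here because it is not linearly separable, yet a depth-two tree represents it exactly.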
Below are some assumptions made while building a
decision tree:
- At the beginning, we consider the whole training set as the
root.
- Feature values are preferred to be categorical. If the values
are continuous, they are discretized prior to building the
model.
- Records are distributed recursively on the basis of attribute
values.
- We use statistical methods (such as information gain or Gini
impurity) for ordering attributes as the root or internal nodes.
A decision tree works on the Sum of Products form, also known as
Disjunctive Normal Form: each root-to-leaf path is a product
(conjunction) of attribute tests, and the paths leading to the same
class label together form a sum (disjunction). As a running example,
consider a tree predicting whether a person uses a computer in their
daily life.
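The Sum of Products view can be made concrete with a small sketch (the attribute names and the rule itself are hypothetical, chosen to echo the computer-use example): the same function is written once as an explicit DNF expression and once as a decision tree, and the two agree on every input:

```python
# Hypothetical DNF: (age=="youth" AND student) OR (age=="senior" AND income=="high")
def uses_computer_dnf(age, student, income):
    return (age == "youth" and student) or (age == "senior" and income == "high")

def uses_computer_tree(age, student, income):
    # The same function as a decision tree: test age at the root,
    # then one further attribute per branch.
    if age == "youth":
        return student
    elif age == "senior":
        return income == "high"
    else:  # the "middle" branch reaches a leaf immediately
        return False

cases = [(a, s, i) for a in ("youth", "middle", "senior")
                   for s in (True, False)
                   for i in ("high", "low")]
assert all(uses_computer_dnf(*c) == uses_computer_tree(*c) for c in cases)
```

Each root-to-leaf path ending in a positive leaf corresponds to exactly one product term of the DNF, which is why any tree can be read off as a Sum of Products expression.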
Decision trees are commonly
used in operations research, specifically in
decision analysis, to help identify the strategy
most likely to reach a goal, but they are also a popular tool in
machine learning.