In: Computer Science
A classification technique (or classifier) is a systematic approach to building classification models from an input data set. Examples include decision tree classifiers, rule-based classifiers, neural networks, support vector machines, and naïve Bayes classifiers. In your own words, describe each of these techniques and provide a scenario in which each technique would be most appropriate.
Use your textbook and outside resources in the formulation of your response. Cite the sources you use to make your response.
Decision tree
Decision tree learning uses a decision tree as a predictive model that maps observations about an item to conclusions about the item's target value. It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. More descriptive names for such tree models are classification trees or regression trees. In these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. In decision analysis, a decision tree is used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data but not decisions; rather, the resulting classification tree is an input for decision making.
There are many specific decision-tree algorithms. Notable ones include:
ID3 [6][7] (Iterative Dichotomiser 3)
C4.5 (successor of ID3)
CART (Classification And Regression Tree)
CHAID (Chi-squared Automatic Interaction Detection). Performs multi-level splits when computing classification trees.
MARS: extends decision trees to better handle numerical data.
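As an illustrative sketch of a CART-style tree in practice (assuming scikit-learn and its bundled Iris data set are available; the variable names are my own), a tree of limited depth can be fit and used to predict class labels:

```python
# Fit a small CART-style decision tree on the Iris data set.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# max_depth limits the number of conjunctions of features (branch tests)
# on any root-to-leaf path; each leaf holds a class label.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# Predict the class label of the first training sample.
print(clf.predict(X[:1]))
```

Tools such as `sklearn.tree.export_text` can then print the learned branches, which is what makes decision trees attractive when an explicit, inspectable decision process is needed.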
Multi-layer perceptron
A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. An MLP uses a supervised learning technique called backpropagation for training the network.[1][2] The MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable.
SVM
In this section, we tend to study Support Vector Machines, a promising new technique for the classification of each linear and nonlinear knowledge. in a very shell, a support vector machine (or SVM) is AN formula that works as follows. It uses a nonlinear mapping to rework the first coaching knowledge into the next dimension [8]. at intervals this new dimension, it searches for the linear optimum separating hyper plane (that is, a “decision boundary “separating the tuples of 1 category from another). With AN acceptable nonlinear mapping to a sufficiently high dimension, knowledge from 2 categories will invariably be separated by a hyper plane. The SVM finds this hyper plane victimization support vectors (“essential” coaching tuples) and margins (defined by the support vectors).We will turn over a lot of into these new ideas more below
Bayesian classifiers
Bayesian classifiers ar applied mathematics classifiers. they'll predict category membership possibilities, like the likelihood that a given tuple belongs to a specific category. Bayesian classification relies on Bayes’ theorem, delineated below. Studies scrutiny classification algorithms have found an easy Bayesian classifier referred to as the naïve Bayesian classifier to be comparable in performance with call tree and hand-picked neural network classifiers. Bayesian classifiers have additionally exhibited high accuracy and speed once applied to massive databases. Naïve Bayesian classifiers assume that the result of AN attribute worth on a given classis freelance of the values of the opposite attributes [8]. This assumption is termed category conditional independence. it's created to change the computations concerned and, during this sense, is taken into account “naïve.” Bayesian belief networks ar graphical models, that not like naïve Bayesian classifiers enable the illustration of dependencies among subsets of attributes. Bayesian belief networks also can be used for classification.