In: Math
Please write down your understanding of binary class linear
SVMs. Please cover the following subtopics:
1) The primal form and its dual form for both the hard-margin and soft-margin cases;
2) Concept of support vectors;
3) Why max-margin is good;
4) Concepts of generalisation/test error.
Background
There are many different kinds of machine learning algorithms for discovering actionable insights from data. These algorithms can be classified into two broad categories, based on how they "learn":
1) Supervised Learning
2) Unsupervised Learning
Most practical machine learning uses supervised learning. In supervised learning you have input variables (X) and an output variable (Y), and you use an algorithm to learn the mapping function from input to output, Y = f(X). The goal is to approximate this mapping function so well that, given new input data (X), you can predict the corresponding output (Y).
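As a minimal sketch of this idea (toy data invented here for illustration): suppose the true mapping is a noisy line, and we approximate f by fitting a linear function with least squares, then predict the output for an unseen input.

```python
import numpy as np

# Hypothetical toy data: the true mapping is Y = 2*X + 1 plus small noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=50)
Y = 2 * X + 1 + rng.normal(0, 0.01, size=50)

# Approximate the mapping with f_hat(x) = a*x + b, fitted by least squares.
A = np.column_stack([X, np.ones_like(X)])
(a, b), *_ = np.linalg.lstsq(A, Y, rcond=None)

# Predict the output for a new, previously unseen input.
x_new = 0.5
y_pred = a * x_new + b
print(a, b, y_pred)  # a and b should be close to the true 2 and 1
```

The same train-then-predict pattern carries over to classification, where Y is a class label rather than a number.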
Classification is one such supervised learning technique: an algorithm is trained on labelled data so that it can assign new data points to the correct class with high probability.
Example:
An SVM (Support Vector Machine) can be used for both binary-class (two classes) and multi-class (more than two classes) problems. When the data has exactly two classes, it is called a binary-class SVM. An SVM classifies data by finding the best hyperplane that separates all data points of one class from those of the other class. The best hyperplane for an SVM is the one with the largest margin between the two classes, where the margin is the maximal width of the slab, parallel to the hyperplane, that contains no interior data points.
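The max-margin idea above is usually written as an optimisation problem. A standard formulation, for training points $(\mathbf{x}_i, y_i)$ with labels $y_i \in \{-1, +1\}$, is:

```latex
% Hard-margin primal: maximising the margin 2/||w|| is equivalent to
\min_{\mathbf{w},\, b}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^2
\quad \text{s.t.} \quad y_i(\mathbf{w}^\top \mathbf{x}_i + b) \ge 1, \quad i = 1, \dots, n

% Soft-margin primal: slack variables \xi_i allow margin violations,
% traded off against margin width by the parameter C
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}}\ \tfrac{1}{2}\lVert\mathbf{w}\rVert^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i(\mathbf{w}^\top \mathbf{x}_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0

% Dual (same for both cases, except the upper bound C on \alpha_i
% appears only in the soft-margin case)
\max_{\boldsymbol{\alpha}}\ \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j \mathbf{x}_i^\top \mathbf{x}_j
\quad \text{s.t.} \quad \sum_{i=1}^{n} \alpha_i y_i = 0, \quad 0 \le \alpha_i \ (\le C)
```

At the optimum, the points with $\alpha_i > 0$ are exactly the support vectors defined below.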
The support vectors are the data points closest to the separating hyperplane; these points lie on the boundary of the slab. The following figure illustrates these definitions, with + indicating data points of class +1 and – indicating data points of class –1.
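A small numerical sketch of the definitions above, with toy 2-D data and a hand-picked separating hyperplane (chosen for illustration, not learned by an SVM solver): the support vectors are the points at minimal distance to the hyperplane.

```python
import numpy as np

# Toy 2-D data: class +1 above the line x2 = x1, class -1 below it.
X = np.array([[2.0, 3.0], [3.0, 4.0], [1.0, 3.0],   # class +1
              [3.0, 2.0], [4.0, 3.0], [3.0, 1.0]])  # class -1
y = np.array([1, 1, 1, -1, -1, -1])

# Hypothetical separating hyperplane w.x + b = 0: here w = (-1, 1), b = 0,
# i.e. the line x2 = x1.
w = np.array([-1.0, 1.0])
b = 0.0

# Signed geometric distance of each point to the hyperplane; multiplying
# by the label y makes the distance positive for correctly classified points.
dist = y * (X @ w + b) / np.linalg.norm(w)

# Support vectors are the points closest to the hyperplane, i.e. the
# ones lying on the boundary of the slab (here, four of the six points).
support = X[np.isclose(dist, dist.min())]
print(support)
```

Note that the slab width here is twice the minimal distance; the max-margin SVM is the hyperplane that makes this width as large as possible.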