

Support vector machines (SVMs) are a class of statistical models first developed in the mid-1960s by Vladimir Vapnik. In later years, the model has evolved considerably into one of the most flexible and effective machine learning tools available. It is a supervised learning algorithm that can be used to solve both classification and regression problems, though the current focus is on classification only.

In a nutshell, the algorithm looks for a separating hyperplane, that is, a decision boundary separating members of one class from the other. If such a hyperplane exists, the work is done! If it does not, SVM uses a nonlinear mapping to transform the training data into a higher dimension and then searches for the optimal linear separating hyperplane there. With an appropriate nonlinear mapping to a sufficiently high dimension, data from two classes can always be separated by a hyperplane. The SVM algorithm finds this hyperplane using support vectors and margins.

As a training algorithm, SVM may not be very fast compared with some other classification methods, but owing to its ability to model complex nonlinear boundaries, it achieves high accuracy. SVM is also comparatively less prone to overfitting. It has been applied successfully to handwritten digit recognition, text classification, speaker identification, and more.

After completing the reading for this lesson, please finish the Quiz and R Lab on Canvas (check the course schedule for due dates).
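The key idea above, that mapping data to a higher dimension can make it linearly separable, can be illustrated with a small sketch. The lab for this course uses R, but the following is a minimal, self-contained Python illustration (not a full SVM); the data points and the feature map phi(x) = (x, x^2) are made up for this example.

```python
# Toy 1-D data: class +1 lies inside [-1, 1], class -1 lies outside.
# These points are NOT separable by any single threshold on x.
xs = [-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0]
ys = [-1, -1, +1, +1, +1, -1, -1]

def separable_1d(xs, ys):
    """Check whether any threshold t on x separates the two classes."""
    for t in xs:
        # Try both orientations of the threshold.
        if all((x <= t) == (y == +1) for x, y in zip(xs, ys)):
            return True
        if all((x >= t) == (y == +1) for x, y in zip(xs, ys)):
            return True
    return False

def separable_after_map(xs, ys, cut=1.5):
    """Map each x to phi(x) = (x, x**2); in this 2-D feature space the
    horizontal line x2 = cut separates the classes, because the inner
    points have x**2 < cut and the outer points have x**2 > cut."""
    return all((x * x < cut) == (y == +1) for x, y in zip(xs, ys))

print(separable_1d(xs, ys))         # False: no 1-D threshold works
print(separable_after_map(xs, ys))  # True: linearly separable after the map
```

A kernel SVM performs this kind of mapping implicitly (via a kernel function) rather than computing the high-dimensional coordinates explicitly, which is what makes the approach practical.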

Textbook reading: Chapter 9: Support Vector Machines (exclude Section 9.5).
