Recently Facebook hired Vapnik, the father of the Support Vector Machine (SVM) and of Vapnik–Chervonenkis (VC) Statistical Learning Theory.
Lesser known, Vapnik has also pioneered methods of Transductive and Universum Learning. Most recently, however, he has developed the theory of Learning Using Privileged Information (LUPI), also known as the SVM+ method.
This has generated a lot of interest in the community in developing numerical implementations.
In this post, we examine this most recent contribution.
Once again, we consider the problem of trying to learn a good classifier from a small set of labeled examples $\{(\mathbf{x}_i, y_i)\}_{i=1}^{L}$.
Here, we assume we have multiple views of the data; that is, different, statistically independent feature sets $\mathbf{x}_i$ and $\mathbf{x}^{*}_{i}$ that describe the data.
For example, we might have a set of breast cancer images, and some holistic, textual description provided by a doctor. Or, we might want to classify web pages, using both the text and the hyperlinks.
Multi-View Learning is a big topic in machine learning, and includes methods like Semi-Supervised Co-Training, Multiple Kernel Learning, Weighted SVMs, etc. We might even consider Ensemble methods as a kind of Multi-View Learning.
Having any extra, independent info about our data is very powerful.
In LUPI, also called the SVM+ method, we consider the situation where we only have the extra information $\mathbf{x}^{*}_{i}$ for the labeled examples.
Vapnik calls this Privileged Information, and shows us how to use it.
The VC Bounds
If we are going to discuss Vapnik, we need to mention VC theory. Here, we note an important result about soft and hard-margin SVMs: when the training set is non-separable, the VC bounds scale as

$$ R \;\leq\; \nu + O\!\left(\sqrt{\frac{h}{L}}\right) $$

where $\nu$ is the training error, $h$ is the VC dimension, and $L$ is the number of labeled examples.
But when the training set is separable, i.e. $\nu = 0$, the VC bounds scale as

$$ R \;\leq\; O\!\left(\frac{h}{L}\right) $$
Since the bound improves from $O(\sqrt{h/L})$ to $O(h/L)$, roughly $L$ separable examples do the work of $L^2$ non-separable ones. This means we can use, say, $L=300$ labeled examples as opposed to $L=90{,}000$, which is a huge difference.
It turns out we can also achieve $O(h/L)$ scaling–if we have an Oracle that tells us, a priori, the slacks $\xi_i$. Hence the term Oracle SVM. The Oracle tells us how much confidence we have in each label.
So if we can get the slacks $\xi_i$, or, equivalently, the weights, for each instance, then $L$ can be much smaller.
[Of course, this is only true if we can sample all the relevant features.]
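To make the idea of per-instance confidence concrete, here is a minimal sketch (not Vapnik's Oracle SVM itself) using a Weighted SVM via scikit-learn's `sample_weight`. The toy data and the "oracle" confidence scores are made up for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy 2-class data: two Gaussian blobs centered at (-1,-1) and (1,1)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# Pretend an "oracle" assigns a confidence to each label: here, points
# far from the true boundary (x1 + x2 = 0) are trusted more.
confidence = np.abs(X.sum(axis=1))

# Weighted soft-margin SVM: low-confidence points incur less penalty
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y, sample_weight=confidence)
print(clf.score(X, y))
```

The weights play the role of the Oracle's slacks: borderline, low-confidence instances are allowed to violate the margin cheaply.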
Soft Margin SVM
In practice, we usually assume the data is noisy and non-separable, so we use a soft-margin SVM, where we can adjust the amount of slack $\xi_i$ with the (cost) parameter $C$.
Note that we also have the bias $b$, but we can usually set $b$ to zero (still, be careful doing this!).
Let us write the soft-margin SVM optimization as

$$ \min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\; \frac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{i=1}^{L}\xi_i $$

subject to the constraints

$$ y_i\left(\mathbf{w}\cdot\mathbf{x}_i + b\right) \;\geq\; 1 - \xi_i, \qquad \xi_i \geq 0, \qquad i = 1,\ldots,L $$
Notice that while we formally add slack variables $\xi_i$ to the max-margin optimization–they eventually vanish. That is, we don't estimate the slacks; we replace them with the maximum margin violation, or the Hinge Loss error

$$ \xi_i = \max\left(0,\; 1 - y_i\left(\mathbf{w}\cdot\mathbf{x}_i + b\right)\right) $$
We then solve the unconstrained soft-margin SVM optimization, which is a convex upper bound to the problem stated above–as explained in this nice presentation on LibLinear:

$$ \min_{\mathbf{w},\, b} \;\; \frac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{i=1}^{L}\max\left(0,\; 1 - y_i\left(\mathbf{w}\cdot\mathbf{x}_i + b\right)\right) $$

or, in simpler terms,

$$ \min_{\mathbf{w},\, b} \;\; \frac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{i=1}^{L} H\!\left[\,y_i\, f(\mathbf{x}_i)\,\right] $$

where $f(\mathbf{x}) = \mathbf{w}\cdot\mathbf{x} + b$ and $H[\cdot]$ is the Hinge Loss.
And this works great–when we have lots of labeled data L.
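As a sanity check on the unconstrained form, here is a minimal numpy sketch that minimizes the regularized hinge loss by subgradient descent (the toy data, step size, and epoch count are my own choices for illustration; bias is set to zero as discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
L_, d = 100, 2
y = np.repeat([-1.0, 1.0], L_ // 2)
X = rng.normal(size=(L_, d)) + y[:, None]   # two shifted Gaussian blobs

C, lr, epochs = 1.0, 0.01, 200
w = np.zeros(d)

for _ in range(epochs):
    margins = y * (X @ w)                   # y_i * (w . x_i), bias b = 0
    active = margins < 1.0                  # points violating the margin
    # subgradient of 0.5*||w||^2 + C * sum_i max(0, 1 - y_i (w . x_i))
    grad = w - C * (y[active, None] * X[active]).sum(axis=0)
    w -= lr * grad

accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)
```

Note how the slacks never appear as variables: each step only needs to know which points currently violate the margin, exactly the Hinge Loss substitution above.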
What if we could actually estimate or assign all the $L$ slack variables $\xi_i$? In principle, we could then use far fewer labeled examples. This is the promise of LUPI.
In LUPI, we use the privileged information $\mathbf{x}^{*}_{i}$ to learn the slack variables $\xi_i$: we model the slacks as a linear function of the privileged information

$$ \xi_i = \mathbf{w}^{*}\cdot\mathbf{x}^{*}_{i} + b^{*} $$
Effectively, the privileged information provides a measure of confidence for each labeled instance.
To avoid overtraining, we apply a max-margin approach in the privileged (correcting) space as well. This leads to an extended, SVM+ optimization problem, with only 2 adjustable regularization parameters $(C, \gamma)$ (and 2 bias params $b, b^{*}$):

$$ \min_{\mathbf{w},\, b,\, \mathbf{w}^{*},\, b^{*}} \;\; \frac{1}{2}\|\mathbf{w}\|^{2} + \frac{\gamma}{2}\|\mathbf{w}^{*}\|^{2} + C\sum_{i=1}^{L}\left(\mathbf{w}^{*}\cdot\mathbf{x}^{*}_{i} + b^{*}\right) $$

subject to the constraints

$$ y_i\left(\mathbf{w}\cdot\mathbf{x}_i + b\right) \;\geq\; 1 - \left(\mathbf{w}^{*}\cdot\mathbf{x}^{*}_{i} + b^{*}\right), \qquad \mathbf{w}^{*}\cdot\mathbf{x}^{*}_{i} + b^{*} \geq 0 $$
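To make the formulation concrete, here is a small sketch that feeds the SVM+ primal to a generic constrained solver (SciPy's SLSQP) on synthetic two-view data. This is only an illustration of the objective and constraints; the toy data and variable names are my own, and a serious implementation would use the SMO-style dual solvers discussed below:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
L_, d, d_star = 20, 2, 2
y = np.repeat([1.0, -1.0], L_ // 2)
X = rng.normal(size=(L_, d)) + y[:, None]            # decision view
X_star = rng.normal(size=(L_, d_star)) + 0.5 * y[:, None]  # privileged view

C, gamma = 1.0, 1.0

def unpack(z):
    w = z[:d]; b = z[d]
    w_s = z[d + 1:d + 1 + d_star]; b_s = z[-1]
    return w, b, w_s, b_s

def objective(z):
    # 0.5||w||^2 + (gamma/2)||w*||^2 + C * sum_i (w* . x*_i + b*)
    w, b, w_s, b_s = unpack(z)
    slack = X_star @ w_s + b_s
    return 0.5 * w @ w + 0.5 * gamma * (w_s @ w_s) + C * slack.sum()

def margin_con(z):
    # y_i (w . x_i + b) - 1 + (w* . x*_i + b*) >= 0
    w, b, w_s, b_s = unpack(z)
    return y * (X @ w + b) - 1.0 + (X_star @ w_s + b_s)

def slack_con(z):
    # modeled slacks must stay non-negative: w* . x*_i + b* >= 0
    _, _, w_s, b_s = unpack(z)
    return X_star @ w_s + b_s

z0 = np.zeros(d + 1 + d_star + 1)
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": margin_con},
                            {"type": "ineq", "fun": slack_con}])
w, b, w_s, b_s = unpack(res.x)
acc = np.mean(np.sign(X @ w + b) == y)
print(res.success, acc)
```

The privileged weights $(\mathbf{w}^{*}, b^{*})$ are only used at training time to shape the slacks; at test time we classify with $\mathbf{w}, b$ alone.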
Show me the code?
The SVM+/LUPI problem is a convex (quadratic) optimization problem with linear constraints–although it does require a custom solver. The current approach is to solve the dual problem using a variant of SMO. There are a couple of versions (below) in Matlab and Lua. The Matlab code includes the related SVM+MTL (SVM+ MultiTask Learning). I am unaware of an open source C or Python implementation similar to Svmlin or Universvm.
To overcome the solver complexity, other methods, such as the Learning to Rank method, have been proposed. We will keep an eye out for useful open source platforms that everyone can benefit from.
The SVM+/LUPI formalism demonstrates a critical point about modern machine learning research. If we have multiple views of our data, extra information available during training (or maybe during testing), or just several large Hilbert spaces of features, then we can frequently do better than just dumping everything into an SVM, holding our nose, and turning the crank.
(Or: how many ways can we name the same method?)