Kernels Part 1: What is an RBF Kernel? Really?

Posted on February 6, 2012



My first blog post on machine learning discusses a pet peeve I have from working in the industry: why you should not apply an RBF kernel to text classification tasks.

I wrote this as a follow-up to a Quora answer on the subject:

http://www.quora.com/Machine-Learning/How-does-one-decide-on-which-kernel-to-choose-for-an-SVM-RBF-vs-linear-vs-poly-kernel

I will eventually rewrite this entry once I get better at LaTeX.  For now, refer to

Smola, Schölkopf, and Müller, "The connection between regularization operators and support vector kernels": http://cbio.ensmp.fr/~jvert/svn/bibli/local/Smola1998connection.pdf

Here I expand on one point: why not to use Radial Basis Function (RBF) Kernels for text classification.  I encountered this while consulting a few years ago at eBay, where not one but three of the teams (local, German, and Indian) were all doing this, with no success.  They were treating a multi-class text classification problem using an SVM with an RBF Kernel.  What is worse, they claimed the RBF calculations would take up to 2 weeks to run, whereas I could run a linear SVM in a few minutes on my desktop.  At the time, it seemed to me that anyone who simply read the SVM papers could see this was foolish, and still…
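Here is a rough sketch of the kind of linear baseline I mean, in Python with scikit-learn.  The 20 newsgroups corpus and the parameter values are just stand-ins, not the actual eBay data or settings:

# Linear SVM baseline for multi-class text classification (illustrative only).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC, SVC
from sklearn.metrics import accuracy_score

# Any multi-class text corpus would do; 20 newsgroups is a stand-in here.
train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

# Sparse TF-IDF features plus a linear SVM: this trains in minutes on a desktop.
linear_clf = make_pipeline(TfidfVectorizer(), LinearSVC(C=1.0))
linear_clf.fit(train.data, train.target)
print("linear SVM accuracy:", accuracy_score(test.target, linear_clf.predict(test.data)))

# The RBF alternative the teams were using; the kernel computation on
# high-dimensional sparse data is far heavier, and gamma must also be tuned.
# rbf_clf = make_pipeline(TfidfVectorizer(), SVC(kernel="rbf", C=1.0, gamma=1.0))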

So let's look at the SVM optimization problem and see why RBF Kernels do not apply to text problems.

Suppose we have m documents / training instances (x ∈ X) and labels (y ∈ Y).  We seek to learn a function f that maps X to Y.

An SVM searches for a hyperplane that separates the data.  If we write the hyperplane as f(x), then we seek the optimal function f such that
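y_i\, f(x_i) \;\ge\; 1 \qquad \text{for all } i = 1, \dots, m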

This is obtained by minimizing the norm of f subject to the constraint that the positively labeled instances stay on one side of the hyperplane and the negatively labeled instances on the other.  This leads to the following constrained optimization problem:
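\min_{f} \; \tfrac{1}{2}\,\|f\|^{2} \qquad \text{subject to} \qquad y_i\, f(x_i) \;\ge\; 1, \quad i = 1, \dots, m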

Most data is not perfectly separable, so we can add some slack to the problem by also minimizing the Hinge Loss function
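L\big(y, f(x)\big) \;=\; \max\!\big(0,\; 1 - y\, f(x)\big)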

leading to the complete, soft-margin SVM optimization problem (constraints not shown):
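\min_{f,\,\xi} \;\; \tfrac{1}{2}\,\|f\|^{2} \;+\; C \sum_{i=1}^{m} \xi_i

where the \xi_i are slack variables, C controls the slack penalty, and the constraints y_i\, f(x_i) \ge 1 - \xi_i, \; \xi_i \ge 0 are the ones not shown.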

More generally in machine learning, we want to learn a function f, mapping instances (X) into labels (Y), that minimizes the expected value of our loss function
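R[f] \;=\; \mathbb{E}_{(x,y)}\big[\, L\big(y, f(x)\big) \,\big] \;\approx\; \frac{1}{m} \sum_{i=1}^{m} L\big(y_i, f(x_i)\big)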

where L is defined as a Hinge Loss for classification problems and a Squared Loss for continuous regression problems.
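The Hinge Loss is as above; the Squared Loss is L\big(y, f(x)\big) = \big(y - f(x)\big)^{2}.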

We now define a general, convex functional to minimize, the Regularized Risk functional:
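R_{\mathrm{reg}}[f] \;=\; \frac{1}{m} \sum_{i=1}^{m} L\big(y_i, f(x_i)\big) \;+\; \frac{\lambda}{2}\, \|f\|^{2}

(with \lambda the regularization weight),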

which may or may not be subject to (linear or quadratic) constraints, and where the norm of f is defined as the dot product in the Hilbert space F, with f ∈ F (in Dirac notation):
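\|f\|^{2} \;=\; \langle f \,|\, f \rangle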

Typically in Machine Learning, we define some abstract kernel function k_x such that
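\langle k_x \,|\, f \rangle \;=\; f(x) \qquad \text{for all } f \in F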

(Note that k_x plays a role similar to the Dirac delta function for cases where the set of functions f is not orthonormal.)  This leads to a general solution of the convex optimization problem:
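f(x) \;=\; \sum_{i=1}^{m} \alpha_i\, k_{x_i}(x)

where the \alpha_i are expansion coefficients (this is the content of the Representer Theorem).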

Really this just says our solution can be expanded in a basis k_x.  However, we don’t know anything about k_x yet, and that’s the problem we are trying to shed light on.

At this point, to make things more confusing, one typically uses the Kernel trick to introduce a Kernel (K) over a space of L2 functions such that the norm of f may be expressed in a more familiar Hilbert space:
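With K(x, x') = \langle k_x \,|\, k_{x'} \rangle, the expansion above gives

\|f\|^{2} \;=\; \sum_{i,j=1}^{m} \alpha_i\, \alpha_j\, K(x_i, x_j)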

This abstract form leads people to believe that one can just choose any vanilla Kernel and apply it to any problem without further thought.  To get beyond this obfuscated interpretation, we need to understand how K acts on f.  And to do this, as seen above, we need the inverse of K, also known as its Green's function g:
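\int dx''\; K(x, x'')\, g(x'', x') \;=\; \delta(x - x')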

and explicitly write the norm of f as:
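\|f\|^{2} \;=\; \int\!\!\int dx\, dx'\; f(x)\, g(x, x')\, f(x')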

We now introduce the Regularization operator P over the space of L2 functions, such that K is the Green's function of P*P (that is, P*P plays the role of the inverse kernel g):
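(P^{*}P)\, K(x, \cdot) \;=\; \delta_{x}(\cdot) , \qquad \text{so that} \qquad \|f\|^{2} \;=\; \langle f \,|\, P^{*}P \,|\, f \rangle \;=\; \|P f\|^{2}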

With P, we can now write the new, Kernelized optimization problem over an explicit class of L2 functions
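\min_{f} \;\; \frac{1}{m} \sum_{i=1}^{m} L\big(y_i, f(x_i)\big) \;+\; \frac{\lambda}{2}\, \|P f\|^{2}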

We can now see that to find an explicit form of the RBF Kernel, we want to find the corresponding Regularization operator that acts on our original data points.

First, we note that k(x, x') is frequently a function of a single variable (translation invariant), so that
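k(x, x') \;=\; k(x - x')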

So let us define k(x) as a simple Gaussian function (the RBF basis)
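k(x) \;=\; \exp\!\big( -\, \|x\|^{2} / (2\sigma^{2}) \big)

where \sigma sets the width (bandwidth) of the Gaussian.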

If we take the Fourier transform of k(x), we have k(w) in frequency space as
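\tilde{k}(w) \;\propto\; \exp\!\big( -\, \sigma^{2}\, \|w\|^{2} / 2 \big)

(up to a normalization constant that depends on the Fourier convention).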

We can now find the RBF Regularization operator as the Weierstrass transform of the norm of f (also known as the Gaussian Blur, a low-pass filter), expressed in frequency space (note w >= 0):
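Following the Smola, Schölkopf, and Müller paper above, and ignoring overall constants, the norm induced by P weights each frequency component of f by 1/\tilde{k}(w):

\|P f\|^{2} \;\propto\; \int dw\; e^{\,\sigma^{2} \|w\|^{2} / 2}\; |\tilde{f}(w)|^{2} \;\propto\; \int dx \sum_{n=0}^{\infty} \frac{\sigma^{2n}}{n!\, 2^{n}}\, \big( O^{n} f(x) \big)^{2}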

where the operator O is a combination of Laplacian and differential (gradient) operators:
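O^{2n} \;=\; \Delta^{n} , \qquad O^{2n+1} \;=\; \nabla\, \Delta^{n}

where \Delta is the Laplacian and \nabla the gradient.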

So there we have it… the RBF Kernel is nothing more than a low-pass filter, well known in Signal Processing as a tool to smooth images.  The RBF Kernel acts as a prior that selects out smooth solutions.  So the question is… does this apply to text or not…

(stay tuned)

Posted in: Uncategorized