Why Deep Learning Works III: a preview

A preview of my upcoming talk at MMDS this week:

Stay tuned for a video link, and a blog post to accompany this.

Comments, questions, and bug fixes are welcome.

Tensorflow Reproductions: Big Deep Simple MNIST

I am starting a new project to try and reproduce some core deep learning papers in TensorFlow from some of the big names.

The motivation: to understand how to build very deep networks and why they do (or don’t) work.

There are several papers that caught my eye, starting with

These papers set the foundation for looking at much larger, deeper networks such as

FractalNets are particularly interesting since they suggest that very deep networks do not need student-teacher learning and can, instead, be self-similar (which is related to very recent work on the Statistical Physics of Deep Learning, and the Renormalization Group analogy).

IMHO, it is not enough just to implement the code; the results have to be excellent as well. I am not impressed with the results I have seen so far, and I would like to flesh out what is really going on.

Big Deep Simple Nets

The 2010 paper still appears to be one of the top 10 results on MNIST:

http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html

The idea is simple: they claim to get state-of-the-art accuracy on MNIST using a 5-layer MLP by running a large number of epochs with just SGD, a decaying learning rate, and an augmented data set.

The key idea is that the augmented data set can provide, in practice, an infinite amount of training data. And with infinite data we never have to worry about overtraining just because we have too many adjustable parameters; any reasonably sized network will do the trick if we just run it long enough.

In other words, there is no convolution gap, no need for early stopping, and really no need for regularization at all.

This sounds dubious to me, but I wanted to see for myself. Also, perhaps I am missing some subtle detail. Did they clip gradients somewhere? Is the activation function central? Do we need to tune the learning rate decay?
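To make the experimental setup concrete, here is a minimal numpy sketch of the training loop in question: a small MLP trained by plain SGD with an exponentially decaying learning rate. This is not the paper’s actual architecture or data; the toy 2-class problem, layer sizes, and decay schedule are all illustrative stand-ins for MNIST.

```python
import numpy as np

# Toy stand-in for MNIST: 200 points, 10 features, a linearly decided label.
rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# A small 2-layer MLP (sizes are illustrative, not the paper's).
W1 = 0.1 * rng.randn(10, 16); b1 = np.zeros(16)
W2 = 0.1 * rng.randn(16, 1);  b2 = np.zeros(1)

def forward(Xb):
    h = np.tanh(Xb @ W1 + b1)                      # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid output
    return h, p

lr0, decay = 0.5, 0.999                            # decaying learning rate
losses = []
for step in range(2000):
    lr = lr0 * decay ** step
    i = rng.randint(0, 200, size=32)               # mini-batch indices
    h, p = forward(X[i])
    err = p - y[i]                                 # d(cross-entropy)/d(logit)
    gW2 = h.T @ err / 32; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)               # backprop through tanh
    gW1 = X[i].T @ dh / 32; gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
    losses.append(float(np.mean(-y[i] * np.log(p + 1e-9)
                                - (1 - y[i]) * np.log(1 - p + 1e-9))))

_, p_all = forward(X)
train_acc = float(np.mean((p_all > 0.5) == (y > 0.5)))
```

The point of the reproduction is to see whether this bare-bones recipe, scaled up and fed augmented data, really needs nothing else.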

I  have initial notebooks on github,  and would welcome feedback and contributions, plus ideas for other papers to reproduce.

I am trying to repeat this experiment using TensorFlow and 2 kinds of augmented data sets:

• InfiMNIST (2006) – provides nearly 1B deformations of MNIST
• AlignMNIST (2016) – provides 75-150 epochs of deformed MNIST

(and let me say a special personal thanks to Søren Hauberg for providing this recent data set)
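To give a flavor of what these augmented sets contain, here is a sketch of one ingredient: generating new training images by small random affine deformations (rotation, scale, shift) with nearest-neighbour resampling. This is purely illustrative; InfiMNIST itself also applies elastic distortions, and the parameter ranges below are my own guesses, not those of either data set.

```python
import numpy as np

def random_affine(img, rng, max_rot=0.15, max_scale=0.1, max_shift=2):
    """Apply a small random rotation+scale+shift to a 2D image (sketch)."""
    h, w = img.shape
    theta = rng.uniform(-max_rot, max_rot)            # rotation in radians
    s = 1.0 + rng.uniform(-max_scale, max_scale)      # isotropic scale
    ty, tx = rng.uniform(-max_shift, max_shift, size=2)
    c, si = np.cos(theta), np.sin(theta)
    A = s * np.array([[c, -si], [si, c]])             # rotation + scale
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            # inverse warp: map each output pixel back to a source pixel
            yq, xq = np.linalg.solve(A, [i - cy - ty, j - cx - tx])
            yi, xi = int(round(yq + cy)), int(round(xq + cx))
            if 0 <= yi < h and 0 <= xi < w:
                out[i, j] = img[yi, xi]
    return out

rng = np.random.RandomState(0)
img = np.zeros((28, 28)); img[10:18, 12:16] = 1.0     # fake "digit" blob
augmented = [random_affine(img, rng) for _ in range(5)]
```

Run once per epoch over the whole training set, this kind of generator is what makes the “infinite data” claim operational.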

I would like to try other methods, such as the Keras Data Augmentation library (see below), or even the recent data generation library coming out of OpenAI.

Current results are up for

The initial results indicate that AlignMNIST is much better than InfiMNIST for this simple MLP, although I still do not see the extremely high, top-10 accuracy reported.

Furthermore, the 5-Layer InfiMNIST actually diverges after ~100 epochs.  So we still need early stopping, even with an infinite amount of data.

It may be interesting to try the Keras ImageDataGenerator class, described in this related blog on “building powerful image classification models using very little data”.

Also note that the OpenAI group has released a new paper and code for creating data used in generative adversarial networks (GANs).

I will periodically update this blog as new data comes in, and I have the time to implement these newer techniques.

Next, we will check in the log files and discuss the TensorBoard results.

Comments, criticisms, and contributions are very welcome.

Ask me anything … about machine learning

Today I am running an experiment and opening my blog  to machine learning questions.

Please ask questions in the comments section.

I will try to answer them on the blog over the next week or so.

Thanks in advance.

When Regularization Fails

Another Holiday Blog:  Feedback and Questions are very welcome.

I have a client who is using Naive Bayes pretty successfully, and the subject has come up as to ‘why do we need fancy machine learning methods?’   A typical question I am asked is:

Why do we need Regularization ?

What happens if we just turn the regularizer off ?

Can we just set it to a very small, default value ?

After all, we can just measure what customers are doing exactly.  Why not just reshow the most popular items, ads, etc to them. Can we just run a Regression?  Or Naive Bayes ?  Or just use the raw empirical estimates?

Do we Need Hundreds of Classifiers to Solve Real World Classification Problems?

Sure, if you are simply estimating historical frequencies, n-gram statistics, etc., then a simple method like Naive Bayes is fine. But to predict the future…

The reason we need Regularization is for Generalization.

After all, we don’t just want to make the same recommendations over and over.

Unless you are just in love with ad re-targeting (sic).   We want to predict on things we have never seen.

And we need to correct for presentation bias when collecting data on things we have seen.

Most importantly, we need methods that don’t break down unexpectedly.

In this post, we will see why Regularization is subtle and non-trivial even in seemingly simple linear models like the classic Tikhonov Regularization–something that is almost never discussed in modern classes.

We demonstrate that ‘strong’ overtraining is accompanied by a Phase Transition–and that optimal solutions lie just below it.

The example is actually motivated by something I saw recently in my consulting practice–and it was not obvious at first.

Still, please realize that we are about to discuss a pathological case hidden in the dark corners of Tikhonov Regularization–a curiosity for mathematicians.

See this Quora post on SVD in NLP for some additional motivations.

Let’s begin where Vapnik [1], Smale [2], and the other great minds of the field begin:

(Regularized) Linear Regression:

The most basic method in all statistics is linear regression

$\mathbf{X}\mathbf{w}=\mathbf{y}$.

It is attributed to Gauss, one of the greatest mathematicians ever.

One solution is to assume Gaussian noise and minimize the error.

The solution can also be constructed using the Moore-Penrose PseudoInverse.

If we multiply both sides on the left by $\mathbf{X}^{\dagger}$, we obtain

$\mathbf{X}^{\dagger}\mathbf{X}\mathbf{w}=\mathbf{X}^{\dagger}\mathbf{y}$

The formal solution is

$\mathbf{\hat{w}}=(\mathbf{X}^{\dagger}\mathbf{X})^{-1}\mathbf{X}^{\dagger}\mathbf{y}$

which is only valid when we can actually invert the data covariance matrix $\mathbf{X}^{\dagger}\mathbf{X}$.

We say the problem is well-posed when $(\mathbf{X}^{\dagger}\mathbf{X})^{-1}$

1. exists,
2. is unique, and
3. is stable.

Otherwise we say it is singular, or ill-posed [1].

In nearly all practical methods, we would not compute the inverse directly, but use some iterative technique to solve for $\mathbf{\hat{w}}$. In nearly all practical cases, even when the matrix seems invertible, or even well-posed, it may have numerical instabilities, either due to the data or the numerical solver. Vapnik refers to these as stochastically ill-posed problems [4].

Frequently $\mathbf{X}^{\dagger}\mathbf{X}$ cannot, or should not, be inverted directly.

Tikhonov Regularization

A trivial solution is to simply ‘regularize’ the matrix $\mathbf{X}^{\dagger}\mathbf{X}$ by adding a small, non-zero value to the diagonal:

$(\mathbf{X}^{\dagger}\mathbf{X})^{-1}\rightarrow(\mathbf{X}^{\dagger}\mathbf{X}+\lambda\mathbf{I})^{-1}$.

Naively, this seems like it would dampen instabilities and allow for a robust numerical solution.  And it does…in most cases.

If you want to sound like you are doing some fancy math, give it a Russian name; we call this Tikhonov Regularization [4]:

$\mathbf{\hat{w}}(\lambda)=(\mathbf{X}^{\dagger}\mathbf{X}+\lambda\mathbf{I})^{-1}\mathbf{X}^{\dagger}\mathbf{y}$

This is also called Ridge Regression in many common packages, such as scikit-learn.

The parameter $\lambda$ is the regularization parameter, usually found with Cross Validation (CV). While CV is a data science best practice, it can fail!
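In code, Tikhonov/Ridge regression is one linear solve. A minimal numpy sketch (the data and the tiny $\lambda$ are illustrative), checked against ordinary least squares in the small-$\lambda$ limit:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
w_true = np.arange(1.0, 6.0)                    # ground-truth weights
y = X @ w_true + 0.01 * rng.randn(50)           # small Gaussian noise

def ridge(X, y, lam):
    """Solve (X^T X + lam I) w = X^T y, the Tikhonov-regularized solution."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_ridge = ridge(X, y, 1e-8)                     # essentially unregularized
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
```

For well-conditioned data like this, the two solutions agree; the interesting behavior appears only when $\mathbf{X}^{\dagger}\mathbf{X}$ is nearly singular.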

Grid searching $\lambda$, we can obtain the infamous L-curve:

The L-curve is a log-log plot of the norm of the regularized solution vs. the norm of the residual. The best $\lambda$ lies somewhere along the bottom of the L, but we can’t really tell where (even with cross validation).
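The L-curve is easy to compute numerically. A sketch, using a deliberately ill-conditioned design matrix (the decaying singular values and noise level are my own choices for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
U, _ = np.linalg.qr(rng.randn(40, 40))
V, _ = np.linalg.qr(rng.randn(10, 10))
s = 10.0 ** -np.arange(10)                 # rapidly decaying singular values
X = U[:, :10] @ np.diag(s) @ V.T           # ill-conditioned design matrix
y = X @ rng.randn(10) + 1e-4 * rng.randn(40)

lams = 10.0 ** np.linspace(-12, 2, 30)     # grid of regularization parameters
sol_norms, res_norms = [], []
for lam in lams:
    w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
    sol_norms.append(np.log10(np.linalg.norm(w)))          # log ||w(lam)||
    res_norms.append(np.log10(np.linalg.norm(X @ w - y)))  # log ||Xw - y||
```

Plotting `sol_norms` against `res_norms` traces the L: tiny $\lambda$ gives a huge solution norm with a small residual, large $\lambda$ the reverse, and the corner sits in between.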

Yet this challenge of regularization is absent from basic machine learning classes.

The optimal $\lambda$ is actually hard to find. Cross Validation (CV) only works in ideal cases, and we need a CV metric, like $R^{2}$, which also only works in ideal cases.

We might naively imagine $\lambda$ can just be made very small–in effect, turning off the regularizer. This is seen in the curve above, where the solution norm diverges.

The regularization fails in 2 notable cases, when

1. the model errors are correlated, which fools simple cross validation, and
2. $\lambda\rightarrow 0$ and the number of features ~ the number of training instances, which leads to a phase transition.

Today we will examine the curious, spurious behavior in case 2 (and look briefly at case 1).

Regimes of Applications

As von Neumann once said, “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

To understand where the regularization can fail, we need to distinguish between the cases in which Ridge Regression is commonly used.

Say we have $N_{f}$ features and $N_{I}$ instances. We distinguish between the High Variance and High Bias regimes.

High Variance: $N_{f}\gg N_{I}$

This regime is underdetermined: complicated models that are subject to overtraining. We find multiple solutions which satisfy our constraints.

Most big data machine learning problems lie here.  We might have 100M documents and 100K words.  Or 1M images and simply all pixels as the (base) features.

Regularization lets us pick the best solution: the most smooth (L2 norm), the most sparse (L1 norm), or maybe even the one with the least presentation bias (i.e. using Variance regularization to implement Counterfactual Risk Minimization).

High Bias regime: $N_{f}\ll N_{I}$

This is the overdetermined regime, and any solution we find, regularized or not, will most likely generalize poorly.

When we have more instances than features, generally there is no exact solution at all (let alone a unique one).

Still, we can pick a regularizer, and the effect is similar to dimensional reduction. Tikhonov Regularization is similar to truncated SVD (explained below).

Any solution we find may work, but the predictions will be strongly biased towards the training data.

In between these 2 limits is a seemingly harmless case. However, this is really a

Danger Zone:  $N_{f}\approx N_{I}$

This is a rare case where the number of features = the number of instances.

This does come up in practical problems, such as classifying a large number of small text phrases.  (Something I have been working on with one of my larger clients)

The number of phrases ~ the number of unique words.

This is a dangerous case … not only because it seems so simple … but because the general theory breaks down. Fortunately, it is only pathological in

The limit of zero regularization

We examine how these different regimes behave for small values of $\lambda$:

$\underset{\lambda\rightarrow 0}{\lim}\;(\mathbf{X}^{\dagger}\mathbf{X}+\lambda\mathbf{I})^{-1}\mathbf{X}^{\dagger}\mathbf{y}$

Let’s formalize this and consider how the predicted accuracy behaves.

The Setup:  An Analytical Formulation

We analyze our regression under Gaussian noise $\mathbf{\xi}$. Let

$\mathbf{y}=\mathbf{X}\mathbf{w}^{*}+\mathbf{\xi}$

where $\mathbf{\xi}$ is a Gaussian random variable with zero mean and variance $\sigma^{2}$:

$\mathbf{\xi}\sim N(0,\sigma^{2})$

This simple model lets us analyze predicted Accuracy analytically.

Let

$\mathbf{\hat{w}}$ be the optimal Estimate, and

$\mathbf{w^{*}}$ be the Ground Truth, and

$\mathbf{\bar{w}}=\mathbb{E}_{\xi}[\mathbf{\hat{w}}]$ be the expected (mean) estimator.

We would like to define, and then decompose, the Generalization Accuracy into Bias and Variance contributions.

To derive the Generalization Error, we need to work out how the estimator behaves as a random variable. Frequently, in Machine Learning research, one examines the worst case scenario; here, we use the average case instead.

Define the Estimation Error

$Err(\mathbf{\hat{w}})=\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{w^{*}}\Vert^{2}$

Notice that by Generalization Error, we usually want to know how our model performs on a holdout set or test point $\mathbf{x}$:

$\mathbb{E}_{\mathbf{\xi}}[(\mathbf{x^{\dagger}}(\hat{\mathbf{w}}-\mathbf{w^{*}}))^{2}]$

In the Appendix, we work out the generalization bounds for the worst case and the average case.  From here on, however, when we refer to Generalization Error, we mean the average case Estimation Error (above).

Bias-Variance Tradeoff

For Ridge Regression, the mean estimator is

$\mathbf{\bar{w}}=(\mathbf{X^{\dagger}X}+\lambda\mathbf{I})^{-1}\mathbf{X^{\dagger}X}\mathbf{w^{*}}$

and its variation is

$\mathbf{\hat{w}}-\mathbf{\bar{w}}=(\mathbf{X^{\dagger}X}+\lambda\mathbf{I})^{-1}\mathbf{X^{\dagger}}\mathbf{\xi}$

We can now decompose the Estimation Error into 2 terms:

$Err(\mathbf{\hat{w}})=\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{\bar{w}}\Vert^{2}+\Vert\mathbf{\bar{w}}-\mathbf{w^{*}}\Vert^{2}\;\;$, where

$\Vert\mathbf{\bar{w}}-\mathbf{w^{*}}\Vert^{2}$ is the (squared) Bias, and

$\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{\bar{w}}\Vert^{2}$  is the Variance,

and examine each regime in the limit $\lambda\rightarrow 0$

The Bias is just the error of the average estimator:

$\Vert\mathbf{\bar{w}}-\mathbf{w^{*}}\Vert^{2}=\lambda^{2}\Vert(\mathbf{X}^{\dagger}\mathbf{X}+\lambda\mathbf{I})^{-1}\mathbf{w^{*}}\Vert^{2}$

and the Variance is the trace of the Covariance of the estimator

$\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{\bar{w}}\Vert^{2}=\sigma^{2}\;Tr\bigg(\dfrac{\mathbf{X}^{\dagger}\mathbf{X}}{(\mathbf{X}^{\dagger}\mathbf{X}+\lambda\mathbf{I})^{2}}\bigg)$

We can examine the limiting behavior of these statistics by looking at the leading terms in a series expansion for each. Consider the Singular Value Decomposition (SVD) of $X$

$\mathbf{X}=\mathbf{U}\mathbf{S}\mathbf{V}^{\dagger}$.

Let $\{s_{1},s_{2},\cdots,s_{m}\}$ be the positive singular values, and $\{\mathbf{v}_{i}\}$ be the right singular vectors.

We can write the Bias as

$\Vert\mathbf{\bar{w}}-\mathbf{w^{*}}\Vert^{2}=\sum\limits_{i=1}^{N_{f}}\big(\dfrac{\lambda\,\mathbf{v}_{i}^{\dagger}\mathbf{w}^{*}}{s_{i}^{2}+\lambda}\big)^{2}$

With no regularization, when the number of features exceeds the number of instances, there is a strong bias, determined by the ‘extra’ singular directions:

$\lim\limits_{\lambda\rightarrow 0}\Vert\mathbf{\bar{w}}-\mathbf{w^{*}}\Vert^{2}=\sum\limits_{i=N_{I}+1}^{N_{f}}(\mathbf{v}_{i}^{\dagger}\mathbf{w}^{*})^{2}\;\;when\;\;N_{f}\gg N_{I}$

Otherwise, the Bias vanishes.

$\lim\limits_{\lambda\rightarrow 0}\Vert\mathbf{\bar{w}}-\mathbf{w^{*}}\Vert^{2}=0\;\;when\;\;N_{f}\le N_{I}$

So the Bias appears to only matter in the underdetermined case.

(Actually, this is misleading; in the overdetermined case, we can introduce a bias when tuning the regularization parameter too high)

In contrast, the Variance

$\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{\bar{w}}\Vert^{2}=\sigma^{2}\sum\limits_{i=1}^{m}\dfrac{s_{i}^{2}}{(s^{2}_{i}+\lambda)^{2}}$

$\lim\limits_{\lambda\rightarrow 0}\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{\bar{w}}\Vert^{2}=\sigma^{2}\sum\limits_{i=1}^{m}s_{i}^{-2}$

can diverge if the smallest singular value $s_{min}$ is small:

$\mathbb{E}_{\xi}\Vert\mathbf{\hat{w}}-\mathbf{\bar{w}}\Vert^{2}\rightarrow\infty \;\;when\;\; s_{min}\sim 0$

That is,  if the singular values decrease too fast, the variance can explode.
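These closed forms are easy to verify numerically. A sketch (taking $\sigma^{2}=1$ for convenience) that checks the trace expression for the variance against the SVD sum, and then shrinks the smallest singular value to watch the $\lambda\rightarrow 0$ limit blow up:

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(30, 8)
s = np.linalg.svd(X, compute_uv=False)       # singular values of X

lam = 1e-3
C = X.T @ X
M = np.linalg.inv(C + lam * np.eye(8))
trace_form = np.trace(C @ M @ M)             # Tr( X^T X (X^T X + lam I)^-2 )
svd_form = np.sum(s ** 2 / (s ** 2 + lam) ** 2)

# Now shrink the smallest singular value: the lambda -> 0 variance,
# sum_i 1/s_i^2, explodes.
s_bad = s.copy(); s_bad[-1] = 1e-8
var_limit = np.sum(1.0 / s_bad ** 2)
```

The two expressions agree to machine precision, and `var_limit` shows how a single tiny singular value dominates the whole sum.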

And this can happen in the danger zone

When $N_{f}\sim N_{I}$, the variance can be infinite!

Which also means the Central Limit Theorem (CLT) breaks down… at least as we normally use it.

Traditionally, the (classical) CLT says that the infinite sum of i.i.d. random variables $\{x_{i}\}$ converges to a Gaussian distribution–when the variance of the $\{x_{i}\}$ is finite. We can, however, generalize the CLT to show that, when the variance is infinite, the sum converges to a Lévy or $\alpha$-stable, power-law distribution.

These power-law distributions are quite interesting–and arise frequently in chemistry and physics during a phase transition. But first, let’s see the divergent behavior actually arise.

When More is Less:  Singularities

Ryota Tomioka [3] has worked out some excellent examples using standard data sets.

[I will work up some ipython notebook examples soon]

The Spambase dataset is a great example: it has $N_{f}=57$ features and $N_{i}=4601$ instances. We run a Ridge Regression with $N_{i}\in(0,4601]$ and $\lambda=10^{-6}$.

As the number of instances $N_{i}$ increases, the accuracy drops sharply to zero at $N_{i}=N_{f}=57$, and then increases sharply again, saturating at $N_{i}\sim 10^{3}$.

Many other datasets show this anomalous behavior at $N_{i}=N_{f}$, as predicted by our analysis above.

Ridge Regression vs Logistic Regression: Getting out of the Hole

Where Ridge Regression fails, Logistic Regression bounces back. Below we show some examples of RR vs LR in the $N_{i}=N_{f}$ danger zone.

Logistic Regression still has problems, but it is nowhere near as pathological as RR.

The reason for this is subtle.

• In Regression, the variance goes to infinity
• In Classification, the norm $\Vert\mathbf{\hat{w}}\Vert$ goes to infinity

Traditional ML Theory:  Isn’t the Variance Bounded?

Traditional theory (very loosely) bounds quantities like the generalization error and (therefore) the variance.

We work out some examples in the Appendix.

Clearly these bounds don’t work in all cases.  So what do they really tell us?  What’s the point?

These theories give us some confidence that we can actually apply a learning algorithm–but only when the regularizer is not too small and the noise is not too large.

They essentially try to get us far away from the pathological phase transition that arises when the variance diverges.

Cross Validation and Correlated Errors

In practice, we would never just cross our fingers and set $\lambda=0.00000001$.

We pick a holdout set and run Generalized Cross Validation (GCV). Yet this can fail when

1. the model errors $\xi$ are not i.i.d. but are strongly correlated, or
2. we don’t know what accuracy metric to use.

Hansen [5, 7] has discussed these issues in detail… and he provides an alternative method, the L-curve, which attempts to balance the size of the regularized solution and the residuals.

For example, we can sketch the relative errors $\dfrac{\Vert w-w(\lambda)\Vert}{\Vert w\Vert_{2}}$ for the L-curve and GCV methods [7]. Above we see that the L-curve relative errors are more Gaussian than those of GCV.

In other words, while most packages provide a small choice of regression metrics, defaults like $R^{2}$ may not be good enough. Even using the 2-norm may not represent the errors well. As data scientists, we may need to develop our own metrics to get a good $\lambda$ for a regression (or maybe just use an SVM regression).

Hansen claims that “experiments confirm that whenever GCV finds a good regularization parameter, the corresponding solution is located at the corner of the L-curve.” [7]

This means that the optimal regularized solution lives ‘just below’ the phase transition, where the norm diverges.

Phase Transitions in Regularized Regression

What do we mean by a phase transition?

The simplest thing is to plot a graph and look for a steep cliff or sharp change.

Here, the generalization accuracy drops off suddenly as $\dfrac{N_{f}}{N_{i}}\rightarrow 1\;\;and\;\;\lambda\rightarrow 0$.

In a physical phase transition, the fluctuations (i.e. variances and norms) in the system approach infinity.

Imagine watching a pot boil

We see bubbles of all sizes, small and large.  The variation in bubble size is all over the map.  This is characteristic of a phase transition: i.e. water to gas.

When we cook, however, we frequently simmer our foods–we keep the water just below the boiling point.  This is as hot as the water can get before it changes to gas.

Likewise, it appears that in Ridge Regression we frequently operate at a point just below the phase transition–where the solution norm explodes. This is quite interesting and, I suspect, may be important generally.

In chemical physics, we need special techniques to treat this regime, such as the Renormalization Group. Amazingly, Deep Learning / Restricted Boltzmann Machines  look very similar to the Variational Renormalization Group method.

In our next post, we will examine this idea further by looking at the phase diagram of the traditional Hopfield Neural Network, and the idea of Replica Symmetry Breaking in the Statistical Mechanics of Generalization. Stay tuned!

References

1. Vapnik and Izmailov, V -Matrix Method of Solving Statistical Inference Problems, JMLR 2015

2. Smale, On the Mathematical Foundations of Learning, 2002

3. https://github.com/ryotat

Appendix:  some math

Bias-Variance Decomposition

$\mathbf{y}=\mathbf{X}\mathbf{w}+\mathbf{\xi}$,

where $\mathbb{E}[\mathbf{\xi}]=0$ and

$Cov(\mathbf{\xi})=\sigma^{2}\mathbf{I}$.

Let

$\mathbf{\Sigma}=\dfrac{1}{N_{i}}\mathbf{X^{\dagger}X}$

be the data covariance matrix. And, for convenience, let

$\lambda_{n}=\dfrac{\lambda}{n}$.

The Expected value of the optimal estimator, assuming Gaussian noise, is given using the Penrose Pseudo-Inverse:

$\mathbb{E}[\mathbf{\hat{w}}]=(\mathbf{\Sigma}+\lambda_{n}\mathbf{I})^{-1}\mathbf{\Sigma}\mathbf{w^{*}}$

The Covariance is

$Cov(\mathbf{\hat{w}})=\dfrac{\sigma^{2}}{n}(\mathbf{\Sigma}+\lambda_{n}\mathbf{I})^{-1}\mathbf{\Sigma}(\mathbf{\Sigma}+\lambda_{n}\mathbf{I})^{-1}$

The regularized Covariance matrix arises so frequently that we will assign it a symbol:

$\mathbf{\Sigma}_{\lambda_{n}}=\mathbf{\Sigma}+\lambda_{n}\mathbf{I}$

We can now write down the mean and covariance

$\mathbb{E}_{\xi}\Vert\hat{w}-\bar{w}\Vert^{2}=Tr(Cov(\mathbf{\hat{w}}))$

[more work needed here]

Average and Worst Case Generalization Bounds

Consider the Generalization Error for a test point $\mathbf{x}$

$\mathbb{E}_{\mathbf{\xi}}[(\mathbf{x^{\dagger}}(\hat{\mathbf{w}}-\mathbf{w^{*}}))^{2}]=\lambda^{2}_{n}[\mathbf{x^{\dagger}}\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{w^{*}}]^{2}+\dfrac{\sigma^{2}}{n}\mathbf{x^{\dagger}}\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{\Sigma}\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{x}$

We can now obtain some very simple bounds on the Generalization Accuracy (just like a bona fide ML researcher).

The worst case bound:

$[\mathbf{x^{\dagger}}\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{w^{*}}]^{2}\le\Vert\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{x}\Vert^{2}\Vert\mathbf{w^{*}}\Vert^{2}$

The average case bound:

$\mathbb{E}_{\mathbf{w^{*}}}[\mathbf{x^{\dagger}}\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{w^{*}}]^{2}\le\alpha^{-1}\Vert\mathbf{\Sigma}_{\lambda_{n}}^{-1}\mathbf{x}\Vert^{2}$,

where we assume the prior covariance of $\mathbf{w^{*}}$ is

$\mathbb{E}_{\mathbf{w^{*}}}[\mathbf{w^{*}w^{*\dagger}}]=\alpha^{-1}\mathbf{I}$

Discrete Picard Condition

We can use the SVD approach to make stronger statements about our ability to solve the inversion problem

$\mathbf{Xw}=\mathbf{y}$.

I will sketch an idea called the “Discrete Picard Condition” and provide some intuition

Write $\mathbf{X}$ as a sum over singular vectors

$\mathbf{X}=\mathbf{U\Sigma V^{\dagger}}=\sum\limits_{i=1}^{N}\mathbf{u_{i}}\sigma_{i}\mathbf{v_{i}^{\dagger}}$.

Introduce an abstract PseudoInverse,  defined by

$\mathbf{X}^{-\dagger}\mathbf{X}=\mathbf{I}$

Express the formal solution as

$\mathbf{w}=\mathbf{X}^{-\dagger}\mathbf{y}$

Imagine the SVD of the PseudoInverse is

$\mathbf{X}^{-\dagger}=\mathbf{V\Sigma^{-1}U^{\dagger}}=\sum\limits_{i=1}^{N}\mathbf{v_{i}}\sigma^{-1}_{i}\mathbf{u_{i}^{\dagger}}$.

and use it to obtain

$\mathbf{w}=\sum\limits_{i=1}^{N}\dfrac{\mathbf{u^{\dagger}_{i}y}}{\sigma_{i}}\mathbf{v_{i}}$.

For this expression to be meaningful, the SVD coefficients $\mathbf{u^{\dagger}_{i}y}$ must decay, on average, faster than the corresponding singular values $\sigma_{i}$.

Consequently, the only coefficients that carry any information are those larger than the noise level. The small coefficients have small singular values and are dominated by noise. As the problem becomes more difficult to solve uniquely, the singular values decay more rapidly.

If we discard the singular components $k+1,\dots,N$, we obtain a regularized, unsupervised solution called

Truncated SVD

$\mathbf{w}=\sum\limits_{i=1}^{k}\dfrac{\mathbf{u^{\dagger}_{i}y}}{\sigma_{i}}\mathbf{v_{i}}$.

which is similar in spirit to spectral clustering.
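A numpy sketch of both ideas at once: check the discrete Picard condition by thresholding the coefficients $\mathbf{u}_{i}^{\dagger}\mathbf{y}$ at the noise level, then form the truncated-SVD solution. The matrix, noise level, and threshold are all illustrative choices.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 12
U, _ = np.linalg.qr(rng.randn(n, n))
V, _ = np.linalg.qr(rng.randn(n, n))
s = 10.0 ** -np.arange(n, dtype=float)     # fast-decaying singular values
X = U @ np.diag(s) @ V.T
w_true = V[:, 0]                           # lives in the leading direction
y = X @ w_true + 1e-6 * rng.randn(n)       # noise swamps small components

coeffs = U.T @ y                           # u_i^T y: should decay with s_i
k = int(np.sum(np.abs(coeffs) > 1e-5))     # components above the noise level
w_tsvd = V[:, :k] @ (coeffs[:k] / s[:k])   # truncated-SVD solution
w_naive = V @ (coeffs / s)                 # full inversion amplifies noise
```

The truncated solution recovers `w_true` almost exactly, while the naive full inversion is dominated by noise divided by tiny singular values.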

Why Deep Learning Works II: the Renormalization Group

Deep Learning is amazing. But why is Deep Learning so successful? Is Deep Learning just old-school Neural Networks on modern hardware? Is it just that we have so much data now that the methods work better? Is Deep Learning just really good at finding features? Researchers are working hard to sort this out.

Recently it has been shown that [1]

Unsupervised Deep Learning implements the Kadanoff Real Space Variational Renormalization Group (1975)

This means the success of Deep Learning is intimately related to some very deep and subtle ideas from Theoretical Physics.  In this post we examine this.

Unsupervised Deep Learning: AutoEncoder Flow Map

An AutoEncoder is an Unsupervised Deep Learning algorithm that learns how to represent a complex image or other data structure $X$. There are several kinds of AutoEncoders; we care about so-called Neural Encoders–those using Deep Learning techniques to reconstruct the data:

The simplest Neural Encoder is a Restricted Boltzmann Machine (RBM). An RBM is a non-linear, recursive, lossy function $f(X)$ that maps the data $X$ from visible nodes $\{v\}$ into hidden nodes $\{h\}$:

The RBM is learned by selecting the optimal parameters $\{b_{v}\},\{c_{h}\},\{w_{v,h}\}$ that minimize some measure of the reconstruction error (see Training RBMs, below):

$\min \Vert f(X)-X\Vert$

RBMs and other Deep Learning algos are formulated using classical Statistical Mechanics.  And that is where it gets interesting!

Multi Scale Feature Learning

In machine learning (ML), we map (visible) data into (hidden) features

$\mathbf{v}(X)\rightarrow\mathbf{h}$

The hidden units discover features at a coarser grain level of scale

With RBMs, when features are complex, we may stack them into a Deep Belief Network (DBN), so that we can learn at different levels of scale

leading to multi-scale features in each layer.

Deep Belief Networks are a Theory of Unsupervised MultiScale Feature Learning

Fixed Points and Flow Maps

We call $f(X)$ a flow map

$f(X)\rightarrow X$

If we apply the flow map to the data repeatedly, (we hope) it converges to a fixed point:

$\lim\limits_{n\rightarrow\infty}f^{n}(f^{n-1}(\cdots f^{1}(f^{0}(X))\cdots))\rightarrow f_{\infty}(X)$

Notice that we usually expect to apply the same map each time, $f^{n}(x)=f(x)$; however, for a computational theory we may need more flexibility.
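A toy numerical illustration: a contractive non-linear map, iterated to its fixed point by brute force. The map (a tanh of a small random linear transform plus a shift, my own choice for illustration) is a contraction, so Banach’s fixed-point theorem guarantees convergence.

```python
import numpy as np

def f(X, C, b):
    """A simple non-linear flow map; contractive when ||C|| < 1."""
    return np.tanh(C @ X + b)

rng = np.random.RandomState(0)
C = 0.25 * rng.randn(4, 4) / np.sqrt(4)   # small random linear part
b = 0.5 * rng.randn(4)                    # shift, so the fixed point is non-trivial
X = rng.randn(4)

for _ in range(300):                      # iterate the flow map to convergence
    X = f(X, C, b)

fixed_point = X
residual = np.linalg.norm(f(fixed_point, C, b) - fixed_point)
```

After a few hundred iterations the residual $\Vert f(X)-X\Vert$ is at machine-precision level: the orbit has reached $f_{\infty}(X)$.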

Example: Linear Flow Map

The simplest example of a flow map is the simple linear map

$X\rightarrow CX$

so that

$f(X)\sim CX$

where C is a non-negative, low rank matrix

$\mid C\mid\,\ll\,\mid X\mid$

We have seen this before: this leads to a Convex form of NonNegative Matrix Factorization (NMF).

Convex NMF applies when we can specify the feature space and where the data naturally clusters.  Here, there are a few instances that are archetypes that define the convex hull of the data.

$\{\mathbf{x}_{c}\}\in X\,,c=1,\cdots,\mid C\mid$

Amazingly, many clustering problems are provably convex–but that’s a story for another post.

Example: Manifold Learning

Near a fixed point, we commonly approximate the flow map by a linear operator

$f_{\infty}(X) \sim\mathbf{L}(X)$

This lets us capture the structure of the true data manifold, and is usually described by the low lying eigen-spectra of

$\mathbf{L}(X)=\sum\limits_{i}\lambda_{i}\hat{\mathbf{v}}_{i}\hat{\mathbf{v}}_{i}^{\dagger}\sim\lambda_{0}\hat{\mathbf{v}}_{0}\hat{\mathbf{v}}_{0}^{\dagger}+\lambda_{1}\hat{\mathbf{v}}_{1}\hat{\mathbf{v}}_{1}^{\dagger}+\cdots$.

In the same spirit, in Semi- and Unsupervised Manifold Learning, we model the data using a Laplacian operator $\mathbf{L}(\sigma)$, usually parameterized by a single scale parameter $\sigma$.

These methods include Spectral Clustering, Manifold Regularization, Laplacian SVM, etc. Note that manifold learning methods, like the Manifold Tangent Classifier, employ Contractive Auto Encoders and use several scale parameters to capture the local structure of the data manifold.

The Renormalization Group

In chemistry and physics, we frequently encounter problems that require a multi-scale description.   We need this for critical points and phase transitions, for natural crashes like earthquakes and avalanches,  for polymers and other macromolecules, for strongly correlated electronic systems, for quantum field theory, and, now, for Deep Learning.

A unifying idea across these systems is the Renormalization Group (RG) Theory.

Renormalization Group Theory is both a conceptual framework on how to think about physics on multiple scales as well as a technical & computational problem solving tool.

Ken Wilson won the 1982 Nobel Prize in Physics for the development and application of his Momentum Space RG theory to phase transitions.

We used RG theory to model the recent BitCoin crash as a phase transition.

Wilson invented modern multi-scale modeling; the so-called Wilson Basis was an early form of Wavelets.  Wilson was also a big advocate of using supercomputers for solving problems.  Being a Nobel Laureate, he had great success promoting scientific computing.  It was thanks to him I had access to a Cray Y-MP when I was in high school because he was a professor at my undergrad, The Ohio State University.

Here is the idea. Consider a feature map which transforms the data $X$ to a different, more coarse-grained scale

$x\rightarrow\phi_{\lambda}(x)$

The RG theory requires that the Free Energy $F(x)$ is rescaled, to reflect that

the Free Energy is both Size-Extensive and Scale Invariant near a Critical Point

This is not obvious — but it is essential to both having a conceptual understanding of complex, multi scale phenomena, and it is necessary to obtain very highly accurate numerical calculations.  In fact, being size extensive and/or size consistent is absolutely necessary for highly accurate quantum chemistry calculations of strongly correlated systems.  So it is pretty amazing but perhaps not surprising that this is necessary for large scale deep learning calculations also!

The Fundamental Renormalization Group Equation (RGE)

$\mathcal{F}(x)=g(x)+\dfrac{1}{\lambda}\mathcal{F}(\phi_{\lambda}(x))$

If we (can) apply the same map repeatedly, we obtain an RG recursion relation, which is the starting point for most analytic work in theoretical physics. It is usually difficult to obtain an exact solution to the RGE (although it is illuminating when possible [20]).

Many RG formulations both approximate the exact RGE and/or only include relevant variables. To describe a multiscale system, it is essential to distinguish between these relevant and irrelevant variables.

Example: Linear Rescaling

Let’s say the feature map is a simple linear rescaling

$\phi(x)=\lambda x$

We can obtain a very elegant, approximate RG solution in which $\mathcal{F}(x)$ obeys a complex (or log-periodic) power law:

$\mathcal{F}(x)\sim x^{-(\alpha+i\beta)}$
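To see where a complex exponent can come from, here is a quick sketch: drop the inhomogeneous term $g(x)$ and allow a general rescaling prefactor $\mu$ in place of $\frac{1}{\lambda}$ (an assumption for illustration, not the full RG analysis).

```latex
% homogeneous RGE with a general prefactor mu
\mathcal{F}(x) = \tfrac{1}{\mu}\,\mathcal{F}(\lambda x)
% power-law ansatz
\mathcal{F}(x) = x^{s}
\;\Rightarrow\; x^{s} = \mu^{-1}\lambda^{s}\,x^{s}
\;\Rightarrow\; \lambda^{s} = \mu
% taking logs, the exponent is only defined modulo 2 pi i k / ln(lambda)
s = \frac{\ln\mu}{\ln\lambda} + \frac{2\pi i k}{\ln\lambda},
\qquad k\in\mathbb{Z}
% the imaginary part produces log-periodic oscillations
x^{s} = x^{\mathrm{Re}\,s}\, e^{\,2\pi i k\,\ln x/\ln\lambda}
```

The real part of $s$ gives the power law; the imaginary parts give corrections that oscillate periodically in $\ln x$, i.e. the log-periodic behavior.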

This behavior is thought to characterize Per Bak-style Self-Organized Criticality (SOC), which appears in many natural systems–and perhaps even in the brain itself. This leads to the argument that perhaps Deep Learning and Real Learning work so well because they operate like a system just near a phase transition–also known as the Sand Pile Model–at a state between order and chaos.

The Kadanoff Variational Renormalization Group (1975)

Leo Kadanoff, now at the University of Chicago, invented some of the early ideas in Renormalization Group.  He is most famous for the Real Space formulation of RG, sometimes called the Block Spin approach.  He also developed an alternative approach, called the Variational Renormalization Group (VRG, 1975), which is, remarkably, what Unsupervised RBMs are implementing!

Let’s consider a traditional Neural Network–a Hopfield Associative Memory (HAM).  This is also known as an Ising model or a Spin Glass in statistical physics.

A HAM consists of only visible units; it stores memories explicitly and directly in them:

We specify the Energy — called the Hamiltonian $\mathcal{H}$ — for the nodes.  Note that all the nodes are visible.  We write

$\mathcal{H}^{HAM}=-\sum\limits_{i}B_{i}v_{i}-\sum\limits_{i,j}J_{i,j}v_{i}v_{j}$

The Hopfield model has only single $B_{i}$ and pair-wise $J_{i,j}$ interactions.

A general Hamiltonian might have many-body, multi-scale interactions:

$\mathcal{H}(v)=-\sum\limits_{i}K_{i}v_{i}-\sum\limits_{i,j}K_{i,j}v_{i}v_{j}-\sum\limits_{i,j,k}K_{i,j,k}v_{i}v_{j}v_{k}-\cdots$

The Partition Function is given as

$\mathcal{Z}=\sum\limits_{v}e^{-\mathcal{H}(v)}$

And the Free Energy is

$\mathcal{F}^{v}=-\ln\mathcal{Z}=-\ln\sum\limits_{v}e^{-\mathcal{H}(v)}$
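For intuition, here is a toy sketch (my own, with random couplings) that computes $\mathcal{Z}$ and $\mathcal{F}^{v}$ for a small Hopfield-style Hamiltonian by exact enumeration:

```python
import itertools
import numpy as np

# Toy sketch: exact partition function and free energy for a small
# Hopfield-style Hamiltonian H(v) = -sum_i B_i v_i - sum_{i<j} J_ij v_i v_j,
# with spins v_i in {-1, +1} and random couplings (illustration only).
rng = np.random.default_rng(0)
n = 6
B = rng.normal(size=n)
J = np.triu(rng.normal(size=(n, n)), 1)  # keep only i < j pairs

def energy(v):
    v = np.asarray(v, dtype=float)
    return -B @ v - v @ J @ v

# Enumerate all 2^n visible configurations.
Z = sum(np.exp(-energy(v)) for v in itertools.product([-1, 1], repeat=n))
F = -np.log(Z)
print("Z =", Z, " F =", F)
```

Exact enumeration only works for a handful of spins, of course; that is exactly why variational and sampling methods are needed.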

The idea was to mimic how our neurons were thought to store memories–although perhaps our neurons do not even do this.

Either way, Hopfield Neural Networks have many problems; most notably they may learn spurious patterns that never appeared in the training set. So they are pretty bad memories.

Hinton created the modern RBM to overcome the problems of the Hopfield model.  He used hidden units to represent the features in the data–not to memorize the data examples directly.

An RBM specifies an Energy function over both the visible and hidden units

$\mathbf{E}(v,h)=-\mathbf{b}^{t}\mathbf{v}-\mathbf{v}^{t}\mathbf{W}\mathbf{h}-\mathbf{c}^{t}\mathbf{h}$

This also defines the joint probability of simultaneously observing a configuration of hidden and visible spins

$P(v,h)=\dfrac{e^{-\mathbf{E(v,h)}}}{\mathcal{Z}}$

which is learned variationally, by minimizing the reconstruction error…or the cross entropy (KL divergence), plus some regularization (Dropout), using Greedy layer-wise unsupervised training, with the Contrastive Divergence (CD or PCD) algo, …

The specific details of an RBM Energy are not addressed by these general concepts; these details do not affect these arguments–although clearly they matter in practice !

It turns out that

Introducing Hidden Units in a Neural Network is a Scale Renormalization.

When changing scale, we obtain an Effective Hamiltonian $\tilde{\mathcal{H}}$ that acts on the new feature space (i.e. the hidden units)

$\mathcal{H}(v)\rightarrow\tilde{\mathcal{H}}(h)$

or, in operator form

$\tilde{\mathcal{H}}(h)=\mathbb{R}[\mathcal{H}(v)]$

This Effective Hamiltonian is not specified explicitly, but we know it can take the general form (of a spin funnel, actually)

$\tilde{\mathcal{H}}(h)=-\sum\limits_{i}\tilde{K}_{i}h_{i}-\sum\limits_{i,j}\tilde{K}_{i,j}h_{i}h_{j}-\sum\limits_{i,j,k}\tilde{K}_{i,j,k}h_{i}h_{j}h_{k}-\cdots$

The RG transform preserves the Free Energy (when properly rescaled):

$\mathcal{F}^{h}\sim\mathcal{F}^{v}$

where

$\mathcal{F}^{h}=-\ln\sum\limits_{h}e^{-\tilde{\mathcal{H}}(h)}$

$\mathcal{F}^{v}=-\ln\sum\limits_{v}e^{-\mathcal{H}(v)}$

Critical Trajectories and Renormalized Manifolds

The RG theory provides a way to iteratively update, or renormalize, the system Hamiltonian.  Each time we add a layer of hidden units (h1, h2, …), we have

$\mathcal{H}^{RG}(\mathbf{v},\mathbf{h1})=\mathbb{R}[\mathcal{H}(\mathbf{v})]$

$\mathcal{H}^{RG}(\mathbf{v},\mathbf{h1},\mathbf{h2})=\mathbb{R}[\mathbb{R}[\mathcal{H}(\mathbf{v})]]$

$\cdots$

We imagine that the flow map is attracted to a Critical Trajectory which naturally leads the algorithm to the fixed point.  At each step, when we apply another RG transform, we obtain a new, Renormalized Manifold, each one closer to the optimal data manifold.

Conceptually, the RG flow map is most useful when applied to critical phenomena–physical systems and/or simple models that undergo a phase transition.  And, as importantly, the small changes in the data should ‘wash away’ as noise and not affect the macroscopic / critical phenomena. Many systems–but not all–display this.

Where Hopfield Nets fail to be useful here, RBMs and Deep Learning systems shine.

We now show that these RG transformations are achieved by stacking RBMs and solving the RBM inference problem!

Kadanoff’s Variational Renormalization Group

As in many physics problems, we break the modeling problem into two parts:  one we know how to solve, and one we need to guess.

1. we know the Hamiltonian at the most fine grained level of scale  $\mathcal{H}(v)$
2. we seek the correlation $\mathbf{V}(v,h)$ that couples to the next level scale

The joint Hamiltonian, or Energy function, is then given by

$\mathcal{H}(\mathbf{v,h})=\mathcal{H}(\mathbf{v})+\mathbf{V(v,h)}$

The Correlation V(v,h) is defined so that the partition function $\mathcal{Z}$ is not changed

$\sum\limits_{h}e^{-\mathbf{V}\mathbf{(v,h})}=1$

This gives us

$\mathcal{Z}=\sum_{v}e^{-\mathcal{H}(v)}=\sum\limits_{v}\sum\limits_{h}e^{-\mathbf{V}(v,h)}e^{-\mathcal{H}(v)}$

(Sometimes the Correlation V is called a Transfer Operator T, where V(v,h)=-T(v,h) )

We may now define a renormalized effective Hamiltonian $\tilde{\mathcal{H}}(h)$ that acts only on the hidden nodes

$\tilde{\mathcal{H}}(h)=-\ln\sum\limits_{v}e^{-\mathbf{V}(v,h)}e^{-\mathcal{H}(v)}$

so that we may write

$\mathcal{Z}=\sum\limits_{h}e^{-\tilde{\mathcal{H}}(h)}$

Because the partition function does not change, the Exact RGE preserves the Free Energy (up to a scale change, which we subsume into $\mathcal{Z}$)

$\Delta\tilde{\mathcal{F}}=\tilde{\mathcal{F}}^{h}-\mathcal{F}^{v}=0$

We generally can not solve the exact RGE–but we can try to minimize this Free Energy difference.
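To make the bookkeeping concrete, here is a small numerical sketch (my own, with random toy Hamiltonians): any correlation $\mathbf{V}(v,h)$ normalized so that $\sum_{h}e^{-\mathbf{V}(v,h)}=1$ leaves $\mathcal{Z}$ unchanged, so $\Delta\tilde{\mathcal{F}}=0$ exactly.

```python
import numpy as np

# Sketch: check numerically that a normalized coupling V(v,h), with
# sum_h exp(-V(v,h)) = 1 for every v, preserves the partition function:
#   Z = sum_v exp(-H(v)) = sum_h exp(-Htilde(h)).
rng = np.random.default_rng(1)
nv, nh = 4, 3
H = rng.normal(size=2**nv)              # toy H(v) over all 2^nv visible configs
raw = rng.normal(size=(2**nv, 2**nh))   # unnormalized couplings

# Normalize so that sum_h exp(-V(v,h)) = 1 for each v (softmax over h).
expnegV = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)

Z_v = np.exp(-H).sum()
# Htilde(h) = -ln sum_v exp(-V(v,h)) exp(-H(v))
Htilde = -np.log((expnegV * np.exp(-H)[:, None]).sum(axis=0))
Z_h = np.exp(-Htilde).sum()
assert np.isclose(Z_v, Z_h)
```

The identity holds for any normalized V; the variational work is in choosing a V whose effective Hamiltonian is tractable.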

What Kadanoff showed, way back in 1975, is that we can accurately approximate the Exact Renormalization Group Equation by finding a lower bound using this formalism.

Deep learning appears to be a real-space variational RG technique, specifically applicable to very complex, inhomogenous systems where the detailed scale transformations have to be learned from the data.

RBMs expressed using Variational RG

We will now show how to express RBMs using the VRG formalism and provide some intuition

In an RBM, we simply want to learn the Energy function directly; we don’t specify the Hamiltonian for the visible or hidden units explicitly, like we would in physics.  The RBM Energy is just

$\mathbf{E}^{RBM}(v,h)=\mathcal{H}^{RBM}(v)+\mathbf{V}(v,h)$

We identify the Hamiltonian for the hidden units as the Renormalized Effective Hamiltonian from the VRG theory

$\mathcal{H}^{RBM}(h)=\tilde{\mathcal{H}}(h)$

RBM Hamiltonians / Marginal Probabilities

To obtain RBM Hamiltonians for just the visible $\mathcal{H}^{RBM}(v)$ or hidden $\mathcal{H}^{RBM}(h)$ nodes, we need to integrate out the other nodes; that is, we need to find the marginal probabilities.

$P(v)=\sum\limits_{h}P(v,h)=\dfrac{e^{-\mathcal{H}^{RBM}(v) }}{\mathcal{Z}}=\dfrac{1}{\mathcal{Z}} \sum\limits_{h}e^{-\mathbf{E(v,h)}}$

$\mathcal{H}^{RBM}(v)=-\ln\sum\limits_{h}e^{-\mathbf{E(v,h)}}$

and

$P(h)=\sum\limits_{v}P(v,h)=\dfrac{e^{-\mathcal{H}^{RBM}(h) }}{\mathcal{Z}}=\sum\limits_{v}\dfrac{e^{-\mathbf{E(v,h)}}}{\mathcal{Z}}$

$\mathcal{H}^{RBM}(h)=-\ln\sum\limits_{v}e^{-\mathbf{E(v,h)}}$
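These marginals are easy to check on a toy RBM by brute force (my own sketch; I use the standard sign convention $\mathbf{E}(v,h)=-\mathbf{b}^{t}\mathbf{v}-\mathbf{v}^{t}\mathbf{W}\mathbf{h}-\mathbf{c}^{t}\mathbf{h}$ with binary units):

```python
import itertools
import numpy as np

# Sketch: marginal Hamiltonian H_RBM(v) of a tiny RBM by exact enumeration.
rng = np.random.default_rng(2)
nv, nh = 4, 3
b, c = rng.normal(size=nv), rng.normal(size=nh)
W = rng.normal(size=(nv, nh))

def E(v, h):
    return -(b @ v + v @ W @ h + c @ h)

V = [np.array(v, dtype=float) for v in itertools.product([0, 1], repeat=nv)]
Hh = [np.array(h, dtype=float) for h in itertools.product([0, 1], repeat=nh)]

Z = sum(np.exp(-E(v, h)) for v in V for h in Hh)
# H_RBM(v) = -ln sum_h exp(-E(v,h));  P(v) = exp(-H_RBM(v)) / Z
H_v = {tuple(v): -np.log(sum(np.exp(-E(v, h)) for h in Hh)) for v in V}
P_v = {k: np.exp(-Hv) / Z for k, Hv in H_v.items()}
assert np.isclose(sum(P_v.values()), 1.0)
```

The hidden marginal $\mathcal{H}^{RBM}(h)$ is computed the same way, summing over the visible configurations instead.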

Training RBMs

To train an RBM, we apply Contrastive Divergence (CD), or, perhaps today, Persistent Contrastive Divergence (PCD).  We can kind of think of this as slowly approximating

$\dfrac{\partial}{\partial\theta}\ln\mathcal{Z}(\theta)$

In practice, however, RBM training minimizes the associated Free Energy difference $\Delta\mathbf{F}$ … or something akin to this…to avoid overfitting.
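For reference, a single CD-1 update for a Bernoulli RBM looks roughly like this (a minimal sketch of the standard algorithm, with random toy data; not production training code):

```python
import numpy as np

# Minimal sketch of one CD-1 update for a Bernoulli RBM (toy illustration).
rng = np.random.default_rng(4)
nv, nh, lr = 6, 4, 0.1
W = 0.01 * rng.normal(size=(nv, nh))
b, c = np.zeros(nv), np.zeros(nh)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v0 = rng.integers(0, 2, size=nv).astype(float)     # a training example
ph0 = sigmoid(v0 @ W + c)                          # positive phase
h0 = (rng.random(nh) < ph0).astype(float)          # sample hidden units
pv1 = sigmoid(h0 @ W.T + b)                        # one Gibbs step back
v1 = (rng.random(nv) < pv1).astype(float)
ph1 = sigmoid(v1 @ W + c)                          # negative phase

# Gradient step: data statistics minus reconstruction statistics.
W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
b += lr * (v0 - v1)
c += lr * (ph0 - ph1)
```

PCD differs only in that the negative-phase chain persists across updates instead of restarting at the data.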

In the “Practical Guide to Training Restricted Boltzmann Machines”, Hinton explains how to train an RBM (circa 2011).  Section 6 addresses “Monitoring the overfitting”

“it is possible to directly monitor the overfitting by comparing the free energies of training data and held out validation data…If the model is not overfitting at all, the average free energy should be about the same on training and validation data”

Other Objective Functions

Modern variants of Real Space VRG are not “‘forced’ to minimize the global free energy” and have attempted other approaches, such as Tensor-SVD Renormalization.  Likewise, some RBM / DBM approaches may minimize a different objective.

In some methods, we minimize the KL Divergence; this has a very natural analog in VRG language [1].

Why Deep Learning Works: Lessons from Theoretical Physics

The Renormalization Group Theory provides new insights as to why Deep Learning works so amazingly well.  It is not, however, a complete theory. Rather, it is framework for beginning to understand what is an incredibly powerful, modern, applied tool.  Enjoy!

References

[10] http://www-math.unice.fr/~patras/CargeseConference/ACQFT09_JZinnJustin.pdf

[14] THE RENORMALIZATION GROUP AND CRITICAL PHENOMENA, Ken Wilson Nobel Prize Lecture

[16] The Potential Energy of an Autoencoder, 2014

Why does Deep Learning work?

This is the big question on everyone’s mind these days.  C’mon we all know the answer already:

“the long-term behavior of certain neural network models are governed by the statistical mechanism of infinite-range Ising spin-glass Hamiltonians” [1]   In other words,

Multilayer Neural Networks are just Spin Glasses?  Right?

This is kinda true–depending on what you mean by a spin glass.

In a recent paper [1], LeCun and coworkers attempt to extend our understanding of training neural networks by studying the SGD approach to solving the multilayer Neural Network optimization problem.   Furthermore, they claim:

None of these works however make the attempt to explain the paradigm of optimizing the highly non-convex neural network objective function through the prism of spin-glass theory and thus in this respect our approach is very novel.

And, again, this is kinda true.

But here’s the thing…we already have a good idea of what the Energy Landscape of multiscale spin glass models* looks like–from early theoretical protein folding work by Peter Wolynes to modern work by Ken Dill, etc. [2,3,4].  In fact, here is a typical surface:

*[technically these are Ising spin models with multi-spin interactions]

Let us consider the nodes, which above represent partially folded states, as nodes in a multiscale spin glass–or, say, a multilayer neural network.  Immediately we see the analogy and the appearance of the ‘Energy funnel’.  In fact, researchers studied these ‘folding funnels’ of spin glass models over 20 years ago [2,3,4].  And we knew then that

as we increase the network size, the funnel gets sharper

Note: the Wolynes protein-folding spin-glass model is significantly different from the p-spin Hopfield model that LeCun discusses because it contains multi-scale, multi-spin interactions.  These details matter.

Spin glasses and spin funnels are quite different.  Spin glasses are highly non-convex with lots of local minima, saddle points, etc.  Spin funnels, however, are designed to find the spin glass of minimal-frustration, and have a convex, funnel shaped, energy landscape.

Spin Funnels have been characterized and cataloged by theoretical chemists and can take many different convex shapes

This seemed to be necessary to resolve one of the great mysteries of protein folding: Levinthal’s paradox [5].  If nature just used statistical sampling to fold a protein, it would take longer than the ‘known’ lifetime of the Universe.  It is why Machine Learning is not just statistics.

Deep Learning Networks are (probably) Spin Funnels

So with a surface like this, it is not so surprising that an SGD method might be able to find the Energy minima (called the Native State in protein folding theory). We just need to jump around until we reach the top of the funnel, and then it is a straight shot down.  This, in fact, defines a so-called ‘folding funnel’ [4]

So it is not surprising at all that SGD may work.

Recent research at Google and Stanford confirms that the Deep Learning Energy Landscapes appear to be roughly convex [6], as does LeCun’s work on spin glasses.

Note that a real theory of protein folding, which would actually be able to fold a protein correctly (i.e. Freed’s approach [7]), would be a lot more detailed than a simple spin glass model.  Likewise, real Deep Learning systems are going to have a lot more engineering details–to avoid overtraining (Dropout, Pooling, Momentum) than a theoretical spin funnel.

[Indeed, what is unclear is what happens at the bottom of the funnel.  Does the system exhibit a spin glass transition (with full-blown Replica Symmetry Breaking, as LeCun suggests), or is the optimal solution more like a simple native state, defined by but a few low-energy configurations?  Do DL systems display a phase transition, and is it first order, like protein folding?  We will address these details in a future post.]  In any case,

It is not that Deep Learning is non-convex–it is that we need to avoid over-training

And it seems modern systems can do this using a few simple tricks, like Rectified Linear Units, Dropout, etc.

Hopefully we can learn something using the techniques developed to study the energy landscapes of multi-scale spin glass / spin funnel models [8,9], thereby utilizing methods from theoretical chemistry and condensed matter physics.

In a future post, we will look in detail at the folding funnel including details such as misfolding,  the spin glass transition in the funnel, and the relationship to symmetry breaking, overtraining, and LeCun’s very recent analysis.  These are non-trivial issues.

I believe this is the first conjecture that Supervised Deep Learning is related to a Spin Funnel.   In the next post, I will examine the relationship between Unsupervised Deep Learning and the Variational Renormalization Group [10].

Learn more about us at:   http://calculationconsulting.com

[1] LeCun et. al.,  The Loss Surfaces of Multilayer Networks, 2015

[3] THEORY OF PROTEIN FOLDING: The Energy Landscape Perspective, Annu. Rev. Phys. Chem. 1997

[5] From Levinthal to pathways to funnels, Nature, 1997

[6] QUALITATIVELY CHARACTERIZING NEURAL NETWORK OPTIMIZATION PROBLEMS, Google Research (2015)

[8] Funnels in Energy Landscapes, 2007

[10] A Common Logic to Seeing Cats and Cosmos, 2014

Convex Relaxations of Transductive Learning

This post finishes up some older work from an R&D / coding project on Transductive Learning.

Why are SVMs interesting?  Is it just a better way to do Logistic Regression?  Is it the Kernel Trick?  And does this even matter now that Deep Learning is everywhere? To the beginning student of machine learning, SVMs are the first example of a Convex Optimization method.  To the advanced practitioner, SVMs are the starting point to creating powerful Convex Relaxations to hard problems.

Historically, convex optimization was seen as the path to central planning an entire economy.  A great new book, Red Plenty, is “about the scientists who did their genuinely brilliant best to make the dream come true …” [amazon review].

It was a mass delusion over the simplex method, and it is about as crazy as our current fears over Deep Learning and AI.

Convex optimization is pretty useful, as long as we don’t get crazy about it.

The prototypical convex relaxation method is for Transductive Learning: the Transductive SVM (TSVM)

Vapnik proposed the idea of Transduction many years ago;  indeed the VC theory is proven using Transduction.   I would bet that he knew a TSVM could be convexified–although I would need a job at Facebook to verify this.

A good TSVM has been available since 2001 in SvmLight, but SvmLight is not open source, so most people use SvmLin.

Today there are Transductive variants of Random Forests, Regression, and even Deep Learning.  There was even a recent Kaggle Contest–the Black Box Challenge (and, of course, the Deep Learning method won).  Indeed, Deep Learning classifiers may benefit greatly from Transductive/SemiSupervised pretraining with methods like Pseudo Label [8], as shown in the Kaggle The National Data Science Bowl contest.

We mostly care about binary text classification, although there is plenty of research in convex relaxation for multiclass transduction, computer vision, etc.

Transductive learning is essentially like running an SVM, but having to guess a lot of the labels.  The optimization problem is

$\min_{\mathbf{y}\in\mathcal{B}}\min_{f}\,\,\Omega(f)+\lambda\sum\limits_{i=1}^{N}\mathcal{L}(f(\mathbf{x}_{i}),y_{i})$

where $\mathcal{L}$ is the loss function, $\Omega$ the regularization function,

and the binary labels $\mathbf{y_i}\in\mathcal{B}\mid y_{i}\in\left\{1,\,-1,\,unk\,\right\}$ are only partially known.

The optimization is a non-convex, mixed-integer problem.  Amazingly, we can reformulate the TSVM to obtain a convex optimization!

This is called a Convex Relaxation, and it lets us guess the unknown labels…

to within a good approximation, and using some prior knowledge.

Proving an approximation is truly convex is pretty hard stuff, but the basic idea is very simple.  We just want to find a convex approximation to a non-convex function.

It has been known for a while that the TSVM problem can be convexified [5].  But it has been computationally intractable and there is no widely available code.

We examine a new Convex Relaxation of the Transductive Learning called the Weakly Labeled SVM (WellSVM) [2,3].

In the Transductive SVM (TSVM) approach, one selects solutions with minimum SVM Loss (or Slack)

$\mathcal{L}(f(\mathbf{x}_{i}),y_{i})=\xi_{i}$

and the maximum margin

$\Omega=\dfrac{1}{2}\Vert\mathbf{w}\Vert^{2}$

The SVM Dual Problem

Let us consider the standard SVM optimization

$\underset{\mathbf{w,\xi}}{\min}\,\,\dfrac{1}{2}\parallel\mathbf{w}\parallel^{2}+\lambda\sum_{i=1}^{N}\xi_{i}$

which has the dual form

$\underset{\alpha}{\max}\,\,\alpha^{\dagger}\mathbf{1}-\dfrac{1}{2}\mathbf{y^{\dagger}}\boldsymbol\alpha^{\dagger}\mathbf{X^{\dagger}X}\boldsymbol\alpha\mathbf{y}\quad,\alpha_{i}\geq 0$

To keep the notation simpler, we will not consider the Kernalized form of the algorithm.  Besides, we are mostly interested in text classification, and Kernels are not needed.
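As a sanity check that an off-the-shelf solver really exposes the dual variables we will need later, here is a small scikit-learn sketch (toy data of my own; SVC stores $y_{i}\alpha_{i}$ for the support vectors in `dual_coef_`):

```python
import numpy as np
from sklearn.svm import SVC

# Sketch: recover the dual variables alpha from an off-the-shelf SVM solver.
# SVC.dual_coef_ holds y_i * alpha_i for the support vectors.
X = np.array([[0., 0.], [1., 1.], [2., 0.], [3., 1.]])  # toy, separable data
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)
alpha = np.abs(clf.dual_coef_).ravel()       # alpha_i in [0, C]
assert np.all(alpha >= 0) and np.all(alpha <= 1.0 + 1e-8)

# The primal weight vector w = sum_i alpha_i y_i x_i matches clf.coef_.
w = (clf.dual_coef_ @ X[clf.support_]).ravel()
assert np.allclose(w, clf.coef_.ravel())
```

This is the kind of hook (exposing $\boldsymbol\alpha$) that the implementation section below relies on.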

Balancing the Labels

In a TSVM, we have to guess the labels $y_{i}\in\left\{1,\,-1\right\}$ and select the best solution (for a given set of regularization parameters).  There are way too many labels to guess, so

we need to constrain the label configurations by balancing the guesses

We assume that we have some idea of the total fraction of positive (+) labels $\mathcal{B}_{+}$

• Perhaps we can sample them?
• Perhaps we have some external source of information.
• Perhaps we can estimate it.

This is, however, a critical piece of prior information we need. We define the space of possible labels as

$\left\{ \mathbf{y}\vert\sum_{i=1}^{N}y_{i}=\beta\right\}$

i.e., if exactly half the labels are positive / half negative, then

$\mathcal{B}_{+}=\dfrac{1}{2}$

and the true mean label value is zero

$y_{avg}=\bar{y}=0$
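To see how much the balance constraint shrinks the label space, here is a tiny enumeration (my own toy example, with N=6 and $\mathcal{B}_{+}=\frac{1}{2}$, i.e. $\beta=0$):

```python
import itertools

# Toy sketch: count the balanced label configurations {y | sum_i y_i = beta}
# for N = 6 and B+ = 1/2 (so beta = 0, mean label zero).
N = 6
beta = 0
balanced = [y for y in itertools.product([-1, 1], repeat=N) if sum(y) == beta]
print(len(balanced), "balanced configurations out of", 2 ** N)  # 20 out of 64
```

Even for this tiny problem, balancing cuts the search space from 64 configurations to 20; the gap grows rapidly with N.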

So we are saying that if we know

1. the true fraction $\mathcal{B}_{+}$ of (+) labels
2. some small set of the true labels (i.e. < 5%)
3. the features (i.e. bag-of-words for text classification)

Then we know almost all the labels exactly!  And that is powerful.

Note:  it is critical that in any transductive method, we reduce the size of the label configuration space.  The Balancing constraint is the standard constraint–but it may be hard to implement in practice.  To me, personally, the problem resembles the matrix product states method in quantum many body theory. And I suspect that, eventually, a related method will be found for transductive learning.  Perhaps even Deep Learning has found this.

Convex Methods

The popular SvmLin method uses a kind of Transductive Meta-Heuristic that sets the standard for other approaches.  The problem is, we never really know if we have the best solution.  And it is not easy to extend to multiclass classification.

Convex methods have been the method of choice since Dantzig popularized the simplex method in 1950

Although linear programming itself was actually invented in 1939 by Leonid Kantorovich [7] — “the only Soviet scholar ever to win the Nobel Prize for Economics”

A convex method lends itself to production code that anyone can run.

The WellSVM Convex Relaxation

More generally, we need to solve a non-convex min-max problem of the form

$\underset{\mathbf{y\in\mathcal{B}}}{\min}\,\,\underset{\alpha\in\mathcal{A}}{\max}\, \,G(\mathbf{y,\alpha})$

where G is given by

$G(\mathbf{y,\alpha})=\alpha^{\dagger}\mathbf{1}-\dfrac{1}{2}\mathbf{y^{\dagger}}\boldsymbol\alpha^{\dagger}\mathbf{X^{\dagger}X}\boldsymbol\alpha\mathbf{y}$

where α lies in the convex set

$\mathcal{A}=\left\{\boldsymbol\alpha\mid C\mathbf{1}\geq\alpha\geq 0\right\}$

Notice that G is concave in α and (can be made) linear in the y’s [3].   We seek a convex relaxation of this min-max problem.  And, as importantly, we want to code the final problem using an off-the-shelf SVM solver with some simple mods.   The steps are

1.  Apply the Minimax Theorem

For details, see [4,5], although it was originally posed by John von Neumann in his work on Game Theory.

One could spend a lifetime studying von Neumann’s contributions.  Quantum mechanics.  Nuclear Physics.  Etc.

Here we scratch the surface.

The Minimax theorem lets us switch the order of the min/max bounds.

$\underset{\mathbf{y\in\mathcal{B}}}{\min}\,\,\underset{\alpha\in\mathcal{A}}{\max}\, \,G(\mathbf{y,\alpha})\rightarrow\underset{\alpha\in\mathcal{A}}{\max}\,\,\underset{\mathbf{y\in\mathcal{B}}}{\min}\, \,G(\mathbf{y},\alpha)$

The original problem is an upper bound to this.  That is

$\underset{\mathbf{y\in\mathcal{B}}}{\min}\,\,\underset{\alpha\in\mathcal{A}}{\max}\,\,G(\mathbf{y,\alpha})\geqslant\underset{\alpha\in\mathcal{A}}{\max}\,\,\underset{\mathbf{y\in\mathcal{B}}}{\min}\,\,G(\mathbf{y},\alpha)$

To solve this, we

2. dualize the inner minimization (in the space of allowable labels)

We convert the search over possible label configurations into the dual max problem, so that the label configurations become constraints.

$\underset{\alpha\in\mathcal{A}}{\max}\,\,\underset{\mathbf{y\in\mathcal{B}}}{\min}\,\,G(\mathbf{y},\alpha)=\underset{\alpha\in\mathcal{A}}{\max}\,\,\underset{\theta}{\max}\,\left\{\theta\,\vert\,G(\mathbf{y_{t}},\alpha)\geq\theta\quad\forall\,\mathbf{y_{t}}\in\mathcal{B}\right\}$

This is linear in $\theta$, and the constraints are concave in $\alpha$.  In fact, it is a convex problem.

There are an exponential number of constraints in $\mathcal{B}$.

Even though it is convex, we cannot solve this exactly in practice.  And that’s…ok.

Not all possible labelings matter, so not all of these constraints are active (necessary) for an optimal solution.

We just need an active subset, $\left\{\mathbf{y_{t}}\in\mathcal{C}\right\}$, which we can find by …

The Cutting Plane, or Gomory–Chvátal, method.  (Remember, if you want to sound cool, give your method a Russian name.)

To proceed, we construct the Lagrangian and then solve the convex dual problem.

You may remember the method of Lagrange multipliers from freshman calculus:

3. We introduce Lagrange Multipliers for each label configuration

The Lagrangian is

$\theta+\sum\limits_{t:\,\mathbf{y_{t}}\in\mathcal{B}}\mu_{t}(G(\mathbf{y_{t}},\alpha)-\theta)$

When we set the derivative w.r.t. $\theta$ to 0, we find

$\sum\boldsymbol\mu_{t}=1$

This lets us rewrite the TSVM as

$\underset{\alpha\in\mathcal{A}}{\max}\,\,\underset{\boldsymbol\mu\in\mathcal{M}}{\min}\,\,\sum\limits_{t:\,\mathbf{y_{t}}\in\mathcal{B}}\mu_{t}G(\mathbf{y_{t}},\alpha)$

where the set of allowable multipliers $\boldsymbol\mu$ lies in the simplex $\mathcal{M}$

$\mathcal{M}=\left\{\boldsymbol\mu\mid\sum\mu_{t}=1\,\,,\mu_{t}\geq 0\right\}$

The resulting optimization is convex in µ and concave in α.

This is critical as it makes a Kernel-TSVM a Multiple Kernel Learning (MKL) method.

“WellSVM maximizes the margin by generating the most violated label vectors iteratively, and then combines them via efficient multiple kernel learning techniques”.

For linear applications, we need only consider 1 set of $\boldsymbol\mu$.

Now replace the inner optimization subproblem with its dual.  Of course, the dual problem is a lower bound on the optimal value. We then

4. switch the order of the min/max bounds back

to obtain a new min-max optimization–a convex relaxation of the original

$\underset{\boldsymbol\mu\in\mathcal{M}}{\min}\,\,\underset{\alpha\in\mathcal{A}}{\max}\,\,\sum\limits_{t:\,\mathbf{y_{t}}\in\mathcal{B}}\mu_{t}G(\mathbf{y_{t}},\alpha)$

When we restrict the label configurations to the working set $\left\{\mathbf{y_{t}}\in\mathcal{C}\right\}$, we have

$\underset{\boldsymbol\mu\in\mathcal{M}}{\min}\,\,\underset{\alpha\in\mathcal{A}}{\max}\,\,\sum\limits_{t:\,\mathbf{y_{t}}\in\mathcal{C}}\mu_{t}G(\mathbf{y_{t}},\alpha)$

Implementation Details

A key insight of WellSVM is that the core label search

$\underset{\mathbf{\hat{y}}\in\mathcal{C}}{\min}\,\,G(\mathbf{\hat{y}},\alpha)$

is equivalent to

$\underset{\mathbf{\hat{y}}\in\mathcal{C}}{\max}\,\,\mathbf{\hat{y}^{\dagger}\boldsymbol\alpha^{\dagger}X^{\dagger}X\boldsymbol\alpha\hat{y}}$

To me, this is very elegant!

We search the convex hull of the document-document density matrix, weighted by the Lagrange multipliers for the labels.

How can we solve an SVM with exponentially many constraints?   Take a page from old-school Joachims’ Structural SVMs [6].

Cutting Plane Algorithm for WellSVM

This is the go-to method for all Mixed Integer Linear Programming (MILP) problems. On each iteration, we

• obtain the Lagrange Multipliers $\boldsymbol\alpha$ with an off-the-shelf SVM solver
• find a violating constraint (label configuration) $\mathbf{\hat{y}}$

This grows the active set of label configurations $\mathcal{C}$, learning from previous guesses.  We expect $N_{\mathcal{C}}\ll N_{\mathcal{B}}$

The pseudocode is

1.  Initialize $\mathbf{\hat{y}}$ and $\mathcal{C}=\emptyset$
2.  repeat
3.    Update  $\mathcal{C}\leftarrow\mathbf{\hat{y}}\cup\mathcal{C}$
4.    Obtain the optimal α from a dual SVM solver
5.    Generate a violated $\mathbf{\hat{y}}$
6.  until $G(\boldsymbol\alpha,\mathbf{\hat{y}})>\underset{\mathbf{y}\in\mathcal{C}}{\min}\,G(\boldsymbol\alpha,\mathbf{y})-\epsilon$

Finding Violated Constraints

The cutting plane algo finds new constraints, or cuts, on each iteration, to ‘chip away’ at the problem until the inner convex hull is found.   It usually finds the most violated constraint on each iteration; however,

With SVMs we can get good results just finding any violated constraint.

Here, we seek $\mathbf{y*}$, a violated label assignment.  The WellSVM paper [2,3] provides a great solution.

For any violation $\mathbf{y*}$, and for any pair of label configurations $\mathbf{\bar{y},y}$, we have

$\mathbf{y^{\dagger}Hy*}\neq\mathbf{y^{\dagger}H\bar{y}}$

where $\mathbf{H}=\boldsymbol\alpha^{\dagger}\mathbf{X^{\dagger}X}\boldsymbol\alpha$

This lets us compute $\mathbf{y*}$ in two steps:

First, compute $\mathbf{\bar{y}}$, by searching the current,  active, and usually small set $\mathcal{C}$

$\mathbf{\bar{y}}=\arg\max_{\mathbf{y}\in\mathcal{C}}\,\mathbf{y^{\dagger}Hy}$

Second, compute $\mathbf{y*}$, by searching the set of all balanced label configurations $\mathcal{B}$

$\mathbf{y*}=\arg\max_{\mathbf{y}\in\mathcal{B}}\,\mathbf{y^{\dagger}H\bar{y}}$
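The two-step search is easy to prototype by brute force on a toy problem (my own sketch: H stands in for $\boldsymbol\alpha^{\dagger}\mathbf{X^{\dagger}X}\boldsymbol\alpha$, built here as a random PSD matrix):

```python
import itertools
import numpy as np

# Sketch of the two-step violated-label search on a toy problem.
rng = np.random.default_rng(3)
N = 6
A = rng.normal(size=(N, N))
H = A @ A.T                        # symmetric PSD stand-in for alpha' X'X alpha

# Balanced label space B (mean label zero) and a small active set C.
B = [np.array(y) for y in itertools.product([-1, 1], repeat=N) if sum(y) == 0]
C = B[:3]

# Step 1: ybar = argmax_{y in C} y' H y over the current active set.
ybar = max(C, key=lambda y: y @ H @ y)
# Step 2: y* = argmax_{y in B} y' H ybar over all balanced configurations.
ystar = max(B, key=lambda y: y @ H @ ybar)

# y* scores at least as well as ybar itself, since ybar is also in B.
assert ystar @ H @ ybar >= ybar @ H @ ybar
```

Enumeration only works for toy N, of course; WellSVM solves this inner search efficiently, which is part of its contribution.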

Original Matlab Code

We will review the existing code and run some simulations soon.

Python Implementation

The goal is to modify the scikit-learn interface to liblinear to output the Lagrange multipliers $\boldsymbol\alpha$, and then to implement WellSVM in an IPython notebook.  Stay tuned.

References

[2] Convex and Scalable Weakly Labeled SVMs , JMLR (2013)

[3] Learning from Weakly Labeled Data (2013)

[4] Kim and Boyd, A Minimax Theorem with Applications to Machine Learning, Signal Processing, and Finance, 2007

[7] “This is an example of Stigler’s law of eponymy, which says that “no scientific discovery is named after its original discoverer.” Stigler’s law of eponymy is itself an example of Stigler’s law of eponymy, since Robert Merton formulated similar ideas much earlier.”  Quote from Joe Blitzstein on Quora