12 thoughts on “Why Deep Learning Works III: a preview”

  1. Using generative models to approximate energy funnels sounds most natural to me. In the past, my advisor and I were able to show that structure populations (near-optimal solutions, in our case) indeed correlated with the phenomenon of multiple binding modes. This phenomenon, in turn, is believed to be related to energy funnels (http://www.ncbi.nlm.nih.gov/pubmed/18058908). My last comp-chem job was about 8 years ago, and I may have missed recent developments in the field. Besides your presentation, have there been other publications that explore generative models in the context of energy funnels and entropy computations?


  2. The slides are very interesting; spin glasses have crossed my mind too, and this question keeps bugging me. I also have a background in theoretical solid state physics and machine learning.

    In my understanding, the question is:
    - A Model (e.g. a ball jumping up and down in a room) generates data (light reflecting off the moving ball) in a high-dimensional space (a camera records it and produces pixels).
    - Machine learning is the inverse problem: we seek the Model (the equations of motion of the ball) that generated the data in the high-dimensional space (a toy sketch of this framing follows below).

    This also reminds me of scattering (inverse) problems.
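
    A minimal, hypothetical sketch of that inverse-problem framing in Python (nothing here is from the post; the names simulate_heights and render_pixels and all parameter values are invented for illustration): a toy generative model (a ball bouncing under gravity g) produces high-dimensional pixel observations, and the inverse problem is to recover g from the pixels alone by searching over candidate forward models.

        import numpy as np

        def simulate_heights(g, n_steps=200, dt=0.02, h0=1.0):
            # Forward model: a ball dropped from height h0 bounces elastically off the floor.
            h, v = h0, 0.0
            heights = []
            for _ in range(n_steps):
                v -= g * dt
                h += v * dt
                if h < 0.0:  # elastic bounce at the floor
                    h, v = -h, -v
                heights.append(h)
            return np.array(heights)

        def render_pixels(heights, n_pixels=64):
            # "Camera": each time step becomes a strip of pixels with one lit pixel.
            frames = np.zeros((len(heights), n_pixels))
            idx = np.clip((heights * (n_pixels - 1)).astype(int), 0, n_pixels - 1)
            frames[np.arange(len(heights)), idx] = 1.0
            return frames

        # "Nature" generates the pixel data with an unknown gravity parameter.
        true_g = 9.81
        data = render_pixels(simulate_heights(true_g))

        # Inverse problem: find the candidate model whose rendered output best matches the data.
        candidates = np.linspace(5.0, 15.0, 101)
        errors = [np.mean((render_pixels(simulate_heights(g)) - data) ** 2) for g in candidates]
        g_hat = candidates[int(np.argmin(errors))]
        print("true g =", true_g, " recovered g ~", g_hat)

    Of course, brute-force search over a single parameter only works for a toy like this; the point is just the structure of the problem: forward model, high-dimensional observations, and inversion by matching.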


    • I would think that, unlike random matrices / spin glasses, these systems are strongly correlated. After all, they are supervised methods.

