Tuesday, July 28, 2009

Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations


This can be viewed as a deep version of a convolutional neural network (which I am not very familiar with). Each layer consists of two sublayers: a detection sublayer, which convolves the input with several filters, and a max-pooling sublayer, which shrinks (downsamples) the detection sublayer's output. As a whole, the layer takes an image as input and outputs several convolved and downsampled feature maps. Stacking layers of this kind yields the deep version.
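The detection-then-pooling idea above can be sketched in a few lines of NumPy. This is only an illustration of the forward pass: the filter shapes, the 'valid' convolution, the sigmoid nonlinearity, and the function name are my assumptions, not the paper's exact formulation.

```python
import numpy as np

def detect_and_pool(image, filters, biases, pool=2):
    """One layer (sketch): convolve the input with each filter (detection
    sublayer), apply a sigmoid, then max-pool over pool x pool blocks
    (pooling sublayer). Returns one downsampled map per filter."""
    H, W = image.shape
    fh, fw = filters[0].shape
    maps = []
    for filt, b in zip(filters, biases):
        out_h, out_w = H - fh + 1, W - fw + 1
        conv = np.zeros((out_h, out_w))
        k = filt[::-1, ::-1]  # flip the kernel for a true convolution
        for i in range(out_h):
            for j in range(out_w):
                conv[i, j] = np.sum(image[i:i+fh, j:j+fw] * k) + b
        act = 1.0 / (1.0 + np.exp(-conv))  # detection sublayer activations
        ph, pw = act.shape[0] // pool, act.shape[1] // pool
        pooled = act[:ph*pool, :pw*pool].reshape(ph, pool, pw, pool).max(axis=(1, 3))
        maps.append(pooled)  # max-pooling sublayer output
    return maps
```

Feeding the pooled maps of one layer as the input images of the next is what gives the stacked, deep structure.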

Let's see what we have to learn. For each detection sublayer, we have to train a convolution kernel and the corresponding biases (as in an RBM). The new ingredient is the max-pooling sublayer. Each neuron in the max-pooling sublayer connects only to a fixed-size patch (e.g. 2x2) of neurons in the detection sublayer. Since the neurons in the detection sublayer are binary (0-1), and max-pooling outputs 0 only if none of its input neurons fires, from the outside the block behaves like one big neuron that can take multiple values (e.g. 2x2 + 1 = 5 states: one for each input neuron firing, plus the all-off state). Then we may proceed as with an RBM: write down the energy function, convert it to a probability, formulate the likelihood, and train with contrastive divergence (CD) learning plus a sparsity penalty.
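The "big neuron with 2x2 + 1 = 5 values" view corresponds to a softmax over the block's states: at most one detection unit in the block is on, with a fifth state for all-off. A minimal sketch of sampling probabilities for one block, assuming `energies` holds the bottom-up inputs to the four detection units (the function name and interface are mine):

```python
import numpy as np

def prob_max_pool(energies):
    """Probabilistic max-pooling over one 2x2 block (sketch): treat the
    block as a single multinomial unit with 5 states, one per detection
    unit firing plus an 'all off' state with energy 0. Returns the
    per-unit on-probabilities and the pooling unit's on-probability."""
    e = np.append(energies.ravel(), 0.0)   # last entry: the all-off state
    p = np.exp(e - e.max())                # softmax, numerically stable
    p /= p.sum()
    on_probs = p[:-1].reshape(energies.shape)  # P(each detection unit fires)
    p_pool_on = 1.0 - p[-1]                    # P(pooling unit is on)
    return on_probs, p_pool_on
```

With all energies equal to zero, each of the 5 states gets probability 1/5, so the pooling unit is on with probability 4/5, which matches the intuition that it only stays off when every input neuron is off.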

The good idea behind the structure is that the lower layers first learn some useful filters, the middle layers learn parts of objects, and the higher layers learn whole objects. The features learned with the model give good results on several data sets.

1 comment:

Anonymous said...

nice post. I would love to follow you on twitter.