Friday, July 10, 2009

Curriculum Learning


So-called curriculum learning is the idea of learning things in order, from simple ones to difficult ones. The gradually hardening tasks help build a classifier with better generalization capacity.

There is evidence from the cognitive sciences as well as from machine learning itself. In optimization theory, the famous continuation method has the same spirit. Another example is deep belief nets, in which the greedy pretraining can be seen as a simpler task than the subsequent fine-tuning. The examples provided by the authors come from simple toy experiments in low-dimensional space (training two Bayesian classifiers, with or without difficult examples), a shape learning task with neural nets (with or without a switching epoch, at which the training set is switched from simple to difficult samples), and an NLP example.
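A minimal sketch of the switching-epoch setup, assuming an incremental scikit-learn-style classifier with partial_fit; the function name, data splits, and epoch counts are illustrative placeholders, not the authors' code:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_with_curriculum(model, easy_X, easy_y, hard_X, hard_y,
                          switch_epoch, n_epochs):
    """Two-stage curriculum: fit on easy examples only until
    switch_epoch, then switch to the full easy + hard set."""
    full_X = np.concatenate([easy_X, hard_X])
    full_y = np.concatenate([easy_y, hard_y])
    classes = np.unique(full_y)
    for epoch in range(n_epochs):
        if epoch < switch_epoch:
            X, y = easy_X, easy_y   # stage 1: simple samples only
        else:
            X, y = full_X, full_y   # stage 2: difficult samples included
        model.partial_fit(X, y, classes=classes)
    return model

# Hypothetical usage with an online linear classifier:
# clf = train_with_curriculum(SGDClassifier(), easy_X, easy_y,
#                             hard_X, hard_y, switch_epoch=5, n_epochs=20)
```

The baseline in the paper's experiment corresponds to setting switch_epoch to zero, i.e. training on the mixed set from the start.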

Their claims include several messages:
  • difficult examples may not be useful;
  • a better curriculum might speed up online learning and guide the result toward a region where better generalization can be found;
  • the idea might be connected with active learning and boosting.
