Wednesday, February 18, 2009

Is Learning the n-th Thing Any Easier than Learning the First?


This is the first paper I read about transfer learning, and it does not look that difficult. The idea is simple: previously learned tasks should benefit the current one. The problem is how to put the knowledge we already have into the current model.

The key concept is the support sets Xk, the data that were used to train the previous discriminative functions fk. The author compares several algorithms. Memory-based learning algorithms:
  • Nearest neighbor and Shepard's method: these baselines cannot incorporate the support sets (a minimal sketch of both follows the list).
  • Learning a new representation: use a neural network to find a function g that maps the data into another space such that, on the support sets, samples from the same class land near each other while samples from different classes land far apart.
  • Learning a distance function: use a neural network to find a "distance" g such that, on the support sets, pairs of samples from the same class map to 1 and pairs from different classes map to 0 (so g is really a similarity). With this function and a memory-based method, we can do classification on the new task; see the sketch after this list.
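
For concreteness, here is a minimal NumPy sketch of the two memory-based baselines (the function names and the distance exponent p are my own choices, not the paper's notation):

    import numpy as np

    def nearest_neighbor(X_train, y_train, x):
        # Plain 1-NN: return the label of the single closest point.
        return y_train[np.argmin(np.linalg.norm(X_train - x, axis=1))]

    def shepard(X_train, y_train, x, p=2, eps=1e-8):
        # Shepard's method: every training point votes for its label
        # with weight 1 / d(x, x_i)**p, so closer points count more.
        w = 1.0 / (np.linalg.norm(X_train - x, axis=1) ** p + eps)
        classes = np.unique(y_train)
        return classes[np.argmax([w[y_train == c].sum() for c in classes])]

Both functions see only the new task's training data; there is simply no slot where the support sets could enter, which is the point of the bullet above.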
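
The distance-function idea can be sketched too; I write it in PyTorch for brevity (the paper trains a plain BP network, and the architecture, learning rate, and helper names here are my placeholders):

    import torch
    import torch.nn as nn

    dim = 16  # input dimensionality, a placeholder

    # g maps a concatenated pair (x, x') to a similarity in [0, 1]; it is
    # trained on pairs drawn from the support sets: target 1 if the two
    # examples share a class, 0 otherwise.
    g = nn.Sequential(
        nn.Linear(2 * dim, 64), nn.Tanh(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )
    opt = torch.optim.SGD(g.parameters(), lr=0.1)
    bce = nn.BCELoss()

    def train_step(xa, xb, same):
        # xa, xb: (batch, dim); same: (batch,), 1.0 = same class
        opt.zero_grad()
        pred = g(torch.cat([xa, xb], dim=-1)).squeeze(-1)
        loss = bce(pred, same)
        loss.backward()
        opt.step()
        return loss.item()

    def classify(x, X_new, y_new):
        # Memory-based use on the new task: compare x against the few
        # labeled examples available and take the most similar one's label.
        with torch.no_grad():
            pairs = torch.cat(
                [x.unsqueeze(0).expand(len(X_new), -1), X_new], dim=-1)
            return y_new[g(pairs).argmax()]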

He also studies neural-network approaches; the base algorithm is a feedforward NN trained with backpropagation (BP).
  • Learning with hints: share (part of) the network weights across all learning tasks; a rough sketch follows the list.
  • Explanation-Based Neural Network learning (EBNN): I need to read more about this one.
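
As I understand the weight-sharing idea, it amounts to a multi-task network with one shared trunk and a small head per task; a rough PyTorch sketch (layer sizes and names are placeholders, not from the paper):

    import torch.nn as nn

    dim, hidden, n_tasks = 16, 32, 5  # placeholder sizes

    # One trunk whose weights receive gradients from every task,
    # plus a task-specific output layer for each task.
    shared = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
    heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(x, task):
        # Training on any task updates `shared`, so the support sets
        # shape the features that the new task starts from.
        return heads[task](shared(x))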

In the experiments, the author shows that even with a limited amount of training data, the learners that transfer knowledge from the support sets generalize better than those trained from scratch. This might be the first transfer learning paper.
