Wednesday, January 23, 2008

Label Propagation Through Linear Neighborhoods


This is a graph-based semi-supervised learning algorithm. First, they borrow the idea from LLE to determine the edge weights of the graph (the neighborhoods themselves are selected in the traditional way), but with one extra constraint on the weights: in LLE the weights are those that best reconstruct each sample from its neighbors subject to summing to 1, while here they must additionally be non-negative, which yields the best convex-combination approximation. Second, they apply the transductive learning idea (very similar to the ICML 06 paper on ranking on graphs). The iterative procedure is paraphrased as label propagation.
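To fix the idea for myself, here is a minimal sketch of the two steps in my own notation (not the authors' code). For the convex weights I just solve the usual LLE linear system and then clip negatives and renormalize, a crude stand-in for the constrained quadratic program the paper actually solves; the propagation step is the usual F ← αWF + (1−α)Y iteration.

```python
import numpy as np

def convex_weights(X, k=3):
    """Approximate non-negative, sum-to-one reconstruction weights.
    Crude clip-and-renormalize stand-in for the paper's constrained QP."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of x_i (excluding itself)
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]
        Z = X[nbrs] - X[i]
        G = Z @ Z.T + 1e-6 * np.eye(k)      # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))  # unconstrained LLE weights
        w = np.clip(w, 0, None)             # enforce w >= 0
        W[i, nbrs] = w / w.sum()            # enforce sum(w) = 1
    return W

def propagate(W, Y, alpha=0.9, n_iter=500):
    """Label propagation: F_{t+1} = alpha * W F_t + (1 - alpha) * Y."""
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * W @ F + (1 - alpha) * Y
    return F
```

On two well-separated clusters with one labeled point each, the propagated label mass stays within each cluster, since the k-NN graph has no cross-cluster edges.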

To prove the convergence of the procedure, it is only necessary to analyze the eigenvalues of W (they argue the Perron-Frobenius theorem applies here since the matrix is non-negative).
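A quick numerical check of that claim: because the convex weights make W non-negative and row-stochastic, its spectral radius is 1, so for α < 1 the map F ← αWF + (1−α)Y is a contraction with fixed point F* = (1−α)(I − αW)⁻¹Y.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)   # rows sum to 1: a convex-weight matrix
Y = rng.random((n, 2))
alpha = 0.9

# Perron root of a non-negative row-stochastic matrix is 1
rho = np.abs(np.linalg.eigvals(W)).max()

# iterate the propagation and compare against the closed-form fixed point
F = Y.copy()
for _ in range(1000):
    F = alpha * W @ F + (1 - alpha) * Y
F_closed = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * W, Y)
assert np.allclose(F, F_closed)     # spectral radius of alpha*W is < 1
```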

The analysis leads to another connection, between their LNP and Laplacian-regularized least-squares regression.
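Reconstructing that connection in my own notation (and assuming a symmetric W for the gradient step, which the LNP W is not in general), the fixed point of the propagation is the stationary point of a Laplacian-regularized least-squares objective:

```latex
% Fixed point: F^* = (1-\alpha)(I - \alpha W)^{-1} Y,
% i.e. \alpha (I - W) F^* + (1-\alpha)(F^* - Y) = 0,
% which (for symmetric W) is the stationarity condition of
\min_F \; Q(F) \;=\;
  \frac{\alpha}{2}\,\operatorname{tr}\!\bigl(F^\top (I - W)\, F\bigr)
  \;+\; \frac{1-\alpha}{2}\,\lVert F - Y \rVert_F^2
% Here L = I - W plays the role of the graph Laplacian: the first term is the
% Laplacian smoothness regularizer, the second the least-squares fit to the labels.
```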

I must verify my intuition today.

To tackle out-of-sample data, which turns transduction into induction, they use an idea similar to the standard LLE extension, of course with the additional non-negativity constraint.
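My paraphrase of that extension as code (again with the crude clip-and-renormalize in place of a proper QP): for a new point x, compute the same convex reconstruction weights over its k nearest training points, then predict by the weighted average of their propagated label vectors.

```python
import numpy as np

def predict_new(x, X_train, F_train, k=3):
    """Out-of-sample prediction: f(x) = sum_j w_j F_j over x's neighbors,
    with non-negative weights summing to 1 (LLE-style extension)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nbrs = np.argsort(d)[:k]                # k nearest training points
    Z = X_train[nbrs] - x
    G = Z @ Z.T + 1e-6 * np.eye(k)          # regularized local Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w = np.clip(w, 0, None)                 # enforce w >= 0
    w /= w.sum()                            # convex combination weights
    return w @ F_train[nbrs]
```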

I am not familiar with this transductive result, but I don't think their algorithm will work any better than Laplacian-regularized least-squares regression. OK, let me check it.
