Thursday, January 13, 2011

Semi-supervised Classification Using Linear Neighborhood Propagation


The so-called linear neighborhood propagation borrows its key idea from LLE. Instead of building the graph weights directly from a kernel, as in the earlier methods (cf. Zhou's and Zhu's papers, covered in two previous posts), the weights are computed LLE-style. So in a sense, Zhou's version corresponds to diffusion maps and Zhu's to Laplacian eigenmaps, while this one corresponds to LLE. Each of these semi-supervised methods seems to have its counterpart in manifold learning.

The procedure for calculating the weights is identical to that of LLE: each point's weights are found by minimizing the affine reconstruction error from its neighbors. The weights are then used to propagate the labels with the same objective function as in semi-supervised LLE (or landmark LLE). This eliminates the need to select a bandwidth for a Gaussian kernel.
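Here is a minimal sketch of the two steps as I understand them. The function names (lle_weights, propagate_labels) and parameter values are my own; I assume the standard LLE weight solve (sum-to-one affine constraint, regularized local Gram matrix) and a Zhou-style iterative propagation F ← αWF + (1−α)Y. If I recall correctly, the original LNP paper additionally constrains the weights to be nonnegative, which this sketch skips.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lle_weights(X, k=10, reg=1e-3):
    """LLE-style weights: reconstruct each point as an affine
    combination of its k nearest neighbors (weights sum to one)."""
    n = X.shape[0]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nbrs.kneighbors(X)
    W = np.zeros((n, n))
    for i in range(n):
        J = idx[i, 1:]                      # skip the point itself
        Z = X[J] - X[i]                     # neighbors centered at x_i
        G = Z @ Z.T                         # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)  # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        W[i, J] = w / w.sum()               # enforce sum-to-one
    return W

def propagate_labels(W, y, labeled_mask, alpha=0.99, n_iter=200):
    """Iterate F <- alpha * W F + (1 - alpha) * Y, pulling the
    labeled points back toward their initial labels each step."""
    n = W.shape[0]
    classes = np.unique(y[labeled_mask])
    Y = np.zeros((n, len(classes)))
    for c_idx, c in enumerate(classes):
        Y[(y == c) & labeled_mask, c_idx] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (W @ F) + (1 - alpha) * Y
    return classes[F.argmax(axis=1)]
```

Note that the only free parameters left are the neighborhood size k and the trade-off α, which is exactly the point: the Gaussian bandwidth is gone.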

Well, why not make an LTSA version? Haha...
