Tuesday, January 8, 2008

Semi-supervised Nonlinear Dimensionality Reduction


I guess the people working on semi-supervised manifold learning are still far from practical applications. In this paper, the so-called SS-LLE and SS-LTSA are so obvious that they are hardly worth mentioning. Just consider the original algorithms: the optimization yields the required coordinates, and once some of them are known in advance, a blockwise formulation immediately shows that the semi-supervised problem is an even easier one to solve (see the sketch below).
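To make that concrete, here is a minimal sketch of the blockwise solve, in my own notation rather than the paper's: M is the symmetric positive semidefinite cost matrix of the original algorithm (e.g. (I - W)^T (I - W) in LLE, or the alignment matrix in LTSA), reordered so that the k points with prescribed coordinates come first.

import numpy as np

def semi_supervised_solve(M, Y_known):
    # M: (n, n) symmetric PSD cost matrix, e.g. (I - W)^T (I - W) in
    #    LLE or the alignment matrix in LTSA, ordered so the k labeled
    #    points come first.
    # Y_known: (k, d) prescribed coordinates of the labeled points.
    k = Y_known.shape[0]
    # Blockwise view: tr(Y^T M Y) is quadratic in the unknown block Y2,
    # so setting its gradient to zero gives the linear system
    #     M22 Y2 = -M21 Y1.
    M21 = M[k:, :k]
    M22 = M[k:, k:]
    Y_unknown = np.linalg.solve(M22, -M21 @ Y_known)
    return np.vstack([Y_known, Y_unknown])

The labeled points simply turn the original eigenproblem into a linear system (assuming M22 is nonsingular), which is exactly why the semi-supervised problem is the easier one.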


Something a little more difficult is how to make the traditional ISOMAP semi-supervised. In LLE and LTSA, the term being minimized has a clear meaning: it is the reconstruction error in the corresponding setting. As a spectral embedding, however, the optimization in ISOMAP carries no similar meaning. So they resort to a rather ugly way of incorporating their idea into ISOMAP:
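As far as I can tell, their construction amounts to the spectral shift

M = \lambda_{\max}(A) \, I - A

where \lambda_{\max}(A) is the largest eigenvalue of A, so that maximizing tr(Y^T A Y) becomes minimizing tr(Y^T M Y) over a positive semidefinite M.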

As you can see, A is the Gram matrix obtained after the double-centering step in ISOMAP. After the eigenvalues are shifted, the desired coordinates can be obtained in terms of M, which now plays the same role as the corresponding matrices in LLE and LTSA. But somehow you can sense the nonsense in it, and their paper reports a consistent result, complaining that SS-ISOMAP is not as good as the other two.
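For completeness, a sketch of how such an M could be built, assuming a precomputed geodesic distance matrix D (say, from Dijkstra on the neighborhood graph); the function name is mine, and the helper semi_supervised_solve is the one from the sketch above, not anything from the paper.

import numpy as np

def ss_isomap_M(D):
    # D: (n, n) symmetric matrix of pairwise geodesic distances.
    n = D.shape[0]
    # Double-centering of squared distances, as in classical MDS:
    # A = -1/2 * J D^2 J with J = I - (1/n) 1 1^T.
    J = np.eye(n) - np.ones((n, n)) / n
    A = -0.5 * J @ (D ** 2) @ J
    # Spectral shift: the top eigenvectors of A become the bottom
    # eigenvectors of the PSD matrix M, matching the LLE/LTSA form.
    lam_max = np.linalg.eigvalsh(A)[-1]
    return lam_max * np.eye(n) - A

Then Y = semi_supervised_solve(ss_isomap_M(D), Y_known) fills in the unlabeled coordinates exactly as in SS-LLE and SS-LTSA.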

Maybe the latter part of the paper, which analyzes the numerical stability of their algorithms, is more interesting. But then it is not a mathematics paper, and I don't like this style of machine learning.
