by Wei Zhang, Xiangyang Xue, Zichen Sun, Yue-Fei Guo and Hong Lu
This is an ICML 2007 paper. After reading it, I found nothing really astonishing, but it does point to another direction of research. The idea builds on a paper I read long ago, LPP (Locality Preserving Projections). LPP is based on Laplacian Eigenmap: it finds a linear transform that minimizes an objective function similar to Laplacian Eigenmap's. Both, however, are unsupervised learning techniques.
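For reference, the LPP objective, as I remember it from He and Niyogi's paper (notation is mine), is

$$\min_{\mathbf{a}} \sum_{i,j} \bigl(\mathbf{a}^{\top}\mathbf{x}_i - \mathbf{a}^{\top}\mathbf{x}_j\bigr)^2 W_{ij} \quad \text{s.t.} \quad \mathbf{a}^{\top} X D X^{\top} \mathbf{a} = 1,$$

which reduces to the generalized eigenproblem $XLX^{\top}\mathbf{a} = \lambda XDX^{\top}\mathbf{a}$ with $L = D - W$. Laplacian Eigenmap minimizes the same kind of weighted sum of squared distances, but over the embedding coordinates directly, without the linear constraint $y_i = \mathbf{a}^{\top}\mathbf{x}_i$.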
In this paper, however, supervised information is incorporated when the Laplacian matrix is constructed: the weight is set to a positive value for nearby samples of the same class, a negative value for nearby samples of different classes, and 0 for samples that are too far apart.
The so-called optimal projection then uses the eigenvectors associated with the negative eigenvalues, so the number of negative eigenvalues itself determines the dimensionality. Though the idea is simple and direct, it works, and the experiments and analysis are both detailed and convincing.
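Here is a minimal NumPy sketch of the construction as I understand it. The concrete weight values (+1/-1), the neighborhood radius, and the use of L = D - W are my own assumptions for illustration, not details taken verbatim from the paper.

```python
import numpy as np

def signed_laplacian_projection(X, y, radius=1.0):
    """X: (n, d) samples, y: (n,) integer class labels.
    Projects onto eigenvectors with negative eigenvalues.
    Returns the projected data and the projection matrix."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(X**2, axis=1)
    dist2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T

    # Signed weight matrix (my assumed values): positive for nearby
    # same-class pairs, negative for nearby different-class pairs,
    # zero for pairs that are too far apart.
    near = dist2 <= radius**2
    same = y[:, None] == y[None, :]
    W = np.zeros((n, n))
    W[near & same] = 1.0
    W[near & ~same] = -1.0
    np.fill_diagonal(W, 0.0)

    # Graph Laplacian; with signed weights, X^T L X is indefinite.
    L = np.diag(W.sum(axis=1)) - W
    M = X.T @ L @ X

    # Keep only eigenvectors whose eigenvalues are negative: their
    # count gives the dimensionality, their span the projection.
    vals, vecs = np.linalg.eigh(M)
    A = vecs[:, vals < 0]
    return X @ A, A

# Toy usage with random data:
X = np.random.randn(100, 5)
y = np.random.randint(0, 3, size=100)
Y, A = signed_laplacian_projection(X, y, radius=2.0)
```

The contrast with LPP is the indefiniteness: because W has negative entries, the quadratic form can go below zero, and the eigenvalue signs split the directions into useful and useless ones instead of requiring the user to pick a target dimensionality by hand.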