Thursday, December 18, 2008

Graph Laplacian Regularization for Large-Scale Semidefinite Programming


I first came across Weinberger's work while I was studying manifold learning. At the time, his idea of maximum variance unfolding (MVU) was quite enlightening: in his interpretation, the manifold learning task should preserve local distances while maximizing the variance of the embedding. Formulated in optimization terms, this is a semidefinite programming (SDP) problem.

This paper develops the MVU idea further, since the SDP does not scale well as the number of samples grows. The so-called Laplacian regularization technique has been quite popular since its proposal in manifold regularization (maybe; I am not sure where it first appeared). The idea is that the bottom eigenvectors of the graph Laplacian correspond to smooth eigenfunctions over the graph, so it is reasonable to look for an approximation to the original problem in this subspace. By reformulating the original problem over the subspace we obtain one of much lower complexity: the n x n Gram matrix K is restricted to K = Q Y Q^T, where Q collects the bottom m eigenvectors, so the semidefinite variable shrinks to the m x m matrix Y.
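To make this concrete, here is a minimal sketch in Python (NumPy/SciPy) of the subspace restriction on toy data. The variable names, sizes, and the placeholder Y are my own illustration, not code from the paper.

import numpy as np
from scipy.spatial import cKDTree
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))           # toy data: n = 200 points in 3-D

# k-nearest-neighbor adjacency matrix
k = 8
_, idx = cKDTree(X).query(X, k=k + 1)       # first "neighbor" is the point itself
n = X.shape[0]
W = np.zeros((n, n))
for i in range(n):
    W[i, idx[i, 1:]] = 1.0
W = np.maximum(W, W.T)                      # symmetrize

# unnormalized graph Laplacian L = D - W
L = np.diag(W.sum(axis=1)) - W

# bottom m eigenvectors of L span the smooth functions on the graph
m = 10
_, vecs = eigh(L)
Q = vecs[:, :m]                             # n x m basis, with n >> m

# restrict the n x n Gram matrix to K = Q Y Q^T, Y an m x m PSD variable
Y = np.eye(m)                               # placeholder feasible point
K = Q @ Y @ Q.T

The payoff is that the positive-semidefiniteness constraint now applies only to the small matrix Y, so the SDP scales with m instead of n.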

The other fine-tuning technique is that the approximation offers a good initial value if we then attack the nonlinear optimization problem directly, without the SDP. Using the conjugate gradient (CG) algorithm from this starting point, the result can be further improved.
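As a hedged sketch of that refinement step, continuing from the snippet above: I minimize a penalized objective (local distance mismatch minus embedding variance) with SciPy's CG method, starting from the spectral approximation. The penalty weight mu and the exact form of the objective are my own simplification, not the paper's.

from scipy.optimize import minimize

d = 2                                         # target embedding dimension
edges = np.transpose(np.nonzero(np.triu(W)))  # neighbor pairs (i, j), i < j
targets = np.sum((X[edges[:, 0]] - X[edges[:, 1]]) ** 2, axis=1)

def objective(z, mu=10.0):
    Z = z.reshape(n, d)
    Z = Z - Z.mean(axis=0)                    # keep the embedding centered
    diffs = np.sum((Z[edges[:, 0]] - Z[edges[:, 1]]) ** 2, axis=1)
    stress = np.sum((diffs - targets) ** 2)   # local distance violations
    variance = np.sum(Z ** 2)                 # trace of the Gram matrix
    return mu * stress - variance             # preserve distances, unfold the rest

# the spectral approximation K = Q Y Q^T supplies the initial embedding
vals, vecs = np.linalg.eigh(Q @ Y @ Q.T)
Z0 = vecs[:, -d:] * np.sqrt(np.maximum(vals[-d:], 0.0))

res = minimize(objective, Z0.ravel(), method='CG')
Z = res.x.reshape(n, d)                       # refined low-dimensional embedding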

I think I'd better get familiar with SDP ASAP. I am really in urgent need of a working grasp of these optimization tools.
