by Huan Wang, Shuicheng Yan, Dong Xu, Xiaoou Tang, Thomas Huang
This is a CVPR 2007 paper. I read Yan's CVPR 2005 paper, so I am quite familiar with this direction. Basically I agree with their argument for using the trace ratio, especially for graph embedding algorithms: it is more sound to directly maximize the ratio of between-class scatter to within-class compactness. The optimization, however, becomes difficult when the trace ratio is the objective instead of the formulations used originally, e.g. iteratively defined optimization, the trace formulation, and the determinant formulation.
The constraints of their proposed optimization ensure orthogonality of the projection. The algorithm, which also works as an approximation in the principal subspace of St, iteratively updates the projection V such that
V* = arg max tr(Vᵀ(Sp − λn St)V)  s.t.  VᵀV = I
The iteration increases the objective function monotonically and is claimed to converge to a global optimum (the main contribution). For other constraints, such as the one in the kernelization, an eigen-decomposition is first applied. Hmm... another work extending this (vector -> tensor) says a global optimum might not be obtainable, which I might check later.
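The iterative update above can be sketched as follows. This is a minimal sketch, not the authors' code: the function name, the matrix names Sp/St, the identity-matrix starting point, and the convergence tolerance are all my assumptions.

```python
import numpy as np

def trace_ratio(Sp, St, d, n_iter=100, tol=1e-10):
    """Maximize tr(V^T Sp V) / tr(V^T St V) over V with V^T V = I.

    At each step, lam is the current trace ratio and V is updated to the
    top-d eigenvectors of (Sp - lam * St), which makes the objective
    non-decreasing across iterations.
    """
    n = Sp.shape[0]
    V = np.eye(n, d)  # arbitrary orthonormal starting projection
    lam = np.trace(V.T @ Sp @ V) / np.trace(V.T @ St @ V)
    for _ in range(n_iter):
        # Eigen-decompose the trace-difference matrix Sp - lam * St
        w, U = np.linalg.eigh(Sp - lam * St)
        V = U[:, np.argsort(w)[::-1][:d]]  # top-d eigenvectors
        new_lam = np.trace(V.T @ Sp @ V) / np.trace(V.T @ St @ V)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return V, lam
```

Note that St must be nonsingular on the search space for the ratio to be well defined, which is one reason the paper restricts to the principal subspace of St.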
4 comments:
It cannot achieve a globally optimal solution. In fact, they only prove that the objective function value increases, which doesn't imply a global optimum.
http://mmlab.ie.cuhk.edu.hk/~huan/Publications_files/CVPR07_Wang.pdf
well... I didn't check their proof very carefully though. Could you tell me which part is flawed? Thank you very much!
The proof is correct; the conclusion, however, is not. See the paragraph before Sec. 5: "monotonically increase" !=
"converge to the global optimum".
I read this paragraph. The global optimum is ensured by the construction of lambda (c.f. the proof of Lemma 3, as is pointed out in the paper too). I think it is right.