Wednesday, March 18, 2009

Efficient Approaches to Gaussian Process Classification


This paper introduces three methods for doing classification with Gaussian processes (GPs). The first one is a variational method (though it seems not applicable to parametric models). But it looks a little different from the common variational inference methods. It has an explicit expression for the posterior Gaussian process's mean (why do they compute this? isn't it a fully Bayesian method? maybe that's why they call it a mean field method? do they use the mean for inferring labels? Omg, mode, median or mean?). The tricky part is solving a nonlinear equation: the coefficients in the representation of the posterior mean have to equal the averages computed from that same representation. This finally yields a bound on the marginal likelihood.
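To make the fixed-point structure concrete for myself (a hedged reconstruction from standard GP identities, not something I am quoting from the paper): by Gaussian integration by parts, the exact posterior mean of the latent values at the training points satisfies

m = K \alpha, with \alpha_i = \langle \partial \log p(y_i \mid f_i) / \partial f_i \rangle_{\text{posterior}}.

If the posterior is approximated by a Gaussian whose marginal at site i has mean m_i and variance v_i, the bracket on the right becomes an explicit nonlinear function g_i(m_i, v_i), and substituting m = K\alpha back in gives the self-consistency condition \alpha_i = g_i(\sum_j K_{ij} \alpha_j, v_i), which has to be solved numerically (e.g. by fixed-point iteration). The Gaussian obtained this way is then what enters the bound on the marginal likelihood.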

The second approach actually simplifies the procedure, although the derivation is not included. At a glance, the nonlinear equation is simplified: we no longer need to compute the inverse of the covariance matrix K. The sequential method (the third one) resembles ADF (assumed density filtering), though I am not sure.
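To see what an ADF-style pass could look like, here is a minimal Python sketch for a probit likelihood. This is my reconstruction of the generic assumed-density-filtering recipe (process one example at a time, moment-match the tilted distribution with a Gaussian, apply a rank-one update), not the paper's actual algorithm; the function name and all details are my own.

import numpy as np
from scipy.stats import norm

def adf_probit_gp(K, y):
    """ADF-style single pass for GP classification with a probit likelihood.
    K: kernel (covariance) matrix of the latent function at the training inputs.
    y: labels in {-1, +1}.
    Returns the approximate posterior mean and covariance of the latent f."""
    n = len(y)
    m = np.zeros(n)   # running approximate posterior mean
    C = K.copy()      # running approximate posterior covariance
    for i in range(n):
        v = C[i, i]                           # current marginal variance at site i
        z = y[i] * m[i] / np.sqrt(1.0 + v)
        r = norm.pdf(z) / norm.cdf(z)         # would need numerical care for very negative z
        g1 = y[i] * r / np.sqrt(1.0 + v)      # d log Z_i / d m_i
        g2 = -r * (z + r) / (1.0 + v)         # d^2 log Z_i / d m_i^2
        c = C[:, i].copy()
        m = m + g1 * c                        # rank-one update of the mean
        C = C + g2 * np.outer(c, c)           # rank-one update of the covariance
    return m, C

Note that no explicit inverse of K appears anywhere: each data point only contributes a rank-one update of the mean and covariance, which seems to be the same flavour of simplification as the one noted above.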
