Friday, January 9, 2009
Fast Sparse Gaussian Process Methods: The Informative Vector Machine
This NIPS paper doesn't cover as much as the technical report does. The authors explore a sample-selection method for Gaussian processes, studying regression, classification, and ordinal regression problems. The idea originates from ADF (assumed density filtering, which looks like EP). It is a greedy algorithm that selects the d samples maximizing a certain criterion---the differential entropy score.
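To make the greedy selection concrete, here is a minimal sketch for the regression case, where the entropy score of a candidate reduces to a monotone function of its current posterior variance, so each step picks the most "informative" (highest-variance) point and does a rank-1 update. The RBF kernel, lengthscale, and noise values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential (RBF) kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def ivm_select(X, noise_var=0.1, d=10, lengthscale=1.0):
    """Greedy IVM-style active-set selection for GP regression.

    At each step, include the point with the largest differential
    entropy score 0.5 * log(1 + v_i / noise_var), which for regression
    is monotone in the current posterior variance v_i.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, lengthscale)
    v = np.diag(K).copy()       # current posterior variances
    M = np.zeros((0, n))        # scaled covariance rows of chosen points
    active = []
    for _ in range(d):
        scores = 0.5 * np.log1p(v / noise_var)
        scores[active] = -np.inf            # never re-pick a point
        i = int(np.argmax(scores))
        # Posterior covariance of every point with point i, given the
        # points already included (rank-1 downdates accumulated in M).
        k_i = K[i] - M.T @ M[:, i]
        s = k_i / np.sqrt(v[i] + noise_var)
        v = v - s ** 2                      # rank-1 variance update
        M = np.vstack([M, s])
        active.append(i)
    return active, v
```

Because the score is monotone in the variance here, the selection is equivalent to always grabbing the point the current posterior is most uncertain about; the classification case in the paper replaces this with an ADF/EP-style site update.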
I am thinking about the Nystrom approximation. The two methods have different intentions (perhaps?): IVM aims at finding a sparse representation (just like what the SVM achieves via optimization, though IVM is a little brute-force); the Nystrom method is a more general approximation applicable to all kernel methods. The former is supervised while the latter is unsupervised.
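For contrast with the supervised, greedy IVM selection, here is a minimal sketch of the Nystrom method: sample m landmark columns of the kernel matrix (label-free) and form the low-rank reconstruction K ≈ C W⁺ Cᵀ. The function name and uniform landmark sampling are my own illustrative choices.

```python
import numpy as np

def nystrom_approx(K, m, rng=None):
    """Nystrom low-rank approximation of a PSD kernel matrix K.

    Uniformly samples m landmark indices, then reconstructs
    K ~= C @ pinv(W) @ C.T, where C = K[:, idx] holds the sampled
    columns and W = K[idx][:, idx] is the landmark submatrix.
    """
    rng = np.random.default_rng(rng)
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T
```

Note that nothing here looks at the targets: the approximation quality depends only on how well the sampled columns span the kernel matrix, which is exactly the sense in which it is unsupervised.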
But is the IVM extensible to other kernel-based classifiers and regressors?