Thursday, February 26, 2009
Optimal Information Processing and Bayes' Theorem
It is really quite simple. The input is the likelihood Pr(x | θ) and the prior π(θ); the output is the posterior Pr(θ | x) and the marginal Pr(x). The author proposes to minimize the output information minus the input information (in log terms, which amounts to a KL divergence), and this minimization leads to Bayes' theorem.
Well, is this justification right?
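The claim can be checked numerically on a discrete parameter grid. The sketch below (my own notation, not from the post) evaluates Zellner's output-minus-input information functional for a candidate distribution q; a little algebra shows this functional equals KL(q ‖ Bayes posterior), so it is nonnegative and is minimized exactly when q is the Bayes posterior.

```python
import numpy as np

# Discrete setup: theta takes K values, with an arbitrary positive
# likelihood p(x|theta_k) and a normalized prior pi(theta_k).
rng = np.random.default_rng(0)
K = 5
lik = rng.random(K) + 0.1           # p(x | theta_k)
prior = rng.random(K) + 0.1
prior /= prior.sum()                # pi(theta_k)

evidence = np.sum(lik * prior)      # p(x) = sum_k p(x|theta_k) pi(theta_k)
posterior = lik * prior / evidence  # Bayes' theorem

def delta(q):
    """Zellner's functional: output information minus input information.

    Output information: sum_k q_k log q_k + log p(x)
    Input information:  sum_k q_k log p(x|theta_k) + sum_k q_k log pi(theta_k)
    Their difference simplifies to KL(q || posterior) >= 0.
    """
    return (np.sum(q * np.log(q)) + np.log(evidence)
            - np.sum(q * np.log(lik))
            - np.sum(q * np.log(prior)))

print(delta(posterior))   # ~0: the Bayes posterior attains the minimum
q_other = np.ones(K) / K  # a uniform alternative
print(delta(q_other))     # strictly positive for any q != posterior
```

At q = posterior each term log(q_k / (lik_k · prior_k)) equals −log p(x), so the functional vanishes; any other q pays a positive KL penalty. In that sense Bayes' theorem is the "optimal information processing rule" the post refers to.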