Tractable Learning and Inference in High-Treewidth Graphical Models

Date

2009

Abstract

Probabilistic graphical models, by making conditional independence assumptions, can represent complex joint distributions in a factorized form. However, on large problems graphical models often run into two issues. First, in graphs that are not tree-like, exact inference is computationally intractable; several approximate inference algorithms exist and often work well, but they come with no approximation guarantees. Second, traditional learning methods are not robust to model errors: if the conditional independence assumptions of the model are violated, poor predictions can result.

This thesis proposes two new methods for learning the parameters of graphical models: implicit fitting and procedural fitting. The goal of both methods is to improve the results of running a particular inference algorithm. Implicit fitting views inference as minimizing a large nonlinear energy function over predicted marginals; during learning, the parameters are adjusted so that the minima of this function lie close to the true marginals. Procedural fitting, inspired by algorithms like loopy belief propagation, treats inference as a message-passing procedure; during learning, the parameters are adjusted so that this message-passing process gives the most accurate results. Both methods are robust to model errors and to approximate inference because learning is done directly in terms of predictive accuracy.
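To make the two methods concrete, what follows are two minimal sketches in JAX. They illustrate the general ideas under assumed toy models, not the algorithms from the thesis itself: the model sizes, couplings, target marginals, and every function name (energy, mf_solve, edge_psi, bp_marginals) are invented for the example.

Implicit fitting casts inference as minimizing an energy E(q, theta) over predicted marginals q. A loss L on the minimizer q*(theta) can then be differentiated with respect to theta via the implicit function theorem: dL/dtheta = -C^T H^{-1} dL/dq*, where H is the Hessian of E in q and C is the mixed second derivative in (q, theta). The sketch below uses a mean-field-style energy over binary variables:

import jax
import jax.numpy as jnp

def energy(q, theta, W):
    # Assumed mean-field-style energy for binary variables:
    # unary terms, pairwise terms, and a negative-entropy term.
    neg_ent = jnp.sum(q * jnp.log(q) + (1 - q) * jnp.log(1 - q))
    return -theta @ q - 0.5 * q @ W @ q + neg_ent

def mf_solve(theta, W, steps=500, lr=0.5):
    # Inner inference: minimize the energy over q = sigmoid(z).
    z = jnp.zeros_like(theta)
    for _ in range(steps):
        q = jax.nn.sigmoid(z)
        g = jax.grad(energy)(q, theta, W)
        z = z - lr * g * q * (1 - q)   # chain rule through the sigmoid
    return jax.nn.sigmoid(z)

n = 4
W = 0.3 * (jnp.ones((n, n)) - jnp.eye(n))   # fixed, illustrative couplings
theta = jnp.zeros(n)
q_true = jnp.array([0.9, 0.2, 0.7, 0.4])    # synthetic "true" marginals

q_star = mf_solve(theta, W)
H = jax.hessian(energy)(q_star, theta, W)                        # d2E/dq2
C = jax.jacobian(jax.grad(energy), argnums=1)(q_star, theta, W)  # d2E/dq dtheta
dL_dq = 2.0 * (q_star - q_true)                                  # squared-error loss
dL_dtheta = -C.T @ jnp.linalg.solve(H, dL_dq)                    # implicit gradient
theta = theta - 1.0 * dL_dtheta                                  # one learning step

Procedural fitting instead fixes the inference procedure itself, here a small number of loopy belief propagation updates on a 3-cycle (the smallest graph where BP is not exact), and backpropagates a loss on the resulting beliefs through every message update:

edges = [(0, 1), (1, 2), (2, 0)]                  # a cycle: not a tree
dedges = edges + [(j, i) for (i, j) in edges]     # directed messages

def edge_psi(i, j, theta_p):
    # Pairwise potential for directed edge (i, j), indexed [x_i, x_j].
    for e, (a, b) in enumerate(edges):
        if (a, b) == (i, j):
            return jnp.exp(theta_p[e])
        if (a, b) == (j, i):
            return jnp.exp(theta_p[e]).T

def bp_marginals(theta_u, theta_p, n_iters=10):
    # Unrolled loopy belief propagation; every step is differentiable.
    msgs = {e: jnp.full(2, 0.5) for e in dedges}
    for _ in range(n_iters):
        new = {}
        for (i, j) in dedges:
            prod = jnp.exp(theta_u[i])
            for (k, l) in dedges:
                if l == i and k != j:             # incoming messages, excluding j
                    prod = prod * msgs[(k, l)]
            m = prod @ edge_psi(i, j, theta_p)    # sum over x_i
            new[(i, j)] = m / m.sum()
        msgs = new
    beliefs = []
    for i in range(len(theta_u)):
        b = jnp.exp(theta_u[i])
        for (k, l) in dedges:
            if l == i:
                b = b * msgs[(k, l)]
        beliefs.append(b / b.sum())
    return jnp.stack(beliefs)

def loss(params, target):
    theta_u, theta_p = params
    return jnp.sum((bp_marginals(theta_u, theta_p) - target) ** 2)

target = jnp.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])   # synthetic marginals
params = (jnp.zeros((3, 2)), jnp.zeros((3, 2, 2)))
for _ in range(100):
    grads = jax.grad(loss)(params, target)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.5 * g, params, grads)

In both sketches the parameters are fit to whatever the chosen inference procedure actually outputs, which is the sense in which such training is robust to model errors and to approximate inference.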
