Sciweavers

NIPS 2007

Agreement-Based Learning

Learning probabilistic models with many hidden variables and non-decomposable dependencies is an important and challenging problem. In contrast with traditional approaches, which rely on approximate inference in a single intractable model, we train a set of tractable submodels and encourage them to agree on the hidden variables. This captures non-decomposable aspects of the data while still maintaining tractability. We propose an objective function for this approach, derive EM-style algorithms for parameter estimation, and demonstrate their effectiveness on three challenging real-world learning tasks.
Percy Liang, Dan Klein, Michael I. Jordan
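The abstract's central idea — an EM-style loop in which several tractable submodels are trained to agree on the hidden variables, with the E-step combining (multiplying) their posteriors — can be sketched on a toy problem. The two-view mixture models, the synthetic data, and all variable names below are illustrative assumptions for this sketch, not the paper's actual models, objective, or tasks.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, N = 3, 5, 200   # hidden states, symbols per view, observations

# Synthetic two-view data: each pair (x1, x2) is emitted from one shared
# hidden state (purely illustrative data, not from the paper).
true_emit = rng.dirichlet(np.ones(V) * 0.2, size=(2, K))
z_true = rng.integers(K, size=N)
x = np.stack([np.array([rng.choice(V, p=true_emit[m, z]) for z in z_true])
              for m in range(2)])          # shape (2, N)

# Two tractable submodels: each is a simple mixture p_m(z) p_m(x_m | z).
prior = np.full((2, K), 1.0 / K)
emit = rng.dirichlet(np.ones(V), size=(2, K))

for _ in range(50):
    # E-step with agreement: q(z) is proportional to the PRODUCT of the
    # submodels' joint scores, so mass concentrates where they agree on z.
    scores = np.ones((N, K))
    for m in range(2):
        scores *= prior[m] * emit[m][:, x[m]].T    # (N, K) per submodel
    q = scores / scores.sum(axis=1, keepdims=True)

    # M-step: each submodel re-estimates its parameters from the shared q,
    # exactly as in ordinary EM but with the agreed-upon posterior.
    for m in range(2):
        prior[m] = q.mean(axis=0)
        counts = np.zeros((K, V))
        for v in range(V):
            counts[:, v] = q[x[m] == v].sum(axis=0)
        emit[m] = (counts + 1e-9) / (counts + 1e-9).sum(axis=1, keepdims=True)
```

Each submodel on its own stays a tractable mixture; only the shared posterior `q` couples them, which is what lets the pair capture cross-view dependencies that neither submodel represents alone.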
Added: 30 Oct 2010
Updated: 30 Oct 2010
Type: Conference
Year: 2007
Where: NIPS
Authors: Percy Liang, Dan Klein, Michael I. Jordan