In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a s...
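As a point of reference, the distant-supervision heuristic this abstract describes can be sketched roughly as follows; the knowledge-base triples, entity matching, and sentence format are illustrative assumptions, not the setup of this particular paper.

```python
# Illustrative sketch of the distant-supervision labeling heuristic:
# a sentence mentioning an entity pair that occurs in a knowledge-base
# triple is (noisily) labeled with that triple's relation.

KB = {
    ("Barack Obama", "Honolulu"): "born_in",   # assumed example triples
    ("Steve Jobs", "Apple"): "founder_of",
}

def label_sentence(sentence, entity_pairs):
    """Return noisy (entity pair, relation) labels for one sentence.

    entity_pairs: pairs of entity strings already detected in the sentence.
    """
    labels = []
    for e1, e2 in entity_pairs:
        relation = KB.get((e1, e2))
        if relation is not None and e1 in sentence and e2 in sentence:
            labels.append(((e1, e2), relation))
    return labels

print(label_sentence(
    "Barack Obama was born in Honolulu, Hawaii.",
    [("Barack Obama", "Honolulu")],
))
# [(('Barack Obama', 'Honolulu'), 'born_in')]
```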
We propose a new family of latent variable models called max-margin min-entropy (M3E) models, which define a distribution over the output and the hidden variables conditioned on ...
Kevin Miller, M. Pawan Kumar, Benjamin Packer, Dan...
The expectation maximization (EM) algorithm is widely used for parameter estimation in models with hidden variables. However, the algorithm has several non-trivial limitat...
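As a reminder of the basic procedure this abstract builds on, here is a minimal EM sketch for a two-component univariate Gaussian mixture; the data, initialization, and iteration count are arbitrary illustrations, not the setting of the paper.

```python
import numpy as np

# Minimal EM for a two-component 1-D Gaussian mixture (illustrative only).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

pi = np.array([0.5, 0.5])        # mixture weights
mu = np.array([-1.0, 1.0])       # component means
sigma = np.array([1.0, 1.0])     # component standard deviations

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate weights, means, and variances from responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(pi, mu, sigma)
```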
We have recently proposed an EM-style algorithm to optimize log-linear models with hidden variables. In this paper, we use this algorithm to optimize a hidden conditional random ...
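For reference, a hidden conditional random field of the kind this abstract refers to is usually written in roughly the form below, with the hidden assignment h marginalized out; the paper's specific feature map and state space are not reproduced here.

```latex
% Generic hidden-CRF form (marginalizing the hidden variables h):
P(y \mid x; w) = \frac{\sum_{h} \exp\big(w^\top \phi(x, y, h)\big)}
                      {\sum_{y'} \sum_{h} \exp\big(w^\top \phi(x, y', h)\big)}
```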
Georg Heigold, Stefan Hahn, Patrick Lehnen, Herman...
We show how a preselection of hidden variables can be used to efficiently train generative models with binary hidden variables. The approach is based on Expectation Maximization (...
Variational EM has become a popular technique in probabilistic NLP with hidden variables. Commonly, for computational tractability, we make strong independence assumptions, such a...
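A typical instance of such an independence assumption is the mean-field factorization of the variational posterior, under which the evidence lower bound is optimized coordinate-wise; this is the standard textbook form, not necessarily the exact variational family used in the paper.

```latex
% Mean-field factorization and the evidence lower bound (ELBO):
q(z_1, \dots, z_n) = \prod_{i=1}^{n} q_i(z_i), \qquad
\log p(x \mid \theta) \;\ge\;
\mathbb{E}_{q}\big[\log p(x, z \mid \theta)\big] - \mathbb{E}_{q}\big[\log q(z)\big]
```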
In this paper, we present a dependency tree-based method for sentiment classification of Japanese and English subjective sentences using conditional random fields with hidden varia...
We extend the Bayesian Information Criterion (BIC), an asymptotic approximation for the marginal likelihood, to Bayesian networks with hidden variables. This approximation can be ...
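The standard BIC score being extended has the familiar form below, trading off the maximized log-likelihood against a dimension penalty; the paper's hidden-variable correction itself is not reproduced here.

```latex
% Standard BIC approximation to the marginal likelihood of a model G
% with d free parameters, fit to N observations:
\log P(D \mid G) \;\approx\;
\log P(D \mid \hat{\theta}_G, G) - \frac{d}{2} \log N
```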
Learning with hidden variables is a central challenge in probabilistic graphical models that has important implications for many real-life problems. The classical approach is usin...
In recent years, there has been growing interest in learning Bayesian networks with continuous variables. Learning the structure of such networks is a computationally expensive proced...