We explore a new Bayesian model for probabilistic grammars, a family of distributions over discrete structures that includes hidden Markov models and probabilistic context-free gr...
The traditional mention-pair model for coreference resolution cannot capture information beyond mention pairs during either learning or testing. To deal with this problem, we present ...
Xiaofeng Yang, Jian Su, Jun Lang, Chew Lim Tan, Ti...
Our goal is to model the way people induce knowledge from rare and sparse data. This paper describes a theoretical framework for inducing knowledge from these incomplete data descr...
We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of line...
In this paper we develop an Inductive Logic Programming (ILP) based framework, ExOpaque, that extracts a set of Horn clauses from an arbitrary opaque machine learning mo...