Statistical machine translation is often faced with the problem of combining training data from many diverse sources into a single translation model which then has to translate se...
Majid Razmara, George Foster, Baskaran Sankaran, A...
Multi-instance (MI) learning is a variant of supervised learning where labeled examples consist of bags (i.e. multi-sets) of feature vectors instead of just a single feature vecto...
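The bag-of-instances setting above can be illustrated with a minimal sketch. Under the standard MI assumption (a bag is positive iff at least one of its instances is positive), bag labels follow from per-instance scores by a max rule; the `bag_label` helper and the score values here are hypothetical, purely for illustration.

```python
# Hypothetical sketch of the multi-instance (MI) setting: each labeled
# example is a *bag* (multi-set) of feature vectors rather than a single
# vector.  Under the standard MI assumption, a bag is positive iff at
# least one of its instances is positive, so per-instance scores from
# some assumed instance-level classifier aggregate via a max rule.

def bag_label(instance_scores, threshold=0.5):
    """Aggregate per-instance scores into a single bag label."""
    return int(max(instance_scores) >= threshold)

print(bag_label([0.1, 0.7, 0.3]))  # 1: one instance exceeds the threshold
print(bag_label([0.1, 0.2]))       # 0: no instance is positive
```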
We review the history of modeling score distributions, focusing on the normal-exponential mixture by investigating the theoretical as well as the empirical evidence supporting i...
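The normal-exponential model reviewed above mixes an exponential density for non-relevant scores with a normal density for relevant ones. A minimal sketch of that mixture density, with illustrative (not fitted) parameter names `p`, `mu`, `sigma`, `lam`:

```python
import math

# Sketch of the normal-exponential score-distribution model: relevant
# scores follow a normal density, non-relevant scores an exponential
# density (supported on s >= 0), mixed with relevant-class weight p.
# All parameter values are illustrative assumptions, not fitted values.

def score_density(s, p, mu, sigma, lam):
    normal = math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    expon = lam * math.exp(-lam * s) if s >= 0 else 0.0
    return p * normal + (1 - p) * expon

# At a negative score only the normal component contributes:
print(score_density(-1.0, 0.3, 0.0, 1.0, 2.0))
```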
Background: Automated genotype calling in tetraploid species was until recently not possible, which hampered genetic analysis. Modern genotyping assays often produce two signals, ...
Simplification of mixture models has recently emerged as an important issue in the field of statistical learning. The heavy computational demands of using large order models dro...
In this paper, we present a novel technique for modeling the posterior probability estimates obtained from a neural network directly in the HMM framework using the Dirichlet Mixtu...
Balakrishnan Varadarajan, Garimella S. V. S. Sivar...
The finite mixture model is a powerful tool in many statistical learning problems. In this paper, we propose a general, structure-preserving approach to reduce its model complexity, w...
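One generic way to reduce a mixture's complexity (a sketch of the general idea, not this paper's specific structure-preserving algorithm) is to merge two components into the single Gaussian that preserves the pair's total weight, mean, and variance via moment matching:

```python
# Generic sketch: merge two weighted 1-d Gaussian components into one
# Gaussian by moment matching, preserving total weight, mean and
# variance of the pair.  This illustrates mixture-complexity reduction
# in general, not the specific method proposed in the paper above.

def merge(w1, m1, v1, w2, m2, v2):
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # law of total variance: within-component + between-component spread
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

print(merge(0.5, 0.0, 1.0, 0.5, 2.0, 1.0))  # (1.0, 1.0, 2.0)
```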
Within-subject analysis in fMRI essentially addresses two problems, the detection of brain regions eliciting evoked activity and the estimation of the underlying dynamics. In [1, 2...
We propose a more efficient version of the slice sampler for Dirichlet process mixture models described by Walker (2007). This sampler allows the fitting of infinite mixture mod...
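The device that makes slice sampling work for infinite mixtures, as in Walker (2007), is a uniform auxiliary variable per observation: only the finitely many stick-breaking weights exceeding the smallest auxiliary value need to be instantiated. A minimal sketch of that truncation step (the helper name and parameters are assumptions for illustration):

```python
import random

# Sketch of the key device in slice sampling for Dirichlet process
# mixtures: given the smallest auxiliary slice variable u_min, generate
# stick-breaking weights only until the remaining stick mass drops
# below u_min -- no later weight can then exceed u_min, so the
# infinite mixture is handled with finitely many components.

def stick_breaking_upto(alpha, u_min, rng):
    weights, remaining = [], 1.0
    while remaining > u_min:
        v = rng.betavariate(1.0, alpha)  # stick-breaking proportion
        weights.append(remaining * v)
        remaining *= 1.0 - v
    return weights

rng = random.Random(0)
w = stick_breaking_upto(alpha=1.0, u_min=0.01, rng=rng)
print(len(w), sum(w))  # finitely many weights covering mass > 0.99
```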
We present exponential families, mixtures of such distributions, and how to learn them. We then present algorithms to simplify mixture models using Kullback-Leibler di...
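Kullback-Leibler-based simplification of the kind this entry describes rests on quantities like the closed-form KL divergence between two Gaussians, which such algorithms minimize when deciding which components to merge. A small sketch of that formula in the univariate case:

```python
import math

# Sketch: closed-form KL divergence KL(N(m1,v1) || N(m2,v2)) between
# two univariate Gaussians (variances v1, v2 > 0), the kind of
# quantity KL-based mixture-simplification algorithms minimize.

def kl_gauss(m1, v1, m2, v2):
    return 0.5 * (v1 / v2 + (m1 - m2) ** 2 / v2 - 1.0 + math.log(v2 / v1))

print(kl_gauss(0.0, 1.0, 0.0, 1.0))  # 0.0 for identical Gaussians
print(kl_gauss(0.0, 1.0, 1.0, 1.0))  # 0.5: mean shift of 1, unit variance
```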