We present a new approach to estimating mixture models based on an inference principle we have recently proposed: the latent maximum entropy principle (LME). LME is different both from ...
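For context on the maximum likelihood baseline that LME departs from, here is a minimal sketch of standard EM for a two-component one-dimensional Gaussian mixture. This is the textbook ML estimator, not the LME procedure from the paper; the component count, initialization, and iteration count are illustrative choices.

import numpy as np

def em_gmm_1d(x, n_iters=50):
    # Standard maximum-likelihood EM for a 2-component 1-D Gaussian
    # mixture -- the baseline estimator the abstract contrasts LME with.
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means
    var = np.array([x.var(), x.var()], dtype=float)
    pi = np.array([0.5, 0.5])                        # mixing weights
    for _ in range(n_iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum likelihood updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var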
The majority of theoretical work in machine learning is done under the assumption of exchangeability: essentially, it is assumed that the examples are generated from the same prob...
Vladimir Vovk, Ilia Nouretdinov, Alexander Gammerman
We present a fast iterative support vector training algorithm for a large variety of different formulations. It works by incrementally changing a candidate support vector set usin...
S. V. N. Vishwanathan, Alex J. Smola, M. Narasimha Murty
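The abstract gives only the high-level idea, so the following is an assumption-laden illustration of a working-set strategy in that spirit, not the paper's algorithm: refit a small SVM on the current candidate set, then add the worst margin violator until none remain. Labels are assumed to be in {-1, +1}; the kernel, C, and tolerance are arbitrary choices.

import numpy as np
from sklearn.svm import SVC

def working_set_svm(X, y, max_rounds=200):
    # Seed the candidate set with one example per class so the small
    # subproblem is always solvable.
    cand = [int(np.flatnonzero(y == +1)[0]), int(np.flatnonzero(y == -1)[0])]
    model = SVC(kernel="linear", C=1.0)
    for _ in range(max_rounds):
        model.fit(X[cand], y[cand])
        margins = y * model.decision_function(X)   # functional margin y_i * f(x_i)
        worst = int(np.argmin(margins))
        if margins[worst] >= 1.0 - 1e-6 or worst in cand:
            break                                  # no new margin violators
        cand.append(worst)                         # grow the candidate set
    return model, cand

Because each round solves only a small subproblem, the hope is that the candidate set stays close to the true support vector set and the full training set is touched only through margin evaluations.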
Theoretical and experimental analyses of bagging indicate that it is primarily a variance reduction technique. This suggests that bagging should be applied to learning algorithms ...
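A minimal sketch of that prescription, assuming the usual bootstrap-and-vote form of bagging over deliberately low-bias, high-variance base learners (unpruned decision trees here); the estimator count and voting rule are illustrative, not the specific procedure studied in the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagged_predict(X_train, y_train, X_test, n_estimators=25, seed=0):
    # Train each tree on a bootstrap resample; unpruned trees are
    # low-bias / high-variance, the regime where averaging helps most.
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)                 # bootstrap sample
        tree = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes.append(tree.predict(X_test))
    votes = np.stack(votes).astype(int)                  # assumes integer class labels
    # Majority vote across the ensemble for each test point.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)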
This paper addresses the problem of classification in situations where the data distribution is not homogeneous: Data instances might come from different locations or times, and t...
As text corpora become larger, tradeoffs between speed and accuracy become critical: slow but accurate methods may not complete in a practical amount of time. In order to make the...
Lawrence Shih, Jason D. Rennie, Yu-Han Chang, David R. Karger
Feature selection, as a preprocessing step to machine learning, has been effective in reducing dimensionality, removing irrelevant data, increasing learning accuracy, and improvin...
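As a concrete instance of such a preprocessing filter, the sketch below scores each feature by its absolute Pearson correlation with the label and keeps the top k. The scoring rule is an illustrative stand-in; the paper's actual selection criterion may differ.

import numpy as np

def filter_select(X, y, k):
    # Center, then compute |corr(feature_j, y)| for every column at once.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1.0, denom)
    keep = np.argsort(scores)[::-1][:k]      # indices of the k highest-scoring features
    return X[:, keep], keep

Filters of this kind need only one pass over the data, which is what makes them attractive as a preprocessing step before the (typically more expensive) learner is trained.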
Learning in many multi-agent settings is inherently repeated play. This calls into question the naive application of single-play Nash equilibria in multi-agent learning and sugges...
We study the common problem of approximating a target matrix with a matrix of lower rank. We provide a simple and efficient (EM) algorithm for solving weighted low-rank approximat...
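A sketch of what an EM-style scheme for weighted low-rank approximation can look like, assuming entrywise weights scaled into [0, 1]: treat low-weight entries as partially missing, impute them from the current low-rank reconstruction (E-step), then take an unweighted truncated SVD (M-step). The scaling, initialization, and iteration count here are assumptions, not the paper's exact specification.

import numpy as np

def weighted_lra_em(A, W, rank, n_iters=100):
    # A: target matrix; W: nonnegative entrywise weights; rank: target rank.
    W = W / W.max()                        # scale weights into [0, 1]
    X_hat = np.zeros_like(A, dtype=float)  # current low-rank estimate
    for _ in range(n_iters):
        filled = W * A + (1.0 - W) * X_hat            # E-step: impute low-weight entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # M-step: rank-k truncated SVD
    return X_hat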