Training principles for unsupervised learning are often derived from motivations that appear to be independent of supervised learning. In this paper we present a simple unificatio...
We present a unified framework for learning link prediction and edge weight prediction functions in large networks, based on the transformation of a graph's algebraic spectru...
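The spectral approach described above can be sketched in a few lines: scores for all node pairs come from applying a function to the eigenvalues of the graph's adjacency matrix. The exponential transformation used here is only one illustrative choice; the paper learns the transformation from data, which this sketch does not do.

```python
import numpy as np

def spectral_link_scores(A, f):
    """Return the score matrix f(A) = U f(Lambda) U^T for a symmetric adjacency A."""
    lam, U = np.linalg.eigh(A)           # A = U diag(lam) U^T
    return U @ np.diag(f(lam)) @ U.T

# Tiny 4-node path graph 0-1-2-3.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Illustrative spectral transformation: the matrix exponential exp(0.5 * A),
# which down-weights longer paths between node pairs.
S = spectral_link_scores(A, lambda lam: np.exp(0.5 * lam))
```

Closer pairs receive higher scores: the adjacent pair (0, 1) outranks the two-hop pair (0, 2), which in turn outranks the three-hop pair (0, 3).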
We consider a supervised machine learning scenario where labels are provided by a heterogeneous set of teachers, some of which are mediocre, incompetent, or perhaps even malicious...
An anytime algorithm is capable of returning a response to the given task at essentially any time; typically the quality of the response improves as the time increases. Here, we c...
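The anytime property described above is easy to illustrate with a toy example (not taken from the paper): a generator that refines an estimate of a square root by bisection, yielding its current best response after every step so the caller can stop whenever time runs out.

```python
import itertools

def anytime_sqrt(x):
    """Yield an ever-improving estimate of sqrt(x); the caller may stop at any time."""
    lo, hi = 0.0, max(1.0, x)
    while True:
        mid = (lo + hi) / 2.0
        if mid * mid < x:
            lo = mid
        else:
            hi = mid
        yield (lo + hi) / 2.0   # best response available right now

# Interrupt after 30 refinement steps; more time would give a better answer.
estimates = list(itertools.islice(anytime_sqrt(2.0), 30))
```

The error bound halves at every step, so later answers are strictly better than early ones, which is exactly the quality/time trade-off an anytime algorithm offers.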
We propose abc-boost (adaptive base class boost) for multi-class classification and present abc-mart, an implementation of abc-boost, based on the multinomial logit model. The key ...
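The multinomial-logit view underlying abc-mart can be sketched as follows. With K classes and the sum-to-zero constraint on the score functions, one class (the "base class") can be expressed through the others, so only K - 1 functions need to be boosted; which class serves as the base is what abc-boost chooses adaptively. The sketch below fixes the base class for illustration and shows only the probability computation, not the boosting itself.

```python
import numpy as np

def class_probabilities(F_free, base=0):
    """F_free: scores for the K-1 non-base classes; returns all K class probabilities.

    The base class score is -sum(F_free), enforcing sum_k F_k = 0.
    """
    F = np.insert(F_free, base, -np.sum(F_free))
    e = np.exp(F - F.max())              # numerically stable softmax
    return e / e.sum()

# K = 3 classes, base class 0: its score is -(0.5 - 0.2) = -0.3.
p = class_probabilities(np.array([0.5, -0.2]))
```
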
Users of topic modeling methods often have knowledge about the composition of words that should have high or low probability in various topics. We incorporate such domain knowledg...
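One simple way to encode "word w should have high (or low) probability in topic t" is through asymmetric Dirichlet pseudo-counts on the topic's word distribution; the sketch below shows only that baseline idea, with hypothetical parameter names, not the richer structured prior the paper develops.

```python
import numpy as np

def topic_prior(vocab, base=0.1, high=(), low=(), boost=50.0, damp=0.001):
    """Build asymmetric Dirichlet pseudo-counts encoding word-level domain knowledge."""
    eta = np.full(len(vocab), base)
    idx = {w: i for i, w in enumerate(vocab)}
    for w in high:
        eta[idx[w]] = boost              # encourage these words in the topic
    for w in low:
        eta[idx[w]] = damp               # suppress these words in the topic
    return eta

vocab = ["gene", "dna", "stock", "market"]
eta = topic_prior(vocab, high=["gene", "dna"], low=["stock"])
mean = eta / eta.sum()                   # prior mean of the topic's word distribution
```

Even before seeing any data, the prior mean already favors the encouraged words over the suppressed ones, which is the effect such knowledge injection is after.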
Dual supervision refers to the general setting of learning from both labeled examples and labeled features. Labeled features are naturally available in tasks such as text c...
Vikas Sindhwani, Prem Melville, Richard D. Lawrenc...
We describe a new method for learning the conditional probability distribution of a binary-valued variable from labeled training examples. Our proposed Compositional Noisy-Logica...
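A classic building block for such noisy-logical models is the noisy-OR unit, sketched below; this is only the standard single unit, not the authors' full compositional model. Each active cause independently "succeeds" in turning the output on with its own strength.

```python
def noisy_or(causes, strengths, leak=0.0):
    """P(y = 1 | x) under noisy-OR: the output stays off only if every
    active cause (and the leak) independently fails."""
    p_all_fail = 1.0 - leak
    for x, q in zip(causes, strengths):
        if x:
            p_all_fail *= (1.0 - q)
    return 1.0 - p_all_fail

# Two active causes with strengths 0.8 and 0.5: P(y=1) = 1 - 0.2 * 0.5 = 0.9.
p = noisy_or([1, 1, 0], [0.8, 0.5, 0.9])
```
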
We propose an importance weighting framework for actively labeling samples. This technique yields practical yet sound active learning algorithms for general loss functions. Experi...
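The importance-weighting idea above can be sketched as follows: each example is queried with some probability, and queried examples receive weight equal to the inverse of that probability, which keeps weighted loss estimates unbiased. The constant querying probability used here is a placeholder for illustration, not the paper's querying rule.

```python
import random

def iw_sample(stream, query_prob, rng):
    """Query each example's label with probability query_prob(x);
    attach importance weight 1/p to each queried example."""
    labeled = []
    for x in stream:
        p = query_prob(x)
        if rng.random() < p:
            labeled.append((x, 1.0 / p))
    return labeled

rng = random.Random(0)
# With p = 0.5, each kept example carries weight 2, so the weighted count
# of the labeled set estimates the size of the full stream without bias.
data = iw_sample(range(1000), lambda x: 0.5, rng)
```
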