We present a new algorithm for minimizing a convex loss function subject to regularization. Our framework applies to numerous problems in machine learning and statistics; notably,...
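The abstract is cut off before the algorithm itself is described, so the sketch below only illustrates the general problem class it names: a convex loss plus a regularizer, minimized here by proximal gradient descent (ISTA) on an L1-regularized least-squares instance. The problem data, step size, and function names are assumptions for the example, not details from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_lasso(A, b, lam, n_iters=500):
    """Minimize 0.5 * ||Ax - b||^2 + lam * ||x||_1 by proximal gradient (ISTA)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L with L the gradient's Lipschitz constant
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                # gradient of the smooth part of the loss
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(prox_gradient_lasso(A, b, lam=0.1)[:5])   # first few recovered coefficients
```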
We introduce a novel active learning algorithm for classification of network data. In this setting, training instances are connected by a set of links to form a network, the label...
We present multi-task structure learning for Gaussian graphical models. We discuss uniqueness and boundedness of the optimal solution of the maximization problem. A block coordina...
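Since the abstract stops before the paper's own optimization scheme, the sketch below is only a single-task baseline: it fits one sparse Gaussian graphical model per task with scikit-learn's GraphicalLasso, showing the precision matrices a multi-task structure learner would couple across tasks. The penalty value and toy data are assumptions for the illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Two related "tasks": datasets drawn from 5-dimensional Gaussians.
tasks = [rng.standard_normal((200, 5)) for _ in range(2)]

precisions = []
for X in tasks:
    model = GraphicalLasso(alpha=0.1)   # L1 penalty on the precision matrix
    model.fit(X)
    precisions.append(model.precision_)

# A multi-task method would tie the sparsity patterns of these matrices together;
# here they are estimated independently, so no shared structure is enforced.
for k, P in enumerate(precisions):
    off_diag = np.abs(P - np.diag(np.diag(P))) > 1e-4
    print(f"task {k}: nonzero off-diagonal entries = {int(off_diag.sum() // 2)}")
```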
Many learning applications are characterized by high dimensions. Usually, not all of these dimensions are relevant, and some are redundant. There are two main approaches to reduce d...
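The sentence is cut off before the two approaches are named; one common reading, offered purely as an assumption rather than the paper's taxonomy, contrasts keeping a subset of the original dimensions with extracting new low-dimensional features, illustrated here with univariate feature selection versus PCA.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=5, n_redundant=10, random_state=0)

# Assumed approach 1: keep a subset of the original dimensions.
X_selected = SelectKBest(f_classif, k=5).fit_transform(X, y)

# Assumed approach 2: build new dimensions as combinations of the old ones.
X_extracted = PCA(n_components=5).fit_transform(X)

print(X.shape, X_selected.shape, X_extracted.shape)
```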
Approximate MAP inference in graphical models is an important and challenging problem for many domains including computer vision, computational biology and natural language unders...
Deep learning has been successfully applied to perform non-linear embedding. In this paper, we present supervised embedding techniques that use a deep network to collapse classes....
Martin Renqiang Min, Laurens van der Maaten, Zinen...
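The abstract above is truncated before its loss is specified, so the sketch below is only an assumed illustration of supervised deep embedding: a small network maps inputs to a 2-D embedding that is shaped by label supervision through a cross-entropy head, which is a generic stand-in rather than the class-collapsing objective the abstract refers to.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy data: 3 Gaussian classes in 10 dimensions.
X = torch.randn(300, 10) + torch.repeat_interleave(torch.eye(10)[:3] * 4, 100, dim=0)
y = torch.repeat_interleave(torch.arange(3), 100)

# Deep network with a 2-D bottleneck; the bottleneck activations are the embedding.
embed = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Linear(64, 2))
head = nn.Linear(2, 3)                      # classifier on top of the embedding
opt = torch.optim.Adam([*embed.parameters(), *head.parameters()], lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):
    z = embed(X)                            # 2-D supervised embedding
    loss = loss_fn(head(z), y)              # label supervision shapes the embedding
    opt.zero_grad()
    loss.backward()
    opt.step()

print("training loss:", float(loss))
```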
The principle of maximum entropy provides a powerful framework for statistical models of joint, conditional, and marginal distributions. However, there are many important distribu...
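As background for the principle the abstract names, here is a small worked instance: the maximum-entropy distribution over die faces subject to a mean constraint, which takes exponential-family form, so only the Lagrange multiplier matching the constraint needs to be solved for. The target mean is chosen arbitrarily for the illustration.

```python
import numpy as np
from scipy.optimize import brentq

values = np.arange(1, 7)            # faces of a die
target_mean = 4.5                   # assumed moment constraint for the example

def mean_given(lam):
    """Mean of the maximum-entropy distribution p(k) proportional to exp(lam * k)."""
    w = np.exp(lam * values)
    p = w / w.sum()
    return p @ values

# Solve for the multiplier that makes the maxent distribution match the constraint.
lam = brentq(lambda l: mean_given(l) - target_mean, -5.0, 5.0)
p = np.exp(lam * values)
p /= p.sum()
print("maxent distribution:", np.round(p, 3), "mean:", round(p @ values, 3))
```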
This paper introduces two new methods for label ranking based on a probabilistic model of ranking data, called the Plackett-Luce model. The idea of the first method is to use the ...
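The abstract is truncated before either method is described; as background on the model it builds on, the snippet below just evaluates the Plackett-Luce probability of a ranking from per-label scores. The scores are made up for the illustration.

```python
import numpy as np

def plackett_luce_prob(ranking, scores):
    """Probability of `ranking` (best to worst, as indices into `scores`) under the
    Plackett-Luce model: items are chosen stage by stage with probability
    proportional to their score among the items not yet ranked."""
    scores = np.asarray(scores, dtype=float)
    remaining = list(ranking)
    prob = 1.0
    for item in ranking:
        prob *= scores[item] / scores[remaining].sum()
        remaining.remove(item)
    return prob

scores = [4.0, 2.0, 1.0]            # assumed per-label scores for 3 labels
for ranking in [(0, 1, 2), (2, 1, 0)]:
    print(ranking, round(plackett_luce_prob(ranking, scores), 4))
```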
We propose a new dimensionality reduction method, the elastic embedding (EE), that optimises an intuitive, nonlinear objective function of the low-dimensional coordinates of the d...
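The abstract breaks off before the objective is written out, so the sketch below optimizes one attraction-plus-repulsion objective over low-dimensional coordinates as a stand-in; the Gaussian attractive weights, uniform repulsive weights, and trade-off value are assumptions for illustration rather than the paper's definitions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
Y = rng.standard_normal((30, 5))                      # high-dimensional data
D2 = squareform(pdist(Y, "sqeuclidean"))

# Assumed weights: Gaussian affinities attract, uniform weights repel.
W_plus = np.exp(-D2 / np.median(D2))
np.fill_diagonal(W_plus, 0.0)
W_minus = np.ones_like(W_plus)
np.fill_diagonal(W_minus, 0.0)
lam = 1.0                                             # assumed attraction/repulsion trade-off

def objective(x_flat, dim=2):
    X = x_flat.reshape(-1, dim)
    d2 = squareform(pdist(X, "sqeuclidean"))
    attract = (W_plus * d2).sum()                     # pulls similar points together
    repel = (W_minus * np.exp(-d2)).sum()             # pushes all points apart
    return attract + lam * repel

x0 = 1e-3 * rng.standard_normal(30 * 2)
res = minimize(objective, x0, method="L-BFGS-B")      # gradients by finite differences
print("objective at optimum:", round(res.fun, 3))
```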
When equipped with kernel functions, online learning algorithms are susceptible to the "curse of kernelization" that causes unbounded growth in the model size. To addres...
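The abstract breaks off before the proposed remedy, so as a generic illustration of the "curse of kernelization" and of one simple budget-maintenance fix (not the paper's method), here is a kernel perceptron whose support set is capped by randomly discarding a stored example whenever the budget would be exceeded.

```python
import numpy as np

def rbf(x, z, gamma=0.5):
    return np.exp(-gamma * np.sum((x - z) ** 2))

def budgeted_kernel_perceptron(stream, budget=20, gamma=0.5, seed=0):
    """Online kernel perceptron with at most `budget` support vectors: when a new
    mistake would exceed the budget, a random stored example is discarded
    (one simple budget-maintenance rule; others merge or project instead)."""
    rng = np.random.default_rng(seed)
    sv_x, sv_a = [], []                       # support vectors and their coefficients
    mistakes = 0
    for x, y in stream:
        score = sum(a * rbf(s, x, gamma) for s, a in zip(sv_x, sv_a))
        if y * score <= 0:                    # mistake: add x as a support vector
            mistakes += 1
            if len(sv_x) >= budget:           # enforce the budget by random removal
                drop = rng.integers(len(sv_x))
                sv_x.pop(drop)
                sv_a.pop(drop)
            sv_x.append(x)
            sv_a.append(y)
    return mistakes, len(sv_x)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)    # a non-linearly separable toy stream
print(budgeted_kernel_perceptron(zip(X, y)))  # (mistake count, final model size)
```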