A common approach in machine learning is to use a large amount of labeled data to train a model. Usually, this model can then only be used to classify data in the same feature spac...
We show that it is possible to use data compression on independently obtained hypotheses from various tasks to algorithmically provide guarantees that the tasks are sufficiently r...
We extend our recent work on relevant subtask learning, a new variant of multitask learning where the goal is to learn a good classifier for a task-of-interest with too few train...
Labeling image collections is a tedious task, especially when multiple labels have to be chosen for each image. In this paper we introduce a new framework that extends state of ...
Nicolas Loeff, Ali Farhadi, Ian Endres and David A...
Related objects may look similar at low resolutions; differences begin to emerge naturally as the resolution is increased. By learning across multiple resolutions of input, knowle...