Sciweavers
45 search results (page 7 of 9) related to "Boosting in the Limit: Maximizing the Margin of Learned Ense..."
MCS 2002 (Springer)
Distributed Pasting of Small Votes
Bagging and boosting are two popular ensemble methods that achieve better accuracy than a single classifier. These techniques have limitations on massive datasets, as the size of t...
Nitesh V. Chawla, Lawrence O. Hall, Kevin W. Bowye...
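The entry above mentions bagging and boosting only in passing, so as a point of reference here is a minimal bagging sketch. It is not the paper's distributed pasting-of-small-votes method: the base learner (a one-feature decision stump) and the function names (`fit_stump`, `bagging`, `predict_bagging`) are invented for illustration, and the whole thing assumes a toy two-class problem with +/-1 labels.

```python
# Minimal bagging sketch (not the paper's distributed pasting-of-small-votes
# method): train several copies of a weak base learner on bootstrap resamples
# and predict by majority vote. The one-feature decision stump is a stand-in
# base learner invented for this illustration.
import numpy as np

def fit_stump(X, y):
    # Exhaustively pick the (feature, threshold, sign) with lowest train error.
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                err = np.mean(np.where(X[:, j] > t, sign, -sign) != y)
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best[1:]

def predict_stump(stump, X):
    j, t, sign = stump
    return np.where(X[:, j] > t, sign, -sign)

def bagging(X, y, n_estimators=25, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)   # bootstrap resample with replacement
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def predict_bagging(stumps, X):
    # Majority vote over the +/-1 predictions of all stumps.
    return np.sign(sum(predict_stump(s, X) for s in stumps))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    stumps = bagging(X, y)
    print("train accuracy:", float(np.mean(predict_bagging(stumps, X) == y)))
```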
TNN 2010
Sparse approximation through boosting for learning large scale kernel machines
Recently, sparse approximation has become a preferred method for learning large scale kernel machines. This technique attempts to represent the solution with only a subse...
Ping Sun, Xin Yao
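The Sun and Yao entry names the general idea of representing a kernel machine with only a small subset of basis functions chosen greedily. The sketch below is not their TNN 2010 algorithm; it is a generic forward-selection (kernel matching-pursuit style) routine for RBF-kernel least-squares regression, and every name in it (`rbf_kernel`, `greedy_sparse_fit`, the `gamma` and `n_basis` parameters) is an assumption made for illustration.

```python
# Illustrative sketch only (not the TNN 2010 algorithm): grow a sparse kernel
# expansion greedily by adding, at each round, the basis vector whose kernel
# column is most correlated with the current residual, then refitting all
# selected coefficients by least squares. Assumes plain RBF-kernel regression.
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_sparse_fit(X, y, n_basis=10, gamma=1.0):
    K = rbf_kernel(X, X, gamma)          # full kernel matrix (training only)
    selected, residual = [], y.astype(float).copy()
    for _ in range(n_basis):
        scores = np.abs(K.T @ residual)  # correlation of each column with residual
        scores[selected] = -np.inf       # never re-select a basis vector
        selected.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(K[:, selected], y, rcond=None)
        residual = y - K[:, selected] @ coef
    return np.array(selected), coef

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    idx, coef = greedy_sparse_fit(X, y, n_basis=15, gamma=0.5)
    pred = rbf_kernel(X, X[idx], gamma=0.5) @ coef
    print("basis size:", len(idx), "train MSE:", float(np.mean((pred - y) ** 2)))
```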
CSL 2010 (Springer)
Active learning and semi-supervised learning for speech recognition: A unified framework using the global entropy reduction maximization
We propose a unified global entropy reduction maximization (GERM) framework for active learning and semi-supervised learning for speech recognition. Active learning aims to select...
Dong Yu, Balakrishnan Varadarajan, Li Deng, Alex A...
ICASSP 2009 (IEEE)
Maximizing global entropy reduction for active learning in speech recognition
We propose a new active learning algorithm to address the problem of selecting a limited subset of utterances for transcribing from a large amount of unlabeled utterances so that ...
Balakrishnan Varadarajan, Dong Yu, Li Deng, Alex A...
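The two speech-recognition entries above both revolve around picking which unlabeled utterances are worth transcribing. Their GERM criterion estimates the entropy reduction a candidate induces over the whole unlabeled pool; the sketch below shows only the simpler per-utterance uncertainty-sampling baseline that this generalizes, with a toy softmax model standing in for a recognizer. Function names (`predictive_entropy`, `select_for_labeling`) and the `budget` parameter are illustrative assumptions, not the papers' API.

```python
# Illustrative sketch only: plain entropy-based (uncertainty-sampling) selection
# of a limited subset of unlabeled items for manual labeling. The GERM criterion
# in the entries above goes further and estimates the entropy reduction a chosen
# utterance induces on the whole unlabeled pool; this keeps just the per-item
# entropy score to show the shape of the selection step.
import numpy as np

def predictive_entropy(probs):
    # probs: (n_items, n_classes) posterior probabilities from some model.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_for_labeling(probs, budget):
    # Indices of the `budget` most uncertain (highest-entropy) items.
    return np.argsort(-predictive_entropy(probs))[:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    logits = rng.normal(size=(1000, 5))   # toy model outputs for 1000 utterances
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    picked = select_for_labeling(probs, budget=20)
    print("selected utterance indices:", picked[:5], "...")
```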
ECCV 2010 (Springer)
Robust Multi-View Boosting with Priors
Many learning tasks for computer vision problems can be described by multiple views or multiple features. These views can be exploited in order to learn from unlabeled data, a.k.a....