PKDD 2009, Springer

Boosting Active Learning to Optimality: A Tractable Monte-Carlo, Billiard-Based Algorithm

Abstract. This paper focuses on Active Learning with a limited number of queries; in application domains such as Numerical Engineering, the size of the training set might be limited to a few dozen or a few hundred examples due to computational constraints. Active Learning under bounded resources is formalized as a finite-horizon Reinforcement Learning problem, where the sampling strategy aims at minimizing the expectation of the generalization error. A tractable approximation of the optimal (intractable) policy is presented: the Bandit-based Active Learner (BAAL) algorithm. Viewing Active Learning as a single-player game, BAAL combines UCT, the tree-structured multi-armed bandit algorithm proposed by Kocsis and Szepesvári (2006), with billiard algorithms. A proof of principle of the approach demonstrates its good empirical convergence toward an optimal policy and its ability to incorporate prior AL criteria. Its hybridization with the Query-by-Committee approach is found to improve on both...
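At the core of UCT, which the abstract cites as one of BAAL's two building blocks, is the UCB1 selection rule: choose the arm maximizing the empirical mean reward plus an exploration bonus. The sketch below is purely illustrative of that rule on a toy two-armed bandit; it is not BAAL's implementation, and the arm probabilities and exploration constant are assumptions for the example.

```python
import math
import random

def ucb1_select(counts, values, c=math.sqrt(2)):
    """UCB1: pick the arm maximizing mean reward + exploration bonus."""
    # Play every arm once before applying the formula.
    for i, n in enumerate(counts):
        if n == 0:
            return i
    total = sum(counts)
    return max(range(len(counts)),
               key=lambda i: values[i] / counts[i]
                             + c * math.sqrt(math.log(total) / counts[i]))

# Toy usage: two arms with Bernoulli reward probabilities 0.3 and 0.7
# (hypothetical values chosen only for this demonstration).
random.seed(0)
probs = [0.3, 0.7]
counts = [0, 0]   # number of pulls per arm
values = [0.0, 0.0]  # cumulative reward per arm
for _ in range(2000):
    arm = ucb1_select(counts, values)
    reward = 1.0 if random.random() < probs[arm] else 0.0
    counts[arm] += 1
    values[arm] += reward
```

After enough pulls, the better arm dominates the pull counts while the exploration bonus guarantees every arm is still sampled occasionally; UCT applies this same rule recursively at each node of a search tree.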
Philippe Rolet, Michèle Sebag, Olivier Teytaud
Added 26 Jul 2010
Updated 26 Jul 2010
Type: Conference
Year: 2009
Where: PKDD
Authors: Philippe Rolet, Michèle Sebag, Olivier Teytaud