ECML 2005, Springer

Active Learning for Probability Estimation Using Jensen-Shannon Divergence

Active selection of good training examples is an important approach to reducing data-collection costs in machine learning; however, most existing methods focus on maximizing classification accuracy. In many applications, such as those with unequal misclassification costs, producing good class probability estimates (CPEs) is more important than optimizing classification accuracy. We introduce novel approaches to active learning based on the algorithms BootstrapLV and ACTIVEDECORATE, by using Jensen-Shannon divergence (a similarity measure for probability distributions) to improve sample selection for optimizing CPEs. Comprehensive experimental results demonstrate the benefits of our approaches.
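The abstract's key ingredient is Jensen-Shannon divergence, a symmetric, bounded similarity measure for probability distributions. As an illustrative sketch (not the paper's implementation; function names and the two-distribution form are assumptions, base-2 logs so the value lies in [0, 1]):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) in bits (base-2 logs).

    Terms with p_i = 0 contribute nothing, by the convention 0 * log 0 = 0.
    """
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), where m is the
    equal-weight mixture of p and q. Symmetric and always finite,
    unlike raw KL divergence.
    """
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Identical class probability estimates agree perfectly:
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
# Maximally disagreeing estimates reach the upper bound of 1 bit:
print(js_divergence([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

In an active-learning setting of the kind the abstract describes, a high divergence between the class probability estimates of different committee members for an unlabeled example signals uncertainty, making that example a good candidate for labeling.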
Added: 27 Jun 2010
Updated: 27 Jun 2010
Type: Conference
Year: 2005
Where: ECML
Authors: Prem Melville, Stewart M. Yang, Maytal Saar-Tsechansky, Raymond J. Mooney