
On cross-validation and stacking: building seemingly predictive models on random data

On a number of occasions when using cross-validation (CV) for classification/probability estimation, we have observed surprisingly low AUCs on real data with very few positive examples. AUC, the area under the ROC curve, measures ranking ability: it is the probability that a positive example receives a higher model score than a negative example. Intuition suggests that no reasonable methodology should ever produce a model with an AUC significantly below 0.5. The focus of this paper is not on the estimator properties of CV (bias/variance/significance), but rather on the properties of the 'holdout' predictions from which the CV performance of a model is calculated. We show that CV creates predictions with an 'inverse' ranking (AUC well below 0.25) using features that were initially entirely unpredictive and models that can only perform monotonic transformations. In the extreme, combining CV with bagging (repeated averaging of o...
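The inverted ranking described in the abstract can be illustrated with a minimal sketch (not the authors' experimental setup): with very few positives and a degenerate model that can only predict the base rate of its training fold, every held-out positive receives a lower cross-validated score than every held-out negative, so the holdout AUC collapses well below 0.5.

```python
# Minimal sketch (an illustrative assumption, not the paper's exact setup):
# out-of-fold "holdout" predictions under leave-one-out CV with a model that
# only outputs the mean label of its training fold (no predictive features).
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n, n_pos = 100, 5                              # very few positive examples
y = np.zeros(n)
y[rng.choice(n, size=n_pos, replace=False)] = 1

oof = np.empty(n)                              # out-of-fold (holdout) scores
for train_idx, test_idx in LeaveOneOut().split(y):
    # Holding out a positive lowers the training-fold mean, so every positive
    # ends up with a lower holdout score than every negative.
    oof[test_idx] = y[train_idx].mean()

print("holdout AUC:", roc_auc_score(y, oof))   # 0.0 -- an 'inverse' ranking
```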
Claudia Perlich, Grzegorz Swirszcz
Type: Journal
Year: 2010
Where: SIGKDD
Authors: Claudia Perlich, Grzegorz Swirszcz