
PROMISE 2010

On the value of learning from defect dense components for software defect prediction

BACKGROUND: Defect predictors learned from static code measures can isolate code modules with a higher than usual probability of defects.
AIMS: To improve those learners by focusing on the defect-rich portions of the training sets.
METHOD: Defect data from the CM1, KC1, MC1, PC1, and PC3 projects was separated into components. A subset of the projects (selected at random) was set aside for testing. Training sets were generated for a Naive Bayes classifier in two ways. In the dense treatment, only components with a higher-than-median number of defective modules were used for training. In the standard treatment, modules from any component were used for training. Both treatments were run against the test set and evaluated using recall, probability of false alarm, and precision. In addition, under-sampling and over-sampling were performed on the defect data. Each method was repeated in a 10-by-10 cross-validation experiment.
RESULTS: Prediction models learned from defect dense components out-performed stan...
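The sketch below illustrates the core of the dense treatment described in the abstract: keep only the components whose count of defective modules exceeds the median, train a Naive Bayes model on those modules, and score it on a held-out set with recall, probability of false alarm (pf), and precision. It is a minimal illustration only, assuming synthetic data, made-up column names (component, loc, complexity, defective), scikit-learn's GaussianNB, and a single random train/test split rather than the paper's 10-by-10 cross-validation or its exact feature set.

# Minimal sketch of the "dense" treatment (assumptions noted above).
import numpy as np
import pandas as pd
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Hypothetical module-level data: static code measures plus a defect label,
# grouped into components.
n = 600
data = pd.DataFrame({
    "component": rng.integers(0, 12, n),
    "loc": rng.gamma(2.0, 50.0, n),
    "complexity": rng.gamma(1.5, 3.0, n),
    "defective": rng.integers(0, 2, n),
})

# Hold out a random subset of modules for testing.
test_mask = rng.random(n) < 0.3
train, test = data[~test_mask], data[test_mask]

# Dense treatment: keep only components whose defective-module count
# exceeds the median count across components.
counts = train.groupby("component")["defective"].sum()
dense_components = counts[counts > counts.median()].index
dense_train = train[train["component"].isin(dense_components)]

features = ["loc", "complexity"]
model = GaussianNB().fit(dense_train[features], dense_train["defective"])
pred = model.predict(test[features])

# Recall, probability of false alarm, and precision on the test set.
tp = ((pred == 1) & (test["defective"] == 1)).sum()
fp = ((pred == 1) & (test["defective"] == 0)).sum()
fn = ((pred == 0) & (test["defective"] == 1)).sum()
tn = ((pred == 0) & (test["defective"] == 0)).sum()

recall = tp / (tp + fn) if (tp + fn) else 0.0
pf = fp / (fp + tn) if (fp + tn) else 0.0
precision = tp / (tp + fp) if (tp + fp) else 0.0
print(f"recall={recall:.2f} pf={pf:.2f} precision={precision:.2f}")

The standard treatment of the paper would simply train on all of train instead of dense_train; comparing the two scores over repeated random splits approximates the experiment the abstract describes.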
Hongyu Zhang, Adam Nelson, Tim Menzies
Added: 20 May 2011
Updated: 20 May 2011
Type: Conference
Year: 2010
Where: PROMISE
Authors: Hongyu Zhang, Adam Nelson, Tim Menzies