It is difficult to learn good classifiers when training data is missing attribute values. Conventional techniques for handling such omissions, such as mean imputation, generally do not significantly improve the performance of the resulting classifier. We propose imputation-helped classifiers, which use accurate imputation techniques, such as Bayesian multiple imputation (BMI), predictive mean matching (PMM), and expectation maximization (EM), as preprocessors for conventional machine learning algorithms. Our empirical results show that EM-helped and BMI-helped classifiers work effectively when the data is “missing completely at random” (MCAR), generally improving predictive performance over most of the original machine-learned classifiers we investigated.
Xiaoyuan Su, Taghi M. Khoshgoftaar, Russell Greiner
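
As a rough sketch of the imputation-helped pipeline described above (not the authors' implementation), the following Python example uses scikit-learn's IterativeImputer with sample_posterior=True as a MICE-style stand-in for the BMI/PMM/EM imputers named in the abstract, chained in front of a conventional decision-tree learner. The dataset, missingness rate, and learner choice are illustrative assumptions.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.pipeline import Pipeline
    from sklearn.tree import DecisionTreeClassifier

    # Toy data with values knocked out completely at random (MCAR).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    mask = rng.random(X.shape) < 0.2          # ~20% MCAR missingness
    X_missing = X.copy()
    X_missing[mask] = np.nan

    # Imputation-helped classifier: an accurate imputer runs as a
    # preprocessor before a conventional machine learning algorithm.
    clf = Pipeline([
        ("impute", IterativeImputer(sample_posterior=True, random_state=0)),
        ("learn", DecisionTreeClassifier(random_state=0)),
    ])
    clf.fit(X_missing, y)
    print("training accuracy:", clf.score(X_missing, y))

Wrapping the imputer and learner in a single pipeline ensures the imputation model is fit only on training folds during evaluation, matching the preprocessing role the abstract assigns to the imputer.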