Abstract—We point out a problem inherent in the optimization scheme of many popular feature selection methods. It follows from the implicit assumption that a higher feature selection criterion value always indicates a more preferable feature subset, even if the value difference is marginal. This assumption ignores reliability issues in individual feature preferences, as well as overfitting and feature acquisition cost. We propose an algorithmic extension, applicable to many standard feature selection methods, that allows better control over feature subset preference. We show experimentally that the proposed mechanism can reduce the size of selected feature subsets as well as improve classifier generalization.

Keywords-feature selection, machine learning, overfitting, classification, feature weights, weighted features, feature acquisition cost