Although AdaBoost has achieved great success, it still suffers from the following problems: (1) the training process can become unmanageable when the number of features is extremely large; (2) the same weak classifier may be selected multiple times from the weak classifier pool, which provides no additional information for updating the model; (3) for multi-class classification problems, the numbers of positive and negative samples are imbalanced. In this paper, we propose a two-stage AdaBoost learning framework to select and fuse discriminative features effectively. Moreover, an improved AdaBoost algorithm is developed to select weak classifiers. Instead of boosting in the original feature space, whose dimensionality is usually very high, multiple feature subspaces of lower dimensionality are generated. In the first stage, boosting is carried out in each subspace. The trained classifiers are then further combined with a simple fusion method...
Hongbo Deng, Jianke Zhu, Michael R. Lyu, Irwin King
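The two-stage idea described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' exact algorithm: the subspace generation here is plain random feature sampling, the weak learners are axis-aligned decision stumps, and the second-stage fusion is an unweighted vote; all function names (`train_adaboost`, `two_stage`, etc.) are assumptions for this sketch.

```python
# Hypothetical sketch of two-stage AdaBoost over feature subspaces.
# Stage 1: run AdaBoost with decision stumps inside each lower-dimensional
# random feature subspace. Stage 2: fuse per-subspace classifiers by a
# simple unweighted vote. Labels are assumed to be in {-1, +1}.
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """AdaBoost with axis-aligned decision stumps; y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)               # sample weights
    ensemble = []                          # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # Exhaustively search stumps over features, thresholds, polarities.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        err = max(best_err, 1e-12)         # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        j, thr, pol = best
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)     # re-weight samples
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted-vote prediction of a single AdaBoost ensemble."""
    score = np.zeros(X.shape[0])
    for j, thr, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)

def two_stage(X, y, n_subspaces=4, n_rounds=10, seed=0):
    """Stage 1: boost within random feature subspaces; stage 2: vote."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    models = []
    for _ in range(n_subspaces):
        idx = rng.choice(d, size=max(1, d // n_subspaces), replace=False)
        models.append((idx, train_adaboost(X[:, idx], y, n_rounds)))

    def fused_predict(Xq):
        votes = sum(predict(m, Xq[:, idx]) for idx, m in models)
        return np.where(votes >= 0, 1, -1)
    return fused_predict
```

Splitting the search across subspaces keeps each stage-1 boosting run over a small candidate pool, which is the abstract's remedy for the unmanageable cost of boosting directly in a very high-dimensional feature space.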