Bagging is an ensemble method that constructs models from random resamples of a dataset. In classification, the random resampling procedure of bagging induces a classification margin over the dataset. Moreover, when bagging is performed in different feature subspaces, the resulting classification margins are likely to be diverse. We exploit this diversity of classification margins across feature subspaces to improve the performance of bagging. We first analyze the average error rate of bagging and cast our task as an optimization problem that determines a weight for each feature subspace; the weights are then assigned to the subspaces via a randomized technique during classifier construction. Experimental results demonstrate that our method further improves the classification accuracy of bagging and outperforms several other ensemble methods, including AdaBoost, random forests, and the random subspace method.
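To make the overall scheme concrete, the following is a minimal sketch of weighted feature-subspace bagging under stated assumptions: the subspaces and their weights are fixed in advance (the paper derives the weights by optimizing the average error rate, which is not reproduced here), and the function names, example subspaces, and weight values are purely illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def weighted_subspace_bagging(X, y, subspaces, weights, n_estimators=50):
    """Train a bagged ensemble; each base tree is fit on a bootstrap
    resample restricted to a feature subspace drawn with probability
    proportional to the subspace's weight (the randomized assignment)."""
    probs = np.asarray(weights, dtype=float)
    probs /= probs.sum()
    n = len(y)
    ensemble = []
    for _ in range(n_estimators):
        k = rng.choice(len(subspaces), p=probs)   # weight-driven subspace pick
        idx = rng.integers(0, n, size=n)          # bootstrap resample of rows
        tree = DecisionTreeClassifier(random_state=0)
        tree.fit(X[np.ix_(idx, subspaces[k])], y[idx])
        ensemble.append((tree, subspaces[k]))
    return ensemble

def predict(ensemble, X):
    # Majority vote over the base classifiers (binary 0/1 labels assumed).
    votes = np.array([tree.predict(X[:, cols]) for tree, cols in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hypothetical subspaces and weights, used only to exercise the sketch;
# in the paper the weights come from the error-rate optimization.
d = X.shape[1]
subspaces = [rng.choice(d, size=d // 2, replace=False) for _ in range(5)]
weights = [1.0, 2.0, 1.5, 1.0, 0.5]

ens = weighted_subspace_bagging(X_tr, y_tr, subspaces, weights)
print("test accuracy:", (predict(ens, X_te) == y_te).mean())
```

Sampling subspaces in proportion to their weights, rather than using all subspaces equally, is one simple way to realize the randomized weight assignment described above while keeping each base classifier a standard bagged model.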