A two-class imbalanced data problem (IDP) emerges when the data from the majority class are compactly clustered while the data from the minority class are scattered. Although a discriminative binary Support Vector Machine (SVM) can be trained by manually balancing the data, its performance is usually poor because the minority class is inadequately represented. A recognition-based one-class SVM, in contrast, can be trained using data from the well-represented class only, but it is not highly discriminative. Exploiting the complementary natures of the two types of SVMs in an ensemble combines the strengths of both in addressing the IDP. Experimental results on both artificial and real benchmark data sets support the feasibility of our proposed method.
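To make the idea concrete, the following is a minimal sketch of such an ensemble in Python with scikit-learn. The choice of SVC and OneClassSVM, the random undersampling used to balance the binary SVM's training data, and the simple averaging of decision scores are all illustrative assumptions; the paper's actual combination scheme is not specified in this abstract.

```python
# Sketch: pair a balanced binary SVM with a one-class SVM trained on the
# majority class, and combine their decision values at prediction time.
import numpy as np
from sklearn.svm import SVC, OneClassSVM
from sklearn.utils import resample

def fit_ensemble(X, y, minority_label=1):
    # Split the data by class (minority vs. majority).
    X_min = X[y == minority_label]
    X_maj = X[y != minority_label]

    # Discriminative binary SVM trained on manually balanced data:
    # undersample the majority class to the size of the minority class.
    X_maj_ds = resample(X_maj, replace=False, n_samples=len(X_min), random_state=0)
    X_bal = np.vstack([X_min, X_maj_ds])
    y_bal = np.hstack([np.ones(len(X_min)), -np.ones(len(X_maj_ds))])
    binary_svm = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)

    # Recognition-based one-class SVM trained only on the
    # well-represented (majority) class.
    one_class_svm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_maj)
    return binary_svm, one_class_svm

def predict_ensemble(binary_svm, one_class_svm, X):
    # Combine the two decision values; a positive combined score is
    # taken here to indicate the minority class (an assumed rule).
    s_bin = binary_svm.decision_function(X)       # > 0: leans minority
    s_one = -one_class_svm.decision_function(X)   # > 0: outlier of majority class
    return np.where(s_bin + s_one > 0, minority_label_sign := 1, -1)
```

In practice the two decision scores live on different scales, so a real implementation would likely calibrate or weight them before combining; the unweighted sum above is only meant to show where the two models' complementary outputs meet.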