Several theoretical methods have been developed in recent years to evaluate the generalization ability of a classifier: they provide valuable insight into the learning process, but are often less effective at producing tight generalization estimates in practice. In this work we focus on applying the Maximal Discrepancy method to the Support Vector Machine in order to compute an upper bound on its generalization bias.
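As a rough illustration of the idea behind a Maximal Discrepancy penalty, the sketch below splits the training set into two halves, flips the labels of the second half, and retrains a classifier on the modified data: minimizing the error on the flipped set approximately maximizes the discrepancy between the empirical errors on the two original halves, which is the data-dependent term entering this family of bounds. This is only a minimal sketch, assuming scikit-learn's SVC and labels in {-1, +1}; the exact procedure, loss handling, constants, and confidence term used in the paper are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC


def maximal_discrepancy_estimate(X, y, **svm_params):
    """Sketch of a Maximal Discrepancy penalty estimate.

    Splits the sample in two halves, flips the labels of the second half,
    and trains an SVM on the modified set: minimizing the (surrogate) error
    on the flipped data approximately maximizes the discrepancy between the
    empirical errors on the two original halves.
    """
    n = (len(y) // 2) * 2              # keep an even number of points
    X, y = X[:n], y[:n]
    X1, y1 = X[: n // 2], y[: n // 2]  # first half, original labels
    X2, y2 = X[n // 2:], y[n // 2:]    # second half, labels to be flipped

    # Flip the labels of the second half (labels assumed to be in {-1, +1}).
    y_flipped = np.concatenate([y1, -y2])

    # Note: the soft-margin SVM minimizes a hinge-loss surrogate, so this
    # only approximates the maximization over the hypothesis class.
    clf = SVC(**svm_params).fit(X, y_flipped)

    err1 = np.mean(clf.predict(X1) != y1)  # empirical error, first half
    err2 = np.mean(clf.predict(X2) != y2)  # empirical error, second half
    return err2 - err1                     # discrepancy between the halves
```

In the standard Maximal Discrepancy analysis, a penalty of this kind is added to the empirical error, together with a confidence term, to obtain a high-probability upper bound on the expected error.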