In this study I use statistical Natural Language Processing and adapted Controlled Language methods to preprocess individual documents before they are used as source documents for a system that automatically generates Multiple Choice Question (MCQ) test items. The literature observes that evaluating Natural Language Generation systems is nontrivial, so the success of the featured methods is assessed using a process suited to the featured domain: generated MCQ test items are combined with items created using traditional methods, and a selection routine is then carried out by a domain expert. The results provide some evidence to support incorporating certain of the featured methods into future versions of the software.