One of the biggest challenges in emotional speech resynthesis is selecting the modification parameters that will make listeners perceive a targeted emotion. The most reliable selection method is to use human raters; however, for large evaluation sets this process can be very costly. In this paper, we describe a recognition-for-synthesis (RFS) system that automatically selects a set of candidate parameter values for resynthesizing emotional speech. The system, developed with supervised training, consists of synthesis (TD-PSOLA), recognition (neural network), and parameter selection modules. The experimental results show that the parameter sets selected by the RFS system can be used to resynthesize input neutral speech as angry speech, demonstrating that the RFS system can assist in the human evaluation of emotional speech.
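The selection loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `resynthesize` and `recognize_anger` are hypothetical stand-ins for the TD-PSOLA synthesis module and the neural-network recognizer, and the toy scoring heuristic, parameter names, and threshold are assumptions introduced purely for illustration.

```python
def resynthesize(speech, params):
    # Stand-in for the TD-PSOLA synthesis module: scales the neutral
    # utterance's pitch, duration, and energy by candidate factors.
    return {k: speech.get(k, 1.0) * v for k, v in params.items()}

def recognize_anger(utterance):
    # Stand-in for the neural-network recognition module: returns a
    # pseudo-probability that the utterance is perceived as angry
    # (toy heuristic: raised pitch and energy sound angrier).
    score = 0.5 * (utterance["pitch"] - 1.0) + 0.5 * (utterance["energy"] - 1.0)
    return max(0.0, min(1.0, 0.5 + score))

def select_parameters(neutral, candidates, threshold=0.7):
    # Parameter selection module: keep every candidate parameter set
    # whose resynthesis the recognizer scores as angry with enough
    # confidence; these sets then go on to human evaluation.
    return [p for p in candidates
            if recognize_anger(resynthesize(neutral, p)) >= threshold]

neutral = {"pitch": 1.0, "duration": 1.0, "energy": 1.0}
candidates = [
    {"pitch": 1.3, "duration": 0.9, "energy": 1.2},  # raised pitch/energy
    {"pitch": 1.0, "duration": 1.0, "energy": 1.0},  # unmodified neutral
]
print(select_parameters(neutral, candidates))
```

The key design point is that the recognizer filters a large candidate grid automatically, so human raters only need to judge the small surviving subset.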