The increasing number of spoken dialog systems calls for efficient approaches to their development and testing. Our goal is to minimize the amount of hand-crafted resources in order to maximize the portability of the evaluation environment across spoken dialog systems and domains. In this paper we discuss a user simulation technique that allows us to learn general user strategies from a new corpus. We present this corpus, the VOICE Awards human-machine dialog corpus, and show how it is used to semi-automatically extract the resources and knowledge bases required by spoken dialog systems, such as the ASR grammar, the dialog act classifier, and the templates for generation.