All questions are implicitly associated with an expected answer type. Unlike previous approaches, which require a predefined set of question types, we present a method for dynamically constructing a probability-based answer type model for each individual question. Our model evaluates the appropriateness of a potential answer by the probability that it fits the question's contexts. We evaluate against manual and semi-automatic methods that rely on a fixed set of answer labels. Results show our approach to be superior for questions classified as having a miscellaneous answer type.
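The core idea of scoring a candidate answer by how well it fits the question's contexts can be sketched as follows. This is a minimal illustration, not the paper's actual estimation procedure: the corpus, the context strings, and the add-one smoothing are all hypothetical stand-ins for the statistics the model would learn from real question/answer text.

```python
from collections import Counter

# Hypothetical (context, word) observations; in practice these would be
# harvested from a large parsed corpus, not hand-listed.
OBSERVATIONS = [
    ("won the election", "Lincoln"),
    ("won the election", "Kennedy"),
    ("won the election", "Lincoln"),
    ("was built in", "1889"),
    ("was built in", "Paris"),
]

pair_counts = Counter(OBSERVATIONS)                     # count of (context, word)
context_totals = Counter(ctx for ctx, _ in OBSERVATIONS)  # count of context alone
VOCAB = len({w for _, w in OBSERVATIONS})               # vocabulary size for smoothing

def fit_probability(candidate, contexts):
    """Estimate the probability that `candidate` fits the given contexts,
    using add-one smoothed relative frequencies (an illustrative sketch)."""
    p = 1.0
    for ctx in contexts:
        c = pair_counts[(ctx, candidate)]
        n = context_totals[ctx]
        p *= (c + 1) / (n + VOCAB)
    return p

# A frequently observed filler scores higher than an unseen one:
# fit_probability("Lincoln", ["won the election"]) >
# fit_probability("Paris", ["won the election"])
```

The key property illustrated is that no fixed answer-type label is consulted: a candidate is ranked purely by how probable it is as a filler of the question's own contexts.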