This paper describes our approach to answer validation, which is centered on a Recognizing Textual Entailment (RTE) core engine. We first combine the question and the answer into a Hypothesis (H) and treat the supporting document as the Text (T); we then use our RTE system to check whether the entailment relation holds between them. Our system was evaluated on the Answer Validation Exercise (AVE) task and achieved F-measures of 0.46 and 0.55 in two submission runs, both of which outperformed all other participants' results for English.