Question Answering (QA) is a task that deserves more collaboration between the Natural Language Processing (NLP) and Knowledge Representation (KR) communities, not only to introduce reasoning when looking for answers or to make use of answer type taxonomies and encyclopedic knowledge, but also, as discussed here, for Answer Validation (AV), that is to say, deciding whether the responses of a QA system are correct or not. This was one of the motivations for the first Answer Validation Exercise at CLEF 2006 (AVE 2006). The starting point for AVE 2006 was the reformulation of Answer Validation as a Recognizing Textual Entailment (RTE) problem, under the assumption that a hypothesis can be automatically generated by instantiating a hypothesis pattern with a QA system answer. The test collections that we developed in seven different languages at AVE 2006 are specifically oriented to the development and evaluation of Answer Validation systems. We show in this article the methodology followed ...
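To illustrate the pattern-instantiation idea behind this reformulation, the sketch below shows how an AV decision could be reduced to an RTE decision: the question yields a hypothesis pattern, the QA system's answer fills the pattern, and the resulting hypothesis is checked for entailment against the supporting text. The function names, the `<ANSWER>` placeholder, and the toy overlap-based stand-in for an RTE system are illustrative assumptions only, not part of the AVE collections or of any participant system.

```python
# Minimal sketch: Answer Validation recast as Recognizing Textual Entailment.
# All names below are hypothetical; a real system would plug in an actual RTE engine.

def build_hypothesis(hypothesis_pattern: str, answer: str) -> str:
    """Instantiate the hypothesis pattern with the QA system's answer."""
    return hypothesis_pattern.replace("<ANSWER>", answer)


def validate_answer(rte_entails, supporting_text: str,
                    hypothesis_pattern: str, answer: str) -> bool:
    """Accept the answer only if the supporting text entails the
    hypothesis generated from the pattern and the answer."""
    hypothesis = build_hypothesis(hypothesis_pattern, answer)
    return rte_entails(supporting_text, hypothesis)


def dummy_rte(text: str, hypothesis: str) -> bool:
    """Trivial lexical-overlap stand-in for a real RTE system (assumption)."""
    text_tokens = set(text.lower().split())
    hyp_tokens = set(hypothesis.lower().split())
    return len(hyp_tokens & text_tokens) / len(hyp_tokens) > 0.5


if __name__ == "__main__":
    # Question "Who wrote Hamlet?" -> pattern "<ANSWER> wrote Hamlet"
    pattern = "<ANSWER> wrote Hamlet"
    snippet = "Hamlet is a tragedy written by William Shakespeare"
    print(validate_answer(dummy_rte, snippet, pattern, "Shakespeare"))  # True
```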