The paper reviews the TREC Programme up to TREC-6 (1997), considering the test results, the substantive findings for IR that follow, and the lessons TREC offers for IR evaluation. The paper focusses on the ad hoc retrieval task, with discussion of other test tracks as appropriate. The paper summarises the structure of the TREC work and analyses the experimental data in some detail. The analysis of the tests is presented through a series of key questions about indexing models, document and query descriptions, search strategies, etc. The assessment confirms that statistically based methods perform as well as any others, and that the nature and treatment of the user's request is by far the dominant factor in performance. One implication is that TREC should move into a new phase targeting key comparisons and task specifications designed to deliver substantive new information, in particular shifting towards situated IR that addresses the user's context and contribution to searching.