Abstract. After participating in last year's CLEF IP (2009) evaluation benchmark, our scores were rather low. The CLEF IP 2010 PAC task enabled us to refine some experiments and obtain better results, using essentially the same techniques (almost the same BM25-category strategy as last year) together with improved strategy-builder software, despite having less computing hardware at our disposal. The results are now comparable with those of other participants. As last year, no feature-extraction techniques were applied, and queries used only the structural information provided by the XML format of the patent documents. Furthermore, we participated in the new CLS task, which, although our scores were rather low, again demonstrates the flexibility of our approach. The low scores can be explained by the straightforward method applied: searching the patent-document collection using keywords from the topic patent, and returning the IPCR classifications extracted from the retrieved documents as results.
Wouter Alink, Roberto Cornacchia, Arjen P. de Vries