Sciweavers
WWW 2009, ACM
Sitemaps: above and beyond the crawl of duty
Comprehensive coverage of the public web is crucial to web search engines. Search engines use crawlers to retrieve pages and then discover new ones by extracting the pages' o...
Uri Schonfeld, Narayanan Shivakumar
SIGIR 2003, ACM
Using manually-built web directories for automatic evaluation of known-item retrieval
Information retrieval system evaluation is complicated by the need for manually assessed relevance judgments. Large manually built directories on the web open the door to new eval...
Steven M. Beitzel, Eric C. Jensen, Abdur Chowdhury...
WWW 2008, ACM
Exploiting semantic web technologies to model web form interactions
Form mapping is the key problem that needs to be solved in order to get access to the hidden web. Currently available solutions for fully automatic mapping are not ready for comme...
Bernhard Krüpl, Robert Baumgartner, Wolfgang ...
AIL 2006
Extractive summarisation of legal texts
We describe research carried out as part of a text summarisation project for the legal domain for which we use a new XML corpus of judgments of the UK House of Lords. These judgmen...
Ben Hachey, Claire Grover
WISE 2005, Springer
Decomposition-Based Optimization of Reload Strategies in the World Wide Web
From a client's point of view, Web sites, Web pages and the data on those pages are available only for specific periods of time and are deleted afterwards. An important task in order t...
Dirk Kukulenz