Search engines largely rely on robots (i.e., crawlers or spiders) to collect information from the Web. Such crawling activities can be regulated from the server side by deploying ...
Yang Sun, Ziming Zhuang, Isaac G. Councill, C. Lee...
The usability of access control mechanisms in modern distributed systems has been widely criticized but little studied. In this paper, we carefully examine one such widely deploye...
Subject-specific search facilities on health sites are usually built using manual inclusion and exclusion rules. These can be expensive to maintain and often provide incomplete c...
Thanh Tin Tang, David Hawking, Nick Craswell, Kath...
Abstract. Software systems obey the 80/20 rule: aggressively optimizing a vital few execution paths yields large speedups. However, finding the vital few paths can be difficult, e...
In the Linked Open Data cloud one of the largest data sets, comprising 2.5 billion triples, is derived from the Life Science domain. Yet this represents a small fraction of the ...