Search engines largely rely on robots (i.e., crawlers or spiders) to collect information from the Web. Such crawling activities can be regulated from the server side by deploying ...
Yang Sun, Ziming Zhuang, Isaac G. Councill, C. Lee...
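The entry above is cut off before it names the server-side mechanism, but the usual means of regulating crawler access is a robots.txt file under the Robots Exclusion Protocol. Assuming that is what the abstract refers to, here is a minimal sketch that checks an invented robots.txt with Python's standard urllib.robotparser before fetching a page; the rules, user agent, and URLs are made up for the example and are not taken from the paper.

from urllib.robotparser import RobotFileParser

# An invented robots.txt, inlined so the sketch needs no network access.
rules = """
User-agent: *
Disallow: /private/
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Ask whether a crawler identifying itself as "MyCrawler" may fetch each page.
for url in ("https://example.org/index.html",
            "https://example.org/private/report.html"):
    print(url, "->", parser.can_fetch("MyCrawler", url))

A well-behaved crawler performs this check (and honours any crawl delay) before every request, which is what makes server-side regulation of crawling possible at all.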
Background: Significance analysis at single gene level may suffer from the limited number of samples and experimental noise that can severely limit the power of the chosen statist...
Mirko Francesconi, Daniel Remondini, Nicola Nerett...
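The abstract above is truncated, but it points at a statistical power problem: with few samples and noisy measurements, per-gene tests detect little. As a rough, self-contained illustration (not the authors' method), the sketch below runs gene-by-gene two-sample t-tests on simulated expression data; the group sizes, effect size, and noise level are all invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated expression matrix: 100 genes x 3 samples per condition.
n_genes, n_per_group = 100, 3
control = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
treated = rng.normal(0.0, 1.0, size=(n_genes, n_per_group))
treated[:10] += 1.0  # the first 10 genes carry a modest true shift

# Gene-by-gene two-sample t-tests; with only 3 samples per group and this
# much noise, few of the truly shifted genes reach p < 0.05.
t_stat, p_val = stats.ttest_ind(treated, control, axis=1)
print("genes with p < 0.05:", int(np.sum(p_val < 0.05)))
print("true positives detected:", int(np.sum(p_val[:10] < 0.05)))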
XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-com...
James Bailey, Alexandra Poulovassilis, Peter T. Wo...
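Purely as a generic illustration of the kind of dynamic access to an XML repository the snippet gestures at (the abstract is truncated and specifies no API), the sketch below stores a tiny invented e-commerce catalogue as XML and queries and updates it with Python's standard xml.etree.ElementTree; the element and attribute names are made up.

import xml.etree.ElementTree as ET

# A small invented catalogue standing in for an XML repository.
doc = ET.fromstring("""
<catalogue>
  <item sku="A1"><name>Widget</name><price>9.50</price></item>
  <item sku="B2"><name>Gadget</name><price>24.00</price></item>
</catalogue>
""")

# Query: locate an item by attribute value with a simple XPath expression.
item = doc.find(".//item[@sku='B2']")
print(item.findtext("name"), item.findtext("price"))

# Update: change the stored price, as a dynamic application might on a new order.
item.find("price").text = "21.50"
print(ET.tostring(doc, encoding="unicode"))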
The Web is a distributed network of information sources where the individual sources are autonomously created and maintained. Consequently, syntactic and semantic heterogeneity of ...
Provenance management has become increasingly important to support scientific discovery reproducibility, result interpretation, and problem diagnosis in scientific workflow enviro...
Artem Chebotko, Xubo Fei, Cui Lin, Shiyong Lu, Far...
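The snippet above motivates provenance capture rather than describing a concrete scheme, so the following is only a generic sketch: a minimal record of one workflow task run (inputs, outputs, timestamps) of the sort a provenance store might keep to support reproducibility and problem diagnosis; every field name and value is invented, not taken from the paper.

from dataclasses import dataclass, asdict
import json

@dataclass
class TaskRun:
    """One provenance record: which task ran, on what, producing what."""
    task: str
    inputs: dict      # parameter name -> input artifact or value
    outputs: dict     # output name -> produced artifact
    started: str      # ISO 8601 timestamps kept as plain strings here
    finished: str

# Record a hypothetical alignment step of a workflow.
run = TaskRun(
    task="align_reads",
    inputs={"reads": "sample1.fastq", "reference": "genome.fa"},
    outputs={"alignment": "sample1.bam"},
    started="2024-01-05T09:00:00Z",
    finished="2024-01-05T09:42:00Z",
)

# Serialising such records lets a later query answer "how was sample1.bam
# produced?", which is the reproducibility and diagnosis use case named above.
print(json.dumps(asdict(run), indent=2))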