Sciweavers

1132 search results
» Hypermedia and the World Wide Web
ACSC
2002
IEEE
Reducing Cognitive Overhead on the World Wide Web
HyperScout, a Web application, acts as an intermediary between a server and a client. It intercepts each page on its way to the client, gathers information on each link, and annotates each link with...
R. J. Witt, S. P. Tyerman
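A minimal sketch of the intermediary idea described in the abstract: collect the links on a page and gather metadata for each one (here via a HEAD request). The specific annotation fields are assumptions for illustration, not necessarily what HyperScout itself shows.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    """Collects the href targets of all anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def annotate_links(page_url, html_text):
    # Hypothetical annotation step: one HEAD request per link to gather
    # metadata (status, content type) without downloading the body.
    parser = LinkCollector()
    parser.feed(html_text)
    annotations = {}
    for href in parser.links:
        target = urljoin(page_url, href)
        try:
            resp = urlopen(Request(target, method="HEAD"), timeout=5)
            annotations[href] = {
                "status": resp.status,
                "type": resp.headers.get("Content-Type", "unknown"),
            }
        except OSError as exc:
            annotations[href] = {"error": str(exc)}
    return annotations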
WWW
2003
ACM
Efficient URL caching for world wide web crawling
Crawling the web is deceptively simple: the basic algorithm is (a) Fetch a page (b) Parse it to extract all linked URLs (c) For all the URLs not seen before, repeat (a)–(c). Howev...
Andrei Z. Broder, Marc Najork, Janet L. Wiener
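The fetch/parse/enqueue loop from the abstract can be sketched as follows, with a "seen URL" cache so already-discovered URLs are not re-queued. The paper studies bounded caches over a huge URL stream; an unbounded set stands in for that cache here, as an assumption.

from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
import re

LINK_RE = re.compile(r'href="([^"#]+)"')

def crawl(seed, max_pages=100):
    seen = {seed}             # URL cache; a real crawler would bound this
    frontier = deque([seed])
    while frontier and max_pages > 0:
        url = frontier.popleft()
        max_pages -= 1
        try:
            # (a) Fetch a page
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue
        # (b) Parse it to extract all linked URLs
        for href in LINK_RE.findall(html):
            link = urljoin(url, href)
            # (c) Only URLs not seen before go back into the frontier
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return seen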
SIGIR
2003
ACM
Apoidea: A Decentralized Peer-to-Peer Architecture for Crawling the World Wide Web
This paper describes a decentralized peer-to-peer model for building a Web crawler. Most of the current systems use a centralized client-server model, in which the crawl is done by...
Aameek Singh, Mudhakar Srivatsa, Ling Liu, Todd Mi...
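The core decentralization idea can be sketched by hashing each URL to the peer responsible for it, so duplicate detection and crawling need no central coordinator. Hashing by domain and the peer list below are illustrative assumptions; Apoidea's exact partitioning scheme is described in the paper.

import hashlib
from urllib.parse import urlparse

PEERS = ["peer-0", "peer-1", "peer-2", "peer-3"]   # hypothetical peer ids

def responsible_peer(url):
    # Map a URL's domain onto one peer via a stable hash.
    domain = urlparse(url).netloc
    digest = hashlib.sha1(domain.encode()).digest()
    return PEERS[int.from_bytes(digest[:4], "big") % len(PEERS)]

def route_discovered_urls(urls):
    # Newly extracted URLs are batched per owning peer; each peer then
    # performs its own "seen before" check locally.
    batches = {}
    for u in urls:
        batches.setdefault(responsible_peer(u), []).append(u)
    return batches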
ICDCSW
2000
IEEE
Inferring Sub-Culture Hierarchies Based on Object Diffusion on the World Wide Web
This paper presents our approach to inferring communities on the Web. It delineates the sub-culture hierarchies based on how individuals get involved in the dispersion of online o...
Ta-gang Chiou, Judith S. Donath
WWW
2001
ACM
Finding authorities and hubs from link structures on the World Wide Web
Recently, there have been a number of algorithms proposed for analyzing hypertext link structure so as to determine the best "authorities" for a given topic or query. Wh...
Allan Borodin, Gareth O. Roberts, Jeffrey S. Rosen...
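The hubs-and-authorities analysis the abstract refers to follows the textbook HITS-style iteration: a page's authority score comes from the hubs linking to it, its hub score from the authorities it links to, with normalization each round. This sketch is the standard formulation, not any specific variant proposed in the paper.

import math

def hits(graph, iters=50):
    """graph: dict mapping page -> list of pages it links to."""
    pages = set(graph) | {q for targets in graph.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 0.0 for p in pages}
    for _ in range(iters):
        # Authority update: sum of hub scores of in-neighbours.
        auth = {p: 0.0 for p in pages}
        for p, targets in graph.items():
            for q in targets:
                auth[q] += hub[p]
        # Hub update: sum of authority scores of out-neighbours.
        hub = {p: sum(auth[q] for q in graph.get(p, ())) for p in pages}
        # Normalize both score vectors to keep them bounded.
        for scores in (auth, hub):
            norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
            for p in scores:
                scores[p] /= norm
    return auth, hub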