
WWW 2008, ACM

Web graph similarity for anomaly detection (poster)

Web graphs are approximate snapshots of the web, created by search engines. Their creation is an error-prone procedure that relies on the availability of Internet nodes and the faultless operation of multiple software and hardware units. Checking the validity of a web graph requires a notion of graph similarity. Web graph similarity helps measure the amount and significance of changes in consecutive web graphs. These measurements validate how well search engines acquire content from the web. In this paper we study five similarity schemes: three of them adapted from existing graph similarity measures and two adapted from well-known document and vector similarity methods. We compare and evaluate all five schemes using a sequence of web graphs for Yahoo! and study whether the schemes can identify anomalies that may occur due to hardware or other problems.
Categories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Search Process
General Terms: Algorithms, Design, Experimentation
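To make the idea of a vector-style similarity scheme concrete, here is a minimal illustrative sketch: it compares two web graph snapshots by the cosine similarity of their per-page out-degree vectors and flags an anomaly when similarity drops below a threshold. The feature choice (out-degrees) and the threshold are assumptions made for this example only; they are not the specific schemes evaluated in the paper.

# Illustrative sketch: cosine similarity between two web graph snapshots,
# using per-page out-degree as the feature vector (an assumption for this
# example, not the paper's actual measures).
import math
from collections import defaultdict

def degree_vector(edges):
    """Map each source page to its out-degree in one snapshot."""
    deg = defaultdict(int)
    for src, _dst in edges:
        deg[src] += 1
    return deg

def cosine_similarity(g1_edges, g2_edges):
    """Cosine similarity of the two snapshots' out-degree vectors,
    aligned on the union of pages seen in either snapshot."""
    d1, d2 = degree_vector(g1_edges), degree_vector(g2_edges)
    pages = set(d1) | set(d2)
    dot = sum(d1[p] * d2[p] for p in pages)
    n1 = math.sqrt(sum(v * v for v in d1.values()))
    n2 = math.sqrt(sum(v * v for v in d2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Example: flag a possible crawl anomaly between consecutive snapshots.
yesterday = [("a", "b"), ("a", "c"), ("b", "c")]
today     = [("a", "b"), ("b", "c"), ("c", "a")]
sim = cosine_similarity(yesterday, today)
if sim < 0.8:  # threshold chosen arbitrarily for the example
    print(f"possible crawl anomaly: similarity {sim:.2f}")
else:
    print(f"snapshots look consistent: similarity {sim:.2f}")

In practice the feature vector could be anything cheap to compute per snapshot (degrees, hashes of adjacency lists, page weights), which is what makes this family of schemes attractive for very large web graphs.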
Type: Conference
Year: 2008
Where: WWW
Authors: Panagiotis Papadimitriou 0002, Ali Dasdan, Hector Garcia-Molina