Web crawlers place significant load on Web servers and are difficult to operate. Instead of running crawlers at many “client” sites, we propose a central crawler and Web repository that multicasts appropriate subsets of the repository to clients. Load at Web servers is reduced because a single crawler visits each server, rather than one crawler per client. In this paper we model and evaluate such a central Web multicast facility. We develop multicast algorithms for the facility and compare them with algorithms for “broadcast disks.” We also evaluate performance as several factors, such as object granularity and client batching, are varied.