This paper presents cooperative prefetching and caching — the use of network-wide global resources (memories, CPUs, and disks) to support prefetching and caching in the presence of hints of future demands. Cooperative prefetching and caching effectively unites disk-latency reduction techniques from three lines of research: prefetching algorithms, cluster-wide memory management, and parallel I/O. When used together, these techniques greatly increase the power of prefetching relative to a conventional (non-global-memory) system. We have designed and implemented PGMS, a cooperative prefetching and caching system, under the