ESA 2001, Springer

Strongly Competitive Algorithms for Caching with Pipelined Prefetching

Suppose that a program makes a sequence of m accesses (references) to data blocks, the cache can hold k < m blocks, an access to a block in the cache incurs one time unit, and fetching a missing block incurs d time units. A fetch of a new block can be initiated while a previous fetch is in progress; thus, min{k, d} block fetches can be in progress simultaneously. Any sequence of block references is modeled as a walk on the access graph of the program. The goal is to find a policy for prefetching and caching which minimizes the overall execution time of a given reference sequence. This study is motivated by the pipelined operation of modern memory controllers and by program execution on fast processors. In the offline case, we show that an algorithm proposed by Cao et al. [6] is optimal for this problem. In the online case, we give an algorithm that is within a factor of 2 of the optimal among deterministic online algorithms.
Alexander Gaysinsky, Alon Itai, Hadas Shachnai
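
The cost model in the abstract is concrete enough to simulate. The Python sketch below (all names, such as demand_lru_time and pipelined_prefetch_time, are invented for illustration) compares plain demand fetching with LRU eviction against a simplified aggressive prefetching policy that keeps up to min{k, d} fetches in flight and evicts the resident block whose next use is furthest away. The prefetch and eviction heuristics are assumptions made for the sketch, not the Cao et al. policy or the authors' online algorithm.

from collections import OrderedDict, deque

def demand_lru_time(refs, k, d):
    # Baseline with no prefetching: a cache hit costs 1 time unit; a
    # miss stalls for d time units while the block is fetched, then
    # costs 1 to access.  Eviction is LRU.
    cache = OrderedDict()              # block -> None, kept in LRU order
    t = 0
    for b in refs:
        if b in cache:
            cache.move_to_end(b)       # refresh LRU position
            t += 1
        else:
            if len(cache) >= k:
                cache.popitem(last=False)   # evict least recently used
            cache[b] = None
            t += d + 1                 # wait out the fetch, then access
    return t

def pipelined_prefetch_time(refs, k, d):
    # Aggressive offline prefetching sketch: while an access is served,
    # start fetching upcoming missing blocks, keeping at most min(k, d)
    # fetches in flight; evict the fully fetched block whose next use
    # is furthest away.  A simplification, not the Cao et al. policy.
    cap = min(k, d)
    occ = {}                           # block -> remaining positions in refs
    for i, b in enumerate(refs):
        occ.setdefault(b, deque()).append(i)
    ready = {}                         # cached block -> fetch completion time
    next_use = lambda x: occ[x][0] if occ[x] else len(refs)
    t = 0
    scan = 0                           # next position considered for prefetch
    for i, b in enumerate(refs):
        occ[b].popleft()               # this occurrence is being served now
        while True:                    # start prefetches while slots are free
            if sum(1 for r in ready.values() if r > t) >= cap:
                break
            while scan < len(refs) and (scan <= i or refs[scan] in ready):
                scan += 1              # find the next uncached reference
            if scan == len(refs):
                break
            if len(ready) >= k:        # must evict a fully fetched block
                victims = [x for x, r in ready.items() if r <= t and x != b]
                if not victims:
                    break              # nothing safe to evict yet
                del ready[max(victims, key=next_use)]
            ready[refs[scan]] = t + d  # the fetch occupies a pipeline slot
            scan += 1
        if b not in ready:             # demand fetch on a miss
            while sum(1 for r in ready.values() if r > t) >= cap:
                t = min(r for r in ready.values() if r > t)  # wait for a slot
            if len(ready) >= k:
                victims = [x for x, r in ready.items() if r <= t]
                del ready[max(victims, key=next_use)]
            ready[b] = t + d
        t = max(t, ready[b]) + 1       # stall until resident, then access
    return t

refs = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4] * 5
print(demand_lru_time(refs, k=3, d=4))          # every miss stalls for d
print(pipelined_prefetch_time(refs, k=3, d=4))  # overlapped fetches

On the sample sequence the pipelined variant hides most of the d-unit fetch latencies behind earlier accesses; quantifying how close such policies can get to the offline optimum is exactly what the paper's competitive analysis addresses.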