Sciweavers

315 search results - page 17 / 63
» On reducing load store latencies of cache accesses
MICRO 2000 (IEEE)
Dynamic zero compression for cache energy reduction
Dynamic Zero Compression reduces the energy required for cache accesses by only writing and reading a single bit for every zero-valued byte. This energy-conscious compression is i...
Luis Villa, Michael Zhang, Krste Asanovic
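
The single-bit-per-zero-byte idea described above can be illustrated with a small sketch; the line size, field names, and flag-per-byte layout below are assumptions for illustration, not the paper's circuit-level design.

/* Minimal sketch (not the authors' implementation) of the zero-indicator
 * idea: keep one flag bit per byte of a cache line; a zero-valued byte is
 * represented only by its flag, so the data array need not be touched for it.
 * Line size and all names here are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define LINE_BYTES 64

typedef struct {
    uint64_t zero_mask;          /* bit i == 1  ->  byte i is zero          */
    uint8_t  data[LINE_BYTES];   /* only consulted when the flag bit is 0   */
} dzc_line;

static void dzc_write(dzc_line *l, int i, uint8_t v) {
    if (v == 0) {
        l->zero_mask |= (1ULL << i);   /* only the single flag bit changes  */
    } else {
        l->zero_mask &= ~(1ULL << i);
        l->data[i] = v;                /* full byte write only when nonzero */
    }
}

static uint8_t dzc_read(const dzc_line *l, int i) {
    /* A set flag answers the read without touching the data array. */
    return (l->zero_mask >> i) & 1 ? 0 : l->data[i];
}

int main(void) {
    dzc_line line = { .zero_mask = ~0ULL };   /* start as an all-zero line */
    dzc_write(&line, 3, 42);
    dzc_write(&line, 3, 0);
    printf("%d %d\n", dzc_read(&line, 3), dzc_read(&line, 7));  /* 0 0 */
    return 0;
}
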
HPCA 1996 (IEEE)
Predictive Sequential Associative Cache
In this paper, we propose a cache design that provides the same miss rate as a two-way set associative cache, but with an access time closer to that of a direct-mapped cache. As with other...
Brad Calder, Dirk Grunwald, Joel S. Emer
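
A rough sketch of a sequential, prediction-first lookup of the kind the abstract describes: probe a predicted way first (direct-mapped-like latency on a correct prediction) and only probe the other way on a mismatch. The structure names, sizes, and retraining policy are assumptions, not the paper's exact design.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SETS 256

typedef struct {
    uint32_t tag[2];
    bool     valid[2];
    uint8_t  predicted_way;   /* which way to probe first next time */
} cache_set;

static cache_set cache[SETS];

/* Returns the number of sequential probes (1 = fast hit, 2 = slow hit),
 * or 0 on a miss. */
static int lookup(uint32_t addr) {
    uint32_t set = (addr / 64) % SETS;
    uint32_t tag = addr / 64 / SETS;
    cache_set *s = &cache[set];

    int first = s->predicted_way;
    if (s->valid[first] && s->tag[first] == tag)
        return 1;                            /* fast hit on predicted way */

    int second = 1 - first;
    if (s->valid[second] && s->tag[second] == tag) {
        s->predicted_way = (uint8_t)second;  /* retrain the prediction    */
        return 2;                            /* slow hit, one extra probe */
    }
    return 0;                                /* miss */
}

int main(void) {
    cache[0].valid[1] = true;
    cache[0].tag[1] = 0;
    printf("probes: %d\n", lookup(0));  /* 2: hit in the unpredicted way   */
    printf("probes: %d\n", lookup(0));  /* 1: prediction now points there  */
    return 0;
}
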
OSDI 2008 (ACM)
Everest: Scaling Down Peak Loads Through I/O Off-Loading
Bursts in data center workloads are a real problem for storage subsystems. Data volumes can experience peak I/O request rates that are over an order of magnitude higher than avera...
Dushyanth Narayanan, Austin Donnelly, Eno Thereska...
MASCOTS 2004
Predicting When Not to Predict
File prefetching based on previous file access patterns has been shown to be an effective means of reducing file system latency by implicitly loading caches with files that are li...
Karl Brandt, Darrell D. E. Long, Ahmed Amer
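
A minimal sketch of pattern-based file prefetching with a gate that declines to predict at low confidence, in the spirit of the title; the last-successor table and the threshold policy are illustrative assumptions, not the paper's predictor.

#include <stdio.h>

#define FILES 1024
#define CONF_THRESHOLD 2

static int successor[FILES];   /* last file seen after this one (-1 = none) */
static int confidence[FILES];  /* how often that successor repeated          */
static int last_file = -1;

/* Returns the file id to prefetch, or -1 to skip prefetching. */
static int on_access(int file) {
    if (last_file >= 0) {
        if (successor[last_file] == file)
            confidence[last_file]++;            /* pattern confirmed        */
        else { successor[last_file] = file; confidence[last_file] = 0; }
    }
    last_file = file;
    if (successor[file] >= 0 && confidence[file] >= CONF_THRESHOLD)
        return successor[file];                 /* confident: prefetch      */
    return -1;                                  /* not confident: do nothing */
}

int main(void) {
    for (int i = 0; i < FILES; i++) successor[i] = -1;
    int trace[] = {1, 2, 1, 2, 1, 2, 1};
    for (int i = 0; i < 7; i++)
        printf("access %d -> prefetch %d\n", trace[i], on_access(trace[i]));
    return 0;
}
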
PARA 2004 (Springer)
Cache Optimizations for Iterative Numerical Codes Aware of Hardware Prefetching
Cache optimizations typically include code transformations to increase the locality of memory accesses. An orthogonal approach is to enable latency hiding by introducing prefet...
Josef Weidendorfer, Carsten Trinitis
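
A small sketch combining the two approaches the abstract mentions, loop blocking for locality plus explicit prefetch hints for latency hiding; the tile size and the GCC/Clang __builtin_prefetch hint are assumptions for illustration, not taken from the paper.

#include <stdio.h>

#define N  512
#define BS 64          /* block (tile) size, chosen to fit in cache */

static double a[N][N], b[N][N];

static void blocked_transpose(void) {
    for (int ii = 0; ii < N; ii += BS)
        for (int jj = 0; jj < N; jj += BS)
            for (int i = ii; i < ii + BS; i++) {
                /* Hint the next row of the source block so its load latency
                 * overlaps with work on the current row. */
                if (i + 1 < N)
                    __builtin_prefetch(&b[i + 1][jj], 0, 1);
                for (int j = jj; j < jj + BS; j++)
                    a[j][i] = b[i][j];
            }
}

int main(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            b[i][j] = i * N + j;
    blocked_transpose();
    printf("%.0f\n", a[3][5]);   /* b[5][3] = 5*N + 3 = 2563 */
    return 0;
}
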