Sciweavers

62 search results - page 5 / 13
» Design of a Predictive Filter Cache for Energy Savings in Hi...
ICS
2005
Tsinghua U.
Reducing latencies of pipelined cache accesses through set prediction
With the growing performance gap between the processor and memory, caches are becoming increasingly important for high-performance processors. However, with reducing feature si...
Aneesh Aggarwal
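
The abstract above is cut off before it describes the mechanism, so the following is only a rough illustration of the general predict-then-verify idea behind low-latency cache access. It uses MRU way prediction rather than the paper's set prediction; the `WayPredictedCache` class, its parameters, and the toy trace are hypothetical.

```python
# Minimal sketch (not the paper's design): a way-predicted set-associative cache.
# A small table remembers the last way that hit in each set; the predicted way is
# probed first (fast path), and only a misprediction pays the full parallel lookup.

class WayPredictedCache:
    def __init__(self, num_sets=64, ways=4, block_bytes=32):
        self.num_sets = num_sets
        self.ways = ways
        self.block_bytes = block_bytes
        self.tags = [[None] * ways for _ in range(num_sets)]  # tag array
        self.pred = [0] * num_sets                            # predicted way per set
        self.fast_hits = self.slow_hits = self.misses = 0

    def _index_tag(self, addr):
        block = addr // self.block_bytes
        return block % self.num_sets, block // self.num_sets

    def access(self, addr):
        idx, tag = self._index_tag(addr)
        way = self.pred[idx]
        if self.tags[idx][way] == tag:          # predicted way hits: fast path
            self.fast_hits += 1
            return "fast-hit"
        for w in range(self.ways):              # fall back to probing all ways
            if self.tags[idx][w] == tag:
                self.slow_hits += 1
                self.pred[idx] = w              # retrain predictor on the real way
                return "slow-hit"
        victim = self.pred[idx]                 # simple replacement for the sketch
        self.tags[idx][victim] = tag
        self.misses += 1
        return "miss"

if __name__ == "__main__":
    cache = WayPredictedCache()
    trace = [i * 32 for i in range(16)] * 8     # a loop-like address trace
    for a in trace:
        cache.access(a)
    print(cache.fast_hits, cache.slow_hits, cache.misses)
```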
DAC
2009
ACM
A DVS-based pipelined reconfigurable instruction memory
Energy consumption is of significant concern in battery-operated embedded systems. In the processors of such systems, the instruction cache consumes a significant fraction of the ...
Zhiguo Ge, Tulika Mitra, Weng-Fai Wong
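
The snippet above stops at the motivation, so the sketch below only illustrates the generic dynamic-voltage-scaling trade-off the title refers to: pick the lowest voltage/frequency point that still meets a deadline, since dynamic energy scales roughly with C * V^2 per switched cycle. The operating-point table, capacitance, and deadline are illustrative assumptions, not the paper's reconfiguration scheme.

```python
# Minimal sketch (illustrative only, not the paper's controller): choose the
# slowest feasible voltage/frequency operating point for a fixed amount of work.

OPERATING_POINTS = [        # (voltage in V, frequency in MHz) - hypothetical table
    (1.2, 400),
    (1.0, 300),
    (0.8, 180),
]

def dynamic_energy_mj(cycles, voltage, switched_cap_nf=1.0):
    """E ~ C * V^2 per cycle; switched_cap_nf is an assumed effective capacitance."""
    return cycles * switched_cap_nf * 1e-9 * voltage ** 2 * 1e3   # millijoules

def pick_operating_point(cycles, deadline_ms):
    """Lowest-voltage (lowest-energy) point whose runtime still meets the deadline."""
    feasible = [(v, f) for v, f in OPERATING_POINTS
                if cycles / (f * 1e6) * 1e3 <= deadline_ms]
    return min(feasible, key=lambda vf: vf[0]) if feasible else OPERATING_POINTS[0]

if __name__ == "__main__":
    cycles, deadline_ms = 2_000_000, 10.0
    v, f = pick_operating_point(cycles, deadline_ms)
    print(f"run at {v} V / {f} MHz, "
          f"~{dynamic_energy_mj(cycles, v):.3f} mJ dynamic energy")
```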
DATE
2003
IEEE
Exploring High Bandwidth Pipelined Cache Architecture for Scaled Technology
In this paper we propose a design technique to pipeline cache memories for high bandwidth applications. With the scaling of technology, cache access latencies are multiple clock cy...
Amit Agarwal, Kaushik Roy, T. N. Vijaykumar
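
The entry above is truncated before the design technique itself, so the following is only a back-of-the-envelope comparison of why pipelining a multi-cycle cache raises bandwidth; the cycle counts and stage split are assumptions, not the paper's circuit-level scheme.

```python
# Minimal sketch: an unpipelined cache blocks for its full multi-cycle access
# latency, while a pipelined cache split into single-cycle stages can accept a
# new access every cycle once the pipeline is full.

def unpipelined_cycles(num_accesses, access_latency):
    """Each access occupies the whole cache for `access_latency` cycles."""
    return num_accesses * access_latency

def pipelined_cycles(num_accesses, num_stages):
    """First access takes `num_stages` cycles; afterwards one retires per cycle."""
    return 0 if num_accesses == 0 else num_stages + (num_accesses - 1)

if __name__ == "__main__":
    accesses, latency_cycles = 1000, 3          # assume a 3-cycle cache access
    base = unpipelined_cycles(accesses, latency_cycles)
    piped = pipelined_cycles(accesses, num_stages=latency_cycles)
    print(f"unpipelined: {base} cycles, pipelined: {piped} cycles, "
          f"~{base / piped:.2f}x bandwidth")
```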
LCTRTS
2007
Springer
Compiler-managed partitioned data caches for low power
Set-associative caches are traditionally managed using hardware-based lookup and replacement schemes that have high energy overheads. Ideally, the caching strategy should be tailor...
Rajiv A. Ravindran, Michael L. Chu, Scott A. Mahlk...
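
The abstract above is cut off before the proposed scheme, so the sketch below only illustrates the general idea suggested by the title: a data cache split into partitions, with each access carrying a compiler-assigned partition hint so only that partition's tags are activated. The `PartitionedCache` class, the static array-to-partition mapping, and the base addresses are hypothetical, not the paper's algorithm.

```python
# Minimal sketch (assumptions, not the paper's scheme): probe only the partition
# the compiler chose for this access, instead of all ways of a unified cache.

class PartitionedCache:
    def __init__(self, partitions=4, lines_per_partition=16, block_bytes=32):
        self.block_bytes = block_bytes
        self.lines = lines_per_partition
        self.parts = [[None] * lines_per_partition for _ in range(partitions)]
        self.tag_checks = 0          # proxy for tag-array energy
        self.hits = self.misses = 0

    def access(self, addr, partition):
        """Probe only the compiler-chosen partition (direct-mapped here)."""
        block = addr // self.block_bytes
        idx, tag = block % self.lines, block // self.lines
        self.tag_checks += 1                     # one tag check instead of `partitions`
        if self.parts[partition][idx] == tag:
            self.hits += 1
        else:
            self.misses += 1
            self.parts[partition][idx] = tag     # fill on miss

# Toy use: the "compiler" statically maps array A to partition 0 and array B to
# partition 1, so their accesses never evict each other or probe extra partitions.
if __name__ == "__main__":
    cache = PartitionedCache()
    A_BASE, B_BASE = 0x1000, 0x8000              # hypothetical array base addresses
    for i in range(64):
        cache.access(A_BASE + 4 * i, partition=0)
        cache.access(B_BASE + 4 * i, partition=1)
    print(cache.hits, cache.misses, cache.tag_checks)
```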
ISCA
2010
IEEE
An integrated GPU power and performance model
GPU architectures are increasingly important in the multi-core era due to their large number of parallel processors. Performance optimization for multi-core processors has been a c...
Sunpyo Hong, Hyesoon Kim
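
The snippet above is truncated before the model itself, so the following is only a generic analytical sketch of how an integrated model can combine estimated execution cycles with activity-based power to produce an energy estimate. Every function, parameter, and constant below is an illustrative assumption, not the paper's equations.

```python
# Minimal sketch (a generic analytical model, not the authors' exact formulation):
# estimate GPU execution time from compute and memory cycles overlapped by the
# available warp-level parallelism, then combine dynamic and static power into an
# energy estimate. All constants are placeholders.

def estimate_time_cycles(comp_cycles_per_warp, mem_latency, mem_insts_per_warp,
                         warps_per_sm, overlap_warps):
    """Memory latency is hidden by up to `overlap_warps` concurrently waiting warps."""
    mem_cycles = mem_insts_per_warp * mem_latency / max(overlap_warps, 1)
    return warps_per_sm * (comp_cycles_per_warp + mem_cycles)

def estimate_power_watts(activity, unit_power, static_power):
    """Dynamic power as activity-weighted unit powers, plus a static component."""
    dynamic = sum(activity[u] * unit_power[u] for u in unit_power)
    return dynamic + static_power

if __name__ == "__main__":
    cycles = estimate_time_cycles(comp_cycles_per_warp=2000, mem_latency=400,
                                  mem_insts_per_warp=50, warps_per_sm=24,
                                  overlap_warps=8)
    power = estimate_power_watts(
        activity={"alu": 0.6, "fpu": 0.3, "dram": 0.2},      # per-unit access rates
        unit_power={"alu": 6.0, "fpu": 10.0, "dram": 30.0},  # watts at full activity
        static_power=25.0)
    clock_hz = 1.3e9                                          # assumed SM clock
    seconds = cycles / clock_hz
    print(f"time ~{seconds * 1e6:.1f} us, power ~{power:.1f} W, "
          f"energy ~{power * seconds * 1e3:.2f} mJ")
```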