Weiyu Tang, Rajesh K. Gupta, Alexandru Nicolau

The filter cache has been proposed as an energy-saving architectural feature [9]. A filter cache is placed between the CPU and the instruction cache (I-cache) to serve the instruction stream. Energy savings result from accessing a small cache rather than the larger I-cache; there is, however, a performance loss whenever an instruction is not found in the filter cache. In high-performance processors, the majority of the filter cache's energy savings come from the temporal reuse of instructions in small loops. In this paper, we examine successive fetch addresses at run time to predict whether the next fetch address resides in the filter cache. When a miss is predicted, we reduce the miss penalty by accessing the I-cache directly and bypassing the filter cache. Experimental results show that our next fetch prediction reduces the performance penalty by more than 91% while retaining 82% of the energy efficiency of a conventional filter cache. Our filter cache design achieves average I-cache energy savings of 31% with around 1% performance degradation.
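The abstract does not specify how the prediction is implemented, so the C sketch below is only an illustration of the bypass idea, not the paper's mechanism: a hypothetical same-line heuristic stands in for next fetch prediction, and the direct-mapped filter cache, one-cycle probe costs, and all identifiers (FC_LINES, predict_filter_hit, fetch, and so on) are assumptions for this sketch. The key point it demonstrates is that a predicted miss goes straight to the I-cache, so the cycle a failed filter-cache probe would waste is never paid.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FC_LINES   16   /* assumed small, direct-mapped filter cache */
#define LINE_SHIFT 4    /* assumed 16-byte lines */

static uint32_t fc_tag[FC_LINES];
static bool     fc_valid[FC_LINES];
static uint32_t last_line = UINT32_MAX;

/* Stand-in predictor: predict a filter-cache hit only when the fetch
 * stays in the line fetched last time, the common case for the
 * small-loop temporal reuse described in the abstract. */
static bool predict_filter_hit(uint32_t addr)
{
    return (addr >> LINE_SHIFT) == last_line;
}

static bool filter_cache_hit(uint32_t addr)
{
    uint32_t line = addr >> LINE_SHIFT;
    uint32_t idx  = line % FC_LINES;
    return fc_valid[idx] && fc_tag[idx] == line;
}

static void filter_cache_fill(uint32_t addr)
{
    uint32_t line = addr >> LINE_SHIFT;
    uint32_t idx  = line % FC_LINES;
    fc_tag[idx]   = line;
    fc_valid[idx] = true;
}

/* Fetch one instruction; returns cycles spent, counting each cache
 * probe as one cycle purely for illustration. */
static int fetch(uint32_t addr)
{
    int cycles;
    if (!predict_filter_hit(addr)) {
        cycles = 1;               /* predicted miss: access I-cache directly */
        filter_cache_fill(addr);  /* fill so later reuse can hit */
    } else if (filter_cache_hit(addr)) {
        cycles = 1;               /* low-energy filter-cache hit */
    } else {
        cycles = 2;               /* misprediction: wasted probe + I-cache */
        filter_cache_fill(addr);
    }
    last_line = addr >> LINE_SHIFT;
    return cycles;
}

int main(void)
{
    /* A 4-instruction loop executed 10 times: after the first fetch,
     * every access is a predicted, genuine filter-cache hit. */
    int cycles = 0;
    for (int i = 0; i < 10; i++)
        for (uint32_t pc = 0x100; pc < 0x110; pc += 4)
            cycles += fetch(pc);
    printf("total fetch cycles: %d\n", cycles);   /* 40 in this sketch */
    return 0;
}

In this toy run the loop fits in one filter-cache line, so only the very first fetch bypasses to the I-cache and no misprediction penalty is ever incurred; a real predictor would of course have to handle branches and loops spanning several lines.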