Neural-inspired branch predictors achieve very low branch misprediction rates. However, previously proposed implementations have a variety of characteristics that make them challenging to implement in future high-performance processors. In particular, the path-based neural predictor (PBNP) and the piecewise-linear (PWL) predictor require deep pipelining and additional area to support checkpointing for misprediction recovery. The complexity of the PBNP stems from the fact that the path-history length, which determines the number of tables and pipeline stages, is equal to the outcome-history length, which is typically very long for high accuracy. We propose to decouple the path-history length from the outcome-history length through a new technique called modulo-path history. By allowing a shorter path history, we can implement the PBNP and PWL predictors with significantly fewer tables and pipeline stages while still exploiting a traditional long branch outcome history.
Keywords: Comput...
Gabriel H. Loh, Daniel A. Jiménez
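
To make the decoupling concrete, the following is a minimal software sketch of modulo-path history applied to a path-based neural (perceptron-style) predictor. The parameter values and names (HIST_LEN, PATH_LEN, NUM_ROWS, THRESHOLD), the hash function, and the training rule are illustrative assumptions rather than the paper's exact configuration, and the hardware pipelining and checkpointing concerns the paper addresses are not modeled. The key point is that weight i is indexed by the branch address at position i mod P of a short path history of length P, while all H outcome-history bits still contribute to the prediction sum.

#include <stdint.h>
#include <stdbool.h>

#define HIST_LEN  32    /* outcome-history length H (long) */
#define PATH_LEN  4     /* decoupled path-history length P, with P << H */
#define NUM_ROWS  1024  /* rows per weight table */
#define THRESHOLD 60    /* assumed perceptron training threshold */

static int8_t   weights[HIST_LEN + 1][NUM_ROWS]; /* weights[0] holds the bias */
static int8_t   outcome_hist[HIST_LEN];          /* +1 = taken, -1 = not taken */
static uint32_t path_hist[PATH_LEN];             /* only P recent branch addresses */

static uint32_t hash_addr(uint32_t addr) { return addr % NUM_ROWS; }

/* Prediction: weight i is selected by the address at position (i mod P) of the
 * short path history rather than position i, so only P distinct path addresses
 * (and, in hardware, far fewer tables and pipeline stages) are needed, while
 * all H outcome-history bits still contribute to the dot product. */
bool predict(uint32_t pc, int32_t *sum_out)
{
    int32_t sum = weights[0][hash_addr(pc)];
    for (int i = 1; i <= HIST_LEN; i++) {
        uint32_t row = hash_addr(path_hist[(i - 1) % PATH_LEN]);
        sum += weights[i][row] * outcome_hist[i - 1];
    }
    *sum_out = sum;
    return sum >= 0;   /* non-negative sum predicts taken */
}

/* Perceptron-style training plus history updates (weight saturation omitted
 * for brevity). The outcome history keeps H bits; the path history keeps
 * only P addresses, so the two lengths are decoupled. */
void update(uint32_t pc, bool taken, int32_t sum)
{
    int t = taken ? 1 : -1;
    bool mispredicted = ((sum >= 0) != taken);
    if (mispredicted || (sum > -THRESHOLD && sum < THRESHOLD)) {
        weights[0][hash_addr(pc)] += t;
        for (int i = 1; i <= HIST_LEN; i++) {
            uint32_t row = hash_addr(path_hist[(i - 1) % PATH_LEN]);
            weights[i][row] += t * outcome_hist[i - 1];
        }
    }
    for (int i = HIST_LEN - 1; i > 0; i--) outcome_hist[i] = outcome_hist[i - 1];
    outcome_hist[0] = (int8_t)t;
    for (int i = PATH_LEN - 1; i > 0; i--) path_hist[i] = path_hist[i - 1];
    path_hist[0] = pc;
}

With the assumed PATH_LEN = 4 and HIST_LEN = 32, only four distinct path addresses are tracked, yet the prediction sum still spans all 32 outcome bits, illustrating the separation of path-history length from outcome-history length described in the abstract.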