We are attacking the memory bottleneck by building a “smart” memory controller that improves effective memory bandwidth, bus utilization, and cache efficiency by letting applications dictate how their data is accessed and cached. This paper describes a Parallel Vector Access unit (PVA), a vector memory subsystem that efficiently “gathers” sparse, strided data structures in parallel on a multi-bank SDRAM memory. We have validated the PVA design via gate-level simulation and evaluated its performance via functional simulation and formal analysis. On unit-stride vectors, PVA performance equals or exceeds that of an SDRAM system optimized for cache line fills. On vectors with larger strides, the PVA is up to 32.8 times faster. Our design is up to 3.3 times faster than a pipelined, serial SDRAM memory system that gathers sparse vector data, and its gathering mechanism is two to five times faster than those of other PVAs with similar goals. Our PVA only slightly increases hardware...
Binu K. Mathew, Sally A. McKee, John B. Carter, Al Davis
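To make the access pattern concrete, the sketch below shows (in C, purely as an illustration) how a strided vector command (base, stride, length) expands into element addresses, and how each bank could independently select the elements that map to it. The bank count, the address-to-bank mapping, and all function names here are our own assumptions for exposition; in particular, the scan over all elements is only for clarity, whereas the actual PVA subcontrollers determine their share of the gather in parallel without such a scan.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_BANKS   8   /* assumed bank count */
#define BANK_SHIFT  5   /* assumed: bank selected by bits above a 32-byte column */

/* Bank that a physical address maps to (simple interleaving assumption). */
static unsigned bank_of(uint64_t addr) {
    return (unsigned)((addr >> BANK_SHIFT) % NUM_BANKS);
}

/* A serial controller walks the strided vector one element at a time. */
static void serial_gather(uint64_t base, int64_t stride, unsigned length) {
    for (unsigned i = 0; i < length; i++) {
        uint64_t addr = base + (uint64_t)i * (uint64_t)stride;
        printf("element %u -> addr 0x%llx (bank %u)\n",
               i, (unsigned long long)addr, bank_of(addr));
    }
}

/* In a parallel scheme, every bank controller receives the same
   (base, stride, length) descriptor and independently picks out the
   elements that fall in its bank, so the banks can work concurrently. */
static void bank_local_gather(unsigned bank, uint64_t base,
                              int64_t stride, unsigned length) {
    for (unsigned i = 0; i < length; i++) {
        uint64_t addr = base + (uint64_t)i * (uint64_t)stride;
        if (bank_of(addr) == bank)
            printf("bank %u handles element %u (addr 0x%llx)\n",
                   bank, i, (unsigned long long)addr);
    }
}

int main(void) {
    /* Example: a 16-element vector with a 256-byte stride. */
    serial_gather(0x1000, 256, 16);
    for (unsigned b = 0; b < NUM_BANKS; b++)
        bank_local_gather(b, 0x1000, 256, 16);
    return 0;
}
```

A serial controller issues the element accesses one after another, so the gather time grows with vector length regardless of how many banks the SDRAM has; broadcasting the descriptor and letting each bank filter its own hits is what allows the banks to be kept busy simultaneously.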