Current main memory (DRAM) access speeds lag far behind CPU speeds. Cache memory, made of static RAM, is used in today's architectures to bridge this gap. It provides access latencies of 2-4 processor cycles, in contrast to main memory, which requires 15-25 cycles. Therefore, the performance of the CPU depends on how well the cache can be utilized. We show that there are significant benefits in redesigning our traditional query processing algorithms so that they make better use of the cache. The new algorithms run 8%-200% faster than the traditional ones.
Ambuj Shatdal, Chander Kant, Jeffrey F. Naughton
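To make the cost of poor cache utilization concrete, the following standalone microbenchmark is a minimal sketch (not taken from the paper) that sums the same array twice: once with sequential, cache-friendly accesses and once with a large stride that defeats spatial locality. The array size N and the stride are arbitrary illustrative choices.

```c
/*
 * Illustrative microbenchmark: same work, different memory access patterns.
 * The sequential scan exploits cache lines; the strided scan touches a new
 * cache line on (almost) every access and is typically several times slower.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)     /* 16M ints (~64 MB), far larger than any cache */
#define STRIDE 4096     /* stride in elements; defeats spatial locality */

static long sum_sequential(const int *a) {
    long s = 0;
    for (size_t i = 0; i < N; i++)
        s += a[i];
    return s;
}

static long sum_strided(const int *a) {
    long s = 0;
    /* Visit every element, but jump STRIDE elements at a time so that
       consecutive accesses fall in different cache lines. */
    for (size_t start = 0; start < STRIDE; start++)
        for (size_t i = start; i < N; i += STRIDE)
            s += a[i];
    return s;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = (int)(i & 0xff);

    clock_t t0 = clock();
    long s1 = sum_sequential(a);
    clock_t t1 = clock();
    long s2 = sum_strided(a);
    clock_t t2 = clock();

    printf("sequential: sum=%ld, %.3fs\n", s1, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("strided:    sum=%ld, %.3fs\n", s2, (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```

Both scans perform identical arithmetic, so any difference in running time comes from how well each access pattern uses the cache; this is the effect that cache-conscious query processing algorithms aim to exploit.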