Sciweavers

315 search results (page 29 of 63) for "On reducing load store latencies of cache accesses"
ICPP 2003 (IEEE)
Enabling Partial Cache Line Prefetching Through Data Compression
Hardware prefetching is a simple and effective technique for hiding cache miss latency and thus improving overall performance. However, it comes with the addition of prefetch buff...
Youtao Zhang, Rajiv Gupta
VLDB 2004 (ACM)
STEPS towards Cache-resident Transaction Processing
Online transaction processing (OLTP) is a multibillion-dollar industry, with high-end database servers employing state-of-the-art processors to maximize performance. Unfortunately,...
Stavros Harizopoulos, Anastassia Ailamaki
ADC 2009 (Springer)
CSC: Supporting Queries on Compressed Cached XML
Whenever a client frequently has to retrieve, query, and locally transform large parts of a huge XML document stored on a remote web information server, data exchange...
Stefan Böttcher, Rita Hartel
IPPS 2007 (IEEE)
A Power-Aware Prediction-Based Cache Coherence Protocol for Chip Multiprocessors
Snoopy cache coherence protocols broadcast requests to all nodes, reducing the latency of cache-to-cache transfer misses at the expense of increased interconnect power. We propos...
Ehsan Atoofian, Amirali Baniasadi
IPPS 2005 (IEEE)
Effective Instruction Prefetching via Fetch Prestaging
As process technology shrinks and clock rates increase, instruction caches can no longer be accessed in a single cycle. Alternatives include implementing smaller caches (with higher mis...
Ayose Falcón, Alex Ramírez, Mateo Va...