Sciweavers

Memory access scheduling
MICRO 2010 (IEEE)
Many-Thread Aware Prefetching Mechanisms for GPGPU Applications
Abstract-- We consider the problem of how to improve memory latency tolerance in massively multithreaded GPGPUs when the thread-level parallelism of an application is not sufficien...
Jaekyu Lee, Nagesh B. Lakshminarayana, Hyesoon Kim...
TVCG 2010
Binary Mesh Partitioning for Cache-Efficient Visualization
Abstract--One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take into...
Marc Tchiboukdjian, Vincent Danjean, Bruno Raffin
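As a rough illustration of the kind of cache-friendly layout this abstract alludes to, the sketch below recursively bisects cells along the longest bounding-box axis so that spatially close cells end up contiguous in memory. This is a generic recursive-bisection sketch based on general knowledge, not the paper's binary mesh partitioning algorithm; the Cell type and the bisect/cmp helpers are illustrative assumptions.

```c
#include <stdlib.h>

typedef struct { float x, y; int id; } Cell;

static int axis;  /* axis chosen for the current bisection step */

static int cmp(const void *a, const void *b) {
    const Cell *c = a, *d = b;
    float va = axis ? c->y : c->x;
    float vb = axis ? d->y : d->x;
    return (va > vb) - (va < vb);
}

/* Reorder cells so each recursive half occupies a contiguous block of the
 * array; the final array order is used as the storage order. */
static void bisect(Cell *cells, int n) {
    if (n <= 2) return;
    float minx = cells[0].x, maxx = cells[0].x;
    float miny = cells[0].y, maxy = cells[0].y;
    for (int i = 1; i < n; i++) {
        if (cells[i].x < minx) minx = cells[i].x;
        if (cells[i].x > maxx) maxx = cells[i].x;
        if (cells[i].y < miny) miny = cells[i].y;
        if (cells[i].y > maxy) maxy = cells[i].y;
    }
    axis = (maxy - miny) > (maxx - minx);   /* split along the longest axis */
    qsort(cells, (size_t)n, sizeof *cells, cmp);
    bisect(cells, n / 2);
    bisect(cells + n / 2, n - n / 2);
}

int main(void) {
    Cell cells[4] = {{0, 0, 0}, {5, 5, 1}, {0.1f, 0.2f, 2}, {5.2f, 4.9f, 3}};
    bisect(cells, 4);
    return 0;
}
```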
JOCN 2011
Changes in Events Alter How People Remember Recent Information
Observers spontaneously segment larger activities into smaller events. For example, “washing a car” might be segmented into “scrubbing,” “rinsing,” and “drying...
Khena M. Swallow, Deanna M. Barch, Denise Head, Co...
ISLPED 2004 (ACM)
Location cache: a low-power L2 cache system
While set-associative caches incur fewer misses than direct-mapped caches, they typically have slower hit times and higher power consumption when multiple tag and data banks are p...
Rui Min, Wen-Ben Jone, Yiming Hu
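The trade-off described in this entry can be made concrete with a small sketch: a tiny direct-mapped table remembers which way of a set-associative L2 holds a line, so a predicted hit activates only one way instead of all of them. The sizes, the LocEntry table, and l2_lookup are illustrative assumptions, not the paper's location cache design.

```c
#include <stdint.h>
#include <stdbool.h>

#define L2_SETS     1024
#define L2_WAYS     4
#define LOC_ENTRIES 64            /* small direct-mapped "location" table */

typedef struct { uint32_t tag; bool valid; } Line;
typedef struct { uint32_t tag; uint8_t way; bool valid; } LocEntry;

static Line     l2[L2_SETS][L2_WAYS];
static LocEntry loc[LOC_ENTRIES];

/* Return true on an L2 hit.  If the location table predicts the way, only
 * that single way's tag/data need to be activated (the low-power path);
 * otherwise fall back to probing every way as a conventional cache would. */
bool l2_lookup(uint32_t addr)
{
    uint32_t set = (addr >> 6) % L2_SETS;   /* assumes 64-byte lines */
    uint32_t tag = addr >> 16;
    LocEntry *e  = &loc[set % LOC_ENTRIES];

    if (e->valid && e->tag == tag &&
        l2[set][e->way].valid && l2[set][e->way].tag == tag)
        return true;                        /* single-way probe */

    for (uint8_t w = 0; w < L2_WAYS; w++)   /* all-way probe on mispredict */
        if (l2[set][w].valid && l2[set][w].tag == tag) {
            e->tag = tag; e->way = w; e->valid = true;  /* remember the way */
            return true;
        }
    return false;
}

int main(void) { (void)l2_lookup(0x12345678u); return 0; }
```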
IWSOC 2003 (IEEE)
Incorporating Pattern Prediction Technique for Energy Efficient Filter Cache Design
A filter cache is proposed at a higher level than the L1 (main) cache in the memory hierarchy and is much smaller. The typical size of a filter cache is of the order of 512 Bytes...
Kugan Vivekanandarajah, Thambipillai Srikanthan, S...
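A minimal sketch of the general filter-cache idea behind this entry: a tiny cache in front of the L1 is probed only when a simple predictor expects a hit, otherwise the access bypasses it to avoid the filter-miss penalty. The 2-bit counter, the sizes, and the l1_lookup stub are illustrative assumptions, not the paper's pattern-prediction scheme.

```c
#include <stdint.h>
#include <stdbool.h>

#define FILTER_LINES 8            /* 8 x 64-byte lines, roughly 512 bytes */

static uint32_t filter_tag[FILTER_LINES];
static bool     filter_valid[FILTER_LINES];
static uint8_t  predictor = 2;    /* 2-bit saturating counter: >=2 means "probe the filter" */

/* Stand-in for a real L1 model; the L1 itself is not modelled here. */
static bool l1_lookup(uint32_t addr) { (void)addr; return true; }

/* Probe the tiny filter cache only when the predictor expects a hit;
 * otherwise go straight to the L1. */
bool cpu_access(uint32_t addr)
{
    uint32_t idx = (addr >> 6) % FILTER_LINES;
    uint32_t tag = addr >> 6;

    if (predictor >= 2) {
        if (filter_valid[idx] && filter_tag[idx] == tag) {
            if (predictor < 3) predictor++;      /* correct prediction: reinforce */
            return true;                         /* served from the filter cache  */
        }
        if (predictor > 0) predictor--;          /* filter miss: weaken prediction */
    } else if (predictor < 3) {
        predictor++;                             /* slowly retry the filter later  */
    }

    bool hit = l1_lookup(addr);                  /* normal L1 access        */
    filter_tag[idx] = tag;                       /* refill the filter line  */
    filter_valid[idx] = true;
    return hit;
}

int main(void) { (void)cpu_access(0x1000u); (void)cpu_access(0x1000u); return 0; }
```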