Sciweavers

901 search results - page 21 / 181
» Hiding Communication Latency in Data Parallel Applications
KDD
2009
ACM
Pervasive parallelism in data mining: dataflow solution to co-clustering large and sparse Netflix data
All Netflix Prize algorithms proposed so far are prohibitively costly for large-scale production systems. In this paper, we describe an efficient dataflow implementation of a coll...
Srivatsava Daruru, Nena M. Marin, Matt Walker, Joy...
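To make the co-clustering idea in the title concrete, below is a minimal, self-contained sketch of alternating row/column cluster reassignment on a toy sparse rating matrix under a squared-error objective. It is only an illustration of the general technique: the matrix size, cluster counts, and update scheme are assumptions, and it is not the dataflow implementation the paper describes.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Toy sparse "ratings" matrix (users x movies); the real Netflix data is far larger.
rng = np.random.default_rng(0)
R = sparse_random(200, 100, density=0.05, random_state=0,
                  data_rvs=lambda n: rng.integers(1, 6, n)).tocsr()

k_rows, k_cols = 4, 3                      # assumed numbers of user/movie clusters
row_lab = rng.integers(0, k_rows, R.shape[0])
col_lab = rng.integers(0, k_cols, R.shape[1])

def block_means(R, row_lab, col_lab):
    """Mean rating of each (row-cluster, column-cluster) block, ignoring missing entries."""
    coo = R.tocoo()
    sums = np.zeros((k_rows, k_cols))
    counts = np.zeros((k_rows, k_cols))
    np.add.at(sums, (row_lab[coo.row], col_lab[coo.col]), coo.data)
    np.add.at(counts, (row_lab[coo.row], col_lab[coo.col]), 1)
    return sums / np.maximum(counts, 1)

for _ in range(10):                        # alternate row and column reassignment
    M = block_means(R, row_lab, col_lab)
    # Reassign each row to the row cluster whose block means best fit its ratings.
    for i in range(R.shape[0]):
        cols = R.indices[R.indptr[i]:R.indptr[i + 1]]
        vals = R.data[R.indptr[i]:R.indptr[i + 1]]
        if len(cols):
            errs = ((vals - M[:, col_lab[cols]]) ** 2).sum(axis=1)
            row_lab[i] = int(errs.argmin())
    M = block_means(R, row_lab, col_lab)
    # Reassign each column symmetrically.
    C = R.tocsc()
    for j in range(C.shape[1]):
        rows = C.indices[C.indptr[j]:C.indptr[j + 1]]
        vals = C.data[C.indptr[j]:C.indptr[j + 1]]
        if len(rows):
            errs = ((vals - M[row_lab[rows], :].T) ** 2).sum(axis=1)
            col_lab[j] = int(errs.argmin())
```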
CODES
2007
IEEE
Channel trees: reducing latency by sharing time slots in time-multiplexed networks on chip
Networks on Chip (NoC) have emerged as the design paradigm for scalable System on Chip communication infrastructure. A growing number of applications, often with firm (FRT) or so...
Andreas Hansson, Martijn Coenen, Kees Goossens
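As a toy model of why sharing slots reduces latency: when several channels that never conflict are grouped into one "channel tree" and draw on the union of their slot reservations, each channel's worst-case wait for a usable slot shrinks. The slot-table size and reservations below are invented for illustration and do not reflect the paper's allocation algorithm.

```python
# Toy model of a time-multiplexed NoC slot table.
SLOT_TABLE_SIZE = 16

def worst_case_wait(reserved_slots, table_size=SLOT_TABLE_SIZE):
    """Largest gap (in slots) a flit may wait until the next reserved slot."""
    slots = sorted(reserved_slots)
    gaps = [(slots[(i + 1) % len(slots)] - s) % table_size for i, s in enumerate(slots)]
    return max(gaps)

# Three channels to the same destination, two dedicated slots each.
dedicated = {"a": [0, 8], "b": [3, 11], "c": [5, 13]}
print({ch: worst_case_wait(s) for ch, s in dedicated.items()})   # 8 slots each

# Grouped into one channel tree, all three share the union of the reservations.
tree_slots = [s for slots in dedicated.values() for s in slots]
print(worst_case_wait(tree_slots))                               # 3 slots
```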
HPCN
1998
Springer
The GRED Graphical Editor for the GRADE Parallel Program Development Environment
In this paper, we describe the graphical editor GRED, part of the integrated programming environment GRADE, which is intended to support designing, debugging, and performance tuning o...
Péter Kacsuk, Gábor Dózsa, Ti...
GLOBECOM
2006
IEEE
On the Parallelism of Convolutional Turbo Decoding and Interleaving Interference
In forward error correction, convolutional turbo codes were introduced to push error-correction capability toward the Shannon bound. Decoding of these codes, however, ...
Olivier Muller, Amer Baghdadi, Michel Jéz...
SIGMETRICS
2012
ACM
A scalable architecture for maintaining packet latency measurements
Latency has become an important metric for network monitoring since the emergence of new latency-sensitive applications (e.g., algorithmic trading and high-performance computing)....
Myungjin Lee, Nick G. Duffield, Ramana Rao Kompell...
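As a rough illustration of maintaining latency measurements without per-packet state, the sketch below hashes packets into buckets at ingress and egress, keeps only a count and a timestamp sum per bucket, and recovers per-bucket average latency from their difference, discarding buckets where the counts disagree (loss). This follows the spirit of earlier aggregate latency estimators; the bucket count, synthetic timestamps, and loss handling are assumptions, not the architecture proposed in the paper.

```python
import random

B = 64
ingress = [[0, 0.0] for _ in range(B)]   # [packet count, timestamp sum] per bucket
egress  = [[0, 0.0] for _ in range(B)]

random.seed(1)
for pkt_id in range(10_000):
    b = hash(pkt_id) % B                        # both observation points hash identically
    t_in = pkt_id * 0.001                       # synthetic send time (seconds)
    t_out = t_in + random.gauss(0.002, 0.0002)  # synthetic ~2 ms one-way delay
    ingress[b][0] += 1; ingress[b][1] += t_in
    egress[b][0]  += 1; egress[b][1]  += t_out

# Per-bucket average latency; buckets with mismatched counts would be discarded.
usable = [(e[1] - i[1]) / i[0]
          for i, e in zip(ingress, egress)
          if i[0] == e[0] and i[0] > 0]
print(f"estimated mean latency: {sum(usable) / len(usable) * 1000:.2f} ms")
```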