Sciweavers

214 search results - page 11 / 43
» Automatic parallelization for graphics processing units
CSE
2011
IEEE
Parallel Execution of AES-CTR Algorithm Using Extended Block Size
Data encryption and decryption are common operations in network-based applications that require security. In order to keep pace with the input data rate in such applications, ...
Nhat-Phuong Tran, Myungho Lee, Sugwon Hong, Seung-...
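The parallelism this entry relies on comes from counter (CTR) mode itself: every keystream block is a function of only the key, the nonce, and the block index, so blocks can be encrypted independently and in any order. The sketch below illustrates that property in plain Python; SHA-256 stands in for the AES block cipher purely to keep the example self-contained, and the paper's extended-block-size scheme and GPU kernels are not reproduced here.

```python
# Toy sketch of why CTR mode parallelizes: each keystream block depends only on
# (key, nonce, block index), so blocks can be produced independently.
# SHA-256 stands in for the AES block cipher to keep the example self-contained;
# a real implementation would use AES.
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16  # AES block size in bytes

def keystream_block(key: bytes, nonce: bytes, index: int) -> bytes:
    # E_K(nonce || counter): independent of every other block index.
    counter = index.to_bytes(8, "big")
    return hashlib.sha256(key + nonce + counter).digest()[:BLOCK]

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    nblocks = (len(data) + BLOCK - 1) // BLOCK
    # The map below could run on any number of workers (or GPU threads):
    # no block waits on the result of another block.
    with ThreadPoolExecutor() as pool:
        blocks = list(pool.map(lambda i: keystream_block(key, nonce, i), range(nblocks)))
    keystream = b"".join(blocks)[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

if __name__ == "__main__":
    key, nonce = b"0" * 16, b"fixed-nonce"
    msg = b"plaintext that spans several cipher blocks"
    ct = ctr_xcrypt(key, nonce, msg)
    assert ctr_xcrypt(key, nonce, ct) == msg  # in CTR mode, decryption = encryption
```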
ICPADS
2010
IEEE
GMH: A Message Passing Toolkit for GPU Clusters
Driven by the market demand for high-definition 3D graphics, commodity graphics processing units (GPUs) have evolved into highly parallel, multi-threaded, many-core processors, which ...
Jie Chen, William A. Watson III, Weizhen Mao
ICAPR
2005
Springer
Unsupervised Markovian Segmentation on Graphics Hardware
This contribution shows how unsupervised Markovian segmentation techniques can be accelerated when implemented on graphics hardware equipped with a Graphics Processing Unit ...
Pierre-Marc Jodoin, Jean-François St-Amour,...
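As a rough illustration of why Markovian segmentation maps onto graphics hardware, the sketch below performs one ICM-style sweep under a Potts smoothness prior: each pixel's new label depends only on the image and its neighbours' current labels, so all pixels can be updated in parallel (one fragment/thread per pixel on the GPU). The energy terms, parameters, and update schedule here are illustrative assumptions rather than the paper's exact model, and the unsupervised parameter estimation is omitted.

```python
# One ICM sweep for Markovian segmentation with a Potts prior (illustrative):
#   label(p) = argmin_k [ (I(p) - mu_k)^2 / (2 * sigma_k^2) + beta * #{neighbours != k} ]
# The per-pixel update reads only the image and the neighbours' current labels,
# which is what makes a data-parallel (GPU) implementation possible.
import numpy as np

def icm_sweep(image, labels, means, variances, beta=1.0):
    k = len(means)
    # Data term: squared error of each pixel under each class, shape (k, h, w).
    data = (image[None, :, :] - means[:, None, None]) ** 2 / (2.0 * variances[:, None, None])
    # Smoothness term: number of 4-neighbours whose current label differs from k.
    padded = np.pad(labels, 1, mode="edge")
    neighbours = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                           padded[1:-1, :-2], padded[1:-1, 2:]])
    disagree = np.stack([(neighbours != c).sum(axis=0) for c in range(k)])
    energy = data + beta * disagree
    return energy.argmin(axis=0)  # every pixel updated independently

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(0.2, 0.05, (32, 64)),
                          rng.normal(0.8, 0.05, (32, 64))], axis=0)
    lbl = rng.integers(0, 2, img.shape)
    means, variances = np.array([0.2, 0.8]), np.array([0.05 ** 2, 0.05 ** 2])
    for _ in range(5):
        lbl = icm_sweep(img, lbl, means, variances, beta=2.0)
```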
3DPVT
2006
IEEE
Scanline Optimization for Stereo on Graphics Hardware
In this work we propose a scanline optimization procedure for computational stereo using a linear smoothness cost model, executed on programmable graphics hardware. The main idea ...
Christopher Zach, Mario Sormann, Konrad F. Karner
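Scanline optimization with a linear smoothness cost can be written as a 1-D dynamic program, L(x, d) = C(x, d) + min over d' of ( L(x-1, d') + lam * |d - d'| ), with every scanline processed independently, which is what maps well onto graphics hardware. The NumPy sketch below shows that recurrence with a single forward pass and winner-take-all selection; the paper's exact cost terms, pass structure, and GPU mapping are assumptions not reproduced here.

```python
# Generic 1-D scanline optimization for stereo with a linear smoothness cost:
#   L(x, d) = C(x, d) + min_{d'} ( L(x-1, d') + lam * |d - d'| )
# Each scanline is independent of the others, so scanlines can run in parallel.
import numpy as np

def scanline_optimize(cost, lam=0.5):
    """cost: (width, ndisp) matching costs for one scanline; returns disparities."""
    width, ndisp = cost.shape
    d = np.arange(ndisp)
    penalty = lam * np.abs(d[:, None] - d[None, :])  # transition cost |d - d'|
    agg = np.empty_like(cost)
    agg[0] = cost[0]
    for x in range(1, width):
        # For every disparity d, take the cheapest predecessor d' plus its penalty.
        agg[x] = cost[x] + (agg[x - 1][None, :] + penalty).min(axis=1)
    return agg.argmin(axis=1)  # winner-take-all on the aggregated costs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = rng.random((128, 32))   # random matching costs, 32 disparity levels
    raw[:, 7] -= 0.5              # make disparity 7 globally attractive
    print(scanline_optimize(raw, lam=0.2)[:10])
```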
INTENSIVE
2009
IEEE
Accelerating K-Means on the Graphics Processor via CUDA
In this paper an optimized k-means implementation on the graphics processing unit (GPU) is presented. NVIDIA’s Compute Unified Device Architecture (CUDA), available from the G8...
Mario Zechner, Michael Granitzer
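The data parallelism behind a GPU k-means is easiest to see in the assignment step: each point's nearest-centroid search is independent of every other point's, so it maps naturally to one GPU thread per point. The NumPy sketch below only shows that algorithmic structure on the CPU; it does not reproduce the paper's CUDA kernels or their optimizations.

```python
# Sketch of the data-parallel structure behind GPU k-means: the assignment of
# each point to its nearest centroid is independent of all other points, so the
# assignment step maps to one thread per point on the GPU. NumPy is used here
# only to illustrate the structure, not the paper's CUDA implementation.
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assignment step: squared distance of every point to every centroid, (n, k).
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Update step: mean of the points assigned to each centroid.
        for c in range(k):
            members = points[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels, centroids

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    data = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
    labels, centers = kmeans(data, k=2)
    print(centers)
```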