Sciweavers

209 search results - page 10 / 42
IPPS
2010
IEEE
BlobSeer: Bringing high throughput under heavy concurrency to Hadoop Map-Reduce applications
Hadoop is a software framework supporting the Map/Reduce programming model. It relies on the Hadoop Distributed File System (HDFS) as its primary storage system. The efficiency of ...
Bogdan Nicolae, Diana Moise, Gabriel Antoniu, Luc ...
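The Map/Reduce programming model named in the abstract above can be illustrated with a minimal single-process word count. This is a plain-Python sketch of the model's map and reduce phases, not Hadoop's actual Java API; the function and variable names are illustrative only.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each distinct key (word)."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog"]
result = reduce_phase(map_phase(docs))
print(result["the"])  # prints 2
```

In a real Hadoop deployment the map and reduce tasks run in parallel across the cluster, with intermediate pairs shuffled by key and input/output stored in HDFS (which is the storage layer BlobSeer targets).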
EUROSYS
2007
ACM
Automatic configuration of Internet services
Recent research has found that operators frequently misconfigure Internet services, causing various availability and performance problems. In this paper, we propose a software inf...
Wei Zheng, Ricardo Bianchini, Thu D. Nguyen
COMPUTER
2002
SimpleScalar: An Infrastructure for Computer System Modeling
...detail defines the level of abstraction used to implement the model's components. A highly detailed model will faithfully simulate all aspects of machine operation, whether or n...
Todd M. Austin, Eric Larson, Dan Ernst
CCGRID
2009
IEEE
Failure-Aware Construction and Reconfiguration of Distributed Virtual Machines for High Availability Computing
In large-scale clusters and computational grids, component failures become norms instead of exceptions. Failure occurrence as well as its impact on system performance and operatio...
Song Fu
GRID
2004
Springer
DIRAC: A Scalable Lightweight Architecture for High Throughput Computing
DIRAC (Distributed Infrastructure with Remote Agent Control) has been developed by the CERN LHCb physics experiment to facilitate large scale simulation and user analysis tasks...
Andrei Tsaregorodtsev, Vincent Garonne, Ian Stokes...