Sciweavers

139 search results - page 20 / 28
Query: Software Fault Tolerance of Distributed Programs Using Compu...
SC 2009, ACM
Kepler + Hadoop: a general architecture facilitating data-intensive applications in scientific workflow systems
MapReduce provides a parallel and scalable programming model for data-intensive business and scientific applications. MapReduce and its de facto open-source implementation, Hadoop...
Jianwu Wang, Daniel Crawl, Ilkay Altintas
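To illustrate the MapReduce programming model referenced in this entry's abstract, here is a minimal single-process word-count sketch in Python. The function names and the in-memory shuffle step are illustrative assumptions, not code from the Kepler + Hadoop paper or from Hadoop itself.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in one input record.
    for word in document.split():
        yield (word, 1)

def reduce_phase(word, counts):
    # Reduce: combine all counts emitted for the same key.
    return (word, sum(counts))

def mapreduce_wordcount(documents):
    # Shuffle: group intermediate pairs by key before reducing.
    grouped = defaultdict(list)
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)
    return dict(reduce_phase(w, c) for w, c in grouped.items())

if __name__ == "__main__":
    # Example run over two tiny "documents".
    print(mapreduce_wordcount(["a rose is a rose", "a daisy is a daisy"]))
```

In a real MapReduce system the map and reduce functions are the user-supplied parts, while partitioning, shuffling, fault tolerance, and parallel execution are handled by the framework.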
APPT 2009, Springer
Evaluating SPLASH-2 Applications Using MapReduce
MapReduce has become prevalent for running data-parallel applications. By hiding non-functional concerns such as parallelism, fault tolerance, and load balancing from programmers,...
Shengkai Zhu, Zhiwei Xiao, Haibo Chen, Rong Chen, ...
KBSE 2005, IEEE
Empirical evaluation of the tarantula automatic fault-localization technique
The high cost of locating faults in programs has motivated the development of techniques that assist in fault localization by automating part of the process of searching for fault...
James A. Jones, Mary Jean Harrold
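The entry above concerns the Tarantula technique, which ranks program statements by a suspiciousness score computed from per-statement pass/fail coverage counts. The sketch below is a minimal Python rendering of that metric as commonly stated in the fault-localization literature; the function and parameter names are illustrative, not taken from the paper.

```python
def tarantula_suspiciousness(passed_s, failed_s, total_passed, total_failed):
    # Fraction of failing tests that execute statement s.
    fail_ratio = failed_s / total_failed if total_failed else 0.0
    # Fraction of passing tests that execute statement s.
    pass_ratio = passed_s / total_passed if total_passed else 0.0
    denominator = pass_ratio + fail_ratio
    # Score is in [0, 1]; higher means the statement is covered
    # proportionally more by failing tests than by passing ones.
    return fail_ratio / denominator if denominator else 0.0

# Example: a statement covered by 3 of 4 failing tests and 1 of 10 passing tests
# scores about 0.88, placing it high in the ranked list.
print(tarantula_suspiciousness(passed_s=1, failed_s=3, total_passed=10, total_failed=4))
```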
PADL 2009, Springer
Operational Semantics for Declarative Networking
Declarative Networking has recently been promoted as a high-level programming paradigm for more conveniently describing and implementing systems that run in a distributed fashion over a...
Juan A. Navarro, Andrey Rybalchenko
APL 1993, ACM
The Role of APL and J in High-Performance Computation
Although multicomputers are becoming feasible for solving large problems, they are difficult to program: extraction of parallelism from scalar languages is possible, but limited...
Robert Bernecky