Internet supercomputing is becoming an increasingly popular means for harnessing the power of a vast number of interconnected computers. This comes at a cost substantially lower than acquiring a supercomputer or building a cluster of powerful machines. However, with it come the challenges of marshaling distributed resources and dealing with failures. Traditional centralized approaches to network supercomputing employ a master processor and a large number of worker processors that must execute a collection of tasks on behalf of the master. In such a centralized scheme, the master processor is a performance bottleneck and a single point of failure. Additionally, a phenomenon of increasing concern is that workers may return incorrect results. This may happen due to unintended failures caused, for example, by over-clocked processors, or because workers claim to have performed assigned work in order to obtain the incentives associated with earning a high rank in the system. This paper develops an original a...
Seda Davtyan, Kishori M. Konwar, Alexander A. Shvartsman
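To make the drawbacks of the centralized scheme concrete, the following is a minimal sketch (not the paper's algorithm) of a master that farms out a collection of tasks to workers and simply trusts whatever results come back. All names, the fault rate, and the "unreliable worker" model are hypothetical and chosen only for illustration.

# Toy illustration of a centralized master-worker scheme (hypothetical code,
# not the algorithm developed in this paper). The master is the single
# scheduler and collector, hence a bottleneck and a single point of failure,
# and it cannot distinguish correct results from incorrect ones.
import random
from concurrent.futures import ThreadPoolExecutor

TASKS = list(range(20))      # a collection of independent tasks
FAULT_RATE = 0.1             # assumed probability a worker returns a bad result

def compute(task):
    """The intended computation for a single task (here, just squaring)."""
    return task * task

def unreliable_worker(task):
    """A worker that occasionally returns an incorrect result."""
    if random.random() < FAULT_RATE:
        return compute(task) + 1   # wrong answer, indistinguishable to the master
    return compute(task)

def master(tasks, num_workers=4):
    """Central master: assigns every task and collects every result itself."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        results = list(pool.map(unreliable_worker, tasks))
    # Without redundancy or verification, bad results are accepted silently.
    return dict(zip(tasks, results))

if __name__ == "__main__":
    print(master(TASKS))

The sketch makes the two concerns raised above visible: all coordination funnels through the single master, and incorrect results pass undetected unless extra redundancy or verification is added.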