
IPPS 1998, IEEE

Using the BSP Cost Model to Optimise Parallel Neural Network Training

We derive cost formulae for three different parallelisation techniques for training supervised neural networks. These formulae are parameterised by properties of the target computer architecture, so the best match between a parallel computer and a training technique can be determined. One technique, exemplar parallelism, is far superior on almost all parallel computer architectures. The formulae also take optimal batch learning into account as the overall training approach.
R. O. Rogers, David B. Skillicorn
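
For context, the BSP cost model referred to in the title charges each superstep of a program for its local computation, its communication, and a barrier synchronisation. The sketch below gives the standard form of that cost only as background; the paper's own formulae for the three parallelisation techniques are not reproduced here, and the symbols w_i, h_i, g, l and S are the usual BSP notation rather than notation taken from the paper.

  C_i = w_i + g \cdot h_i + l
  C   = \sum_{i=1}^{S} \bigl( w_i + g \cdot h_i + l \bigr)

Here w_i is the largest local work done by any processor in superstep i, h_i the largest number of words any processor sends or receives in that superstep, g the machine's per-word communication cost, l its barrier synchronisation cost, and S the number of supersteps. Under an exemplar-parallel (data-parallel) scheme, each processor trains on a subset of the exemplars and the weight changes are combined each batch, so the per-batch communication volume scales roughly with the number of weights rather than with the number of exemplars, which is consistent with the abstract's conclusion that exemplar parallelism wins on most architectures.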
Type: Conference
Year: 1998
Where: IPPS
Authors: R. O. Rogers, David B. Skillicorn