Towards definitive benchmarking of algorithm performance

One of the primary methods researchers use to judge the merits of new heuristics and algorithms is to run them on accepted benchmark test cases and compare their performance against existing approaches. Such test cases can be either generated or pre-defined, and both approaches have shortcomings: generated data may be accidentally or deliberately skewed to favor the algorithm being tested, and the exact data is usually unavailable to other researchers, while pre-defined benchmarks may become outdated. This paper describes a secure online benchmark facility called the Benchmark Server, which would store and run submitted programs in different languages on standard benchmark test cases for different problems and generate performance statistics. With carefully chosen and up-to-date test cases, the Benchmark Server could provide researchers with a definitive means of comparing their new methods with the best existing methods on the latest data.
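The abstract's core idea, running submitted programs on a shared set of test cases and reporting performance statistics, can be illustrated with a minimal sketch. The class name, method names, and statistics below are illustrative assumptions, not the paper's actual design:

```python
import time
import statistics

class BenchmarkServer:
    """Hypothetical sketch of the Benchmark Server idea: submissions are
    run on a fixed set of stored test cases so that all algorithms are
    compared on identical data, and timing statistics are recorded."""

    def __init__(self):
        self.test_cases = {}   # problem name -> list of test inputs
        self.results = {}      # (problem, submission name) -> stats dict

    def add_test_cases(self, problem, cases):
        # The server, not the submitter, controls the benchmark data.
        self.test_cases[problem] = list(cases)

    def submit(self, problem, name, solver):
        # Run the submitted solver on every stored case, timing each run.
        times, outputs = [], []
        for case in self.test_cases[problem]:
            start = time.perf_counter()
            outputs.append(solver(case))
            times.append(time.perf_counter() - start)
        stats = {
            "mean_time": statistics.mean(times),
            "max_time": max(times),
            "outputs": outputs,
        }
        self.results[(problem, name)] = stats
        return stats

# Usage: two submissions benchmarked on the same stored test data.
server = BenchmarkServer()
server.add_test_cases("sorting", [[3, 1, 2], [5, 4], []])
stats = server.submit("sorting", "builtin-sort", sorted)
print(stats["outputs"])
```

Because the test cases live on the server, no submitter can skew the data toward their own algorithm, which is the shortcoming of self-generated benchmarks that the paper highlights.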
Andrew Lim, Wee-Chong Oon, Wenbin Zhu
Added: 31 Oct 2010
Updated: 31 Oct 2010
Type: Conference
Year: 2003
Where: ECIS
Authors: Andrew Lim, Wee-Chong Oon, Wenbin Zhu