Most flight software testing at the Jet Propulsion Laboratory relies on hand-produced test scenarios and is executed on systems as similar as possible to actual mission hardware. We report on a flight software development effort that incorporated large-scale (biased) randomized testing on commodity desktop hardware. The results show that the combination of a reference implementation, hardware simulation with fault injection, a testable design, and test minimization enabled a high degree of automation in fault detection and correction. Our experience will be of particular interest to developers working in domains where on-time delivery of software is critical (a strong argument for randomized automated testing) but correctness and reliability cannot be sacrificed (a strong argument for model checking, theorem proving, and other heavyweight techniques). The effort spent on randomized testing can prepare the way for establishing more complete confidence through such heavyweight techniques.
Alex Groce, Gerard J. Holzmann, Rajeev Joshi
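The abstract itself contains no code, but as a rough illustration of the core technique it names, the following is a minimal, self-contained C sketch of biased randomized testing against a reference implementation. Everything in it is hypothetical and stands in for the paper's actual flight software and test harness: the toy bounded stack, the names `sut_push`/`sut_pop`/`ref_push`/`ref_pop`, the seeded bug, and the 2:1 bias toward pushes are all invented for illustration.

```c
/* Minimal sketch of biased randomized differential testing.
 * All names and the toy stack are hypothetical stand-ins, not the
 * authors' actual flight code or harness. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define CAP 64

/* Hypothetical system under test: a bounded stack with a seeded bug. */
static int sut[CAP], sut_top = 0;
static int sut_push(int v) {
    if (sut_top >= CAP) return -1;
    sut[sut_top++] = v;
    return 0;
}
static int sut_pop(int *v) {
    if (sut_top == 0) return -1;
    *v = sut[--sut_top];
    if (*v == 42) *v = 0;   /* seeded bug, for illustration only */
    return 0;
}

/* Reference implementation: same interface, trusted behavior. */
static int ref[CAP], ref_top = 0;
static int ref_push(int v) { if (ref_top >= CAP) return -1; ref[ref_top++] = v; return 0; }
static int ref_pop(int *v) { if (ref_top == 0) return -1; *v = ref[--ref_top]; return 0; }

int main(void) {
    unsigned seed = 12345;  /* fixed seed: any failure replays deterministically */
    srand(seed);
    for (long i = 0; i < 1000000; i++) {
        /* Biased choice: push twice as often as pop, so the stack fills
         * and deeper states are reached more often than a uniform mix. */
        if (rand() % 3 != 0) {
            int v = rand() % 100;
            int rs = sut_push(v), rr = ref_push(v);
            assert(rs == rr);   /* return codes must agree */
        } else {
            int vs = -1, vr = -1;
            int rs = sut_pop(&vs), rr = ref_pop(&vr);
            assert(rs == rr);
            if (rs == 0 && vs != vr) {
                printf("divergence at step %ld: sut=%d ref=%d (seed=%u)\n",
                       i, vs, vr, seed);
                return 1;
            }
        }
    }
    puts("no divergence found");
    return 0;
}
```

The fixed seed makes a failing run a reproducible artifact, which is what makes the test minimization the abstract mentions possible: a divergent operation sequence can be replayed and shrunk to a small failure-inducing test.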