Abstract—Testing of graphical user interfaces is important due to its potential to reveal faults in the operation and performance of the system under consideration. Most existing test approaches generate test cases as sequences of events of varying length. The cost of the test process depends on the number and total length of these test sequences. One of the problems encountered is the determination of an appropriate test sequence length. A widely accepted hypothesis is that the longer the test sequences, the higher the chances of detecting faults. However, there is no evidence that an increase in test sequence length really affects fault detection. This paper introduces a reliability-theoretical approach to analyze the problem in the light of real-life case studies. Based on a reliability growth model, the approach predicts the expected number of additional faults that will be detected when the length of the test sequences is increased.
Fevzi Belli, Michael Linschulte, Christof J. Budnik
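As a minimal sketch of the prediction step: the abstract does not name the specific reliability growth model, so the Goel-Okumoto NHPP model is assumed here purely for illustration, with parameters $a, b > 0$ and lengths $l_1, l_2$ as illustrative symbols. Let $\mu(l)$ denote the expected cumulative number of faults detected after a total test sequence length $l$, with $a$ and $b$ estimated from observed fault data:
\[
  \mu(l) = a \left( 1 - e^{-b l} \right).
\]
Under this assumption, the expected number of additional faults detected when the total test sequence length is extended from $l_1$ to $l_2 > l_1$ is
\[
  \Delta\mu = \mu(l_2) - \mu(l_1) = a \left( e^{-b l_1} - e^{-b l_2} \right),
\]
which shrinks as $l_1$ grows, capturing the diminishing returns that a prediction of this kind can quantify.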