Complete and accurate video tracking is difficult to achieve in practice because of long occlusions, traffic clutter, shadows, and appearance changes. In this paper, we study the feasibility of event recognition when object tracks are fragmented. By varying the lock score threshold that controls track termination, we generate different levels of track fragmentation. The effect on event recognition is measured by examining the event model match score as a function of the lock score threshold. Using a Dynamic Bayesian Network to model events, we show that event recognition actually improves with greater track fragmentation, provided that fragments belonging to the same object are linked together. The improvement continues up to a point beyond which it is offset by other errors, such as those caused by frequent object reinitialization. The study is conducted on busy scenes of airplane servicing activities, where long tracking gaps occur intermittently.
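As a minimal sketch of the experimental loop the abstract describes (not the paper's implementation), the following Python code sweeps a lock score threshold, splits a track into fragments wherever the per-frame lock score falls below it, re-links fragments of the same object, and records a stand-in event-model score at each threshold. The names `fragment_track`, `link_fragments`, and `event_match_score` are illustrative placeholders, and the coverage-based score is a toy proxy for the DBN match score, which the paper computes from its event model.

```python
from typing import List


def fragment_track(lock_scores: List[float], threshold: float) -> List[List[int]]:
    """Split one object's track into fragments: a fragment ends whenever the
    per-frame lock score drops below `threshold` (the tracker loses lock)."""
    fragments, current = [], []
    for frame, score in enumerate(lock_scores):
        if score >= threshold:
            current.append(frame)
        elif current:
            fragments.append(current)
            current = []
    if current:
        fragments.append(current)
    return fragments


def link_fragments(fragments: List[List[int]]) -> List[int]:
    """Link fragments of the same object into one observation sequence
    (simple concatenation here; the paper assumes such links are available)."""
    return [frame for frag in fragments for frame in frag]


def event_match_score(observed_frames: List[int], event_length: int) -> float:
    """Toy placeholder for the DBN event-model match score: the fraction of
    the event's frames covered by linked track observations."""
    return len(set(observed_frames)) / float(event_length)


if __name__ == "__main__":
    # Synthetic per-frame lock scores for one object over a 10-frame event.
    lock_scores = [0.9, 0.8, 0.3, 0.7, 0.95, 0.2, 0.6, 0.85, 0.4, 0.9]
    for threshold in (0.25, 0.5, 0.75):
        fragments = fragment_track(lock_scores, threshold)
        linked = link_fragments(fragments)
        score = event_match_score(linked, event_length=len(lock_scores))
        print(f"threshold={threshold:.2f}  fragments={len(fragments)}  "
              f"match score={score:.2f}")
```

Raising the threshold produces more, shorter fragments; in the paper's setting this trade-off is evaluated with the actual event model rather than the coverage proxy used above.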