We investigate the automatic labelling of “events” in audio recordings of sports games. We describe a technique based on a hierarchy of models: a low-level model of acoustic observations and a high-level language model of the audio events that occur during a game, integrated using a maximum entropy approach. Our event models also exploit duration and voicing information in addition to spectral content, and we show that these features enable further discrimination between events. Results on several tennis games show that this approach outperforms a baseline that models neither the dependencies between frames and events nor the additional duration and voicing information.
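The maximum entropy integration of the two model levels can be sketched as a log-linear combination of their per-event scores. The sketch below is illustrative only: the event labels, scores, and weights are hypothetical, not taken from the paper.

```python
import math

def maxent_combine(feature_scores, weights):
    """Log-linear (maximum entropy) combination of per-label model scores.

    feature_scores: list of dicts mapping event label -> log-score,
                    one dict per component model (e.g. acoustic, event LM).
    weights:        one weight per component model.
    Returns a normalised posterior distribution over event labels.
    """
    labels = feature_scores[0].keys()
    unnorm = {
        lab: math.exp(sum(w * f[lab] for w, f in zip(weights, feature_scores)))
        for lab in labels
    }
    z = sum(unnorm.values())  # partition function
    return {lab: v / z for lab, v in unnorm.items()}

# Hypothetical per-event log-scores from a low-level acoustic model ...
acoustic = {"applause": -2.0, "commentary": -1.0, "ball-hit": -3.0}
# ... and from a high-level model of event sequences.
event_lm = {"applause": -1.5, "commentary": -2.5, "ball-hit": -0.5}

posterior = maxent_combine([acoustic, event_lm], weights=[1.0, 0.8])
```

In practice the weights would be trained to maximise the likelihood of labelled data; here they are fixed purely for illustration.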