Abstract. This paper presents an approach to adding structure to recordings of collaborative meetings supported by an audio channel and a shared text editor. The virtual meeting environment we use captures and broadcasts speech, gestures, and editing operations in real time, so a recorded meeting yields continuous multimedia data. We describe the implementation of a browser that exploits simple linkage patterns between these media to support information retrieval through non-linear browsing, and we discuss the audio segmentation issues that arise from this approach.