Augmented reality applications often rely on a detailed environment model to support features such as annotation and occlusion. Usually, such a model is constructed offline, which restricts the generality and mobility of the AR experience, while in online SLAM approaches the fidelity of the model typically remains at the level of sparse landmark feature maps. In this work we introduce a system that constructs a textured geometric model of the user's environment as it is being explored. First, 3D feature tracks are grouped into roughly planar surfaces. Then, image patches in keyframes are assigned to these planes using stereo analysis. The system runs as a background process, continually updating and improving the model over time. The resulting environment model can be rendered into new frames to aid several common but difficult AR tasks, such as accurate real-virtual occlusion and annotation placement.
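As a minimal illustration of the first step, grouping 3D feature tracks into roughly planar surfaces, the sketch below greedily extracts planes from triangulated feature positions with RANSAC. The abstract does not specify the grouping algorithm, so the function names, thresholds, and the fit-and-remove loop here are assumptions for illustration, not the authors' actual method:

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.02, min_inliers=30):
    """Fit one dominant plane to 3D points with RANSAC.

    points: (N, 3) array of triangulated feature positions.
    Returns (normal, d, inlier_mask) for the plane n.x + d = 0,
    or None if no plane gathers enough support. All thresholds
    are illustrative placeholders.
    """
    best_inliers, best_plane = None, None
    rng = np.random.default_rng(0)
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:  # degenerate (near-collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        # Inliers lie within inlier_thresh of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    if best_inliers is None or best_inliers.sum() < min_inliers:
        return None
    return best_plane[0], best_plane[1], best_inliers

def group_into_planes(points, max_planes=5):
    """Greedily extract planes: fit the dominant plane, remove its
    inliers, and repeat on the remaining points."""
    remaining = np.arange(len(points))
    planes = []
    while len(remaining) >= 3 and len(planes) < max_planes:
        result = fit_plane_ransac(points[remaining])
        if result is None:
            break
        normal, d, inliers = result
        planes.append((normal, d, remaining[inliers]))
        remaining = remaining[~inliers]
    return planes
```

A greedy sequential extraction like this is one common way to organize a sparse point cloud into planar groups; running it incrementally as new feature tracks are triangulated would match the background, continually-refining character of the system described above.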