We present a system for rendering very complex 3D models at interactive rates. We select a subset of the model as preferred viewpoints and partition the space into virtual cells. Each cell contains near geometry, rendered using levels of detail and visibility culling, and far geometry, rendered as a textured depth mesh. Our system automatically balances the screen-space errors resulting from geometric simplification with those from textured-depth-mesh distortion. We describe our prefetching and data management schemes, both crucial for models significantly larger than available system memory. We have successfully used our system to accelerate walkthroughs of a 13-million-triangle model.
Daniel G. Aliaga, Jonathan D. Cohen, Andrew Wilson
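As a minimal sketch of the cell-based near/far split and screen-space error budgeting described in the abstract (all names, thresholds, and data layouts below are illustrative assumptions, not the paper's implementation):

```python
import math

# Hypothetical cull radius: geometry within this distance of a cell's
# viewpoint is treated as "near" (rendered with LODs and visibility
# culling); everything else is "far" (replaced by a textured depth mesh).
CULL_RADIUS = 50.0

def classify_geometry(cell_viewpoint, objects):
    """Split objects into near and far sets for one virtual cell."""
    near, far = [], []
    for obj in objects:
        dx = obj["center"][0] - cell_viewpoint[0]
        dy = obj["center"][1] - cell_viewpoint[1]
        dz = obj["center"][2] - cell_viewpoint[2]
        if math.sqrt(dx * dx + dy * dy + dz * dz) <= CULL_RADIUS:
            near.append(obj)
        else:
            far.append(obj)
    return near, far

def choose_lod(obj, distance, fov_y=math.radians(60),
               screen_height=1080, error_budget_px=2.0):
    """Pick the coarsest LOD whose projected geometric error fits a
    screen-space error budget (in pixels) at the given view distance."""
    # Pixels per world unit at this distance under a pinhole projection.
    px_per_unit = screen_height / (2.0 * distance * math.tan(fov_y / 2.0))
    # lod_errors is assumed sorted coarsest-first (largest error first).
    for lod_index, geom_error in enumerate(obj["lod_errors"]):
        if geom_error * px_per_unit <= error_budget_px:
            return lod_index
    return len(obj["lod_errors"]) - 1  # fall back to the finest LOD
```

The same pixel budget could, in principle, be compared against the reprojection error of a cell's textured depth mesh to balance the two error sources, as the abstract describes.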