Real-time tessellation methods upsample 3D surface meshes on the fly during rendering. This upsampling relies on three major components. First, a tessellation kernel, which can be implemented on the GPU or may already be available as a hardware unit. Second, a surface model, which defines the positions of the newly inserted vertices; we focus on recent visually smooth models. Third, an adaptive sampling strategy, which tailors the spatially varying distribution of newly inserted vertices over the input surface. We study this last component and introduce a new view-dependent adaptivity metric that builds on both intrinsic and extrinsic criteria of the input mesh. As a result, we obtain a better vertex distribution around important features in the tessellated mesh.
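To make the role of such an adaptivity metric concrete, the C++ sketch below shows one plausible way to combine an intrinsic criterion (a precomputed per-edge curvature estimate) with an extrinsic one (the projected screen-space length of the edge) into a per-edge tessellation factor. All names, weights, and the clamped factor range are illustrative assumptions and do not reproduce the paper's actual metric.

```cpp
// Hypothetical sketch: a view-dependent adaptivity metric blending an
// intrinsic criterion (local curvature of the input mesh) with an
// extrinsic one (projected screen-space size of an edge).
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
static Vec3 mid(const Vec3& a, const Vec3& b) {
    return {0.5f * (a.x + b.x), 0.5f * (a.y + b.y), 0.5f * (a.z + b.z)};
}

// Tessellation factor for one edge (v0, v1) of the input mesh.
//   curvature        - per-edge curvature estimate (intrinsic criterion)
//   eyePos           - camera position in world space (extrinsic criterion)
//   fovScale         - screen height in pixels / (2 * tan(fov / 2))
//   pixelsPerSegment - target on-screen length of a tessellated segment
//   curvatureWeight  - illustrative knob trading view vs. curvature terms
float edgeTessFactor(const Vec3& v0, const Vec3& v1, float curvature,
                     const Vec3& eyePos, float fovScale,
                     float pixelsPerSegment, float curvatureWeight) {
    // Extrinsic term: approximate projected edge length in pixels,
    // using the distance from the eye to the edge midpoint.
    float dist = std::max(length(sub(mid(v0, v1), eyePos)), 1e-4f);
    float screenLen = length(sub(v1, v0)) * fovScale / dist;
    float extrinsic = screenLen / pixelsPerSegment;

    // Intrinsic term: spend more vertices where the surface bends.
    float intrinsic = 1.0f + curvatureWeight * curvature;

    // Combined factor, clamped to a range a hardware tessellation
    // unit typically accepts.
    return std::clamp(extrinsic * intrinsic, 1.0f, 64.0f);
}
```

In a real pipeline this computation would typically run per edge in a hull/tessellation-control shader; the key design point the sketch illustrates is that the extrinsic term alone would distribute vertices uniformly in screen space, while the intrinsic term biases that budget toward high-curvature features.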