The technological advance of sensors is producing an exponential growth in the size of the data coming from 3D scanning and digital photography. Producing digital 3D models consisting of tens or even hundreds of millions of triangles is quite easy nowadays; at the same time, with high-resolution digital cameras it is also straightforward to acquire a set of pictures of the same real object totalling more than 50 megapixels. The problem is how to manage all this data to produce 3D models that fit the constraints of interactive rendering. A common approach is mesh parametrization followed by texture synthesis, but finding a parametrization for such large meshes and managing such large textures can be prohibitive. Moreover, photographic sampling produces highly redundant data; this redundancy should be eliminated when mapping the photos onto the 3D model but, at the same time, it should also be exploited to improve the coherence of the sampled data and the accuracy of the appearance representation. In thi...