A commonly encountered problem when creating 3D models of large real scenes is unnatural color texture fusion. Due to variations in lighting and camera settings (both manual and automatic), captured color texture maps of the same 3D structures can have very different appearances. When fusing multiple texture maps to create larger models, this color variation produces a poor result, with a patchwork of differently colored tiles appearing on homogeneous surfaces. This paper extends previous research on pairwise global color correction to multiple overlapping texture map images. The central idea is to estimate a set of blending transformations that minimize the overall global color discrepancy between the texture maps, thereby spreading residual color errors across the maps rather than letting them accumulate.
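To make the central idea concrete, the sketch below shows one way such blending transformations could be estimated. It is a minimal illustration, not the formulation used in the paper: it assumes a per-channel gain/offset correction for each texture map, corresponding color statistics sampled from each pairwise overlap, and one anchor map fixed to the identity so the remaining corrections are determined jointly by least squares, which distributes the residual color error over all maps. The function name, the overlap data, and the anchoring scheme are all hypothetical.

```python
import numpy as np

def estimate_color_transforms(overlaps, n_images, anchor=0):
    """Estimate a per-image gain/offset color correction for one channel.

    overlaps: list of (i, j, vals_i, vals_j), where vals_i and vals_j are
    corresponding color statistics (e.g. mean patch intensities) sampled
    from the overlap between texture maps i and j.

    Solves, in the least-squares sense,
        g_i * v + o_i  ~=  g_j * w + o_j   for each corresponding pair (v, w),
    with the `anchor` map fixed to the identity transform, so residual
    color error is spread over all maps instead of accumulating.
    """
    rows, rhs = [], []                    # unknowns: [g_0..g_{n-1}, o_0..o_{n-1}]
    for i, j, vals_i, vals_j in overlaps:
        for v, w in zip(vals_i, vals_j):
            row = np.zeros(2 * n_images)
            row[i], row[n_images + i] = v, 1.0      # + g_i * v + o_i
            row[j], row[n_images + j] = -w, -1.0    # - g_j * w - o_j
            rows.append(row)
            rhs.append(0.0)
    # Anchor constraints: g_anchor = 1, o_anchor = 0 (heavily weighted)
    for k, target in ((anchor, 1.0), (n_images + anchor, 0.0)):
        row = np.zeros(2 * n_images)
        row[k] = 1e3
        rows.append(row)
        rhs.append(1e3 * target)
    x, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return x[:n_images], x[n_images:]     # gains, offsets

# Hypothetical overlap statistics for four texture maps (single channel):
overlaps = [
    (0, 1, [120.0, 80.0], [135.0, 95.0]),
    (1, 2, [140.0, 60.0], [128.0, 52.0]),
    (2, 3, [110.0, 70.0], [118.0, 79.0]),
    (3, 0, [125.0, 90.0], [122.0, 86.0]),
]
gains, offsets = estimate_color_transforms(overlaps, n_images=4)
print(gains, offsets)
```

Because every overlap contributes equations to a single joint system, no texture map serves as the end of a correction chain; the least-squares solution balances the color discrepancies globally, which is the behavior the abstract describes.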
Nobuyuki Bannai, Robert B. Fisher, Alexandros Agathos