We present a system at the junction of Computer Vision and Computer Graphics that produces a 3-D model of an object observed in a single image, with a minimum of high-level interaction from the user. The input to our system is a single image. First, the user points coarsely at image features (edges), which are then automatically and reproducibly extracted in real time. The user then performs a high-level labeling of the curves (e.g. limb edge, cross-section) and specifies relations between edges (e.g. symmetry, surface, or part). NURBS serve as the working representation of image edges. The objects described by these user-specified, qualitative relationships are then reconstructed either as a set of connected parts modeled as Generalized Cylinders, or as a set of 3-D surfaces for bilaterally symmetric 3-D objects. In both cases, the texture is also extracted from the image.
Alexandre R. J. François, Gérard G.
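The abstract describes a pipeline in which user-labeled NURBS curves and pairwise relations between them drive one of two reconstruction paths. The following Python sketch is a hypothetical data model for that annotation stage; all class and field names are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical data model for the annotation stage implied by the abstract:
# each image edge is stored as a NURBS curve carrying a high-level label,
# and relations link curves together. Names are illustrative only.

@dataclass
class EdgeCurve:
    # NURBS working representation: control points, knot vector, degree.
    control_points: List[Tuple[float, float]]
    knots: List[float]
    degree: int
    label: str  # e.g. "limb-edge" or "cross-section"

@dataclass
class Relation:
    kind: str                 # e.g. "symmetry", "surface", "part"
    curves: Tuple[int, int]   # indices into Annotation.curves

@dataclass
class Annotation:
    curves: List[EdgeCurve] = field(default_factory=list)
    relations: List[Relation] = field(default_factory=list)

    def reconstruction_mode(self) -> str:
        # The abstract names two reconstruction paths: connected parts
        # modeled as Generalized Cylinders, or 3-D surfaces for bilaterally
        # symmetric objects. This dispatch rule is an assumption for
        # illustration: symmetry relations trigger the surface path.
        if any(r.kind == "symmetry" for r in self.relations):
            return "symmetric-surfaces"
        return "generalized-cylinders"
```

Such a structure would let the interactive front end accumulate labels and relations incrementally before handing the complete annotation to the chosen reconstruction back end.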