We describe a scalable approach to 3D smooth object retrieval which searches for and localizes all occurrences of a user-outlined object in a dataset of images in real time. The approach is illustrated on sculptures. A smooth object is represented by its material appearance (sufficient for foreground/background segmentation) and its imaged shape (using a set of semi-local boundary descriptors). The descriptors are tolerant to scale changes, segmentation failures, and limited viewpoint changes. Furthermore, we show that the descriptors may be vector quantized (into a bag-of-boundaries), giving a representation suited to standard visual word architectures for immediate retrieval of specific objects. We introduce a new dataset of 6K images containing sculptures by Moore and Rodin, annotated with ground truth for the occurrence of twenty 3D sculptures. It is demonstrated that recognition can proceed successfully despite changes in viewpoint, illumination, and partial occlusion.
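To make the retrieval pipeline concrete, the following is a minimal, hypothetical sketch of the standard bag-of-visual-words machinery the abstract alludes to (not the paper's actual boundary descriptors): local descriptors are vector quantized against a learned codebook, each image is summarized as a word histogram, and retrieval ranks database images by tf-idf-weighted cosine similarity. All names, sizes, and data here are illustrative toy choices.

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each descriptor (one per row) to its nearest codeword."""
    # Pairwise squared distances between descriptors and codewords.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bag_of_words(descriptors, codebook):
    """Histogram of quantized descriptors over the visual vocabulary."""
    words = quantize(descriptors, codebook)
    return np.bincount(words, minlength=len(codebook)).astype(float)

def tfidf_rank(query_hist, db_hists):
    """Rank database histograms by tf-idf-weighted cosine similarity."""
    db = np.vstack(db_hists)
    # idf: down-weight words that occur in many database images.
    df = (db > 0).sum(axis=0)
    idf = np.log((1 + len(db_hists)) / (1 + df))
    q = query_hist * idf
    d = db * idf
    q = q / (np.linalg.norm(q) + 1e-12)
    d = d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-12)
    return (d @ q).argsort()[::-1]  # indices, best match first

# Toy example: 8-word vocabulary, 16-D descriptors, 5 database images.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))
db = [bag_of_words(rng.normal(size=(30, 16)), codebook) for _ in range(5)]
query = db[2].copy()            # query identical to database image 2
ranking = tfidf_rank(query, db)
print(ranking[0])               # image 2 ranks first
```

In a production system the histograms would be stored in an inverted index so that, as in the visual word architectures the abstract mentions, only images sharing at least one word with the query are scored, which is what makes real-time retrieval over large datasets feasible.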