Image artifacts that result from sensor dust are a common and annoying problem for many photographers. To reduce the appearance of dust in an image, we first formulate a model of artifact formation due to sensor dust. With this artifact-formation model, we make use of contextual information in the image and a color-consistency constraint on dust to remove these artifacts. When multiple images are available from the same camera, even when captured under different camera settings, this approach can also be used to reliably detect dust regions on the sensor. In contrast to image inpainting and other hole-filling methods, the proposed technique utilizes the image information within a dust region to guide the use of contextual data. Joint use of these multiple cues leads to image recovery results that are not only visually pleasing but also faithful to the actual scene. The effectiveness of this method is demonstrated in experiments with various cameras.
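The abstract does not specify the artifact-formation model itself. As a rough illustration only, sensor dust is commonly modeled as a multiplicative attenuation of the light reaching the sensor: the observed image equals the true scene scaled by a per-pixel transparency map. The sketch below assumes that simplified model; the names `U`, `alpha`, `synthesize_dust`, and `recover_scene` are hypothetical and not the paper's notation.

```python
import numpy as np

def synthesize_dust(U, alpha):
    """Simplified multiplicative dust model: observed = scene * attenuation.

    alpha = 1 outside dust regions; 0 < alpha < 1 where dust blocks light.
    """
    return U * alpha

def recover_scene(I, alpha, eps=1e-6):
    """Invert the model where the attenuation map is known (or estimated)."""
    return I / np.maximum(alpha, eps)

# Tiny demo: a flat gray scene with a circular dust spot of 50% transparency.
h = w = 64
U = np.full((h, w), 0.8)                 # true scene radiance
yy, xx = np.mgrid[:h, :w]
alpha = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2, 0.5, 1.0)

I = synthesize_dust(U, alpha)            # darkened spot appears in the image
U_hat = recover_scene(I, alpha)          # exact recovery when alpha is known
print(np.allclose(U_hat, U))             # → True
```

In practice the attenuation map is unknown, which is why the paper's method combines this kind of within-region image information with contextual cues from the surrounding scene rather than simply dividing out a known map.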