State-of-the-art object detection algorithms learn a binary classifier to differentiate foreground objects from the background. Because such an algorithm exhaustively scans the input image for object instances by testing the classifier, its computational complexity depends linearly on the image size and, when orientation and scale are also scanned, on the number of orientation and scale configurations. We argue that exhaustive scanning is unnecessary when detecting medical anatomy, because a medical image offers strong contextual information. We then present an approach that effectively leverages this medical context, leading to a solution that needs only one scan in theory, or several sparse scans in practice, and only one integral image even when rotation is considered. The core idea is to learn a regression function, based on an annotated database, that maps the appearance observed in a scan window to a displacement vector, which measures the difference between the configuration ...