We address the need for robust detection of obstructed human features in complex environments, with a focus on intelligent surgical UIs. In our setup, real-time detection is used to find features without relying on local (spatial or temporal) information. Such a detector is used to validate, correct, or reject the output of visual feature tracking, which is locally more robust but drifts over time. In Operating Rooms (ORs), surgeons' faces and hands are typically obstructed by sterile clothing and tools, making statistical and/or feature-based detection approaches ineffective. We propose a new method for head and hand detection that relies on geometric information from disparity maps, locally refined by color processing. We have applied our method to a surgical mock-up scene, as well as to images gathered during real surgery. Running in a real-time, continuous detection loop, our detector successfully found more than 97% of the target features, with very few false positives...
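To make the described pipeline concrete, the following is a minimal sketch of one plausible realization: a dense disparity map provides the geometric cue, a depth threshold isolates near-camera regions, and an HSV color mask locally refines the candidates. This is not the authors' implementation; the stereo parameters, the disparity threshold, the skin-color range, and the minimum blob area are all illustrative assumptions.

```python
# Illustrative sketch only: disparity-based candidate detection refined by
# color, loosely following the pipeline the abstract outlines. All numeric
# parameters below are assumptions, not values from the paper.
import cv2
import numpy as np

def detect_candidates(left_gray, right_gray, left_bgr):
    # 1) Geometric cue: dense disparity map from a rectified stereo pair.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # 2) Keep only pixels close to the camera (large disparity): in an OR
    #    scene, the surgeon's head and hands sit nearer than the background.
    near_mask = (disparity > 24.0).astype(np.uint8) * 255  # assumed threshold
    near_mask = cv2.morphologyEx(near_mask, cv2.MORPH_OPEN,
                                 np.ones((5, 5), np.uint8))

    # 3) Local color refinement: restrict geometric candidates to skin-like
    #    hues (assumed HSV range; masks and gloves would need other cues).
    hsv = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))
    refined = cv2.bitwise_and(near_mask, skin)

    # 4) Report sufficiently large connected components as head/hand
    #    candidates, as bounding boxes (x, y, w, h).
    contours, _ = cv2.findContours(refined, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]
```

In the role the abstract assigns to it, such a detector would run continuously alongside a visual feature tracker, and its per-frame candidates would be compared against the tracker's output to validate, correct, or reject it when drift accumulates.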