In vision-based autonomous spacecraft docking, multiple views of the scene, captured with the same camera and scene geometry, are available under different lighting conditions. These "multiple-exposure" images must be processed to localize visual features so that the pose of the target object can be computed. This paper describes a robust multi-channel edge detection algorithm that localizes the structure of the target object from the local gradient distribution computed over these multiple-exposure images. Compared with using a single image, this approach reduces the effect of illumination variation, including that of shadow edges. Experiments demonstrate that this approach has a lower false-detection rate than the average response of the Canny edge detector applied to the individual images separately.
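To make the multi-channel gradient idea concrete, the sketch below fuses per-exposure gradients through a Di Zenzo-style structure tensor and thresholds its dominant eigenvalue to produce an edge map. This is only an illustrative approximation of combining gradient information across exposures, not the paper's exact algorithm; the function name, threshold parameter, and the structure-tensor formulation are assumptions introduced here for illustration.

```python
import numpy as np

def multi_exposure_edge_strength(images, threshold=0.1):
    """Illustrative sketch (not the paper's algorithm): fuse gradients from
    several exposures of the same static scene via a Di Zenzo-style structure
    tensor and threshold the dominant eigenvalue to obtain an edge map."""
    jxx = jyy = jxy = 0.0
    for img in images:
        img = np.asarray(img, dtype=np.float64)
        gy, gx = np.gradient(img)            # per-pixel image gradients
        jxx = jxx + gx * gx                  # accumulate 2x2 structure tensor
        jyy = jyy + gy * gy
        jxy = jxy + gx * gy
    # Largest eigenvalue of the accumulated tensor acts as a
    # multi-channel gradient magnitude.
    trace = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    strength = np.sqrt(0.5 * (trace + disc))
    # Normalize and threshold to produce a binary edge map.
    strength /= (strength.max() + 1e-12)
    return strength > threshold

# Usage (hypothetical): `exposures` is a list of 2-D grayscale arrays of the
# same scene under different lighting.
# edges = multi_exposure_edge_strength(exposures, threshold=0.15)
```

Accumulating the tensor over exposures means an edge supported consistently across lighting conditions is reinforced, while a shadow edge present in only one exposure contributes comparatively little, which is one plausible reading of why such fusion can lower the false-detection rate relative to per-image Canny responses.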