In this paper, we present work from our ongoing project on vision-guided retrieval and insertion of ORUs. Guidance is to be provided through estimated relative poses between an ORU (to be retrieved or inserted), a robotic arm, and the related worksite. The major challenges of this work include objects with highly reflective or mirror-like surfaces moving against cluttered backgrounds, along with unreliable or unavailable camera calibration. Moving-edge detection and model-based feature matching and tracking are proposed to address these challenges. The relationship between image and model features is used to estimate projective matrices, which are then used to predict feature locations in subsequent images. The effectiveness of the proposed techniques is illustrated by encouraging experimental results.
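The estimate-then-predict step described above can be sketched with the classical Direct Linear Transform (DLT): given correspondences between 3-D model features and their 2-D image locations, a 3x4 projective matrix is estimated and then used to predict where other model features will appear. This is only a minimal illustration of the general idea, not the paper's actual algorithm; the function names and the synthetic camera used in the usage example are assumptions for demonstration.

```python
import numpy as np

def estimate_projection_matrix(model_pts, image_pts):
    """Estimate a 3x4 projective matrix P from >= 6 model-to-image
    correspondences using the Direct Linear Transform (DLT).
    model_pts: iterable of (X, Y, Z); image_pts: iterable of (u, v)."""
    A = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        Xh = np.array([X, Y, Z, 1.0])
        # Each correspondence contributes two linear equations in the
        # 12 unknown entries of P.
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    # The solution (up to scale) is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def predict_image_point(P, model_pt):
    """Project a 3-D model point to predict its location in a later image."""
    x = P @ np.append(model_pt, 1.0)
    return x[:2] / x[2]  # dehomogenize

# Usage with a synthetic (assumed) camera: focal length 800 px,
# principal point (320, 240), translated 5 units along the optical axis.
P_true = np.array([[800.0, 0.0, 320.0, 1600.0],
                   [0.0, 800.0, 240.0, 1200.0],
                   [0.0, 0.0, 1.0, 5.0]])
model_pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
             (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 2)]
image_pts = [predict_image_point(P_true, p) for p in model_pts]

P_est = estimate_projection_matrix(model_pts, image_pts)
predicted = predict_image_point(P_est, (0.5, 0.5, 0.5))
```

In a tracking loop, the matrix estimated from features matched in one frame would be used to predict the image locations of model features in the next frame, narrowing the search region for matching despite the cluttered background.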