One of the biggest obstacles to building effective augmented reality (AR) systems is the lack of accurate sensors that report the location of the user in an environment during arbitrarily long periods of movement. In this paper, we present an effective hybrid approach that integrates inertial and vision-based technologies. This work is motivated by the need to explicitly take into account the relatively poor accuracy of inertial sensors and thus to define an efficient strategy for the collaborative process between the vision-based system and the sensor. The contributions of this paper are threefold: (i) our collaborative strategy fully integrates the sensitivity error of the sensor: the sensitivity is studied in practice and propagated into the collaborative process, especially in the