Wearable, camera-based head-tracking systems use spatial image registration algorithms to align images captured as the wearer gazes around the environment. This alignment allows computer-generated information to appear to the user as though it were anchored in the real world. These algorithms often require the construction of a multiscale Gaussian pyramid or repeated re-projection of the images. Such operations, however, are computationally expensive, yet head-tracking algorithms need to run in real time on a body-borne computer. In this paper, we present a method of using the 3D computer graphics hardware available in a typical wearable computer to accelerate the repetitive image projections required by many computer vision algorithms. We apply this "graphics for vision" technique to a wearable, camera-based head-tracking algorithm implemented on a wearable computer with 3D graphics hardware. We perform an analysis of the acceleration achieved by applying g...
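To make the "graphics for vision" idea concrete, the sketch below shows one common way such a re-projection can be offloaded to graphics hardware: a camera frame is uploaded as a texture and warped through a planar homography by drawing a single textured quad, so the per-pixel resampling is done by the GPU rather than the CPU. This is an illustrative example only, not the paper's actual implementation; it assumes legacy fixed-function OpenGL with GLUT, and the homography `Hinv` and checkerboard frame are placeholders.

```c
/* Minimal sketch: projective re-projection of an image on graphics hardware.
 * Assumptions (not from the paper): legacy OpenGL + GLUT, placeholder data. */
#include <GL/glut.h>

#define W  256
#define HI 256

static GLuint tex;
static unsigned char frame[HI][W][3];

/* Illustrative inverse homography: maps output pixels to source pixels. */
static const double Hinv[9] = { 0.95, 0.05,  4.0,
                               -0.05, 0.95,  2.0,
                                1e-4, 0.0,   1.0 };

static void display(void)
{
    /* Corners of the output window in pixel coordinates. */
    static const double corner[4][2] = { {0, 0}, {W, 0}, {W, HI}, {0, HI} };
    int i;

    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);

    glBegin(GL_QUADS);
    for (i = 0; i < 4; ++i) {
        double x = corner[i][0], y = corner[i][1];
        /* Homogeneous source coordinates (u', v', w') = Hinv * (x, y, 1). */
        double u = Hinv[0]*x + Hinv[1]*y + Hinv[2];
        double v = Hinv[3]*x + Hinv[4]*y + Hinv[5];
        double w = Hinv[6]*x + Hinv[7]*y + Hinv[8];
        /* Supplying the q (fourth) texture component makes the hardware do
         * the perspective division per pixel, giving an exact projective warp. */
        glTexCoord4d(u / W, v / HI, 0.0, w);
        glVertex2d(x, y);
    }
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    int x, y;

    /* Synthetic checkerboard standing in for a head-mounted camera frame. */
    for (y = 0; y < HI; ++y)
        for (x = 0; x < W; ++x)
            frame[y][x][0] = frame[y][x][1] = frame[y][x][2] =
                ((x / 16 + y / 16) & 1) ? 255 : 64;

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(W, HI);
    glutCreateWindow("graphics-for-vision warp sketch");

    /* Upload the frame once; re-projections then reuse the same texture. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, HI, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, frame);

    /* Orthographic view so vertex coordinates are output pixels. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, W, HI, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

Because the homogeneous texture coordinates are a linear function of the output pixel position, the hardware's per-pixel interpolation and division by q reproduce the projective warp exactly, with bilinear resampling for free; the same drawing pass can be repeated with updated homographies at video rate, which is the kind of repetitive projection the paper targets.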