The task of tracking an object has been studied extensively, and many solutions have been proposed. However, it remains an ideal test bed for studying a novel model based on Coupled Chaos Systems. Once an object appears in front of a camera, we demonstrate that visual input alone suffices for the self-organization of the torques applied to each axis controlling the motion of a simulated eye. No prior learning or task-specific coding is needed, which results in very fast adaptation to perturbations.
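To make the idea concrete, the following is a minimal sketch, entirely my own illustrative assumption rather than the paper's actual model: one chaotic unit (a logistic map) per eye axis, where the retinal error both drives the torque and gates a chaotic perturbation. The perturbation vanishes as the target is foveated, so the system settles on the target without any learned or hand-coded controller. All names and parameter values (`gain`, `eps`, `r`) are hypothetical.

```python
import random

def track(target, steps=500, gain=0.05, eps=0.02, r=3.9, seed=1):
    """Illustrative sketch: chaos-modulated torques driven only by visual error."""
    rng = random.Random(seed)
    x = [rng.random(), rng.random()]  # chaotic unit states, one per axis (pan, tilt)
    gaze = [0.0, 0.0]                 # simulated eye orientation
    for _ in range(steps):
        for i in range(2):
            err = target[i] - gaze[i]           # "visual input": retinal error
            x[i] = r * x[i] * (1.0 - x[i])      # logistic-map iteration (chaotic for r = 3.9)
            # Torque = error feedback plus an error-gated chaotic search term;
            # the chaotic term dies out as the error shrinks.
            torque = gain * err + eps * (2.0 * x[i] - 1.0) * abs(err)
            gaze[i] += torque
    return gaze

gaze = track([0.6, -0.3])
```

Because the chaotic perturbation is scaled by the error, the gaze converges to the target and immediately re-converges if the target jumps, loosely mirroring the fast adaptation to perturbations described above.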