In this article, an autonomous visual perception framework for humanoid robots is presented. This model-based framework exploits the knowledge and context acquired during global localization in order to overcome the limitations of purely data-driven approaches. The reasoning-for-perception and proprioceptive components are the key elements for solving complex visual assertion queries with proficient performance. An experimental evaluation with the humanoid robot ARMAR-III is presented.

Key words: Model-Based Vision, Object Recognition, Humanoids.
David Israel Gonzalez-Aguirre, S. Wieland, Tamim A