Planning a path to a destination, given a number of options and obstacles, is a common task. We propose a two-component cognitive model that combines retrieval of knowledge about the environment with search guided by visual perception. In the first component, subsymbolic information acquired during navigation aids the retrieval of declarative knowledge representing possible paths to take. In the second component, visual information directs the search, which in turn creates knowledge for the first component. The model is implemented in the ACT-R cognitive architecture and makes realistic assumptions about memory access and shifts of visual attention. We present simulation results for memory-based high-level navigation in grid and tree structures, and for visual navigation in mazes, varying relevant cognitive parameters (retrieval noise and visual finsts) and environmental parameters (maze and path size). The visual component is evaluated with data from a multi-robot control experiment, where...
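As a rough illustration of the memory-based component only (not the paper's actual ACT-R model), the sketch below retrieves "path" chunks by base-level activation plus logistic retrieval noise, in the spirit of ACT-R's declarative memory. The chunk contents, the noise parameter value, and the retrieval threshold are all illustrative assumptions.

```python
import math
import random

# Minimal sketch of ACT-R-style noisy retrieval of path chunks.
# All numeric values and chunk contents are assumptions for illustration,
# not parameters taken from the model described in the paper.

RETRIEVAL_THRESHOLD = 0.0   # assumed retrieval threshold (tau)
NOISE_S = 0.25              # assumed activation noise parameter (s / ANS)

def logistic_noise(s):
    """Sample activation noise from a logistic distribution with scale s."""
    u = random.random()
    return s * math.log(u / (1.0 - u))

def retrieve_path(chunks):
    """Return the path chunk with the highest noisy activation,
    or None if no chunk exceeds the retrieval threshold."""
    best, best_activation = None, RETRIEVAL_THRESHOLD
    for chunk in chunks:
        activation = chunk["base_level"] + logistic_noise(NOISE_S)
        if activation > best_activation:
            best, best_activation = chunk, activation
    return best

# Hypothetical path knowledge acquired during navigation: each chunk
# links a current location to a candidate next step toward the goal.
path_chunks = [
    {"from": "A", "to": "goal", "next": "B", "base_level": 1.2},
    {"from": "A", "to": "goal", "next": "C", "base_level": 0.8},
]

print(retrieve_path(path_chunks))
```

Under this reading, retrieval noise makes the less practiced path occasionally win the competition, which is the kind of cognitive parameter the simulations vary.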