The idea that internal models of the world might be useful has generally been rejected by embodied AI for the same reasons that led to its rejection by behaviour-based robotics. This paper re-examines the issue from historical, biological, and functional perspectives; the view that emerges indicates that internal models are essential for achieving cognition, that their use is widespread in biological systems, and that there are several good but neglected examples of their use within embodied AI. Consideration of the example of a hypothetical autonomous embodied agent that must execute a complex mission in a dynamic, partially unknown, and hostile environment leads to the conclusion that the necessary cognitive architecture is likely to contain separate but interacting models of the body and of the world. This arrangement is shown to have intriguing parallels with new findings on the infrastructure of consciousness, leading to the speculation that such an architecture may offer a route to machine consciousness.