This paper describes a new model for extracting large-field optical flow patterns and generating distributed representations of neural activation that support complex visual tasks such as 3D egomotion estimation. The neural mechanisms draw upon experimental findings on the response properties and specificities of cells in areas V1, MT, and MSTd along the dorsal pathway. Model V1 cells compute local motion estimates. Model MT cells in different pools are selective either to motion patterns integrated from V1 or to velocity gradients. The model MSTd cells considered here integrate MT gradient cells over a much larger spatial neighborhood to generate the observed pattern selectivity for expansion/contraction, rotation, and spiral motion, providing the necessary input for spatial navigation mechanisms. Our model also incorporates feedback processing between areas V1-MT and MT-MSTd. We demonstrate that such re-entry of context-related information helps to disambiguate and stabilize more loc...
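The pattern selectivity attributed to model MSTd cells can be illustrated with a minimal template-matching sketch: a flow field is correlated with expansion and rotation templates, and intermediate "spiral" tunings are mixtures of the two. This is only an illustrative simplification (the function names, patch size, and the direct template correlation are assumptions, not the paper's actual gradient-based mechanism):

```python
import numpy as np

def flow_template(kind, size=21):
    """Unit-normalized flow template over a square patch.
    kind: 'expansion' (radial outward) or 'rotation' (circular)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    if kind == "expansion":
        u, v = xs.astype(float), ys.astype(float)
    elif kind == "rotation":
        u, v = -ys.astype(float), xs.astype(float)
    else:
        raise ValueError(kind)
    norm = np.sqrt(u**2 + v**2)
    norm[norm == 0] = 1.0  # avoid division by zero at the patch center
    return u / norm, v / norm

def mstd_response(flow_u, flow_v, theta):
    """Response of a hypothetical MSTd-like unit tuned to spiral angle theta:
    theta=0 -> pure expansion, theta=pi/2 -> pure rotation,
    intermediate angles -> spiral motion preferences."""
    size = flow_u.shape[0]
    eu, ev = flow_template("expansion", size)
    ru, rv = flow_template("rotation", size)
    tu = np.cos(theta) * eu + np.sin(theta) * ru
    tv = np.cos(theta) * ev + np.sin(theta) * rv
    # Correlation of the input flow with the tuned template
    return float(np.sum(flow_u * tu + flow_v * tv))

# Stimulus: a pure expansion flow field centered on the patch.
u, v = flow_template("expansion", 21)
resp_exp = mstd_response(u, v, 0.0)       # matched (expansion) template
resp_rot = mstd_response(u, v, np.pi / 2) # orthogonal (rotation) template
```

In this toy setup the matched expansion unit responds strongly while the orthogonal rotation unit's response vanishes, mirroring the expansion/contraction, rotation, and spiral tuning continuum reported for MSTd.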