Several robotic systems exist in which autonomous mobile robots self-assemble into larger connected entities. However, existing systems display little or no autonomous control over the shape of the entity thus formed. We describe a novel distributed mechanism that allows autonomous mobile robots to self-assemble into pre-specified patterns. Global patterns are ‘grown’ using only locally applicable rules and local visual perception. In this study, we focus on the low-level navigation and directional self-assembly components of the pattern formation process, and we analyse the precision of this mechanism on real robots.