We present a gestural interface for entering text on a mobile device via continuous movements, with control based on feedback from a probabilistic language model. Text is represented by continuous trajectories over a hexagonal tessellation, and entry becomes a manual control task. The language model is used to infer user intentions and to predict future actions, and the local dynamics adapt to reduce the effort of entering probable text. This yields an interface with a stable layout, which aids user learning, while still supporting the user appropriately via the probability model. Experimental results demonstrate that the technique reduces variance in gesture trajectories and is competitive in throughput with other mobile text-entry methods. This paper provides a practical example of a user interface that makes uncertainty explicit to the user; probabilistic feedback from hypothesised goals has general application in many gestural interfaces, and is well-suited to supp...
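As a rough illustration of the adaptive-dynamics idea summarised above (not the implementation described in this paper), the sketch below uses a toy character bigram model to scale the attraction of each hexagonal target by its predicted probability, so that probable continuations require less movement effort. All names and parameters (BIGRAM, attraction_gain, step_towards, the smoothing constants) are hypothetical assumptions introduced for this example.

```python
# Illustrative sketch only: a toy model of probability-weighted target dynamics.
# All names and parameters here are assumptions, not the paper's implementation.

# Toy character bigram probabilities P(next | previous); in the real system a
# full probabilistic language model would supply these predictions.
BIGRAM = {
    ("t", "h"): 0.40, ("t", "o"): 0.15, ("t", "e"): 0.10,
    ("h", "e"): 0.45, ("h", "a"): 0.20, ("h", "i"): 0.15,
}

def next_char_prob(prev_char, candidate, vocab_size=26, smoothing=1e-3):
    """Smoothed probability of a candidate character following prev_char."""
    base = BIGRAM.get((prev_char, candidate), 0.0)
    return (base + smoothing) / (1.0 + smoothing * vocab_size)

def attraction_gain(prob, base_gain=1.0, scale=4.0):
    """Map predicted probability to a control gain: probable targets pull
    harder, so less user effort is needed to reach them."""
    return base_gain * (1.0 + scale * prob)

def step_towards(cursor, target_centre, prob, dt=0.05):
    """One step of simple attractor dynamics: the cursor is drawn towards a
    hexagon centre with strength determined by the target's probability."""
    gain = attraction_gain(prob)
    dx = target_centre[0] - cursor[0]
    dy = target_centre[1] - cursor[1]
    return (cursor[0] + gain * dx * dt, cursor[1] + gain * dy * dt)

if __name__ == "__main__":
    cursor = (0.0, 0.0)
    prev = "t"
    # Two candidate hexagon centres: a probable continuation ('h') and an
    # improbable one ('x'); the probable target attracts the cursor more.
    for char, centre in [("h", (1.0, 0.0)), ("x", (-1.0, 0.0))]:
        p = next_char_prob(prev, char)
        pos = step_towards(cursor, centre, p)
        print(f"{prev!r}->{char!r}: P={p:.3f}, cursor after one step: "
              f"({pos[0]:+.3f}, {pos[1]:+.3f})")
```

Under these assumptions the layout itself never changes; only the local dynamics around each target are modulated by the language model, which is the property the abstract attributes to the interface's stable, learnable layout.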