Action mode interfaces, in which users achieve their goals by manipulating representations, suffer from some fundamental disadvantages. In this paper, we present a working prototype of a system for Continuous Linguistic Feedback Generation (CLFG), a facility that addresses several of these disadvantages. CLFG generates natural language descriptions of the actions the user is performing and presents them in both the visual and audio channels. The knowledge sources and the algorithm that enable CLFG to produce concise, relevant feedback are described in detail.
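To make the idea concrete, the sketch below shows a toy generator that maps a direct-manipulation event to a short natural-language description, in the spirit of CLFG. This is an illustrative assumption, not the paper's actual algorithm or API; the `Action` type, the `describe` function, and the naive verb inflection are all hypothetical.

```python
# Hypothetical sketch of CLFG-style feedback: turn a user-interface
# action event into a concise natural-language description.
# Names and logic are illustrative only, not the paper's implementation.

from dataclasses import dataclass


@dataclass
class Action:
    verb: str          # e.g. "move", "delete"
    obj: str           # the representation being manipulated
    target: str = ""   # optional destination of the action


def describe(action: Action) -> str:
    """Render a concise linguistic description of an ongoing action."""
    # Naive progressive-tense inflection: drop a trailing 'e', add 'ing'.
    # (Works for verbs like "move" and "delete"; a real system would
    # need proper morphology.)
    gerund = action.verb.rstrip("e") + "ing"
    if action.target:
        return f"You are {gerund} the {action.obj} to the {action.target}."
    return f"You are {gerund} the {action.obj}."


# Example: feedback while the user drags a folder onto the trash icon.
print(describe(Action("move", "folder", "trash")))
# Example: feedback while the user deletes a file.
print(describe(Action("delete", "file")))
```

In a full system such descriptions would be rendered simultaneously on screen and through speech output, as the abstract describes.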