Cooking is a daily activity for many people. However, traditional text recipes are often prohibitively difficult to follow for people with language disorders, such as aphasia. We have developed a multi-modal application that leverages the retained ability of aphasic individuals to recognize image-based representations of objects, providing a presentation format that can be followed more easily than a traditional text recipe. Through a systematic approach to developing a visual language for cooking, and the subsequent case study evaluation of a prototype developed according to this language, we show that a combination of visual instructions and navigational structure can help individuals with relatively large language deficits to cook more independently.

Categories and Subject Descriptors: K.4.2 [Computers and Society]: Social Issues--Assistive technologies for persons with disabilities.