We describe several approaches for using prosodic features of speech and audio localization to control interactive applications. This information can be applied to parameter control as well as to speech disambiguation. We discuss how characteristics of spoken sentences can be exploited in the user interface; for example, by considering the speed with which a sentence is spoken and the presence of extraneous utterances. We also show how coarse audio localization can be used for low-fidelity gesture tracking, by inferring the speaker's head position.

Categories and Subject Descriptors: H.5.2 [User Interfaces]: Graphical user interfaces, natural language, voice I/O; I.2.7 [Natural Language Processing].

General Terms: Human Factors.
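The abstract names speaking rate as one usable prosodic feature but does not say how it is measured. As a rough, non-authoritative sketch of how an interface might estimate it, the following counts peaks in a short-time energy envelope as syllable nuclei; the function name, frame size, and threshold are illustrative assumptions, not the paper's method.

```python
import numpy as np

def speech_rate(samples, sample_rate, frame_ms=20):
    """Rough syllables-per-second estimate from an utterance's
    energy envelope. `samples` is a mono float array; `frame_ms`
    is an illustrative analysis-window size."""
    frame = int(sample_rate * frame_ms / 1000)
    n = len(samples) // frame
    if n < 3:
        return 0.0
    # Short-time energy envelope, one value per frame.
    energy = np.array([np.sum(samples[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n)])
    # Count local maxima above a noise floor; each is treated as one
    # syllable nucleus -- a deliberately crude proxy for speech rate.
    floor = 0.1 * energy.max()
    peaks = np.sum((energy[1:-1] > energy[:-2]) &
                   (energy[1:-1] > energy[2:]) &
                   (energy[1:-1] > floor))
    return peaks / (n * frame / sample_rate)
```

An interface could map the returned rate onto a continuous parameter, e.g. treating faster speech as a request for a larger or quicker adjustment.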
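Similarly, the abstract mentions coarse audio localization for inferring the speaker's head position without detailing a mechanism. One common way to obtain such a coarse estimate is a time-difference-of-arrival computation between two microphones; the sketch below assumes a two-microphone array, a far-field source, and illustrative parameter values, and is not presented as the authors' implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, room temperature

def estimate_azimuth(left, right, sample_rate, mic_spacing=0.3):
    """Estimate the speaker's horizontal bearing (radians, 0 = straight
    ahead) from one stereo frame. `mic_spacing` is the distance between
    the two microphones in meters (illustrative value)."""
    # Cross-correlate the channels to find the inter-microphone delay.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # delay in samples
    tdoa = lag / sample_rate                   # delay in seconds
    # Far-field approximation: sin(theta) = c * tdoa / d.
    sin_theta = np.clip(SPEED_OF_SOUND * tdoa / mic_spacing, -1.0, 1.0)
    return np.arcsin(sin_theta)
```

A bearing estimate this coarse would not support precise pointing, but it is enough for the low-fidelity gesture tracking the abstract describes, such as noticing which side of a display the speaker is facing or leaning toward.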