
Distributed speech processing in miPad's multimodal user interface

This paper describes the main components of MiPad (Multimodal Interactive PAD), with a focus on its distributed speech processing aspects. MiPad is a wireless mobile PDA prototype that enables users to accomplish many common tasks through a multimodal spoken language interface and wireless-data technologies. It fully integrates continuous speech recognition with spoken language understanding, providing a novel solution for data entry on PDAs and smart phones, which is otherwise often done by pecking with tiny styluses or typing on minuscule keyboards. Our user study indicates that MiPad's throughput is significantly superior to that of the existing pen-based PDA interface. Acoustic modeling and noise robustness in distributed speech recognition are key components of MiPad's design and implementation. In a typical scenario, the user speaks to the device at a distance so that he or she can see the screen. The built-in microphone thus picks up considerable background noise, which requires that MiPad be noise ...
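The abstract highlights noise robustness in the distributed front end as a key design concern. As a rough illustration only (the truncated abstract does not specify MiPad's actual algorithm), classic spectral subtraction is one way a device-side front end can suppress stationary background noise before speech data is sent on for recognition; the function name and toy signal below are hypothetical:

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag_est, floor=0.02):
    """Suppress stationary noise by subtracting an estimated noise
    magnitude spectrum from the frame's magnitude spectrum."""
    spec = np.fft.rfft(noisy)
    mag, phase = np.abs(spec), np.angle(spec)
    # Spectral floor prevents negative magnitudes ("musical noise" mitigation).
    clean_mag = np.maximum(mag - noise_mag_est, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy))

# Toy example: a 440 Hz tone corrupted by white noise.
rng = np.random.default_rng(0)
t = np.arange(512) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.3 * rng.standard_normal(512)
noisy = clean + noise
# In practice the noise spectrum is estimated from non-speech frames.
noise_mag = np.abs(np.fft.rfft(noise))
enhanced = spectral_subtraction(noisy, noise_mag)
```

A real distributed front end would apply this frame by frame with overlap-add, then extract features (e.g. cepstra) for transmission to the server-side recognizer.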
Added 23 Dec 2010
Updated 23 Dec 2010
Type Journal
Year 2002
Where TASLP
Authors Li Deng, Kuansan Wang, Alex Acero, Hsiao-Wuen Hon, Jasha Droppo, C. Boulis, Ye-Yi Wang, D. Jacoby, Milind Mahajan, Ciprian Chelba, X. D. Huang