
AIMSA 2008, Springer

Dealing with Spoken Requests in a Multimodal Question Answering System

Abstract. This paper reports on experiments performed in the development of the QALL-ME system, a multilingual QA infrastructure capable of handling input requests in both written and spoken form. Our objective is to estimate the impact of dealing with automatically transcribed (i.e. noisy) requests on a specific question interpretation task, namely the extraction of relations from natural language questions. A number of experiments are presented, featuring different combinations of manually and automatically transcribed question datasets to train and evaluate the system. Results (ranging from 0.624 to 0.634 F-measure in the recognition of the relations expressed by a question) demonstrate that the impact of noisy data on question interpretation is negligible with all the combinations of training/test data. This shows that the benefits of enabling speech access capabilities, allowing for a more natural human-machine interaction, outweigh the minimal loss in terms of performance.
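For reference, the F-measure cited in the abstract is presumably the standard balanced F1 score (the abstract does not state the beta value), computed from the precision P and recall R of the relations extracted from each question:

\[
F_1 = \frac{2 \, P \, R}{P + R}, \qquad
P = \frac{\text{correctly extracted relations}}{\text{extracted relations}}, \qquad
R = \frac{\text{correctly extracted relations}}{\text{gold-standard relations}}
\]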
Roberto Gretter, Milen Kouylekov, Matteo Negri
Added 01 Jun 2010
Updated 01 Jun 2010
Type Conference
Year 2008
Where AIMSA
Authors Roberto Gretter, Milen Kouylekov, Matteo Negri