An important task in multiparty meeting understanding is the extraction of action items: tasks that the participants agree to carry out after the meeting, with specific due dates and owners. Dialogue acts, which capture the pragmatic function of an utterance (e.g., question or backchannel), have been reported to be useful for a variety of dialogue understanding tasks. Prosodic information, such as pitch, volume, and speech rate, has likewise been reported to be useful for segmenting a dialogue into utterances and for detecting questions. In this paper we investigate the use of dialogue act tagging to improve the identification of action item descriptions, and of prosodic information to improve the detection of action item agreements. Our results indicate that dialogue act tagging improves the identification of action item descriptions by 5% over lexical information alone, and that prosodic information helps discriminate backchannels from agreements, with a 25% absolute improvement over a baseline.
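As a minimal illustration (not the system described in this paper), the sketch below shows one common way dialogue act tags can be combined with lexical features when classifying whether an utterance is an action item description: bag-of-words features concatenated with a one-hot dialogue act feature, fed to a simple classifier. The toy utterances, tag set, and choice of logistic regression are illustrative assumptions only.

```python
# Hedged sketch: combining lexical and dialogue act (DA) features for
# action-item-description classification. Toy data and model are assumptions.
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Toy utterances with (hypothetical) DA tags and binary labels
# (1 = action item description, 0 = not).
utterances = [
    ("so John will send the report by Friday", "statement", 1),
    ("uh-huh", "backchannel", 0),
    ("can you update the slides before Monday?", "question", 1),
    ("yeah", "backchannel", 0),
    ("let's schedule the review for next week", "statement", 1),
    ("right", "agreement", 0),
]
texts = [u for u, _, _ in utterances]
da_tags = [[d] for _, d, _ in utterances]
labels = [y for _, _, y in utterances]

# Lexical features: TF-IDF bag of words over the utterance text.
lex_vec = TfidfVectorizer()
X_lex = lex_vec.fit_transform(texts)

# Dialogue act features: one-hot encoding of the DA tag.
da_vec = OneHotEncoder(handle_unknown="ignore")
X_da = da_vec.fit_transform(da_tags)

# Concatenate lexical + DA features and train a simple classifier.
X = hstack([X_lex, X_da])
clf = LogisticRegression().fit(X, labels)

# Score a new utterance under the combined feature set.
new_text = ["I will draft the agenda by tomorrow"]
new_da = [["statement"]]
x_new = hstack([lex_vec.transform(new_text), da_vec.transform(new_da)])
print(clf.predict(x_new))
```

In an actual meeting-understanding pipeline the dialogue act tags (and, analogously, prosodic features such as pitch and speech rate) would be produced by automatic taggers and feature extractors rather than supplied by hand; the sketch only illustrates the feature-combination idea.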