Learning to Describe Video with Weak Supervision by Exploiting Negative Sentential Information

Most previous work on video description trains word models for the individual parts of speech independently. From a linguistic point of view, it is more appealing for the word models of all parts of speech to be learned simultaneously from whole sentences, a hypothesis some linguists have suggested for child language acquisition. In this paper, we learn to describe video by discriminatively training positive sentential labels against negative ones in a weakly supervised fashion: the meaning representations (i.e., HMMs) of the individual words in these labels are learned from whole sentences, without any annotation of the correspondence between those words and what they denote in the video. Textual descriptions are then generated for new video using the trained word models.
Haonan Yu, Jeffrey Mark Siskind
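
To make the discriminative objective concrete, the following is a minimal Python sketch, not the authors' implementation: each word model is collapsed to a single-state Gaussian emission model standing in for an HMM, a sentence is scored by summing its words' scores on the whole video, and a logistic loss on the margin between a positive sentential label and a paired negative one drives the updates. The feature dimension, the one-negative-per-positive pairing, and all names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # per-frame video feature dimension (illustrative)

def word_log_likelihood(mean, frames):
    # Average log-likelihood of the frames under a unit-variance
    # Gaussian emission model for one word (a single-state stand-in
    # for the paper's multi-state HMMs).
    return -0.5 * np.mean(np.sum((frames - mean) ** 2, axis=1))

def sentence_score(means, sentence, frames):
    # Whole-sentence score: sum of its words' model scores on the
    # same video; no word-to-video correspondence is annotated.
    return sum(word_log_likelihood(means[w], frames) for w in sentence)

def train(vocab, videos, pos_sents, neg_sents, lr=0.1, epochs=50):
    # Discriminative objective: for each video, push the positive
    # sentential label's score above the paired negative one via a
    # logistic loss on the score margin.
    means = {w: rng.normal(size=DIM) for w in vocab}
    for _ in range(epochs):
        for frames, pos, neg in zip(videos, pos_sents, neg_sents):
            margin = (sentence_score(means, pos, frames)
                      - sentence_score(means, neg, frames))
            # Update weight: 1 - sigmoid(margin), large when the
            # negative sentence still outscores the positive one.
            g = 1.0 / (1.0 + np.exp(np.clip(margin, -50.0, 50.0)))
            avg = frames.mean(axis=0)
            for w in pos:   # pull positive words toward the video
                means[w] += lr * g * (avg - means[w])
            for w in neg:   # push negative words away from it
                means[w] -= lr * g * (avg - means[w])
    return means

# Tiny usage example with synthetic features and sentences.
videos = [rng.normal(size=(30, DIM)) + 2.0, rng.normal(size=(30, DIM)) - 2.0]
pos = [["person", "approach", "chair"], ["person", "leave", "chair"]]
neg = [["person", "leave", "chair"], ["person", "approach", "chair"]]
vocab = {w for s in pos + neg for w in s}
models = train(vocab, videos, pos, neg)

Note that a word shared by a positive sentence and its paired negative receives two nearly cancelling updates, so the discriminative signal comes from the words on which the two sentences differ; the paper's actual models are multi-state HMMs trained against sets of negative sentences rather than single pairs.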
Type Conference
Year 2015
Where AAAI