
CVPR 2007 (IEEE)

Transfer Learning in Sign language

We build word models for American Sign Language (ASL) that transfer between different signers and different aspects (viewpoints). This is advantageous because one could use large amounts of labelled avatar data in combination with a smaller amount of labelled human data to spot a large number of words in human data. Transfer learning is possible because we represent blocks of video with novel intermediate discriminative features based on splits of the data. By constructing the same splits in avatar and human data and clustering appropriately, our features are both discriminative and semantically similar: across signers, similar features imply similar words. We demonstrate transfer learning in two scenarios: from an avatar to a frontally viewed human signer, and from an avatar to a human signer in a 3/4 view.
Ali Farhadi, David A. Forsyth, Ryan White
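
A minimal sketch (in Python) of the split-based feature idea described in the abstract, assuming per-block video descriptors have already been extracted as rows of a matrix; the helper names, the choice of logistic regression, and the cosine-matching step are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics.pairwise import cosine_similarity

    rng = np.random.default_rng(0)

    def make_splits(vocab, n_splits=60):
        # Random binary partitions of the word vocabulary. Reusing the
        # SAME splits in the avatar and the human domain is what makes
        # the resulting features semantically comparable across signers.
        return [set(rng.choice(vocab, size=len(vocab) // 2, replace=False))
                for _ in range(n_splits)]

    def split_features(X_train, words_train, splits, X_query):
        # One discriminative classifier per split, trained within a
        # single domain; the vector of decision values on a query block
        # of video is that block's intermediate feature. Assumes every
        # split puts some training words on each side.
        cols = []
        for positive in splits:
            y = np.array([w in positive for w in words_train], dtype=int)
            clf = LogisticRegression(max_iter=1000).fit(X_train, y)
            cols.append(clf.decision_function(X_query))
        return np.stack(cols, axis=1)  # shape: (n_query, n_splits)

    # Transfer: build features separately in each domain with the SAME
    # word splits, then match human query blocks to avatar exemplars.
    # splits       = make_splits(vocab)
    # avatar_feats = split_features(X_avatar, words_avatar, splits, X_avatar)
    # human_feats  = split_features(X_human, words_human, splits, X_query)
    # nearest      = cosine_similarity(human_feats, avatar_feats).argmax(axis=1)
    # predictions  = [words_avatar[i] for i in nearest]

Under this sketch, the avatar classifiers are trained on the plentiful labelled avatar data and the human classifiers on the smaller labelled human set; because both use the same word splits, their decision-value vectors live in a shared feature space in which nearest-neighbour matching spots words across signers.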
Type: Conference
Year: 2007
Where: CVPR
Authors: Ali Farhadi, David A. Forsyth, Ryan White