Recognition of signs in continuous sentences requires a training
set built from signs as they occur in those sentences.
Currently, this is done manually, which is a tedious process.
In this work, we consider a framework where the modeler
just provides multiple video sequences of sign language sentences,
constructed to contain the vocabulary of interest.
We learn models of the recurring signs automatically.
Specifically, we automatically extract the parts of the signs
that are present in most occurrences of the sign in context.
These parts of a sign, which are stable with respect to adjacent
signs, are referred to as signemes. Each video is first
transformed into a multidimensional time series representation,
capturing the motion and shape aspects of the sign.
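As a rough illustration of such a representation, the sketch below converts a sequence of binary hand masks into a per-frame feature vector combining motion (centroid displacement) and crude shape cues. The mask input, the specific features, and the function name are all assumptions for illustration; the paper's actual feature extraction may differ.

```python
import numpy as np

def frames_to_time_series(hand_masks):
    """Hypothetical per-frame features from binary hand masks:
    centroid motion (dx, dy) plus two crude shape cues (pixel
    area and coordinate spread). Illustrative only; not the
    paper's actual representation."""
    feats = []
    prev = None
    for mask in hand_masks:
        ys, xs = np.nonzero(mask)          # hand pixel coordinates
        cx, cy = xs.mean(), ys.mean()      # hand centroid
        area = float(len(xs))              # shape cue: size
        spread = float(xs.std() + ys.std())  # shape cue: extent
        dx, dy = (0.0, 0.0) if prev is None else (cx - prev[0], cy - prev[1])
        prev = (cx, cy)
        feats.append([dx, dy, area, spread])
    return np.array(feats)  # shape: (num_frames, 4)
```

Each video thus becomes a matrix with one row per frame, which is the multidimensional time series on which signeme extraction operates.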
We then extract signemes from multiple sentences concurrently
using Iterated Conditional Modes (ICM). We show
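To make the ICM idea concrete, here is a minimal sketch of coordinate-wise updating for locating a common subsequence across several time series. It assumes fixed-width windows and squared-Euclidean cost, both simplifications (the function name and signature are invented for illustration; the paper's formulation is richer).

```python
import numpy as np

def extract_signeme_starts(seqs, width, starts=None, iters=20, seed=0):
    """ICM-style search: one start index per sequence; each index
    is updated in turn to minimize the squared distance between
    its window and the mean of the other sequences' windows.
    Stops when a full sweep changes nothing (a local optimum)."""
    rng = np.random.default_rng(seed)
    if starts is None:
        starts = [int(rng.integers(0, len(s) - width + 1)) for s in seqs]
    for _ in range(iters):
        changed = False
        for i, s in enumerate(seqs):
            # Mean window of all other sequences (conditioning set)
            others = np.mean(
                [seqs[j][starts[j]:starts[j] + width]
                 for j in range(len(seqs)) if j != i], axis=0)
            # Best start for sequence i given the others fixed
            costs = [np.sum((s[t:t + width] - others) ** 2)
                     for t in range(len(s) - width + 1)]
            best = int(np.argmin(costs))
            if best != starts[i]:
                starts[i] = best
                changed = True
        if not changed:
            break
    return starts
```

As with ICM generally, the result is a local optimum that depends on initialization; a matching set of start positions is a fixed point of the sweep.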
results by learning multiple instances of 10 different signs
from a set of 136 sign language sentences....
Barbara L. Loeding, Sudeep Sarkar, Sunita Nayak