The goal of the MobileASL project is to increase accessibility by making the mobile telecommunications network available to the signing Deaf community. Video cell phones enable Deaf users to communicate in their native language, American Sign Language (ASL). However, encoding and transmitting real-time video on cell phones is a power-intensive task that can quickly drain the battery. By recognizing activity in the conversational video, we can drop the frame rate during less important segments without significantly harming intelligibility, thus reducing the computational burden. This recognition must run in real time on a cell phone processor, on video of users wearing no special clothing. In this work, we quantify the power savings from dropping the frame rate during less important segments of the conversation. We then describe our recognition technique, which uses simple features obtained “for free” from the encoder. We take advantage of the conversational as...
Neva Cherniavsky, Richard E. Ladner, Eve A. Riskin