Sciweavers

247 search results - page 24 / 50
» Multimodal expression in virtual humans
CORR
2010
Springer
Learning Multi-modal Similarity
In many applications involving multi-media data, the definition of similarity between items is integral to several key tasks, including nearest-neighbor retrieval, classification,...
Brian McFee, Gert R. G. Lanckriet
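As an illustrative aside, not the method of the paper above, the Python sketch below shows the kind of retrieval task the abstract refers to: ranking items by a score formed as a weighted combination of per-modality distances. The modality names ("audio", "text"), weights, and random data are assumptions made for the example.

    # Illustrative sketch only (not the McFee & Lanckriet method):
    # nearest-neighbor retrieval under a weighted sum of per-modality distances.
    import numpy as np

    def retrieve(query, items, weights):
        """query/items: dict modality -> feature array; weights: dict modality -> float."""
        n = len(next(iter(items.values())))
        scores = np.zeros(n)
        for modality, w in weights.items():
            diffs = items[modality] - query[modality]      # shape (n, d_m)
            scores += w * np.linalg.norm(diffs, axis=1)    # accumulate weighted distances
        return np.argsort(scores)                          # item indices, most similar first

    # Toy usage with two hypothetical modalities.
    rng = np.random.default_rng(0)
    items = {"audio": rng.normal(size=(5, 8)), "text": rng.normal(size=(5, 4))}
    query = {"audio": rng.normal(size=8), "text": rng.normal(size=4)}
    print(retrieve(query, items, {"audio": 0.7, "text": 0.3}))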
MM
2010
ACM
Sonify your face: facial expressions for sound generation
We present a novel visual creativity tool that automatically recognizes facial expressions and tracks facial muscle movements in real time to produce sounds. The facial expression...
Roberto Valenti, Alejandro Jaimes, Nicu Sebe
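To make the recognition-to-sound idea concrete, here is a minimal hedged sketch, not the authors' system: it assumes a hypothetical table mapping expression labels to pitches and mixes sine tones weighted by recognized expression probabilities.

    # Hedged sketch (not the Sonify-your-face implementation): map facial-expression
    # probabilities to a simple synthesized tone.
    import numpy as np

    # Hypothetical expression -> pitch table (Hz); the real mapping is an assumption here.
    PITCH = {"happy": 440.0, "sad": 220.0, "surprised": 660.0, "neutral": 330.0}

    def expressions_to_tone(probs, duration=0.5, sr=16000):
        """probs: dict expression -> probability. Returns a mono audio buffer."""
        t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
        # Mix one sine per expression, weighted by its recognized probability.
        tone = sum(p * np.sin(2 * np.pi * PITCH[e] * t) for e, p in probs.items())
        loudness = max(probs.values())   # more confident expression -> louder output
        return loudness * tone / max(len(probs), 1)

    buf = expressions_to_tone({"happy": 0.7, "neutral": 0.3})
    print(buf.shape)  # (8000,)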
IAT
2006
IEEE
Engaging in a Conversation with Synthetic Agents along the Virtuality Continuum
During the last decade, research groups as well as a number of commercial software developers have started to deploy embodied conversational characters in the user interface...
Elisabeth André
MC
2007
Embodied Media and Mixed Reality for Social and Physical Interactive Communication and Entertainment
This talk outlines new facilities within human media spaces supporting embodied interaction between humans, animals, and computation both socially and physically, with the aim of ...
Adrian David Cheok
ACMACE
2007
ACM
Pinocchio: conducting a virtual symphony orchestra
We present a system that allows users of any skill level to conduct a virtual orchestra. The tempo and volume of the orchestra's performance are influenced with a baton. Pinocchio work...
Bernd Bruegge, Christoph Teschner, Peter Lachenmai...
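As a rough illustration of how a tracked baton could influence tempo and volume (an assumption for the example, not the Pinocchio implementation), the sketch below counts direction reversals in a sampled vertical baton trajectory to estimate beats per minute, and uses stroke amplitude as a volume proxy.

    # Minimal sketch under stated assumptions (not the Pinocchio system):
    # tempo from the rate of baton direction changes, volume from stroke amplitude.
    import numpy as np

    def baton_to_tempo_volume(y_positions, sample_rate_hz):
        """y_positions: 1-D sequence of vertical baton coordinates over time."""
        y = np.asarray(y_positions, dtype=float)
        dy = np.diff(y)
        # Count a "beat" each time the baton reverses vertical direction.
        reversals = np.sum(np.diff(np.sign(dy)) != 0)
        seconds = len(y) / sample_rate_hz
        tempo_bpm = 60.0 * reversals / seconds if seconds > 0 else 0.0
        volume = float(np.ptp(y))        # peak-to-peak stroke amplitude -> loudness
        return tempo_bpm, volume

    # Toy usage: a 2 Hz up-down motion sampled at 30 Hz for 3 seconds.
    t = np.arange(0, 3, 1 / 30)
    tempo, vol = baton_to_tempo_volume(np.sin(2 * np.pi * 2 * t), 30)
    print(round(tempo), round(vol, 2))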