Sciweavers

395 search results - page 55 / 79
» When do we interact multimodally
IJRR
2008
Legless Locomotion: A Novel Locomotion Technique for Legged Robots
We present a novel locomotion strategy called legless locomotion that allows a round-bodied legged robot to locomote approximately when it is high-centered. Typically, a high-cent...
Ravi Balasubramanian, Alfred A. Rizzi, Matthew T. ...
IVS
2006
Human perception of structure in shaded space-filling visualizations
Very early in the object recognition process the human visual system extracts shading information. While shading can enhance the visibility of structures, it can have a negative i...
Pourang Irani, Dean Slonowsky, Peer Shajahan
TCS
2008
Computational self-assembly
The object of this paper is to appreciate the computational limits inherent in the combinatorics of an applied concurrent (aka agent-based) language. That language is primarily m...
Pierre-Louis Curien, Vincent Danos, Jean Krivine, ...
IPM
2006
Using searcher simulations to redesign a polyrepresentative implicit feedback interface
Information seeking is traditionally conducted in environments where search results are represented at the user interface by a minimal amount of meta-information such as titles an...
Ryen W. White
CHI
2011
ACM
YouPivot: improving recall with contextual search
According to cognitive science literature, human memory is predicated on contextual cues (e.g., room, music) in the environment. During recall tasks, we associate information/acti...
Joshua M. Hailpern, Nicholas Jitkoff, Andrew Warr,...