Sciweavers

350 search results - page 37 / 70
» Complexity of Planning with Partial Observability
ICML 2008
Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent's knowledge and actions that ...
Finale Doshi, Joelle Pineau, Nicholas Roy
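The balancing act this abstract mentions arises because a POMDP agent acts on a belief state, updated by Bayes' rule after every action and observation. A minimal sketch of that standard belief update (the two-state model and its numbers are illustrative, not taken from the paper):

```python
def belief_update(b, a, z, T, O):
    """Bayes-filter update: b'(s') is proportional to
    O(a, s', z) * sum_s T(s, a, s') * b(s)."""
    unnorm = {
        s2: O[(a, s2, z)] * sum(T[(s, a, s2)] * b[s] for s in b)
        for s2 in b
    }
    total = sum(unnorm.values())
    return {s2: v / total for s2, v in unnorm.items()}

# Illustrative two-state example: a "listen" action that leaves the
# hidden state unchanged and reports it correctly 85% of the time.
states = ["left", "right"]
T = {(s, "listen", s2): 1.0 if s == s2 else 0.0
     for s in states for s2 in states}
O = {("listen", s2, z): 0.85 if s2 == z else 0.15
     for s2 in states for z in states}

b = {"left": 0.5, "right": 0.5}
b = belief_update(b, "listen", "left", T, O)  # belief shifts toward "left"
```

An information-gathering action is simply one whose likely observations concentrate this posterior, even when its immediate reward is negative.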
AAAI 2008
Planning for Human-Robot Interaction Using Time-State Aggregated POMDPs
In order to interact successfully in social situations, a robot must be able to observe others' actions and base its own behavior on its beliefs about their intentions. Many ...
Frank Broz, Illah R. Nourbakhsh, Reid G. Simmons
FSR 2007 (Springer)
State Space Sampling of Feasible Motions for High Performance Mobile Robot Navigation in Highly Constrained Environments
Sampling in the space of controls or actions is a well-established method for ensuring feasible local motion plans. However, as mobile robots advance in performance and competence ...
Thomas M. Howard, Colin J. Green, Alonzo Kelly
ICDT 2003
New Rewritings and Optimizations for Regular Path Queries
All the languages for querying semistructured data and the web use regular expressions as an integral part. Based on practical observations, finding the paths that satisfy those r...
Gösta Grahne, Alex Thomo
JAIR 2006
Anytime Point-Based Approximations for Large POMDPs
The Partially Observable Markov Decision Process has long been recognized as a rich framework for real-world planning and control problems, especially in robotics. However, exact s...
Joelle Pineau, Geoffrey J. Gordon, Sebastian Thrun
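Point-based methods of the kind this paper covers approximate the value function by alpha vectors that are backed up only at a finite set of belief points, rather than over the whole belief simplex. A rough, self-contained sketch of such a point-based backup on the classic tiger POMDP (the model and all constants are the standard textbook tiger problem, not code from the paper):

```python
GAMMA = 0.95
S = ["tiger-left", "tiger-right"]          # hidden states
A = ["listen", "open-left", "open-right"]  # actions
Z = ["hear-left", "hear-right"]            # observations

def T(s, a, s2):
    # Listening leaves the tiger in place; opening a door resets the game.
    if a == "listen":
        return 1.0 if s == s2 else 0.0
    return 0.5

def O(a, s2, z):
    # Listening is 85% accurate; after opening, observations are uninformative.
    if a != "listen":
        return 0.5
    correct = (s2 == "tiger-left") == (z == "hear-left")
    return 0.85 if correct else 0.15

def R(s, a):
    # -1 to listen, +10 for the safe door, -100 for the tiger's door.
    if a == "listen":
        return -1.0
    if (a == "open-left") == (s == "tiger-left"):
        return -100.0
    return 10.0

def backup(b, Gamma):
    """Point-based Bellman backup at belief b; returns (best action, alpha vector)."""
    best = None
    for a in A:
        g = [R(s, a) for s in S]
        for z in Z:
            # Project every alpha vector through (a, z) ...
            cands = [[sum(T(s, a, s2) * O(a, s2, z) * alpha[j]
                          for j, s2 in enumerate(S)) for s in S]
                     for alpha in Gamma]
            # ... and keep only the projection that is best at this belief point.
            gz = max(cands, key=lambda v: sum(vi * bi for vi, bi in zip(v, b)))
            g = [gi + GAMMA * gzi for gi, gzi in zip(g, gz)]
        val = sum(gi * bi for gi, bi in zip(g, b))
        if best is None or val > best[0]:
            best = (val, a, g)
    return best[1], best[2]

# Back up repeatedly over a small, fixed set of belief points.
B = [[0.5, 0.5], [0.95, 0.05], [0.05, 0.95]]
Gamma = [[0.0, 0.0]]
for _ in range(50):
    Gamma = [backup(b, Gamma)[1] for b in B]

policy = {tuple(b): backup(b, Gamma)[0] for b in B}
```

Keeping one vector per belief point bounds the vector set at |B|, which is what sidesteps the exponential alpha-vector growth of exact value iteration and makes anytime refinement (growing B over time) possible.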