Sciweavers

Observation Can Be as Effective as Action in Problem Solving
AIPS
2009
Automatic Derivation of Memoryless Policies and Finite-State Controllers Using Classical Planners
Finite-state and memoryless controllers are simple action selection mechanisms widely used in domains such as videogames and mobile robotics. Memoryless controllers stand for func...
Blai Bonet, Héctor Palacios, Hector Geffner
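The two controller types named in the abstract above can be illustrated with a minimal sketch: a memoryless controller maps the current observation directly to an action, while a finite-state controller also carries a small internal state. The toy domain, observation names, and transition table below are hypothetical, purely for illustration.

```python
# Memoryless controller: a direct map from the current observation to
# an action, with no internal state. All names are illustrative.
def memoryless_policy(observation):
    """Select an action from the current observation alone."""
    table = {
        "obstacle_ahead": "turn_left",
        "goal_visible": "move_forward",
        "clear": "move_forward",
    }
    return table.get(observation, "wait")

# Finite-state controller: adds a small amount of memory, so the chosen
# action depends on both a controller state and the observation.
def fsc_step(state, observation, transitions):
    """transitions maps (state, observation) -> (action, next_state)."""
    return transitions.get((state, observation), ("wait", state))
```

The finite-state variant can distinguish situations that look identical observation-wise but require different actions depending on history, which is why it is strictly more expressive than the memoryless one.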
ICML
2006
IEEE
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
Inference in Markov Decision Processes has recently received interest as a means of inferring the goals behind observed actions, of recognizing policies, and also as a tool for computing policies. ...
Marc Toussaint, Amos J. Storkey
EDBT
2008
ACM
Why go logarithmic if we can go linear?: Towards effective distinct counting of search traffic
Estimating the number of distinct elements in a large multiset has several applications, and hence has attracted active research in the past two decades. Several sampling and sket...
Ahmed Metwally, Divyakant Agrawal, Amr El Abbadi
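The linear-space alternative alluded to in the title above is exemplified by classic linear (probabilistic) counting: hash each element into an m-bit bitmap and estimate the distinct count from the fraction of bits left at zero. The bitmap size and hash choice here are illustrative assumptions, not the paper's construction.

```python
import hashlib
import math

def linear_count(items, m=1024):
    """Estimate the number of distinct items using an m-bit bitmap
    (linear counting). Space is linear in m, unlike logarithmic sketches."""
    bitmap = [0] * m
    for item in items:
        # Hash each item to one of m bit positions; duplicates hit the same bit.
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) % m
        bitmap[h] = 1
    zeros = bitmap.count(0)
    if zeros == 0:
        # Bitmap saturated: the estimator diverges; m was chosen too small.
        return float("inf")
    # Standard linear-counting estimate: n ~= -m * ln(V), V = zero fraction.
    return -m * math.log(zeros / m)
```

Because the estimator only corrects for hash collisions within the bitmap, m should be chosen on the order of the expected distinct count for accurate results.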
ICRA
2007
IEEE
Oracular Partially Observable Markov Decision Processes: A Very Special Case
We introduce the Oracular Partially Observable Markov Decision Process (OPOMDP), a type of POMDP in which the world produces no observations; instead there is an “oracle,” ...
Nicholas Armstrong-Crews, Manuela M. Veloso
AAAI
2010
Automatic Derivation of Finite-State Machines for Behavior Control
Finite-state controllers represent an effective action selection mechanism widely used in domains such as video-games and mobile robotics. In contrast to the policies obtained fr...
Blai Bonet, Héctor Palacios, Hector Geffner