Sciweavers

65 search results - page 5 / 13
Search: "Learning approximate preconditions for methods in hierarchic..."
ICML 2007, IEEE
Learning state-action basis functions for hierarchical MDPs
This paper introduces a new approach to action-value function approximation by learning basis functions from a spectral decomposition of the state-action manifold. This paper exten...
Sarah Osentoski, Sridhar Mahadevan
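The snippet above describes building value-function bases from a spectral decomposition. A minimal sketch of that general idea (not the paper's actual algorithm), assuming a toy 5-state chain MDP: form a graph over states, take the graph Laplacian, and use its smoothest eigenvectors as basis functions.

```python
import numpy as np

# Hypothetical 5-state chain MDP: states 0..4, neighbors connected.
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0   # chain adjacency

D = np.diag(A.sum(axis=1))            # degree matrix
L = D - A                             # combinatorial graph Laplacian

# Eigenvectors of L, sorted by eigenvalue; the smoothest few serve
# as basis functions over the state space.
eigvals, eigvecs = np.linalg.eigh(L)
basis = eigvecs[:, :3]                # 3 smoothest basis functions

# A value function is then approximated as a linear combination of
# the bases; here we fit an arbitrary illustrative target by least squares.
target = np.arange(n, dtype=float)
w = np.linalg.lstsq(basis, target, rcond=None)[0]
v_hat = basis @ w
```

The chain graph, the choice of three basis functions, and the least-squares target are all illustrative assumptions; the paper itself works on a state-action manifold rather than a plain state graph.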
PKDD 2010, Springer
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
AIPS 2009
Learning User Plan Preferences Obfuscated by Feasibility Constraints
It has long been recognized that users can have complex preferences on plans. Non-intrusive learning of such preferences by observing the plans executed by the user is an attracti...
Nan Li, William Cushing, Subbarao Kambhampati, Sun...
AIPS 2007
Discovering Relational Domain Features for Probabilistic Planning
In sequential decision-making problems formulated as Markov decision processes, state-value function approximation using domain features is a critical technique for scaling up the...
Jia-Hong Wu, Robert Givan
ICRA 2003, IEEE
Path planning using learned constraints and preferences
In this paper we present a novel method for robot path planning based on learning motion patterns. A motion pattern is defined as the path that results from applying a set of ...
Gregory Dudek, Saul Simhon