Sciweavers

201 search results - page 35 / 41
» Solving Concurrent Markov Decision Processes
ICML 2007 (IEEE)
Multi-task reinforcement learning: a hierarchical Bayesian approach
We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknow...
Aaron Wilson, Alan Fern, Soumya Ray, Prasad Tadepa...
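As a point of reference for the multi-task setting described in this entry, here is a minimal sketch in which a short sequence of small random tabular MDPs stands in for tasks drawn from an unknown distribution, each solved independently by value iteration. The MDP sizes, random generators, and parameters are arbitrary assumptions for illustration; this is not the paper's hierarchical Bayesian method, which shares structure across tasks rather than solving each one from scratch.

```python
# Toy illustration of the multi-task setting: a short sequence of small
# random tabular MDPs (standing in for tasks drawn from an unknown
# distribution), each solved independently by plain value iteration.
# This is NOT the paper's hierarchical Bayesian method.
import numpy as np

def sample_mdp(n_states=5, n_actions=3, rng=None):
    """Draw a random tabular MDP: transitions P[s, a, s'] and rewards R[s, a]."""
    if rng is None:
        rng = np.random.default_rng()
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
    R = rng.normal(size=(n_states, n_actions))
    return P, R

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Standard value iteration; returns the optimal state values V*."""
    V = np.zeros(P.shape[0])
    while True:
        Q = R + gamma * (P @ V)          # Q[s, a] = R[s, a] + gamma * E[V(s')]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

rng = np.random.default_rng(0)
for task in range(3):                     # the "sequence of MDPs"
    P, R = sample_mdp(rng=rng)
    print(f"task {task}: V* =", np.round(value_iteration(P, R), 2))
```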
ICML 2003 (IEEE)
Planning in the Presence of Cost Functions Controlled by an Adversary
We investigate methods for planning in a Markov Decision Process where the cost function is chosen by an adversary after we fix our policy. As a running example, we consider a rob...
H. Brendan McMahan, Geoffrey J. Gordon, Avrim Blum
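This entry concerns planning when an adversary chooses the cost function after the policy is fixed. The sketch below shows a much simpler worst-case variant: value iteration in which, at every state-action pair, the cost is taken to be the maximum over a small finite set of candidate cost matrices. All sizes and random numbers are assumed for illustration; this per-step pessimism is not the game formulation studied in the paper (there the adversary commits to one cost function for the whole problem), nor its solution method.

```python
# Minimal worst-case value iteration: the per-step cost at each (state, action)
# is the maximum over a small finite set of candidate cost matrices, and the
# agent minimizes expected discounted cost. A simplified stand-in only; not
# the formulation or algorithm of the paper above.
import numpy as np

def robust_value_iteration(P, costs, gamma=0.95, tol=1e-6):
    """P[s, a, s'] are transitions; costs is a list of candidate cost matrices C[s, a]."""
    worst_cost = np.max(np.stack(costs), axis=0)   # adversary's per-(s, a) worst case
    V = np.zeros(P.shape[0])
    while True:
        Q = worst_cost + gamma * (P @ V)           # expected cost-to-go
        V_new = Q.min(axis=1)                      # the agent minimizes cost
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=1)         # values and greedy policy
        V = V_new

# Tiny example: 3 states, 2 actions, 2 candidate cost functions (all made up).
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(3), size=(3, 2))
costs = [rng.uniform(0.0, 1.0, size=(3, 2)) for _ in range(2)]
V, policy = robust_value_iteration(P, costs)
print("worst-case values:", np.round(V, 3), "greedy policy:", policy)
```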
ICC 2008 (IEEE), Communications
An MDP-Based Approach for Multipath Data Transmission over Wireless Networks
Maintaining performance and reliability in wireless networks is a challenging task due to the nature of wireless channels. Multipath data transmission has been used in wired sce...
Vinh Bui, Weiping Zhu, Alessio Botta, Antonio Pesc...
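To make the idea of MDP-based path selection concrete, here is a toy model, entirely assumed for illustration: two wireless paths, each a two-state (good/bad) Markov channel, with the action choosing which path carries the next packet and the reward being its delivery probability. The channel parameters and the value-iteration solver are placeholders, not the model or method of the paper.

```python
# Toy MDP for path selection over two wireless paths, each modeled as a
# two-state (good/bad) Markov channel. The action chooses which path carries
# the next packet; the reward is that path's delivery probability.
# Purely illustrative; not the model used in the paper.
import itertools
import numpy as np

GOOD, BAD = 0, 1
stay_good, stay_bad = 0.9, 0.7          # assumed per-path channel dynamics
deliver = {GOOD: 0.95, BAD: 0.3}        # assumed delivery prob. by channel state

states = list(itertools.product([GOOD, BAD], repeat=2))   # joint state of both paths
n_s, n_a = len(states), 2                                  # action = index of chosen path

def step_prob(c_from, c_to):
    """Markov transition probability of a single path's channel state."""
    p_stay = stay_good if c_from == GOOD else stay_bad
    return p_stay if c_to == c_from else 1.0 - p_stay

P = np.zeros((n_s, n_a, n_s))
R = np.zeros((n_s, n_a))
for i, s in enumerate(states):
    for a in range(n_a):
        R[i, a] = deliver[s[a]]                            # reward: chosen path delivers
        for j, s2 in enumerate(states):
            P[i, a, j] = step_prob(s[0], s2[0]) * step_prob(s[1], s2[1])

V = np.zeros(n_s)
for _ in range(500):                                       # value iteration
    V = (R + 0.9 * (P @ V)).max(axis=1)
policy = (R + 0.9 * (P @ V)).argmax(axis=1)
print({states[i]: f"use path {policy[i]}" for i in range(n_s)})
```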
ICRA 2008 (IEEE), Robotics
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Mark...
Stéphane Ross, Brahim Chaib-draa, Joelle Pi...
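The core difficulty in this entry is maintaining a belief over a continuous hidden state when the model parameters themselves are uncertain. The sketch below shows one generic way to picture this: a particle filter over a 1-D state in which each particle also carries its own value of an unknown process-noise scale, so the belief is jointly over state and model. The dynamics, noise levels, and priors are all assumptions for illustration; this is not the algorithm proposed in the paper.

```python
# Generic particle-filter belief update for a 1-D continuous-state POMDP in
# which a model parameter (the process-noise scale) is itself uncertain and
# is carried inside each particle, giving a joint belief over state and model.
# A sketch only; not the method from the paper above.
import numpy as np

rng = np.random.default_rng(2)
N = 1000
# Each particle: (state x, process-noise scale sigma); sigma is the unknown model parameter.
particles = np.column_stack([rng.normal(0.0, 1.0, N),      # prior over state
                             rng.uniform(0.1, 1.0, N)])    # prior over noise scale
weights = np.full(N, 1.0 / N)
OBS_NOISE = 0.5                                            # assumed known observation noise

def belief_update(particles, weights, action, observation):
    """Propagate particles through the assumed dynamics, then reweight by likelihood."""
    x, sigma = particles[:, 0], particles[:, 1]
    x_next = x + action + rng.normal(0.0, sigma)            # each particle uses its own sigma
    lik = np.exp(-0.5 * ((observation - x_next) / OBS_NOISE) ** 2)
    w = weights * lik
    w /= w.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(len(w), size=len(w), p=w)
    new_particles = np.column_stack([x_next[idx], sigma[idx]])
    return new_particles, np.full(len(w), 1.0 / len(w))

particles, weights = belief_update(particles, weights, action=1.0, observation=1.2)
print("posterior mean state:", particles[:, 0].mean(),
      "posterior mean sigma:", particles[:, 1].mean())
```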
ICRA 2008 (IEEE), Robotics
A point-based POMDP planner for target tracking
Target tracking has two variants that are often studied independently with different approaches: target searching requires a robot to find a target initially not visible, and ...
David Hsu, Wee Sun Lee, Nan Rong
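As background for the point-based approach named in the title of this entry, here is a generic point-based value-iteration style backup on a tiny, randomly generated discrete POMDP: the value function is kept as a set of alpha-vectors and is only backed up at a fixed set of belief points. The model, belief set, and all parameters are made-up placeholders; the tracking planner in the paper is a more sophisticated point-based method and is not reproduced here.

```python
# Generic point-based value-iteration (PBVI-style) backup for a tiny discrete
# POMDP: the value function is a set of alpha-vectors, backed up only at a
# fixed set of belief points. Illustrative toy model only; not the planner
# from the paper above.
import numpy as np

rng = np.random.default_rng(3)
S, A, O = 4, 2, 3                                     # states, actions, observations
T = rng.dirichlet(np.ones(S), size=(A, S))            # T[a, s, s']
Z = rng.dirichlet(np.ones(O), size=(A, S))            # Z[a, s', o]
R = rng.uniform(0.0, 1.0, size=(A, S))                # R[a, s]
gamma = 0.95

beliefs = rng.dirichlet(np.ones(S), size=8)           # fixed set of belief points
alphas = [np.zeros(S)]                                # initial alpha-vector set

def backup(b, alphas):
    """One point-based Bellman backup at belief point b; returns a new alpha-vector."""
    best_val, best_alpha = -np.inf, None
    for a in range(A):
        g = R[a].copy()
        for o in range(O):
            # For this (action, observation) pair, pick the alpha-vector best at b.
            cand = [gamma * (T[a] @ (Z[a, :, o] * alpha)) for alpha in alphas]
            g += cand[int(np.argmax([b @ c for c in cand]))]
        if b @ g > best_val:
            best_val, best_alpha = b @ g, g
    return best_alpha

for _ in range(30):                                    # a few sweeps over the belief set
    alphas = [backup(b, alphas) for b in beliefs]
print("values at belief points:",
      np.round([max(b @ a for a in alphas) for b in beliefs], 3))
```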