Sciweavers

109 search results - page 17 / 22
Search: Model Checking Markov Reward Models with Impulse Rewards
ICML
1995
IEEE
Learning Policies for Partially Observable Environments: Scaling Up
Partially observable Markov decision processes (POMDPs) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor fee...
Michael L. Littman, Anthony R. Cassandra, Leslie P...
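The snippet above describes the core POMDP setting: the agent cannot observe the true state, so it maintains a belief (a probability distribution over states) updated from noisy observations. As a minimal illustrative sketch (not taken from any of the listed papers; all states, probabilities, and numbers are made-up assumptions), the Bayes belief update looks like:

```python
# Toy 2-state POMDP belief update. T and O are illustrative assumptions:
# T[s][s2] is the probability of moving from state s to s2 under one fixed
# action; O[s2] is the likelihood of the received observation in state s2.

T = [[0.9, 0.1],
     [0.2, 0.8]]
O = [0.85, 0.3]

def belief_update(b, T, O):
    """Bayes filter: b'(s2) is proportional to O(s2) * sum_s T[s][s2] * b[s]."""
    n = len(b)
    # Predict: push the current belief through the transition model.
    predicted = [sum(b[s] * T[s][s2] for s in range(n)) for s2 in range(n)]
    # Correct: weight each predicted state by the observation likelihood.
    unnorm = [O[s2] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Starting from a uniform prior, one noisy observation shifts the belief.
b = belief_update([0.5, 0.5], T, O)
```

Planning methods like those surveyed in these results then choose actions as a function of this belief rather than of the (hidden) state.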
AIPS
2007
Learning to Plan Using Harmonic Analysis of Diffusion Models
This paper summarizes research on a new emerging framework for learning to plan using the Markov decision process model (MDP). In this paradigm, two approaches to learning to plan...
Sridhar Mahadevan, Sarah Osentoski, Jeffrey Johns,...
ATAL
2004
Springer
Learning User Preferences for Wireless Services Provisioning
The problem of interest is how to dynamically allocate wireless access services in a competitive market which implements a take-it-or-leave-it allocation mechanism. In this paper ...
George Lee, Steven Bauer, Peyman Faratin, John Wro...
IAT
2005
IEEE
Decomposing Large-Scale POMDP Via Belief State Analysis
A partially observable Markov decision process (POMDP) is commonly used to model a stochastic environment with unobservable states in support of optimal decision making. Computing ...
Xin Li, William K. Cheung, Jiming Liu
ATAL
2004
Springer
Communication for Improving Policy Computation in Distributed POMDPs
Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork where a group of agents work together to joi...
Ranjit Nair, Milind Tambe, Maayan Roth, Makoto Yok...