Sciweavers

Search results for "A Markov Reward Model Checker" - 119 results, page 18 of 24
IJCAI 2003
Taming Decentralized POMDPs: Towards Efficient Policy Computation for Multiagent Settings
The problem of deriving joint policies for a group of agents that maximize some joint reward function can be modeled as a decentralized partially observable Markov decision proces...
Ranjit Nair, Milind Tambe, Makoto Yokoo, David V. ...
ATAL 2010 (Springer)
Risk-sensitive planning in partially observable environments
Partially Observable Markov Decision Process (POMDP) is a popular framework for planning under uncertainty in partially observable domains. Yet, the POMDP model is risk-neutral in ...
Janusz Marecki, Pradeep Varakantham
UML 2001 (Springer)
UML Modelling and Performance Analysis of Mobile Software Architectures
Modern distributed software applications generally operate in complex and heterogeneous computing environments (like the World Wide Web). Different paradigms (client-server, mobili...
Vincenzo Grassi, Raffaela Mirandola
ICML 1995 (IEEE)
Learning Policies for Partially Observable Environments: Scaling Up
Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor fee...
Michael L. Littman, Anthony R. Cassandra, Leslie P...
ICRA 2008 (IEEE)
Bayesian reinforcement learning in continuous POMDPs with application to robot navigation
We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially Observable Mark...
Stéphane Ross, Brahim Chaib-draa, Joelle Pi...