Sciweavers

499 search results - page 46 / 100
Search query: Model Minimization in Markov Decision Processes
IAT 2005, IEEE
Decomposing Large-Scale POMDP Via Belief State Analysis
A partially observable Markov decision process (POMDP) is commonly used to model a stochastic environment with unobservable states to support optimal decision making. Computing ...
Xin Li, William K. Cheung, Jiming Liu
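The belief-state idea underlying this entry can be sketched with a generic Bayes-filter update (an illustrative sketch, not code from the paper; the two-state transition and observation matrices below are hypothetical numbers):

```python
import numpy as np

# T[a][s, s'] = P(s' | s, a); O[a][s', o] = P(o | s', a).
def belief_update(b, a, o, T, O):
    """Update belief b after taking action a and observing o."""
    b_pred = b @ T[a]              # predict: sum_s b(s) * P(s' | s, a)
    b_new = b_pred * O[a][:, o]    # correct: weight by observation likelihood
    return b_new / b_new.sum()     # renormalize to a probability vector

# Tiny two-state example with made-up numbers.
T = {0: np.array([[0.9, 0.1],
                  [0.2, 0.8]])}
O = {0: np.array([[0.8, 0.2],
                  [0.3, 0.7]])}
b = np.array([0.5, 0.5])
b_next = belief_update(b, a=0, o=0, T=T, O=O)
```

Belief-state analysis of the kind the abstract mentions works over trajectories of such belief vectors rather than over the hidden states themselves.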
CDC 2009, IEEE
Event-based control using quadratic approximate value functions
Abstract— In this paper we consider several problems involving control with limited actuation and sampling rates. Event-based control has emerged as an attractive approach for ad...
Randy Cogill
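The combination in this entry, a quadratic approximate value function with event-based actuation, can be sketched generically: act only when the predicted value gap justifies it. This is an illustrative sketch with hypothetical matrices and threshold, not the paper's method:

```python
import numpy as np

def value(P, x):
    """Quadratic approximate value function V(x) = x' P x."""
    return float(x @ P @ x)

def should_act(P, x_open_loop, x_controlled, threshold):
    """Trigger an actuation event only if acting lowers V by more than threshold."""
    return value(P, x_open_loop) - value(P, x_controlled) > threshold

P = np.array([[2.0, 0.0],
              [0.0, 1.0]])          # hypothetical positive-definite weight
x_drift = np.array([1.0, 0.5])      # predicted state with no actuation
x_ctrl = np.array([0.2, 0.1])       # predicted state if control is applied
event = should_act(P, x_drift, x_ctrl, threshold=0.5)
```

Because communication and sampling are limited, the controller stays quiet whenever the quadratic value gap is small, which is the attraction of event-based schemes the abstract alludes to.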
CAV 2010, Springer
Achieving Distributed Control through Model Checking
Abstract. We apply model checking of knowledge properties to the design of distributed controllers that enforce global constraints on concurrent systems. We calculate when processe...
Susanne Graf, Doron Peled, Sophie Quinton
JELIA 2004, Springer
Hierarchical Decision Making by Autonomous Agents
Abstract. Decision making often involves autonomous agents that are structured in a complex hierarchy, representing e.g. authority. Typically the agents share the same body of kno...
Stijn Heymans, Davy Van Nieuwenborgh, Dirk Vermeir
ATAL 2006, Springer
Winning back the CUP for distributed POMDPs: planning over continuous belief spaces
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling multiagent systems, and many different algorithms ha...
Pradeep Varakantham, Ranjit Nair, Milind Tambe, Ma...