Search results for "Modeling plan coordination in multiagent decision processes"
AIPS 2006
Solving Factored MDPs with Exponential-Family Transition Models
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
Branislav Kveton, Milos Hauskrecht
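For context, HALP builds on the approximate linear programming view of MDP planning, in which the value function is constrained to dominate every one-step Bellman backup and a weighted sum of values is minimized. The sketch below solves the exact (tabular) LP for a small discrete MDP with scipy; it is purely illustrative background and is not the paper's hybrid discrete-continuous method, and the array names and shapes are assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def solve_mdp_lp(P, R, gamma, alpha):
        # P: (A, S, S) transitions, P[a, s, s'] = Pr(s' | s, a)
        # R: (S, A) expected rewards
        # alpha: (S,) state-relevance weights (any positive weighting works)
        A, S, _ = P.shape
        rows, rhs = [], []
        for a in range(A):
            for s in range(S):
                # constraint: V(s) - gamma * sum_{s'} P[a, s, s'] V(s') >= R[s, a]
                rows.append(-(np.eye(S)[s] - gamma * P[a, s]))
                rhs.append(-R[s, a])
        res = linprog(c=alpha, A_ub=np.array(rows), b_ub=np.array(rhs),
                      bounds=[(None, None)] * S)
        return res.x  # optimal value function V*, shape (S,)

    # tiny two-state, two-action example
    P = np.array([[[0.9, 0.1], [0.1, 0.9]],
                  [[0.5, 0.5], [0.5, 0.5]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    print(solve_mdp_lp(P, R, gamma=0.95, alpha=np.array([0.5, 0.5])))

Approximate LP methods such as HALP replace the one-value-per-state formulation above with a small set of basis functions whose weights are optimized, which is what makes large factored or hybrid state spaces tractable.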
ICCSA 2010, Springer
Geospatial Analysis of Cooperative Works on Asymmetric Information Environment
In the so-called Information-Explosion Era, an astronomical amount of information is ubiquitously produced and digitally stored. It is becoming increasingly convenient for cooperative...
Tetsuya Kusuda, Tetsuro Ogi
ICML 2008, IEEE
Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent's knowledge and actions that ...
Finale Doshi, Joelle Pineau, Nicholas Roy
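POMDP planners of this kind maintain a belief, a probability distribution over hidden states, and update it after every action and observation before choosing what to do next. Below is a minimal sketch of the standard discrete Bayes-filter belief update; the array names and shapes are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def belief_update(b, a, o, T, Z):
        # b: current belief over states, shape (S,)
        # a, o: action and observation indices
        # T: transitions, shape (A, S, S), T[a, s, s'] = Pr(s' | s, a)
        # Z: observation model, shape (A, S, O), Z[a, s', o] = Pr(o | s', a)
        predicted = b @ T[a]                  # predict: Pr(s' | b, a)
        posterior = predicted * Z[a][:, o]    # correct: weight by Pr(o | s', a)
        return posterior / posterior.sum()    # normalize

The matrix-vector product in the prediction step is just the sum over current states written compactly; any criterion computed over beliefs, such as the Bayes-risk measure named in the title, operates on distributions of this form.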
ICIC 2005, Springer
An Intelligent Assistant for Public Transport Management
This paper describes the architecture of a computer system conceived as an intelligent assistant for public transport management. The goal of the system is to help operators of a c...
Martín Molina
IJRR 2011
Motion planning under uncertainty for robotic tasks with long time horizons
Partially observable Markov decision processes (POMDPs) are a principled mathematical framework for planning under uncertainty, a crucial capability for reliable operation...
Hanna Kurniawati, Yanzhu Du, David Hsu, Wee Sun Le...