Sciweavers

119 search results (page 6 of 24) for "A Markov Reward Model Checker"
AIPS 2006
Probabilistic Planning with Nonlinear Utility Functions
Researchers often express probabilistic planning problems as Markov decision process models and then maximize the expected total reward. However, it is often rational to maximize ...
Yaxin Liu, Sven Koenig
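As background for the entry above (a hedged sketch, not taken from the truncated abstract): the standard MDP objective mentioned in the abstract maximizes the expected total reward, while the title's nonlinear utility functions suggest wrapping that total in a utility U before taking the expectation. Here \pi is a policy, r_t the reward at step t, and U an assumed utility function introduced only for illustration:

    standard objective:      \max_\pi \; \mathbb{E}\!\left[ \sum_{t} r_t \,\middle|\, \pi \right]
    nonlinear-utility form:  \max_\pi \; \mathbb{E}\!\left[ U\!\left( \sum_{t} r_t \right) \,\middle|\, \pi \right]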
ECML 2005 (Springer)
Active Learning in Partially Observable Markov Decision Processes
This paper examines the problem of finding an optimal policy for a Partially Observable Markov Decision Process (POMDP) when the model is not known or is only poorly specified. W...
Robin Jaulmes, Joelle Pineau, Doina Precup
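For context (standard POMDP background, hedged; not a description of this paper's method): a POMDP policy acts on a belief b over hidden states, updated after taking action a and observing o via the transition model T and observation model O; active learning concerns choosing actions when T and O themselves must be estimated. The standard belief update is

    b'(s') \propto O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)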
IJCAI 2001
Complexity of Probabilistic Planning under Average Rewards
A general and expressive model of sequential decision making under uncertainty is provided by the Markov decision process (MDP) framework. Complex applications with very large ...
Jussi Rintanen
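For reference (the standard average-reward criterion named in the title, hedged; the paper's complexity results are not reproduced here): instead of a discounted or total reward, the average-reward objective maximizes the long-run reward per step,

    \max_\pi \; \liminf_{n \to \infty} \frac{1}{n}\, \mathbb{E}\!\left[ \sum_{t=0}^{n-1} r_t \,\middle|\, \pi \right]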
QEST 2006 (IEEE)
Limiting Behavior of Markov Chains with Eager Attractors
We consider discrete infinite-state Markov chains which contain an eager finite attractor. A finite attractor is a finite subset of states that is eventually reached with prob...
Parosh Aziz Abdulla, Noomene Ben Henda, Richard Ma...
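For context (a hedged reading of the cut-off sentence above, using the standard notion): a finite set A of states is a finite attractor if the chain reaches A with probability one from every state,

    \forall s:\; \Pr\!\left( \exists n \ge 0:\; X_n \in A \,\middle|\, X_0 = s \right) = 1

The "eager" qualifier in the title presumably strengthens this with a quantitative bound on how long reaching A can take, but that detail is not recoverable from the truncated abstract.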

Publication
Sparse reward processes
We introduce a class of learning problems where the agent is presented with a series of tasks. Intuitively, if there is a relation among those tasks, then the information gained duri...
Christos Dimitrakakis