Sciweavers

119 search results - page 17 / 24
Search: A Markov Reward Model Checker
CORR 2007 (Springer)
On Myopic Sensing for Multi-Channel Opportunistic Access
We consider a multi-channel opportunistic communication system where the states of these channels evolve as independent and statistically identical Markov chains (the Gilbert-Elli...
Qing Zhao, Bhaskar Krishnamachari, Keqin Liu
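The Gilbert-Elliott model referenced in this abstract treats each channel as a two-state (good/bad) Markov chain, with all channels independent and statistically identical. A minimal sketch of simulating such channels, with transition probabilities chosen purely for illustration:

```python
import numpy as np

# Gilbert-Elliott channel: each channel is a two-state Markov chain
# (1 = "good", 0 = "bad"). All channels share the same transition
# probabilities, so they are statistically identical.
P_BAD_TO_GOOD = 0.3   # illustrative value, not from the paper
P_GOOD_TO_BAD = 0.1   # illustrative value, not from the paper

def step(states, rng):
    """Advance every channel one time slot, independently of the others."""
    u = rng.random(states.shape)
    return np.where(
        states == 1,
        (u >= P_GOOD_TO_BAD).astype(int),   # good stays good w.p. 1 - P_GOOD_TO_BAD
        (u < P_BAD_TO_GOOD).astype(int),    # bad turns good w.p. P_BAD_TO_GOOD
    )

rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=5)         # 5 channels, random initial states
for t in range(3):
    states = step(states, rng)
    print(t, states)
```

A myopic sensing policy in this setting would simply probe the channel currently believed most likely to be good, rather than optimizing over future belief evolution.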
TWC 2008
On myopic sensing for multi-channel opportunistic access: structure, optimality, and performance
We consider a multi-channel opportunistic communication system where the states of these channels evolve as independent and statistically identical Markov chains (the Gilbert-Elli...
Qing Zhao, Bhaskar Krishnamachari, Keqin Liu
QEST 2010 (IEEE)
Reasoning about MDPs as Transformers of Probability Distributions
We consider Markov Decision Processes (MDPs) as transformers on probability distributions, where with respect to a scheduler that resolves nondeterminism, the MDP can be seen as ex...
Vijay Anand Korthikanti, Mahesh Viswanathan, Gul A...
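Read this way, once a memoryless scheduler fixes the action taken in each state, the MDP collapses to a single stochastic matrix, and each step maps a probability distribution over states to the next one. A small numpy sketch with made-up transition matrices and a hypothetical scheduler:

```python
import numpy as np

# Toy MDP with 3 states and 2 actions; transition matrices are illustrative.
P = {
    "a": np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.0, 0.0, 1.0]]),
    "b": np.array([[0.5, 0.5, 0.0],
                   [0.3, 0.3, 0.4],
                   [0.0, 0.0, 1.0]]),
}

# A memoryless scheduler: one action per state (hypothetical choice).
scheduler = ["a", "b", "a"]

# Under this scheduler the MDP induces one stochastic matrix:
# row s of P_sched is row s of P[scheduler[s]].
P_sched = np.vstack([P[scheduler[s]][s] for s in range(3)])

# The MDP as a transformer of distributions: mu_{t+1} = mu_t @ P_sched.
mu = np.array([1.0, 0.0, 0.0])    # start in state 0 with probability 1
for t in range(4):
    mu = mu @ P_sched
    print(t, mu)
```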
AAAI 1997
Structured Solution Methods for Non-Markovian Decision Processes
Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision theoretic planning problems, are limited by the Markovian assumption: rewards and dy...
Fahiem Bacchus, Craig Boutilier, Adam J. Grove
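The limitation mentioned here is that in a plain MDP the reward may depend only on the current state (and action). A reward that depends on the trajectory history is non-Markovian and is typically handled by augmenting the state with the relevant history information. A toy sketch, with states and reward invented only for illustration:

```python
# History-dependent reward: +1 in state "goal" only if "key" was visited
# earlier in the trajectory. This cannot be expressed as R(s) over the
# original states alone.
def non_markovian_reward(history):
    return 1.0 if history[-1] == "goal" and "key" in history[:-1] else 0.0

# Standard fix: augment the state with a "has_key" flag so the same reward
# becomes Markovian over the augmented state space.
def augmented_step(aug_state, next_raw_state):
    _, has_key = aug_state
    return (next_raw_state, has_key or next_raw_state == "key")

def markovian_reward(aug_state):
    raw_state, has_key = aug_state
    return 1.0 if raw_state == "goal" and has_key else 0.0
```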
IROS 2006 (IEEE)
Planning and Acting in Uncertain Environments using Probabilistic Inference
An important problem in robotics is planning and selecting actions for goal-directed behavior in noisy, uncertain environments. The problem is typically addressed within the fra...
Deepak Verma, Rajesh P. N. Rao
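One common way to cast planning as probabilistic inference, roughly in the spirit of this line of work (the model below is illustrative, not the authors' exact formulation), is to condition on reaching the goal and infer which first action makes that event most likely: with a uniform prior over actions, P(a_1 | goal) is proportional to P(goal | a_1).

```python
import numpy as np

# Toy domain: 3 states, goal is the absorbing state 2; per-action
# transition matrices are made up for illustration.
P = {
    "left":  np.array([[0.8, 0.2, 0.0],
                       [0.6, 0.3, 0.1],
                       [0.0, 0.0, 1.0]]),
    "right": np.array([[0.2, 0.7, 0.1],
                       [0.0, 0.4, 0.6],
                       [0.0, 0.0, 1.0]]),
}
GOAL, HORIZON = 2, 4

def prob_goal(first_action, start=0):
    """P(at goal by HORIZON | first action), marginalizing later actions
    under a uniform 'policy prior'. Since the goal is absorbing, being at
    the goal at HORIZON means it was reached by then."""
    mu = np.zeros(3); mu[start] = 1.0
    mu = mu @ P[first_action]
    uniform_mix = 0.5 * (P["left"] + P["right"])   # uniform prior over later actions
    for _ in range(HORIZON - 1):
        mu = mu @ uniform_mix
    return mu[GOAL]

best = max(P, key=prob_goal)
print({a: round(prob_goal(a), 3) for a in P}, "-> choose", best)
```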