Sciweavers

499 search results - page 62 / 100
» Model Minimization in Markov Decision Processes
ICIP 2005, IEEE
Joint feature-spatial-measure space: a new approach to highly efficient probabilistic object tracking
In this paper we present a probabilistic framework for tracking objects based on local dynamic segmentation. We view the segmentation as a Markov labeling process and abstract it as a ...
Feng Chen, XiaoTong Yuan, ShuTang Yang
ICASSP 2008, IEEE
Multimodal information fusion using the iterative decoding algorithm and its application to audio-visual speech recognition
The fusion of information from heterogeneous sensors is crucial to the effectiveness of a multimodal system. Noise affects the sensors of different modalities independently. A good ...
Shankar T. Shivappa, Bhaskar D. Rao, Mohan M. Triv...
JSAC 2011
Optimal Cognitive Access of Markovian Channels under Tight Collision Constraints
Abstract—The problem of cognitive access of channels of primary users by a secondary user is considered. The transmissions of primary users are modeled as independent continuous-...
Xin Li, Qianchuan Zhao, Xiaohong Guan, Lang Tong
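
The snippet above only hints at the model, so here is a rough, self-contained sketch of the general setting rather than the paper's algorithm: each primary-user channel is a two-state Markov chain (the paper models continuous-time transmissions; this sketch is discrete-time for brevity), and a secondary user tracks the probability that each channel is idle, senses one channel per slot, and transmits only when the predicted collision risk stays under a tolerance epsilon. The transition probabilities, perfect-sensing assumption, and threshold rule are all illustrative assumptions.

```python
import random

# Illustrative sketch (not the paper's algorithm): a secondary user opportunistically
# accesses one of several independent two-state Markov channels (0 = busy with a
# primary user, 1 = idle), transmitting only when its belief that the channel is
# idle keeps the collision probability below EPSILON.

P_IDLE_TO_IDLE = 0.95   # assumed transition probabilities (discrete-time for simplicity)
P_BUSY_TO_IDLE = 0.20
EPSILON = 0.1           # assumed collision tolerance

def step_channel(state: int) -> int:
    """Advance one channel by one slot according to its Markov chain."""
    p = P_IDLE_TO_IDLE if state == 1 else P_BUSY_TO_IDLE
    return 1 if random.random() < p else 0

def propagate(belief: float) -> float:
    """One-step prediction of the probability that a channel is idle."""
    return belief * P_IDLE_TO_IDLE + (1.0 - belief) * P_BUSY_TO_IDLE

def run(num_channels: int = 3, slots: int = 10_000) -> None:
    states = [random.randint(0, 1) for _ in range(num_channels)]
    beliefs = [0.5] * num_channels
    successes = collisions = attempts = 0
    for _ in range(slots):
        states = [step_channel(s) for s in states]
        # Pick the channel believed most likely to be idle.
        k = max(range(num_channels), key=lambda i: beliefs[i])
        if beliefs[k] >= 1.0 - EPSILON:          # transmit only under the tight collision constraint
            attempts += 1
            if states[k] == 1:
                successes += 1
            else:
                collisions += 1
        beliefs[k] = float(states[k])            # perfect sensing of the chosen channel (assumption)
        beliefs = [propagate(b) for b in beliefs]
    print(f"attempts={attempts}  successes={successes}  "
          f"collision rate={collisions / max(attempts, 1):.3f}")

if __name__ == "__main__":
    run()
```

With these assumed parameters, the secondary user only transmits on a channel it sensed idle in the previous slot, so the empirical collision rate stays near 1 - P_IDLE_TO_IDLE, below the tolerance.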
ATAL 2007, Springer
Letting loose a SPIDER on a network of POMDPs: generating quality guaranteed policies
Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the signi...
Pradeep Varakantham, Janusz Marecki, Yuichi Yabu, ...
ICML 2001, IEEE
Continuous-Time Hierarchical Reinforcement Learning
Hierarchical reinforcement learning (RL) is a general framework which studies how to exploit the structure of actions and tasks to accelerate policy learning in large domains. Pri...
Mohammad Ghavamzadeh, Sridhar Mahadevan
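
This entry concerns extending hierarchical RL to continuous-time settings via semi-Markov decision processes (SMDPs). As a point of reference rather than the authors' specific algorithm, the sketch below shows the SMDP-style Q-learning backup such methods build on: a temporally extended action runs for a duration tau, and the discount applied to the successor value is e^(-beta*tau). The state/option names, learning rate, and discount rate are illustrative assumptions.

```python
import math
import random
from collections import defaultdict

# Minimal sketch of an SMDP-style Q-learning backup with continuous-time
# discounting, the kind of update continuous-time hierarchical RL builds on.
# The environment/option interface here is a placeholder assumption.

BETA = 0.1       # assumed continuous-time discount rate
ALPHA = 0.1      # assumed learning rate
Q = defaultdict(float)

def smdp_q_update(state, option, reward, tau, next_state, options):
    """One backup after executing `option` for duration `tau`;
    `reward` is the return accumulated while the option ran."""
    discount = math.exp(-BETA * tau)
    best_next = max(Q[(next_state, o)] for o in options) if options else 0.0
    Q[(state, option)] += ALPHA * (reward + discount * best_next - Q[(state, option)])

def epsilon_greedy(state, options, eps=0.1):
    """Pick a temporally extended action (option) epsilon-greedily."""
    if random.random() < eps:
        return random.choice(options)
    return max(options, key=lambda o: Q[(state, o)])

# Example usage with hypothetical states and options:
# a = epsilon_greedy("s0", ["goto_door", "pickup"])
# smdp_q_update("s0", a, reward=2.5, tau=3.7, next_state="s1",
#               options=["goto_door", "pickup"])
```

In a hierarchical method, roughly the same backup is applied at each level of the task hierarchy, with `reward` standing in for the return accumulated while a child subtask executes.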