Sciweavers

71 search results (page 9 of 15) for "A Behavior Adaptation Algorithm based on Hierarchical Partia..."
AI 2006 (Springer)
Belief Selection in Point-Based Planning Algorithms for POMDPs
Current point-based planning algorithms for solving partially observable Markov decision processes (POMDPs) have demonstrated that a good approximation of the value funct...
Masoumeh T. Izadi, Doina Precup, Danielle Azar
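
Point-based planners of the kind this entry concerns maintain a small set of belief points and back the value function up only at those points. Below is a minimal sketch of such a backup for a dense tabular POMDP; the array layout and every name in it (point_based_backup, T, O, R) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def point_based_backup(beliefs, alphas, T, O, R, gamma=0.95):
    """One point-based value backup at a fixed set of belief points.

    T[a, s, s'] : transition probabilities
    O[a, s', o] : observation probabilities
    R[a, s]     : immediate rewards
    beliefs     : (num_points, num_states) belief points
    alphas      : (num_vectors, num_states) current alpha-vectors
    """
    num_actions = T.shape[0]
    new_alphas = []
    for b in beliefs:
        best_alpha, best_val = None, -np.inf
        for a in range(num_actions):
            alpha_a = R[a].astype(float)
            for o in range(O.shape[2]):
                # Project every alpha-vector back through (a, o) and keep
                # the one that scores best at this belief point.
                proj = gamma * (T[a] * O[a][:, o]) @ alphas.T  # (states, vectors)
                alpha_a = alpha_a + proj[:, np.argmax(b @ proj)]
            val = float(b @ alpha_a)
            if val > best_val:
                best_alpha, best_val = alpha_a, val
        new_alphas.append(best_alpha)
    return np.unique(np.array(new_alphas), axis=0)
```

In a full point-based algorithm this backup alternates with the belief-selection step the title refers to, which decides which reachable beliefs to add to the point set.
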
ICML 2007 (IEEE)
Multi-task reinforcement learning: a hierarchical Bayesian approach
We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknow...
Aaron Wilson, Alan Fern, Soumya Ray, Prasad Tadepa...
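
In the hierarchical Bayesian setting the abstract describes, each task's MDP is a draw from a shared but unknown distribution, so solving earlier tasks sharpens a posterior over that distribution and speeds up later ones. The sketch below illustrates only that statistical core with a conjugate Gaussian model over a single scalar task parameter; the model and all names are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown task distribution the agent must infer (hidden from the learner).
true_mu, true_sigma = 1.0, 0.5

def sample_task_parameter():
    """Draw one task's scalar reward parameter from the shared distribution."""
    return rng.normal(true_mu, true_sigma)

# Conjugate Normal prior over the shared mean (task variance assumed known
# here purely to keep the update in closed form).
prior_mu, prior_var = 0.0, 10.0
task_var = true_sigma ** 2

observed = []
for _ in range(20):
    observed.append(sample_task_parameter())
    n = len(observed)
    # Posterior over the shared mean after n tasks.
    post_var = 1.0 / (1.0 / prior_var + n / task_var)
    post_mu = post_var * (prior_mu / prior_var + sum(observed) / task_var)

print(f"posterior over the shared task mean: N({post_mu:.3f}, {post_var:.4f})")
```
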
ICASSP 2008 (IEEE)
Bayesian update of dialogue state for robust dialogue systems
This paper presents a new framework for accumulating beliefs in spoken dialogue systems. The technique is based on updating a Bayesian Network that represents the underlying state...
Blaise Thomson, Jost Schatzmann, Steve Young
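
Accumulating belief over a hidden dialogue state amounts to repeated Bayesian updates: each noisy recognition result re-weights the distribution over possible user goals. The toy update below sketches only that idea; the slot values and likelihoods are invented, and the paper's actual Bayesian-network factorization is not reproduced here.

```python
GOALS = ["cheap_hotel", "expensive_hotel", "hostel"]

def bayes_update(belief, likelihood):
    """belief: P(goal); likelihood: P(observation | goal). Returns the posterior."""
    posterior = {g: belief[g] * likelihood[g] for g in GOALS}
    norm = sum(posterior.values())
    return {g: p / norm for g, p in posterior.items()}

# Uniform prior over the user's goal.
belief = {g: 1.0 / len(GOALS) for g in GOALS}

# Noisy speech-recognition evidence suggesting "cheap".
belief = bayes_update(belief, {"cheap_hotel": 0.7, "expensive_hotel": 0.1, "hostel": 0.2})

# A second, consistent observation sharpens the belief further.
belief = bayes_update(belief, {"cheap_hotel": 0.6, "expensive_hotel": 0.2, "hostel": 0.2})
print(belief)
```
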
EVOW 2009 (Springer)
Efficient Signal Processing and Anomaly Detection in Wireless Sensor Networks
This paper presents the node-level decision unit of a self-learning anomaly detection mechanism for office monitoring with wireless sensor nodes. The node-level decision uni...
Markus Wälchli, Torsten Braun
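
As a rough illustration of what a node-level decision unit can look like, the sketch below learns the normal range of a sensor signal online and flags readings that deviate strongly from it; the Welford-style running statistics and the 3-sigma rule are assumptions made for illustration, not the mechanism described in the paper.

```python
class NodeDecisionUnit:
    """Toy self-learning anomaly detector for a single sensor stream."""

    def __init__(self, threshold_sigmas=3.0, warmup=10):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.threshold = threshold_sigmas
        self.warmup = warmup

    def update(self, x):
        """Feed one reading; return True if it looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:  # judge only after some normal behaviour was seen
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.threshold * std
        # Welford's online update of the running mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

unit = NodeDecisionUnit()
readings = [21.0, 21.2, 20.9, 21.1, 21.0, 21.3, 20.8, 21.1, 21.0, 21.2, 35.0]
print([unit.update(r) for r in readings])  # only the last reading is flagged
```
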
PKDD 2010 (Springer)
Efficient Planning in Large POMDPs through Policy Graph Based Factorized Approximations
Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
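
A policy graph is a finite-state controller: each node fixes an action and the received observation selects the outgoing edge, so the policy can be executed without tracking a belief explicitly. The sketch below runs such a controller on a tiger-style toy problem; the domain, probabilities, and names are illustrative and say nothing about the paper's factorized approximation.

```python
import random

# Policy graph: node -> (action, {observation: next node})
POLICY_GRAPH = {
    0: ("listen",     {"hear_left": 1, "hear_right": 2}),
    1: ("open_right", {"hear_left": 0, "hear_right": 0}),
    2: ("open_left",  {"hear_left": 0, "hear_right": 0}),
}

def run_controller(steps=5, node=0):
    """Execute the controller once and return the accumulated reward."""
    tiger = random.choice(["left", "right"])
    total = 0.0
    for _ in range(steps):
        action, transitions = POLICY_GRAPH[node]
        if action == "listen":
            total -= 1.0
            # Noisy observation: the correct side is heard with probability 0.85.
            wrong = "hear_right" if tiger == "left" else "hear_left"
            obs = f"hear_{tiger}" if random.random() < 0.85 else wrong
        else:
            # Opening the tiger's door is costly; the other door pays off.
            total += -100.0 if action == f"open_{tiger}" else 10.0
            tiger = random.choice(["left", "right"])  # the problem resets
            obs = random.choice(["hear_left", "hear_right"])
        node = transitions[obs]
    return total

print(sum(run_controller() for _ in range(1000)) / 1000)  # average return
```
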