Sciweavers

495 search results - page 54 / 99
» Constructing States for Reinforcement Learning
CSL 2010, Springer
Bayesian update of dialogue state: A POMDP framework for spoken dialogue systems
This paper describes a statistically motivated framework for performing real-time dialogue state updates and policy learning in a spoken dialogue system. The framework is based on...
Blaise Thomson, Steve Young
ICML 2007, IEEE
Constructing basis functions from directed graphs for value function approximation
Basis functions derived from an undirected graph connecting nearby samples from a Markov decision process (MDP) have proven useful for approximating value functions. The success o...
Jeffrey Johns, Sridhar Mahadevan
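The undirected baseline this abstract builds on takes low-order eigenvectors of a graph Laplacian over sampled states as basis functions for value-function approximation. A minimal sketch of that undirected case only (a hand-built 5-state chain graph; the graph and sizes are illustrative, not from the paper, which extends the idea to directed graphs):

```python
import numpy as np

# Adjacency matrix W for an undirected 5-state chain (states connected to neighbors).
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0

# Combinatorial graph Laplacian L = D - W, with D the diagonal degree matrix.
D = np.diag(W.sum(axis=1))
L = D - W

# Eigendecomposition; eigh returns eigenvalues in ascending order, so the
# first columns of eigvecs are the smoothest functions on the graph.
eigvals, eigvecs = np.linalg.eigh(L)

# Use the 3 smoothest eigenvectors as basis functions (features) for each state.
basis = eigvecs[:, :3]
print(basis.shape)  # → (5, 3)
```

The smallest eigenvalue is 0 with a constant eigenvector, so the basis always contains a constant feature; the remaining columns capture progressively higher-frequency structure on the state graph.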
ATAL 2007, Springer
On discovery and learning of models with predictive representations of state for agents with continuous actions and observations
Models of agent-environment interaction that use predictive state representations (PSRs) have mainly focused on the case of discrete observations and actions. The theory of discre...
David Wingate, Satinder P. Singh
IIE 2007
Investigation of Q-Learning in the Context of a Virtual Learning Environment
We investigate the possibility of applying a well-known machine learning algorithm, Q-learning, in the domain of a Virtual Learning Environment (VLE). It is important in this problem doma...
Dalia Baziukaite
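The abstract above applies Q-learning to a VLE; as a reference point, here is a minimal tabular Q-learning sketch on a toy 5-state chain (the environment, hyperparameters, and epsilon-greedy policy are all illustrative, not taken from the paper):

```python
import random

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])

def step(s, a):
    """Toy chain MDP: action 1 moves right, action 0 moves left; reward 1 at state 4."""
    s_next = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == 4 else 0.0
    return s_next, reward, s_next == 4

random.seed(0)
Q = [[0.0, 0.0] for _ in range(5)]  # Q-table: 5 states x 2 actions
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection (epsilon = 0.2).
        if random.random() < 0.2:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda x: Q[s][x])
        s_next, r, done = step(s, a)
        q_learning_update(Q, s, a, r, s_next)
        s = s_next

print(max(range(2), key=lambda a: Q[0][a]))  # → 1 (greedy policy moves right)
```

After training, the greedy policy at every state heads toward the rewarding end of the chain, since the discount factor makes the shorter path to the reward strictly more valuable.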
ICADL 2007, Springer
Feature Reinforcement Approach to Poly-lingual Text Categorization
With the rapid emergence and proliferation of the Internet and the trend of globalization, a tremendous number of textual documents written in different languages are electronically ac...
Chih-Ping Wei, Huihua Shi, Christopher C. Yang