Sciweavers

15614 search results - page 36 / 3123
» The State of State
ECML
2005
Springer
Using Rewards for Belief State Updates in Partially Observable Markov Decision Processes
Partially Observable Markov Decision Processes (POMDPs) provide a standard framework for sequential decision making in stochastic environments. In this setting, an agent takes actio...
Masoumeh T. Izadi, Doina Precup
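The belief-state update this abstract builds on is the standard POMDP Bayes filter: predict through the transition model, then reweight by the observation likelihood. A minimal sketch, assuming a hypothetical 2-state problem with an invented transition matrix T and observation likelihoods O (not taken from the paper, which additionally incorporates rewards into the update):

```python
import numpy as np

# Hypothetical 2-state POMDP, fixed action a and fixed observation o.
T = np.array([[0.9, 0.1],   # T[s, s'] = P(s' | s, a)
              [0.2, 0.8]])
O = np.array([0.7, 0.4])    # O[s'] = P(o | s', a)

def belief_update(b, T, O):
    """Standard Bayes-filter belief update: b'(s') ∝ O(o|s',a) Σ_s T(s'|s,a) b(s)."""
    b_pred = b @ T          # predict: push belief through the transition model
    b_new = O * b_pred      # correct: weight by observation likelihood
    return b_new / b_new.sum()  # renormalize to a probability vector

b = belief_update(np.array([0.5, 0.5]), T, O)
```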
INTERSPEECH
2010
Canonical state models for automatic speech recognition
Current speech recognition systems are often based on HMMs with state-clustered Gaussian Mixture Models (GMMs) to represent the context-dependent output distributions. Though high...
Mark J. F. Gales, Kai Yu
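The GMM output distribution mentioned in the abstract scores an acoustic feature vector against a weighted sum of Gaussians per HMM state. A minimal sketch (diagonal covariances assumed; the parameter values in any real system come from training, not from the paper):

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """log p(x | state) for a diagonal-covariance GMM: log Σ_m w_m N(x; μ_m, σ²_m)."""
    log_comps = []
    for w, mu, var in zip(weights, means, variances):
        # log of one weighted Gaussian component, summed over feature dims
        ll = np.log(w) - 0.5 * np.sum(
            np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        log_comps.append(ll)
    return np.logaddexp.reduce(log_comps)  # numerically stable log-sum-exp
```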
NECO
2010
Posterior Weighted Reinforcement Learning with State Uncertainty
Reinforcement learning models generally assume that a stimulus is presented that allows a learner to unambiguously identify the state of nature, and the reward received is drawn f...
Tobias Larsen, David S. Leslie, Edmund J. Collins,...
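One way to read "posterior weighted" learning under state uncertainty is to spread each value update over candidate states in proportion to the posterior probability of being in them. A minimal sketch of that idea (the update rule, state count, and learning rates here are illustrative assumptions, not the authors' exact model):

```python
import numpy as np

n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def posterior_weighted_update(Q, posterior, action, reward, next_posterior):
    """Weight one TD update by the posterior probability of each candidate state."""
    v_next = (next_posterior * Q.max(axis=1)).sum()  # expected next-state value
    for s in range(n_states):
        td = reward + gamma * v_next - Q[s, action]
        Q[s, action] += alpha * posterior[s] * td    # scale update by P(state=s)
    return Q

posterior = np.array([0.7, 0.2, 0.1])   # belief over the current state
next_post = np.array([0.1, 0.6, 0.3])
Q = posterior_weighted_update(Q, posterior, action=0, reward=1.0,
                              next_posterior=next_post)
```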
INFORMATICALT
1998
State Estimation of Dynamic Systems in the Presence of Time-Varying Outliers in Observations
Abstract. In previous papers (Masreliez and Martin, 1977; Novovičová, 1987; Schick and Mitter, 1994) the problem of recursive estimation of linear dynamic systems parameters ...
Rimantas Pupeikis
ICRA
2002
IEEE
Stochastic Cloning: A Generalized Framework for Processing Relative State Measurements
This paper introduces a generalized framework, termed “stochastic cloning,” for processing relative state measurements within a Kalman filter estimator. The main motivation a...
Stergios I. Roumeliotis, Joel W. Burdick
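Stochastic cloning extends the standard Kalman filter measurement update to relative measurements by augmenting the state with a "cloned" past copy. The underlying update step it builds on can be sketched as follows (this is the generic Kalman measurement update, not the stochastic-cloning algorithm itself; all numbers are illustrative):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update: fuse z into estimate (x, P)."""
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ (z - H @ x)       # correct state with the innovation
    P_new = (np.eye(len(x)) - K @ H) @ P  # shrink covariance
    return x_new, P_new

x = np.array([0.0, 0.0])      # prior state estimate
P = np.eye(2)                 # prior covariance
H = np.array([[1.0, 0.0]])    # measure only the first state component
R = np.array([[0.5]])         # measurement noise covariance
z = np.array([1.0])           # observed value
x, P = kf_update(x, P, z, H, R)
```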