Learning Dynamic Bayesian Networks

Bayesian networks are directed acyclic graphs that represent dependencies between variables in a probabilistic model. Many time series models, including the hidden Markov models (HMMs) used in speech recognition and the Kalman filter models used in filtering and control applications, can be viewed as examples of dynamic Bayesian networks. We first provide a brief tutorial on learning and Bayesian networks. We then present some dynamic Bayesian networks that can capture much richer structure than HMMs and Kalman filters, including spatial and temporal multiresolution structure, distributed hidden state representations, and multiple switching linear regimes. While exact probabilistic inference is intractable in these networks, one can obtain tractable variational approximations which call the forward-backward and Kalman filter recursions as subroutines. These approximations can be used to learn the model parameters by maximizing a lower bound on the likelihood.
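The lower bound referred to in the abstract is the standard variational free-energy bound; a minimal sketch in generic notation (hidden states X, observations Y, model parameters \theta, and a tractable approximating distribution Q, none of which is quoted from the paper) is:

\log p(Y \mid \theta) \;\ge\; \mathcal{F}(Q, \theta)
  \;=\; \sum_{X} Q(X)\, \log \frac{p(X, Y \mid \theta)}{Q(X)}
  \;=\; \log p(Y \mid \theta) \;-\; \mathrm{KL}\big( Q(X) \,\|\, p(X \mid Y, \theta) \big).

Maximizing \mathcal{F} with respect to Q tightens the bound toward the true log-likelihood, while maximizing it with respect to \theta for fixed Q increases the bound, which is how a variational approximation of this kind can be used to learn the model parameters.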
Zoubin Ghahramani
Type Conference
Year 1997
Where NN
Publisher Springer
Authors Zoubin Ghahramani