Sciweavers

509 search results (page 11 of 102) for "Using Learning for Approximation in Stochastic Processes"
CMSB 2006, Springer
Stronger Computational Modelling of Signalling Pathways Using Both Continuous and Discrete-State Methods
Abstract. Starting from a biochemical signalling pathway model expressed in a process algebra enriched with quantitative information, we automatically derive both continuous-space a...
Muffy Calder, Adam Duguid, Stephen Gilmore, Jane H...
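The abstract above contrasts continuous-space and discrete-state treatments of one quantitative model. As a rough illustration of that distinction only (a toy degradation reaction with made-up constants, not the paper's pathway or its process-algebra derivation), the Python sketch below simulates the same reaction once as an ODE and once as a discrete-state stochastic process:

    # Toy illustration (not the paper's model): one degradation reaction A -> 0
    # at a hypothetical rate K, viewed both continuously and discretely.
    import random

    K = 0.1          # hypothetical rate constant
    A0 = 100         # hypothetical initial molecule count
    T_END = 20.0

    def ode_trajectory(dt=0.01):
        """Continuous-space view: forward-Euler integration of dA/dt = -K*A."""
        t, a, traj = 0.0, float(A0), []
        while t < T_END:
            traj.append((t, a))
            a += -K * a * dt
            t += dt
        return traj

    def gillespie_trajectory(seed=0):
        """Discrete-state view: Gillespie SSA; each firing removes one A molecule."""
        rng = random.Random(seed)
        t, a, traj = 0.0, A0, [(0.0, A0)]
        while a > 0 and t < T_END:
            t += rng.expovariate(K * a)   # exponential waiting time to the next firing
            a -= 1
            traj.append((t, a))
        return traj

    if __name__ == "__main__":
        print("ODE A(t_end) ~", round(ode_trajectory()[-1][1], 2))
        print("SSA A(t_end) ~", gillespie_trajectory()[-1][1])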
AIPS 2007
Discovering Relational Domain Features for Probabilistic Planning
In sequential decision-making problems formulated as Markov decision processes, state-value function approximation using domain features is a critical technique for scaling up the...
Jia-Hong Wu, Robert Givan
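As background for the entry above, the sketch below shows the general technique it builds on: linear state-value function approximation over domain features, trained with TD(0). The chain MDP and the feature map are illustrative assumptions, not the paper's planning domains or its discovered relational features.

    import random

    N_STATES = 10    # toy chain 0..9; reaching state 9 gives reward 1 and ends the episode
    GAMMA = 0.95
    ALPHA = 0.05

    def features(s):
        """Hypothetical hand-coded domain features: bias, position, distance to goal."""
        return [1.0, s / (N_STATES - 1), (N_STATES - 1 - s) / (N_STATES - 1)]

    def value(w, s):
        return sum(wi * fi for wi, fi in zip(w, features(s)))

    def td0(episodes=2000, seed=0):
        """TD(0) with a linear approximator V(s) ~ w . phi(s) under a uniform random policy."""
        rng = random.Random(seed)
        w = [0.0] * len(features(0))
        for _ in range(episodes):
            s = 0
            while s != N_STATES - 1:
                s2 = min(max(s + rng.choice([-1, 1]), 0), N_STATES - 1)
                r = 1.0 if s2 == N_STATES - 1 else 0.0
                target = r + (0.0 if s2 == N_STATES - 1 else GAMMA * value(w, s2))
                delta = target - value(w, s)
                w = [wi + ALPHA * delta * fi for wi, fi in zip(w, features(s))]
                s = s2
        return w

    if __name__ == "__main__":
        w = td0()
        print("learned weights:", [round(x, 3) for x in w])
        print("V(0) =", round(value(w, 0), 3), " V(8) =", round(value(w, 8), 3))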
TNN 1998
Asymptotic distributions associated to Oja's learning equation for neural networks
In this paper, we perform a complete asymptotic performance analysis of the stochastic approximation algorithm (denoted subspace network learning algorithm) derived from Oja’...
Jean-Pierre Delmas, Jean-François Cardoso
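For readers unfamiliar with the algorithm analysed above, this is a minimal sketch of Oja's learning rule as a stochastic approximation extracting the dominant principal direction from streaming data; the 2-D toy data and step size are illustrative assumptions, and the paper's asymptotic analysis itself is not reproduced here.

    import random

    ETA = 0.01       # hypothetical step size

    def sample(rng):
        """Toy 2-D stream whose dominant variance lies along the (1, 1) direction."""
        a, b = rng.gauss(0.0, 1.0), rng.gauss(0.0, 0.3)
        return (a + b, a - b)

    def oja(steps=20000, seed=0):
        rng = random.Random(seed)
        w = [1.0, 0.0]                           # initial weight vector
        for _ in range(steps):
            x = sample(rng)
            y = w[0] * x[0] + w[1] * x[1]        # neuron output y = w . x
            # Oja update: w <- w + eta * y * (x - y * w); the -y^2 w term keeps ||w|| near 1
            w = [w[0] + ETA * y * (x[0] - y * w[0]),
                 w[1] + ETA * y * (x[1] - y * w[1])]
        return w

    if __name__ == "__main__":
        w = oja()
        print("estimated principal direction:", [round(c, 3) for c in w])  # roughly (0.71, 0.71)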
AAAI 2000
Localizing Search in Reinforcement Learning
Reinforcement learning (RL) can be impractical for many high-dimensional problems because of the computational cost of doing stochastic search in large state spaces. We propose a ...
Gregory Z. Grudic, Lyle H. Ungar
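The snippet above is cut off before the proposed method, so the sketch below shows only the baseline it argues against: plain epsilon-greedy Q-learning on a hypothetical gridworld, i.e. the kind of global stochastic search whose cost grows with the size of the state space.

    import random

    SIZE = 8                      # SIZE x SIZE grid, goal in the bottom-right corner
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

    def step(s, a):
        x = min(max(s[0] + a[0], 0), SIZE - 1)
        y = min(max(s[1] + a[1], 0), SIZE - 1)
        done = (x, y) == (SIZE - 1, SIZE - 1)
        return (x, y), (1.0 if done else -0.01), done

    def q_learning(episodes=500, seed=0):
        rng = random.Random(seed)
        q = {}                    # (state, action) -> value; grows with the state space
        for _ in range(episodes):
            s, done = (0, 0), False
            while not done:
                if rng.random() < EPS:
                    a = rng.choice(ACTIONS)            # random exploration step
                else:
                    a = max(ACTIONS, key=lambda act: q.get((s, act), 0.0))
                s2, r, done = step(s, a)
                best_next = 0.0 if done else max(q.get((s2, act), 0.0) for act in ACTIONS)
                q[(s, a)] = q.get((s, a), 0.0) + ALPHA * (r + GAMMA * best_next - q.get((s, a), 0.0))
                s = s2
        return q

    if __name__ == "__main__":
        print("table entries learned:", len(q_learning()))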
ICML 2006, IEEE
Kernel Predictive Linear Gaussian models for nonlinear stochastic dynamical systems
The recent Predictive Linear Gaussian model (or PLG) improves upon traditional linear dynamical system models by using a predictive representation of state, which makes consistent...
David Wingate, Satinder P. Singh
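The sketch below does not implement KPLG itself; it only illustrates the underlying idea of a predictive representation, forecasting the next observation of a toy nonlinear stochastic series from a window of recent observations with a kernel smoother (the window length, bandwidth, and series are all assumptions).

    import math
    import random

    WINDOW = 3          # hypothetical history length used as the "state"
    BANDWIDTH = 0.5     # hypothetical RBF kernel bandwidth

    def toy_series(n, seed=0):
        """Noisy nonlinear dynamical system: x_{t+1} = sin(2*x_t) + noise."""
        rng = random.Random(seed)
        x, out = 0.5, []
        for _ in range(n):
            x = math.sin(2.0 * x) + rng.gauss(0.0, 0.05)
            out.append(x)
        return out

    def rbf(u, v):
        d2 = sum((a - b) ** 2 for a, b in zip(u, v))
        return math.exp(-d2 / (2.0 * BANDWIDTH ** 2))

    def predict_next(history, train):
        """Kernel-weighted average of the observations that followed similar windows."""
        windows = [(train[i:i + WINDOW], train[i + WINDOW]) for i in range(len(train) - WINDOW)]
        weights = [(rbf(history, w), y) for w, y in windows]
        total = sum(k for k, _ in weights)
        return sum(k * y for k, y in weights) / total

    if __name__ == "__main__":
        data = toy_series(300)
        train, recent = data[:-1], data[-WINDOW - 1:-1]
        print("predicted next observation:", round(predict_next(recent, train), 3))
        print("actual next observation:   ", round(data[-1], 3))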