Sciweavers

258 search results - page 25 / 52
» Continuous Capacities on Continuous State Spaces
NIPS
2003
Gaussian Processes in Reinforcement Learning
We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP mod...
Carl Edward Rasmussen, Malte Kuss
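A minimal sketch of the general idea behind this entry, using scikit-learn's GaussianProcessRegressor rather than the authors' own formulation: a GP is fit to sampled transitions of a toy one-dimensional task, giving a predictive mean and an uncertainty estimate at unseen state-action pairs.

# Illustrative only: GP regression as a one-step dynamics model for a toy
# 1-D continuous-state task (not the paper's exact construction).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, size=(50, 1))          # states
A = rng.uniform(-1.0, 1.0, size=(50, 1))          # actions
X = np.hstack([S, A])
y = (S + 0.1 * A).ravel() + 0.01 * rng.standard_normal(50)  # noisy next states

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4))
gp.fit(X, y)

# Predictive mean and standard deviation at a new state-action pair; the
# calibrated uncertainty is what makes GP models attractive for RL.
mean, std = gp.predict(np.array([[0.2, -0.5]]), return_std=True)
print(mean, std)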
AIPS
1996
Planning Experiments: Resolving Interactions between Two Planning Spaces
Learning from experimentation allows a system to acquire planning domain knowledge by correcting its knowledge when an action execution fails. Experiments are designed and planned...
Yolanda Gil
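A hypothetical sketch of that learning-from-failure loop, with invented class and method names (not Gil's system): a failed execution of a planned action triggers a repair of the operator's precondition model, so later plans account for the condition the failure revealed.

# Hypothetical illustration only; names are invented for this sketch.
class Operator:
    def __init__(self, name, preconditions):
        self.name = name
        self.preconditions = set(preconditions)

def execute(op, world_state):
    """Stand-in for real execution: fails when an unmodeled requirement is unmet."""
    return "door_open" in world_state

pickup = Operator("pickup", {"hand_empty"})
world = {"hand_empty"}            # the door is closed, so execution will fail

if not execute(pickup, world):
    # Repair the domain model: record the condition the failure revealed
    pickup.preconditions.add("door_open")

print(pickup.preconditions)       # now includes 'door_open'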
ICRA
2009
IEEE
Constructing action set from basis functions for reinforcement learning of robot control
Continuous action sets are used in many reinforcement learning (RL) applications in robot control since the control input is continuous. However, discrete action sets a...
Akihiko Yamaguchi, Jun Takamatsu, Tsukasa Ogasawar...
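A rough sketch of one way to read that idea (illustrative, not the authors' construction): each discrete action selects a basis function over the state, and the chosen basis output, scaled by a gain, becomes the continuous control command.

# Illustrative sketch: a discrete action set built from Gaussian basis
# functions over a 1-D state; names and parameters are invented.
import numpy as np

def gaussian_basis(center, width):
    return lambda s: np.exp(-((s - center) ** 2) / (2.0 * width ** 2))

centers = np.linspace(-1.0, 1.0, 5)
actions = [gaussian_basis(c, 0.3) for c in centers]   # the discrete action set

def control_input(action_index, state, gain=1.0):
    """Map a discrete action choice to a continuous control command."""
    return gain * actions[action_index](state)

print(control_input(2, 0.1))      # continuous command from a discrete choice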
TIT
2010
Separation principles in wireless networking
A general wireless networking problem is formulated whereby end-to-end user rates, routes, link capacities, transmit-power, frequency and power resources are jointly optimized acros...
Alejandro Ribeiro, Georgios B. Giannakis
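As a point of reference only, a generic network-utility-maximization layout of such a joint design (an assumption about the usual setup, not necessarily this paper's exact formulation), written in LaTeX:

\begin{align}
  \max_{\{r_k\},\,\{c_l\},\,\{p_l\}} \quad & \sum_k U_k(r_k) \\
  \text{s.t.} \quad & \sum_{k:\, l \in \mathrm{route}(k)} r_k \le c_l \quad \forall l, \\
  & c_l \le \log\bigl(1 + \mathrm{SNR}_l(p_l)\bigr) \quad \forall l, \qquad \sum_l p_l \le P_{\max},
\end{align}

with $U_k$ a concave user utility; separation results then concern when such a joint problem decouples into per-layer subproblems without loss of optimality.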
AAAI
2008
Reducing Particle Filtering Complexity for 3D Motion Capture using Dynamic Bayesian Networks
Particle filtering algorithms can be used for the monitoring of dynamic systems with continuous state variables and without any constraints on the form of the probability distribu...
Cédric Rose, Jamal Saboune, François...
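A minimal bootstrap particle filter for a 1-D continuous-state system, as a generic illustration of the baseline such work starts from (the DBN-based complexity reduction the title refers to is not shown):

# Generic bootstrap particle filter sketch; models and parameters are toy values.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 1.0, N)    # samples of the continuous state
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, z, proc_std=0.1, obs_std=0.2):
    # Predict: propagate particles through the motion model
    particles = particles + rng.normal(0.0, proc_std, particles.size)
    # Update: reweight by the observation likelihood p(z | particle)
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size drops too low
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

for z in [0.1, 0.25, 0.4]:              # synthetic observations
    particles, weights = pf_step(particles, weights, z)
print(np.average(particles, weights=weights))   # posterior mean estimate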