Sciweavers

» Probabilistic inference for solving discrete and continuous ...
CISS
2008
IEEE
Rate adaptation via link-layer feedback for goodput maximization over a time-varying channel
Abstract—We consider adapting the transmission rate to maximize the goodput, i.e., the amount of data transmitted without error, over a continuous Markov flat-fading wireless ch...
Rohit Aggarwal, Phil Schniter, Can Emre Koksal
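As a rough illustration of the stated objective (not the authors' algorithm), the sketch below picks the transmission rate that maximizes expected goodput, i.e. rate times the probability of error-free delivery, under a belief over discretized channel states; all rates, states, and probabilities are made-up placeholders.

```python
# Minimal sketch: rate selection for expected-goodput maximization over a
# discretized fading channel. Values are illustrative, not from the paper.
import numpy as np

rates = np.array([1.0, 2.0, 4.0, 8.0])            # candidate rates (Mbit/s, hypothetical)
channel_states = ["deep_fade", "moderate", "good"]

# P[packet delivered without error | channel state, rate]:
# higher rates fail more often in worse channel states.
p_success = np.array([
    [0.90, 0.40, 0.05, 0.00],   # deep_fade
    [0.99, 0.90, 0.50, 0.10],   # moderate
    [1.00, 0.99, 0.95, 0.70],   # good
])

# Belief over the current channel state, e.g. inferred from link-layer ACK/NAK feedback.
belief = np.array([0.2, 0.5, 0.3])

# Expected goodput of each rate = rate * P(error-free delivery).
expected_goodput = rates * (belief @ p_success)
best_rate = rates[np.argmax(expected_goodput)]
print(expected_goodput, "-> choose rate", best_rate)
```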
AAAI
2012
Planning in Factored Action Spaces with Symbolic Dynamic Programming
We consider symbolic dynamic programming (SDP) for solving Markov Decision Processes (MDP) with factored state and action spaces, where both states and actions are described by se...
Aswin Raghavan, Saket Joshi, Alan Fern, Prasad Tad...
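As a hedged sketch of what a factored state and action look like (illustrative variable names only; this is not the paper's symbolic dynamic programming method), states and actions can be represented as assignments to small sets of binary variables, with each next-state variable depending only on a few parent variables:

```python
# Minimal sketch of a factored MDP state/action and one factored transition.
from typing import Dict
import random

State = Dict[str, bool]    # e.g. {"loc_a": True, "holding": False}
Action = Dict[str, bool]   # factored action: several binary action variables

def step(state: State, action: Action) -> State:
    """Illustrative factored transition: each state variable is updated
    from a small set of parents rather than from the full joint state."""
    nxt = dict(state)
    # "holding" depends on its own value and the "grasp" action bit.
    if action.get("grasp") and not state["holding"]:
        nxt["holding"] = random.random() < 0.8   # grasp succeeds w.p. 0.8
    # "loc_a" flips only when the "move" action bit is set.
    if action.get("move"):
        nxt["loc_a"] = not state["loc_a"]
    return nxt

s = {"loc_a": True, "holding": False}
a = {"grasp": True, "move": False}
print(step(s, a))
```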
NIPS
1996
Multidimensional Triangulation and Interpolation for Reinforcement Learning
Dynamic Programming, Q-learning and other discrete Markov Decision Process solvers can be applied to continuous d-dimensional state-spaces by quantizing the state space into an arr...
Scott Davies
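For context, a minimal sketch of the baseline approach the abstract alludes to: quantizing a continuous state into a grid of boxes so a tabular solver such as Q-learning can index it. This is the naive discretization, not the paper's triangulation-and-interpolation scheme, and all bounds and sizes are illustrative.

```python
# Minimal sketch: map a continuous d-dimensional state to a grid cell,
# then run ordinary tabular Q-learning backups on the cells.
import numpy as np

lows  = np.array([-1.0, -2.0])   # per-dimension state bounds (illustrative)
highs = np.array([ 1.0,  2.0])
bins_per_dim = 10                # resolution of the grid of "boxes"

def discretize(x: np.ndarray) -> tuple:
    """Return the index of the grid cell containing the continuous state x."""
    frac = (np.clip(x, lows, highs) - lows) / (highs - lows)
    idx = np.minimum((frac * bins_per_dim).astype(int), bins_per_dim - 1)
    return tuple(idx)

# Tabular Q-function indexed by (cell, action).
n_actions = 3
Q = np.zeros((bins_per_dim, bins_per_dim, n_actions))
cell = discretize(np.array([0.13, -0.7]))
# One Q-learning backup with reward 1.0, discount 0.9, step size 0.1.
Q[cell][1] += 0.1 * (1.0 + 0.9 * Q[cell].max() - Q[cell][1])
print(cell, Q[cell])
```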
CDC
2008
IEEE
Approximate abstractions of discrete-time controlled stochastic hybrid systems
This work proposes a procedure to c...
Alessandro D'Innocenzo, Alessandro Abate, Maria D. Di Benedetto
IJRR
2011
Motion planning under uncertainty for robotic tasks with long time horizons
Abstract Partially observable Markov decision processes (POMDPs) are a principled mathematical framework for planning under uncertainty, a crucial capability for reliable operation...
Hanna Kurniawati, Yanzhu Du, David Hsu, Wee Sun Le...