Sciweavers

361 search results - page 6 / 73
» Approximate counting by dynamic programming
CORR
2010
Springer
Dynamic Policy Programming
In this paper, we consider the problem of planning and learning in infinite-horizon discounted-reward Markov decision problems. We propose a novel iterative direct policy-searc...
Mohammad Gheshlaghi Azar, Hilbert J. Kappen
NIPS
2008
Biasing Approximate Dynamic Programming with a Lower Discount Factor
Most algorithms for solving Markov decision processes rely on a discount factor, which ensures their convergence. It is generally assumed that using an artificially low discount f...
Marek Petrik, Bruno Scherrer
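The role of the discount factor discussed in this entry can be illustrated with ordinary tabular value iteration: the Bellman backup is a gamma-contraction, so a lower gamma converges faster but solves for a different fixed point. The sketch below is illustrative only; the toy MDP (transition tensor P, rewards R) and the two discount factors are assumptions, not taken from the paper.

```python
# Minimal sketch (not from the paper): tabular value iteration on a toy MDP,
# run with two discount factors to show that a lower gamma converges in fewer
# iterations (tighter contraction) while changing the resulting value function.
import numpy as np

# Hypothetical 3-state, 2-action MDP: P[a, s, s'] transition probabilities, R[s, a] rewards.
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],   # action 0
    [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.3, 0.0, 0.7]],   # action 1
])
R = np.array([[0.0, 0.1], [0.0, 0.2], [1.0, 0.5]])

def value_iteration(gamma, tol=1e-8):
    V = np.zeros(3)
    for it in range(10_000):
        # Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, it
        V = V_new
    return V, it

for gamma in (0.99, 0.9):
    V, iters = value_iteration(gamma)
    print(f"gamma={gamma}: converged in {iters} iterations, V={np.round(V, 3)}")
```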
JCP
2007
Noisy K Best-Paths for Approximate Dynamic Programming with Application to Portfolio Optimization
We describe a general method to transform a non-Markovian sequential decision problem into a supervised learning problem using a K-best-paths algorithm. We consider an a...
Nicolas Chapados, Yoshua Bengio
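For readers unfamiliar with the K-best-paths building block referenced in this entry, the following is a generic sketch of enumerating the K lowest-cost paths in a weighted digraph via best-first search, allowing each node to be settled at most K times. It is not the authors' noisy variant, and the graph, node names, and helper k_best_paths are illustrative assumptions.

```python
# Minimal sketch (generic, not the paper's noisy variant): K lowest-cost paths
# from source to target, using a best-first search in which each node may be
# popped from the priority queue at most K times.
import heapq
from collections import defaultdict

def k_best_paths(graph, source, target, K):
    """graph: dict node -> list of (neighbor, nonnegative edge cost)."""
    pops = defaultdict(int)              # how many times each node has been settled
    heap = [(0.0, source, (source,))]    # (path cost, last node, path so far)
    results = []
    while heap and len(results) < K:
        cost, node, path = heapq.heappop(heap)
        if pops[node] >= K:
            continue
        pops[node] += 1
        if node == target:
            results.append((cost, path))
            continue
        for nxt, w in graph.get(node, ()):
            if pops[nxt] < K:
                heapq.heappush(heap, (cost + w, nxt, path + (nxt,)))
    return results

# Toy usage on a hypothetical 4-node graph.
g = {"A": [("B", 1.0), ("C", 2.5)], "B": [("C", 1.0), ("D", 4.0)], "C": [("D", 1.0)]}
for cost, path in k_best_paths(g, "A", "D", K=3):
    print(cost, " -> ".join(path))
```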
ICML
1995
IEEE
Stable Function Approximation in Dynamic Programming
The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experime...
Geoffrey J. Gordon
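A compact way to see the stability question raised in this entry is fitted value iteration with an approximator that is a non-expansion, so the combined backup remains a contraction. The sketch below uses piecewise-linear interpolation on a 1-D toy problem; the dynamics, reward, grid, and discount factor are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch (assumed toy setup): fitted value iteration on a 1-D
# continuous-state problem where the function approximator is a non-expansive
# interpolation over fixed support points, keeping the iteration stable.
import numpy as np

gamma = 0.9
grid = np.linspace(0.0, 1.0, 21)           # support points of the approximator
actions = np.array([-0.1, 0.1])            # move left or right

def reward(s):                             # higher reward near s = 1
    return s

def step(s, a):                            # deterministic toy dynamics, clipped to [0, 1]
    return np.clip(s + a, 0.0, 1.0)

V = np.zeros_like(grid)
for _ in range(200):
    # Bellman backup at the support points, with V(s') read from the
    # interpolated (non-expansive) representation of the current estimate.
    targets = np.max(
        [reward(grid) + gamma * np.interp(step(grid, a), grid, V) for a in actions],
        axis=0,
    )
    if np.max(np.abs(targets - V)) < 1e-10:
        break
    V = targets                            # the fit is exact at support points here

print(np.round(V, 3))
```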
HICSS
2009
IEEE
Approximate Dynamic Programming in Knowledge Discovery for Rapid Response
One knowledge discovery problem in the rapid response setting is the cost of learning which patterns are indicative of a threat. This typically involves a detailed follow-through,...
Peter Frazier, Warren B. Powell, Savas Dayanik, Pa...