A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, ho...
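As a rough illustration of this block-wise representation, the sketch below fits each fixed-length block of a 1-D signal to a small cosine basis by least squares. The block length, basis size, and test signal are arbitrary choices for the example, not taken from the paper.

```python
import numpy as np

def block_basis_coefficients(x, block_len=32, n_basis=8):
    """Represent each short block of a 1-D signal by least-squares
    coefficients on a fixed cosine basis (block-wise basis-function coding;
    block_len and n_basis are illustrative choices)."""
    t = np.arange(block_len)
    # Cosine basis, one column per basis function
    B = np.stack([np.cos(np.pi * k * (t + 0.5) / block_len)
                  for k in range(n_basis)], axis=1)
    n_blocks = len(x) // block_len
    coeffs = np.empty((n_blocks, n_basis))
    for i in range(n_blocks):
        block = x[i * block_len:(i + 1) * block_len]
        coeffs[i], *_ = np.linalg.lstsq(B, block, rcond=None)
    return coeffs

# Example: a noisy sinusoid compressed to 8 coefficients per 32-sample block
signal = np.sin(0.2 * np.arange(256)) + 0.05 * np.random.randn(256)
print(block_basis_coefficients(signal).shape)  # (8, 8)
```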
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
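The hybrid discrete-continuous formulation is beyond a short example, but the core idea, approximating the value function by a linear combination of basis functions and optimizing its weights by linear programming, can be sketched for a tiny fully discrete MDP. All numbers below (transitions, rewards, features, state-relevance weights) are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# Tiny 3-state, 2-action MDP: P[s, a, s'] transition probabilities, R[s, a] rewards
P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.9, 0.0]],
              [[0.0, 0.7, 0.3], [0.5, 0.5, 0.0]],
              [[0.0, 0.0, 1.0], [0.2, 0.0, 0.8]]])
R = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 0.5]])
Phi = np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 1.0]])  # basis features per state
rho = np.full(3, 1.0 / 3.0)                           # state-relevance weights

# ALP: minimise rho^T Phi w  s.t.  Phi[s] w >= R[s,a] + gamma * P[s,a] Phi w
# for all (s, a); rewritten as (gamma * P[s,a] @ Phi - Phi[s]) w <= -R[s,a]
A_ub, b_ub = [], []
for s in range(3):
    for a in range(2):
        A_ub.append(gamma * P[s, a] @ Phi - Phi[s])
        b_ub.append(-R[s, a])

res = linprog(c=rho @ Phi, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * Phi.shape[1])  # weights may be negative
w = res.x
print("basis weights:", w, " approximate values:", Phi @ w)
```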
In this paper we present a new surface reconstruction technique for piecewise smooth surfaces from point clouds, such as scans of architectural sites or man-made artifacts. The te...
It has been shown that adapting a dictionary of basis functions to the statistics of natural images so as to maximize sparsity in the coefficients results in a set of dictionary ...
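The abstract concerns learning the dictionary itself; the small companion sketch below shows only the sparse inference step: given a fixed (here random) dictionary, it recovers sparse coefficients for a patch by iterative soft-thresholding (ISTA). The patch size, dictionary size, and regularization weight are arbitrary.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Infer sparse coefficients a minimising 0.5*||x - D a||^2 + lam*||a||_1
    by iterative soft-thresholding (ISTA). The dictionary D is held fixed;
    the dictionary-learning step itself is omitted."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))          # overcomplete dictionary for 8x8 patches
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
patch = rng.standard_normal(64)             # stand-in for an image patch
a = ista(D, patch)
print("nonzero coefficients:", np.count_nonzero(a), "of", a.size)
```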
In reinforcement learning, it is a common practice to map the state(-action) space to a different one using basis functions. This transformation aims to represent the input data i...
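A minimal example of such a mapping, assuming Gaussian radial basis functions placed on a hand-picked grid of centres; the resulting fixed-length feature vector could then feed any linear reinforcement-learning method.

```python
import numpy as np

def rbf_features(state, centers, width=0.5):
    """Map a continuous state to Gaussian radial-basis-function features.
    A linear method (e.g. LSTD, linear Q-learning) can then operate on this
    feature vector instead of the raw state."""
    d2 = np.sum((centers - state) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Illustrative 2-D state space covered by a 4x4 grid of RBF centres
grid = np.linspace(0.0, 1.0, 4)
centers = np.array([[cx, cy] for cx in grid for cy in grid])
phi = rbf_features(np.array([0.3, 0.7]), centers)
print(phi.shape)   # (16,) features for one state
```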
Using radial basis function networks for function approximation tasks suffers from a lack of knowledge about an adequate network size. In this work, a measuring techni...
We present a view-based method for steering a robot in a network of positions; this includes navigation along a prerecorded path, but also allows for arbitrary movement of the robo...
Holger Friedrich, David Dederscheck, Eduard Rosert...
This paper summarizes research on a new emerging framework for learning to plan using the Markov decision process model (MDP). In this paradigm, two approaches to learning to plan...
Sridhar Mahadevan, Sarah Osentoski, Jeffrey Johns,...
A new spectral approach to value function approximation has recently been proposed to automatically construct basis functions from samples. Global basis functions called proto-val...
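A compact sketch of this spectral construction, under simplifying assumptions: build an undirected state graph from samples (here a 10-state chain stands in for sampled transitions), form the combinatorial graph Laplacian, and take its smoothest eigenvectors as global basis functions.

```python
import numpy as np

def proto_value_functions(adjacency, k=4):
    """Return the k smoothest eigenvectors of the combinatorial graph
    Laplacian L = D - W, used as global basis functions over the sampled
    state space."""
    W = np.asarray(adjacency, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)      # eigenvalues in ascending order
    return eigvecs[:, :k]                     # columns = basis functions

# Illustrative state graph: a 10-state chain, each state linked to its neighbours
n = 10
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
Phi = proto_value_functions(W, k=3)
print(Phi.shape)   # (10, 3): three basis functions over 10 states
```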