Reinforcement Learning methods for controlling stochastic processes typically assume a small and discrete action space. While continuous action spaces are quite common in real-wor...
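A common workaround for the mismatch between discrete-action RL methods and continuous action spaces is to discretize the action interval. The sketch below is a hypothetical toy illustration (not from the paper): a uniform discretization of a 1-D action interval and a nearest-action lookup.

```python
import numpy as np

def discretize_actions(low, high, n):
    """Map a continuous action interval [low, high] to n evenly spaced actions."""
    return np.linspace(low, high, n)

def nearest_action_index(actions, a):
    """Index of the discrete action closest to the continuous action a."""
    return int(np.argmin(np.abs(actions - a)))

# Hypothetical example: 5 discrete actions covering [-1, 1].
actions = discretize_actions(-1.0, 1.0, 5)
idx = nearest_action_index(actions, 0.3)   # snaps 0.3 to the nearest grid action
```

The coarseness of the grid is the usual trade-off: a finer discretization narrows the gap to the continuous optimum but enlarges the action space the learner must explore.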
Hybrid discrete-continuous models, such as Jump Markov Linear Systems, are convenient tools for representing many real-world systems; in the case of fault detection, discrete jumps...
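As a concrete illustration of a Jump Markov Linear System, the sketch below simulates a scalar system whose linear dynamics switch when a discrete fault mode is entered. All numbers (transition matrix, dynamics, noise levels) are hypothetical and chosen only to make the fault mode absorbing, as in the fault-detection setting the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete mode transition matrix: mode 0 = nominal, mode 1 = fault.
# The fault mode is absorbing (hypothetical numbers).
P = np.array([[0.95, 0.05],
              [0.00, 1.00]])

# Per-mode scalar linear dynamics x' = A[m] * x + w,  w ~ N(0, q[m]).
A = [0.9, 0.5]
q = [0.01, 0.25]

def simulate_jmls(T, x0=1.0, m0=0):
    """Simulate T steps of the jump Markov linear system."""
    xs, ms = [x0], [m0]
    for _ in range(T):
        m = rng.choice(2, p=P[ms[-1]])                    # discrete jump
        x = A[m] * xs[-1] + rng.normal(0.0, np.sqrt(q[m]))  # mode-dependent dynamics
        ms.append(int(m))
        xs.append(x)
    return np.array(xs), np.array(ms)

xs, ms = simulate_jmls(100)
```

A fault detector would observe only `xs` and infer the hidden mode sequence `ms`, e.g. by filtering over the hybrid state.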
Lars Blackmore, Askar Bektassov, Masahiro Ono, Bri...
Markov decision processes (MDPs) with discrete and continuous state and action components can be solved efficiently by hybrid approximate linear programming (HALP). The main idea ...
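The LP at the core of such methods is easiest to see on a purely discrete MDP: minimize a weighted sum of state values subject to the Bellman inequalities, which recovers the optimal value function; HALP then restricts V to a linear combination of basis functions over the hybrid state space. A minimal sketch on a hypothetical 2-state, 2-action MDP, using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

gamma = 0.9
# Hypothetical MDP: P[a][s, s'] transition probabilities, R[s, a] rewards.
P = [np.array([[0.8, 0.2], [0.1, 0.9]]),
     np.array([[0.5, 0.5], [0.6, 0.4]])]
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
nS, nA = 2, 2

# One constraint per (s, a):  V(s) >= R(s,a) + gamma * sum_s' P(s'|s,a) V(s'),
# written as  -(V(s) - gamma * P V) <= -R  for linprog's A_ub x <= b_ub form.
A_ub, b_ub = [], []
for a in range(nA):
    for s in range(nS):
        A_ub.append(-np.eye(nS)[s] + gamma * P[a][s])
        b_ub.append(-R[s, a])

# Minimize a positively weighted sum of values (uniform weights here).
res = linprog(c=np.ones(nS), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * nS)
V = res.x
```

With positive state weights the LP optimum coincides with the optimal value function of the MDP; the approximate (and hybrid) variants trade this exactness for a tractable number of variables.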
Given a continuous function f : X → ℝ on a topological space X, its level set f⁻¹(a) changes continuously as the real value a changes. Consequently, the connected components...
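The way components appear and merge as the level a sweeps through the range of f can be seen even in one dimension. The sketch below (a hypothetical sampled function, not from the paper) counts connected components of the sublevel set {x : f(x) ≤ a} on a 1-D grid:

```python
import numpy as np

def sublevel_components(f_vals, a):
    """Count connected components of {x : f(x) <= a} sampled on a 1-D grid."""
    mask = f_vals <= a
    # A component starts wherever the mask switches on after being off.
    starts = np.logical_and(mask, ~np.r_[False, mask[:-1]])
    return int(starts.sum())

x = np.linspace(0, 4 * np.pi, 2001)
f = np.cos(x)

n_low = sublevel_components(f, -0.5)   # two separate wells, around x = pi and 3*pi
n_high = sublevel_components(f, 1.0)   # the wells have merged into one component
```

Tracking exactly these births and mergers of components as a varies is what structures such as the Reeb graph record.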
Thinning of a binary object is an iterative, layer-by-layer erosion that extracts an approximation to its skeleton. To preserve topology, different thinning techn...
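One classical topology-preserving thinning scheme is the Zhang-Suen algorithm, which alternates two sub-iterations of border-pixel deletion tests. The sketch below is a straightforward (unoptimized) implementation; it assumes a 0/1 NumPy image whose border rows and columns are background.

```python
import numpy as np

def zhang_suen(img):
    """Zhang-Suen thinning: peel border pixels layer by layer while the
    deletion conditions guarantee connectivity is preserved.
    img: 0/1 numpy array with a background (zero) border."""
    img = img.copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for i in range(1, img.shape[0] - 1):
                for j in range(1, img.shape[1] - 1):
                    if img[i, j] != 1:
                        continue
                    # Neighbors P2..P9, clockwise starting from north.
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1], img[i+1, j+1],
                         img[i+1, j], img[i+1, j-1], img[i, j-1], img[i-1, j-1]]
                    b = sum(p)                         # number of object neighbors
                    # a = number of 0 -> 1 transitions around the circle.
                    a = sum((p[k] == 0) and (p[(k + 1) % 8] == 1) for k in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((i, j))
            for i, j in to_delete:
                img[i, j] = 0
            changed = changed or bool(to_delete)
    return img

# Demo on a filled 5x10 rectangle inside a zero border.
img = np.zeros((7, 12), dtype=int)
img[1:6, 1:11] = 1
skel = zhang_suen(img)
```

The condition a == 1 is what keeps the operation topology-preserving: a pixel is only removed if doing so cannot split its neighborhood into separate pieces or erase an endpoint.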