Partially observable Markov decision processes (POMDPs) are widely used for planning under uncertainty. In many applications, the huge size of the POMDP state space makes straightf...
Joni Pajarinen, Jaakko Peltonen, Ari Hottinen, Mik...
Abstract--The difficulties encountered in sequential decision-making problems under uncertainty are often linked to the large size of the state space. Exploiting the structure of th...
Many sequence labeling tasks in NLP require solving a cascade of segmentation and tagging subtasks, such as Chinese POS tagging, named entity recognition, and so on. Traditional p...
This article comprehensively surveys the work accomplished during the past decade on an approach to analyze concurrent systems qualitatively and quantitatively, by combining functi...
Nicolas Coste, Hubert Garavel, Holger Hermanns, Fr...
Abstract— We consider the problem of apprenticeship learning when the expert’s demonstration covers only a small part of a large state space. Inverse Reinforcement Learning (IR...
Abstract. The success of industrial-scale model checkers such as Uppaal [3] or NuSMV [12] relies on the efficiency of their respective symbolic state space representations. While d...
In this paper we give sufficient conditions for a bang-bang regular extremal to be a strong local optimum for a control problem in the Mayer form; strong means that we consider the...
Andrei A. Agrachev, Gianna Stefani, PierLuigi Zezz...
An issue that is critical for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochas...
High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of logical and numerical properties of these models often requires the generatio...
Gianfranco Ciardo, Joshua Gluckman, David M. Nicol
This paper presents measures to describe the matrix impulse response sensitivity of state space multivariable systems with respect to parameter perturbations. The parameter sensitiv...