We propose a dynamic spectrum access scheme where secondary users recommend “good” channels to each other and access accordingly. We formulate the problem as an average rewa...
Sensors equipped with energy harvesting and cooperative communication capabilities are a viable solution to the power limitations of Wireless Sensor Networks (WSNs) assoc...
In ergodic MDPs we consider stationary distributions of policies that coincide in all but n states, in which one of two possible actions is chosen. We give conditions and formulas...
We consider a general adversarial stochastic optimization model. Our model involves the design of a system that an adversary may subsequently attempt to destroy or degrade. We int...
Matthew D. Bailey, Steven M. Shechter, Andrew J. S...
Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automa...
Branislav Kveton, Milos Hauskrecht, Carlos Guestri...
Probabilistic planning problems are typically modeled as a Markov Decision Process (MDP). MDPs, while an otherwise expressive model, allow only for sequential, non-durative action...
This paper models and analyzes serial production lines with specialists at each station and a single, cross-trained floating worker who can work at any station. We formulate Marko...
Linn I. Sennott, Mark P. Van Oyen, Seyed M. R. Ira...
We study the convergence of Markov Decision Processes composed of a large number of objects to optimization problems on ordinary differential equations (ODEs). We show that the optimal...
In the aftermath of a large-scale disaster, agents’ decisions derive from self-interested (e.g. survival), common-good (e.g. victims’ rescue) and teamwork (e.g. fire...
A Markov Decision Process (MDP) is a general model for solving planning problems under uncertainty. It has been extended to multiobjective MDPs to address multicriteria or multiagen...
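The MDP model that recurs throughout the abstracts above can be made concrete with a minimal value-iteration sketch. The two-state MDP below (transition probabilities, rewards, and discount factor) is entirely invented for illustration and does not come from any of the listed papers; it only shows the standard Bellman backup that underlies these formulations.

```python
# Minimal value iteration on a hypothetical two-state, two-action MDP.
# P[a][s][t] is the probability of moving from state s to state t under
# action a; R[a][s] is the immediate reward for taking action a in s.
# All numbers are illustrative, not taken from the papers above.
P = [
    [[0.9, 0.1], [0.4, 0.6]],  # action 0
    [[0.2, 0.8], [0.7, 0.3]],  # action 1
]
R = [
    [1.0, 0.0],  # reward of action 0 in states 0 and 1
    [0.0, 2.0],  # reward of action 1 in states 0 and 1
]
gamma = 0.9  # discount factor

def backup(V, s, a):
    """One-step Bellman lookahead for state s and action a."""
    return R[a][s] + gamma * sum(P[a][s][t] * V[t] for t in range(2))

# Iterate the Bellman optimality operator until (effectively) convergence.
V = [0.0, 0.0]
for _ in range(500):
    V = [max(backup(V, s, a) for a in range(2)) for s in range(2)]

# Extract a greedy stationary policy from the converged values.
policy = [max(range(2), key=lambda a: backup(V, s, a)) for s in range(2)]
print("V =", V, "policy =", policy)
```

The same fixed-point computation is the starting point for the extensions the abstracts describe: averaging the reward instead of discounting it, vectorizing `R` for multiobjective criteria, or taking mean-field limits as the number of objects grows.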