Reward-based scheduling refers to the problem in which a reward is associated with the execution of a task. In our framework, each real-time task comprises a mandatory part and an optional part; a nondecreasing reward function is associated with the execution of the optional part. The Imprecise Computation and Increased-Reward-with-Increased-Service models fall within the scope of this framework. In this paper, we address the reward-based scheduling problem for periodic tasks. For linear and concave reward functions we show: (a) the existence of an optimal schedule in which the optional service time of a task is constant across all of its instances, and (b) how to compute this service time efficiently. We also prove that the RMS-h (RMS with harmonic periods), EDF, and LLF policies are optimal when used with the optimal service times we compute, and that the problem becomes NP-hard when the reward functions are convex. Further, our solution eliminates run-time overhead and makes possible the use of existing scheduling disciplines.
Hakan Aydin, Rami G. Melhem, Daniel Mossé,
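For concreteness, the concave-reward case can be read as a convex program of roughly the following form; the notation (m_i and o_i for the mandatory and optional execution times, P_i for the period, t_i for the optional service time, and R_i for the reward function of task i) is an illustrative sketch rather than the paper's own formulation:

% Hedged sketch of the service-time optimization for n periodic tasks.
% Symbol names are assumptions chosen for this illustration.
\begin{align*}
\text{maximize}\quad   & \sum_{i=1}^{n} \frac{R_i(t_i)}{P_i}         \\
\text{subject to}\quad & \sum_{i=1}^{n} \frac{m_i + t_i}{P_i} \le 1, \\
                       & 0 \le t_i \le o_i, \qquad i = 1,\dots,n.
\end{align*}

When each R_i is linear or concave, the objective is concave and the feasible region is convex, which is consistent with the abstract's claim that a single optimal service time t_i per task suffices and can be computed efficiently offline, eliminating run-time overhead.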