Sciweavers

Search results for "Model Checking Markov Reward Models with Impulse Rewards" (109 results, page 2 of 22)

ROMAN 2007, IEEE
Learning Reward Modalities for Human-Robot-Interaction in a Cooperative Training Task
This paper proposes a novel method of learning a user's preferred reward modalities for human-robot interaction through solving a cooperative training task. A learning algorithm ...
Anja Austermann, Seiji Yamada

DSN 2002, IEEE
Model Checking Performability Properties
Model checking has been introduced as an automated technique to verify whether functional properties, expressed in a formal logic such as computation tree logic (CTL), hold in a...
Boudewijn R. Haverkort, Lucia Cloth, Holger Herman...

QEST 2006, IEEE
Bound-Preserving Composition for Markov Reward Models
Stochastic orders can be applied to Markov reward models and used to aggregate models while introducing a bounded error. Aggregation reduces the number of states in a model, miti...
David Daly, Peter Buchholz, William H. Sanders

DSN 2008, IEEE
A recurrence-relation-based reward model for performability evaluation of embedded systems
Embedded systems for closed-loop applications often behave as discrete-time semi-Markov processes (DTSMPs). Performability measures most meaningful to iterative embedded systems, ...
Ann T. Tai, Kam S. Tso, William H. Sanders

ANSS 1996, IEEE
Computation of the Asymptotic Bias and Variance for Simulation of Markov Reward Models
The asymptotic bias and variance are important determinants of the quality of a simulation run. In particular, the asymptotic bias can be used to approximate the bias introduced b...
Aad P. A. van Moorsel, Latha A. Kant, William H. S...