Experimental computer systems research typically ignores the end-user, modeling the user, if at all, in overly simple ways. We argue that this (1) results in inadequate performance evaluation of such systems, and (2) overlooks opportunities to improve them. We summarize our experiences with (a) directly evaluating user satisfaction and (b) incorporating user feedback in different areas of client/server computing, and we use those experiences to motivate principles for the domain. Specifically, we report on user studies measuring user satisfaction with resource borrowing and with different clock frequencies in desktop computing, on the development and evaluation of user interfaces that integrate user feedback into scheduling and clock frequency decisions in that context, and on results in predicting user action and system response in a remote display system. We also present initial results on extending our work to user control of the scheduling and mapping of virtual machines in a virtualization-based distributed computing ...
Peter A. Dinda, Gokhan Memik, Robert P. Dick, Bin