Safely updating software at remote sites requires a careful balance between enabling new functionality and avoiding adverse effects on existing functionality. A useful first step in this process is to evaluate the performance of a new version of a component on the current workload before enabling its functionality. This step lets engineers assess the component's performance over more (and more realistic) data points than in-house regression testing alone can provide. In this paper, we propose to evaluate the performance of a new version of a component by (1) deploying it to remote sites, (2) running it in a controlled environment on the actual workloads generated at each site, and (3) reporting the results back to the development engineers. The new version can be run either online, alongside the current system, or offline, using capture-replay techniques. By running at the remote site and reporting only concise results, issues of data security, protection, and confidentiality are mitigated.
Jonathan E. Cook, Alessandro Orso
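To make the online mode concrete, the following is a minimal sketch of what such a monitored deployment could look like: the new component version runs alongside the current one on the site's live inputs, its outputs are compared but never served, and only concise aggregate statistics (never raw workload data) are reported back. All names here (ShadowRunner, ShadowReport, the report fields) are illustrative assumptions, not the paper's actual implementation.

```python
import time
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ShadowReport:
    """Concise summary for the development engineers; contains no raw data."""
    calls: int = 0
    mismatches: int = 0
    new_version_errors: int = 0
    old_latency_total: float = 0.0
    new_latency_total: float = 0.0

    def summary(self) -> dict:
        avg = lambda total: total / self.calls if self.calls else 0.0
        return {
            "calls": self.calls,
            "mismatches": self.mismatches,
            "new_version_errors": self.new_version_errors,
            "avg_old_latency_s": avg(self.old_latency_total),
            "avg_new_latency_s": avg(self.new_latency_total),
        }


class ShadowRunner:
    """Routes each call to the current component, mirrors it to the new
    version, and accumulates aggregate statistics at the remote site."""

    def __init__(self, current: Callable, candidate: Callable):
        self.current = current
        self.candidate = candidate
        self.report = ShadowReport()

    def __call__(self, *args: Any, **kwargs: Any) -> Any:
        self.report.calls += 1

        # The current version always produces the result actually used.
        t0 = time.perf_counter()
        result = self.current(*args, **kwargs)
        self.report.old_latency_total += time.perf_counter() - t0

        # The new version runs in a controlled way: its output is compared
        # but never returned, so a faulty update cannot affect the system.
        try:
            t1 = time.perf_counter()
            shadow = self.candidate(*args, **kwargs)
            self.report.new_latency_total += time.perf_counter() - t1
            if shadow != result:
                self.report.mismatches += 1
        except Exception:
            self.report.new_version_errors += 1

        return result


# Usage: wrap the component, serve the site's workload, ship the summary.
if __name__ == "__main__":
    runner = ShadowRunner(current=lambda x: x * 2, candidate=lambda x: x + x)
    for value in range(1000):          # stand-in for the real site workload
        runner(value)
    print(runner.report.summary())     # concise results reported back
```

The offline mode described in the abstract would differ only in when the candidate runs: inputs captured during normal operation would be replayed against the new version later, with the same comparison and reporting logic.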