With large classes and high demands on the time of teaching academics (as well as the need to keep marking budgets under control), evaluating the functional correctness of programming assignments can be challenging. Entirely automating the evaluation process may seem desirable, but it would deny students formative feedback from more experienced programmers, which in turn reduces their opportunity to correct errors in their practice. Instead, this paper discusses marking processes in which much of the “heavy lifting”, the repetitive work, is automated while still allowing for human feedback. We discuss the impact of automated marking on assessment design, its impact on students, and where the hard work is hidden. The literature describes many projects for automating various parts of the process, with varying interfaces and levels of integration with external systems. In the author’s opinion, though, these are not strictly required, and we describe a simpler set of requirements...