In this paper we present the design of a modern course in cluster computing and large-scale data processing. The defining differences between this course and previously published designs are its focus on processing very large data sets and its use of Hadoop, an open-source Java-based implementation of MapReduce and the Google File System, as the platform for programming exercises. Hadoop proved to be a key element in successfully implementing both structured lab activities and independent design projects. Through this course, offered at the University of Washington in 2007, we imparted new skills to our students, improving their ability to design systems capable of solving web-scale problems.

Categories and Subject Descriptors
K.3.2 [Computer and Information Science Education]: Computer science education

General Terms
Design, Experimentation

Keywords
Education, Hadoop, MapReduce, Clusters, Distributed computing