Sequential Minimal Optimization (SMO) is currently the most popular algorithm for solving the large quadratic programs arising in Support Vector Machine (SVM) training. For many variants of this iterative algorithm, proofs of convergence to the optimum exist. Nevertheless, finding such proofs for elaborate SMO-type algorithms is challenging in general. We provide a basic tool for such convergence proofs in the context of cache-friendly working set selection. Finally, this result is applied to considerably simplify the convergence proof of the highly efficient Hybrid Maximum Gain algorithm.