This paper describes a general framework for converting online game-playing algorithms into constrained convex optimization algorithms. The framework allows us to translate the well-established regret bounds of online algorithms into convergence rates in the offline setting. The resulting algorithms are simple to implement and analyze, and in some scenarios attain a better rate of convergence than previously known.
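The conversion the abstract describes is commonly known as online-to-batch: an online player with average regret o(1) yields, by averaging its iterates, an approximate minimizer of a fixed convex objective. The following is a minimal illustrative sketch of this idea (not the paper's specific framework), using projected online gradient descent with step sizes η_t = 1/√t on a hypothetical quadratic objective over the unit ball; the function `online_to_batch`, the objective, and all parameters are assumptions chosen for illustration.

```python
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the feasible set {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_to_batch(grad, x0, steps, radius=1.0):
    """Run projected online gradient descent against a fixed convex loss
    and return the average iterate. If the online player has regret R(T),
    the averaged point has optimality gap at most R(T)/T (online-to-batch)."""
    x = project_to_ball(np.asarray(x0, dtype=float), radius)
    avg = np.zeros_like(x)
    for t in range(1, steps + 1):
        avg += x
        # Step size eta_t ~ 1/sqrt(t) gives O(sqrt(T)) regret for bounded,
        # Lipschitz convex losses, hence an O(1/sqrt(T)) offline rate.
        x = project_to_ball(x - grad(x) / np.sqrt(t), radius)
    return avg / steps

# Hypothetical offline problem: minimize f(x) = ||x - c||^2 over the unit ball.
c = np.array([0.6, -0.3])                 # optimum lies inside the feasible set
grad = lambda x: 2.0 * (x - c)
x_bar = online_to_batch(grad, x0=[1.0, 1.0], steps=2000)
gap = np.sum((x_bar - c) ** 2)            # equals f(x_bar) - f(x*) since f(x*) = 0
```

The averaged iterate `x_bar` approaches the constrained minimizer `c`, with the optimality gap `gap` shrinking as the regret-to-horizon ratio, which is the mechanism by which online regret bounds become offline convergence guarantees.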