Rather than requiring painful, manual, static, per-connection optimization of TCP buffer sizes simply to achieve acceptable performance for distributed applications [8, 10], many researchers have proposed techniques that perform this tuning automatically [4, 7, 9, 11, 12, 14]. This paper first discusses the relative merits of the various approaches in theory, and then provides substantial experimental data on two competing implementations: the buffer autotuning already present in Linux 2.4.x and "Dynamic Right-Sizing." The paper reveals heretofore unknown aspects of the problem and of current solutions, provides insight into which approach is appropriate under which circumstances, and points toward ways to further improve performance.
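As a concrete illustration of what such automation replaces, manual static tuning amounts to setting SO_SNDBUF and SO_RCVBUF to an estimate of the connection's bandwidth-delay product before connecting. The sketch below is illustrative only and is not drawn from the experiments in this paper; the 100 Mb/s bandwidth and 80 ms round-trip-time figures are assumptions chosen for the example.

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Assumed path: 100 Mb/s with an 80 ms RTT, giving a
     * bandwidth-delay product of roughly 1 MB. */
    int bdp_bytes = (100 * 1000 * 1000 / 8) / 1000 * 80;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* The buffers must be sized before connect(): the TCP window-scale
     * option is negotiated only on the SYN, so enlarging the buffer
     * later cannot widen the advertised window. */
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bdp_bytes, sizeof bdp_bytes) < 0)
        perror("setsockopt(SO_SNDBUF)");
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bdp_bytes, sizeof bdp_bytes) < 0)
        perror("setsockopt(SO_RCVBUF)");

    /* Linux reports roughly double the requested value (to account for
     * bookkeeping overhead) and clamps it to net.core.rmem_max. */
    int actual;
    socklen_t len = sizeof actual;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
        printf("effective receive buffer: %d bytes\n", actual);

    close(fd);
    return 0;
}

The static estimate is precisely the pain point: it must be chosen per connection, becomes wrong whenever the path changes, and wastes kernel memory when oversized, which is what motivates the automatic techniques compared here.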