Two schools of thought have emerged in the recent debate on Internet router buffer sizing. One school argues that the presence of a large number of flows desynchronizes the traffic, so that only a small buffer is required. The other school argues that reducing the buffer size destabilizes the network and thus degrades throughput. In this work, we use a theoretical analysis based on mean-field theory to demonstrate that the missing link between these two arguments is the fairness of packet dropping at the buffer. The mean-field theory provides a simple yet quantitative tool for analyzing the dynamics between the TCP flows and the queue length. Our analysis shows that, for the widely deployed drop-tail queue management scheme, there is a trade-off between desynchronization among the flows and fairness in the packet-dropping process.
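To make the flow/queue dynamics concrete, the sketch below is a toy discrete-time AIMD simulation of many TCP flows sharing a drop-tail buffer; it is not the paper's mean-field model, and all parameter names and values (N, C, B, the per-RTT loss heuristic) are illustrative assumptions. It illustrates the coupling the abstract refers to: when the buffer overflows, which flows lose packets determines whether they back off together (synchronization) or independently.

```python
import random

# Hypothetical parameters (not from the paper): N flows, link capacity C
# (packets per RTT), and buffer size B (packets).
N, C, B = 50, 1000, 100
RTT_STEPS = 2000

windows = [1.0] * N  # per-flow AIMD congestion windows (packets)
queue = 0.0          # drop-tail queue occupancy (packets)

for t in range(RTT_STEPS):
    arriving = sum(windows)
    queue = max(0.0, queue + arriving - C)  # fluid queue update over one RTT
    if queue > B:
        # Drop-tail: overflow packets are simply discarded. Here the chance
        # a flow is hit scales with its share of arrivals, a crude stand-in
        # for the (possibly unfair) drop pattern at a real drop-tail buffer.
        overflow = queue - B
        queue = B
        for i in range(N):
            p_loss = min(1.0, overflow * windows[i] / arriving)
            if random.random() < p_loss:
                windows[i] = max(1.0, windows[i] / 2.0)  # multiplicative decrease
            else:
                windows[i] += 1.0                        # additive increase
    else:
        for i in range(N):
            windows[i] += 1.0  # no loss: every flow grows its window

# Recording sum(windows) and queue per step exposes the effect: correlated
# (unfair) drops make the aggregate window sawtooth in lockstep, while
# independent drops smooth the aggregate and let a small buffer suffice.
```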