TCP's ability to share a bottleneck fairly and efficiently decreases as the number of competing flows increases. This effect starts to appear when there are more flows than packets in the delay-bandwidth product. In the limit of large numbers of flows, TCP forces a packet loss rate approaching 50%, causing delays that users are likely to notice. TCP's minimum congestion window of one packet is the source of these problems: it causes a few flows to send too fast while the rest wait in retransmission timeout. The particular loss rate that results is a function of TCP's abrupt transition from exponential backoff to sending with a window of one or more packets, and of the high rate at which TCP increases small congestion windows. Analysis of packet traces suggests that these aspects of TCP's algorithms contribute substantially to the total loss rate observed on the Internet. One way to work around the problem is to make sure routers have not just one round-trip time of buffering...
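
To make the "more flows than packets in the delay-bandwidth product" threshold concrete, the following minimal sketch computes where that regime begins under assumed, illustrative link parameters (a 10 Mbit/s bottleneck, 100 ms round-trip time, 1500-byte packets); these figures are not drawn from the traces discussed here. It prints the delay-bandwidth product in packets and each flow's fair-share window for several flow counts, showing where the share falls below TCP's one-packet window floor.

```python
# Rough sketch: at what flow count does the per-flow fair share drop below
# TCP's minimum congestion window of one packet?  The link parameters are
# illustrative assumptions, not measurements from any trace.

BANDWIDTH_BPS = 10e6      # bottleneck bandwidth: 10 Mbit/s (assumed)
RTT_S = 0.1               # round-trip time: 100 ms (assumed)
PACKET_BITS = 1500 * 8    # packet size: 1500 bytes (assumed)

# Delay-bandwidth product expressed in packets.
dbp_packets = BANDWIDTH_BPS * RTT_S / PACKET_BITS
print(f"delay-bandwidth product ~= {dbp_packets:.0f} packets")

# With N flows sharing the bottleneck fairly, each flow's share of the pipe
# is dbp_packets / N packets per round trip.  Once N exceeds dbp_packets,
# the fair share falls below one packet -- but TCP's congestion window
# cannot go below one packet, so some flows are instead forced into
# retransmission timeout and the loss rate climbs.
for n_flows in (10, 50, 100, 500):
    per_flow_window = dbp_packets / n_flows
    regime = "fine" if per_flow_window >= 1 else "window floor hit"
    print(f"{n_flows:4d} flows -> fair share {per_flow_window:5.2f} packets ({regime})")
```

With these assumed parameters the delay-bandwidth product is roughly 83 packets, so the degradation described above would begin once the bottleneck carries more than about 83 competing flows.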