IPPS 2007 (IEEE)

Packet Loss Burstiness: Measurements and Implications for Distributed Applications

Many modern massively distributed systems deploy thousands of nodes that cooperate on a computation task, and network congestion is common in these systems. Most applications rely on congestion control protocols such as TCP to protect the system from congestion collapse, and most TCP congestion control algorithms use packet loss as the signal for detecting congestion. In this paper, we study the packet loss process at sub-round-trip-time (sub-RTT) timescales and its impact on loss-based congestion control algorithms. Our study suggests that packet loss at sub-RTT timescales is highly bursty. This burstiness has two effects. First, sub-RTT burstiness in the packet loss process leads to complicated interactions between different loss-based algorithms. Second, it makes the latency of data transfers under TCP hard to predict. Our results suggest that the design of a distributed system must seriously consider the nature of the packet loss process and carefully s...
Added 03 Jun 2010
Updated 03 Jun 2010
Type Conference
Year 2007
Where IPPS
Authors David X. Wei, Pei Cao, Steven H. Low