Thin streams and interactive applications

Fig. 1: Maximum RTT, average RTT and application delay for 180 connections captured from an Anarchy Online (Funcom) game area during a one-hour period. A line is inserted at 500 ms delay, showing where the latency becomes very problematic for the player.
Fig. 2: Measured loss rate from the trace described in Fig. 1.
A large number of network services rely on IP and reliable transport protocols. For applications that consume their bandwidth share completely, loss is handled satisfactorily, even for latency-sensitive applications. When applications send small packets intermittently, on the other hand, they can experience extreme latencies before packets are delivered to the receiving application. We first observed this after analysing a packet trace from Funcom (see Fig. 1), and later in traces from a range of other interactive applications, including Skype and VNC/RDP.

We noticed that these applications have several properties in common: 1) very high packet interarrival times, which make these flows effectively oblivious to congestion control mechanisms (they never expand the congestion window beyond the minimum); 2) very small packet sizes, a result of the applications' need to dispatch data as soon as possible; and 3) the streams keep the behaviour from 1) and 2) throughout their entire lifetime.
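The properties above suggest a simple classification test. The sketch below is purely illustrative: the thresholds (four packets in flight, 256-byte packets, 50 ms gaps) are example values chosen for this sketch, not the exact criteria used in our implementation.

```python
def is_thin_stream(packet_sizes, interarrival_ms, packets_in_flight,
                   max_in_flight=4, min_interarrival_ms=50, max_size=256):
    """Illustrative check: a flow is 'thin' when it has too few packets
    in flight to trigger fast retransmit, sends small packets, and keeps
    long gaps between them. Thresholds are example values."""
    avg_size = sum(packet_sizes) / len(packet_sizes)
    avg_gap = sum(interarrival_ms) / len(interarrival_ms)
    return (packets_in_flight < max_in_flight
            and avg_size <= max_size
            and avg_gap >= min_interarrival_ms)
```

A game stream with ~110-byte packets every ~100 ms and one packet in flight would qualify; a greedy bulk transfer with full-sized segments and a large window would not.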

The high packet interarrival times prevent lost packets from being recovered using fast retransmit, as there are not enough packets on the wire to trigger three duplicate ACKs (dupACKs). The effect is that all retransmissions happen by timeout and are subject to exponential backoff upon consecutive losses. Furthermore, these flows contribute hardly at all to reducing the loss probability on the bottleneck. If the congestion is light enough to allow the competing (greedy) flows to recover by fast retransmit, the thin streams are at an unfair disadvantage.

Fig. 3: Examples of thin streams based on packet traces from a selection of applications. Common for all examples are small packets and high interarrival times between packets. In bold are examples of greedy streams for comparison. Note that the greedy streams' throughput is limited by network conditions (and congestion control), not the application.
Fig. 4: Loss rate for constant bit-rate (CBR) traffic when competing for a 1500 kbps bottleneck with 15 greedy TCP streams using TCP Cubic. CBR packet size is 186 bytes; CBR packet interarrival time is 20 ms. When using a packet-based queue/drop scheme, the CBR traffic experiences a steady loss rate between 9% and 10%. When using a byte-based queue/drop scheme, loss happens only when the queue size approaches a multiple of the MSS (1500 bytes).
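The contrast in Fig. 4 comes down to the queue's admission rule. A simplified sketch of the two policies (real queue managers are more involved; the limits here are example values):

```python
def admit_packet_based(queue_len_pkts, limit_pkts):
    """Packet-counted queue: every arriving packet consumes one slot,
    so a 186-byte CBR packet is dropped as readily as a 1500-byte
    segment when the queue is full."""
    return queue_len_pkts < limit_pkts

def admit_byte_based(queue_len_bytes, pkt_size, limit_bytes):
    """Byte-counted queue: a small packet still fits when the queue is
    nearly full, and is dropped only when the remaining room is smaller
    than the packet itself."""
    return queue_len_bytes + pkt_size <= limit_bytes
```

With a 15000-byte limit and 14000 bytes queued, a 186-byte CBR packet is still admitted while a 1500-byte MSS segment is dropped, which is why the small-packet stream sees loss only near MSS multiples of the queue limit.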
To address the situation, we have developed backwards-compatible, sender-side-only improvements that reduce the application-layer latency even for receivers with unmodified TCP implementations: 1) fast retransmit after only one dupACK when there are fewer than four packets in flight; 2) no exponential backoff when there are fewer than four packets in flight; 3) redundant data bundling (see below).
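The first two mechanisms can be summarised as a pair of decisions that fall back to standard TCP behaviour whenever the stream is not thin. This is a sketch of the logic only; the actual changes live in the Linux kernel TCP stack:

```python
THIN_STREAM_THRESHOLD = 4  # packets in flight, as described above

def dupacks_needed(packets_in_flight):
    """Trigger fast retransmit after one dupACK for thin streams,
    after the standard three otherwise."""
    return 1 if packets_in_flight < THIN_STREAM_THRESHOLD else 3

def next_rto(current_rto_ms, packets_in_flight):
    """Keep the RTO constant (linear timeouts) for thin streams;
    apply standard exponential backoff otherwise."""
    if packets_in_flight < THIN_STREAM_THRESHOLD:
        return current_rto_ms
    return current_rto_ms * 2
```

A greedy stream with a full window is handled exactly as before, so the modifications only change behaviour where standard recovery cannot work anyway.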

We implemented the mechanisms as modifications to the Linux kernel TCP stack and tested them against FreeBSD, Linux, Windows Vista, Windows 7 and OS X. The first two mechanisms have been accepted into the standard Linux kernel and are included since version 2.6.34.

The redundant data bundling (RDB) mechanism bundles unacknowledged data with new data whenever a packet of new data is sent, as long as the resulting packet size stays below the network MTU. This makes it possible to hide loss events by recovering the lost data with the next packet sent, at the cost of adding redundant data to every packet. This has consequences for TCP congestion control, as loss events may go undetected by the sender. Although the streams themselves are very thin, a large number of RDB streams sharing a bottleneck can therefore affect competing streams.
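The bundling step can be sketched as follows. This is a simplified byte-stream model assuming a 1500-byte MTU; the real mechanism operates on TCP segments and sequence numbers inside the kernel:

```python
MTU = 1500  # assumed network MTU for this sketch

def build_rdb_packet(unacked, new_data, mtu=MTU):
    """Bundle unacknowledged bytes ahead of the new data when the
    combined payload fits within the MTU; otherwise send only the
    new data. The receiver discards bytes it has already seen."""
    if len(unacked) + len(new_data) <= mtu:
        return unacked + new_data
    return new_data
```

If an earlier packet is lost, the next packet already carries the lost bytes, so the receiver gets the data without waiting for a retransmission, and the sender never observes a loss event for that segment.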

Our goal is to develop backwards-compatible, sender-side solutions that can be deployed and used in today's Internet, rather than experimental protocols that will never be deployed.

Our published papers on this subject can be found here:
Last modified: Sun Jul 1 10:20:14 CEST 2012