With the holiday season well in our rear-view mirrors, it’s time we talk about that embarrassing problem: bufferbloat. But unlike the bloated feeling many of us get during the holidays, this kind of bloating can have devastating consequences: your VoIP call can degrade or fail, your Netflix or YouTube video might stutter or freeze, or, worst of all, your state-of-the-art massively multiplayer online game might not respond quickly enough, leaving you quite dead even though you executed that power-up perfectly.
So, what exactly is bufferbloat? Wikipedia defines it as:
“Bufferbloat is a cause of high latency in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation (also known as jitter), as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like voice over IP (VoIP), online gaming, and even ordinary web surfing.”
Let’s try gaining a little more insight and intuition into this phenomenon. The classic way to better understand bufferbloat is to use cars travelling on a highway to represent internet packets traversing the internet. A non-technical discussion can be found here, and we summarize the key points below.
Cars on a crowded highway
So, individual cars represent individual internet packets, and the number of lanes on the highway represents overall bandwidth. Just as traffic jams snarl traffic and cause accidents, an equivalent process occurs when your internet connection is “stressed” or “under load”. As your internet connection starts bumping up against your maximum bandwidth allocation, packet loss increases and your connection becomes less responsive and more frustrating. Clearly, some sort of congestion control is needed.
Early developers of the internet realized that buffering packets during high-traffic periods could reduce packet loss and appeared to help overall performance. In our traffic analogy, these packet buffers become temporary overflow parking lots, with traffic cops controlling the flow of traffic in and out. As increased traffic bumps up against bandwidth limitations, the cop directs more and more cars into the parking lot, and releases the cars as overall traffic flow allows. Cars that previously might have crashed are now simply delayed by a bit. Packets that would have previously been lost are no longer lost, but simply delayed. Problem solved! Whew!
Parking lot full!
But wait! Not so fast! Over the last several decades, the cost of memory has plummeted, and ever-larger packet buffers have been designed into virtually all network devices and appliances. The negative effect these large buffers have on network performance has become more and more apparent. Let’s dig a little deeper.
Going back to our traffic analogy, as long as there are relatively few cars using our overflow parking lots, everything seems fine and traffic continues to flow, albeit with some increased travel time for a few cars. Not a big deal. But as the overflow lot fills up, problems arise. By its nature, the overflow lot acts as a FIFO (first in – first out) queue, which simply releases cars back onto the internet highway in the order they arrived. Seems fair enough. But what about that ambulance with a critical patient in the back? What about that firetruck racing to put out a fire? And most importantly, what about your Amazon delivery that was promised to you by 5 pm? What is really needed is some sort of way of prioritizing certain types of traffic and de-prioritizing others. This type of congestion control can be implemented by performing traffic shaping (QoS – quality of service) and active or smart queue management (AQM/SQM). Now we imagine a fast lane in our overflow lot that only accepts emergency vehicles and releases them back into traffic much faster than normal traffic. Maybe there’s also a slow lane, where delivery time isn’t important as long as the traffic gets to where it’s going.
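The fast-lane idea can be sketched as a toy priority queue. The traffic classes, priority values, and packets below are all hypothetical illustrations, not any real router’s scheduler:

```python
from collections import deque
import heapq

# Hypothetical packets as (traffic_class, arrival_time) pairs.
arrivals = [("web", 0), ("video", 1), ("voip", 2), ("web", 3), ("voip", 4)]

# FIFO overflow lot: packets leave strictly in arrival order.
fifo = deque(arrivals)
fifo_order = [name for name, _ in fifo]

# Priority queue "fast lane": latency-sensitive VoIP jumps ahead;
# ties break on arrival time. Lower priority value drains first.
PRIORITY = {"voip": 0, "video": 1, "web": 2}
pq = [(PRIORITY[name], t, name) for name, t in arrivals]
heapq.heapify(pq)
pq_order = [heapq.heappop(pq)[2] for _ in range(len(pq))]

print(fifo_order)  # ['web', 'video', 'voip', 'web', 'voip']
print(pq_order)    # ['voip', 'voip', 'video', 'web', 'web']
```

Same packets, same total work; only the release order changes, which is exactly what matters for latency-sensitive traffic.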
Queue management and QoS are clearly related but operate in different domains. We could think of QoS as tailoring the traffic (shaping the traffic) with respect to the highway itself, while queue management deals exclusively with the overflow lots.
What about TCP and other internet protocols?
The analogy between bufferbloat and traffic overflow lots is a good one and gives us a much better intuitive feel for the issue. However, it is still much simplified and overlooks a fundamental problem. In a more detailed technical discussion, specific internet protocols become critical to understanding network behavior in the presence of bufferbloat. In particular, the major internet transmission protocol, TCP, constantly sends messages back and forth between the sender and the receiver during data transfer. The protocol monitors packet loss and keeps tweaking its transfer rate to optimize connection speed while minimizing packet loss. In fact, TCP relies on packet loss to tell it to slow down transmission. If your internet connection is maxed out with respect to bandwidth, you would expect packet loss to increase, and TCP will respond accordingly by reducing its transmission rate. But if you now hide packet loss from the sender by using very large packet buffers everywhere, what does TCP see? No packet loss! So, TCP does what it’s supposed to do and increases its transmission rate – precisely the wrong thing to do. More and more packets are queued in these buffers, the buffers bloat, and network performance suffers as bursts and spikes in latency and bandwidth cripple performance.
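This feedback loop can be illustrated with a toy additive-increase/multiplicative-decrease (AIMD) sender feeding a fixed-capacity link. The one-packet-per-tick fluid model and all the numbers here are simplifying assumptions for illustration, not real TCP:

```python
def simulate(buffer_limit, ticks=50, capacity=10):
    """Toy AIMD sender: speed up until loss, halve on loss."""
    rate, queue, losses = 1, 0, 0
    for _ in range(ticks):
        queue += rate                      # packets arriving this tick
        queue -= min(queue, capacity)      # link drains up to capacity
        if queue > buffer_limit:           # buffer overflows -> loss signal
            queue = buffer_limit
            losses += 1
            rate = max(1, rate // 2)       # multiplicative decrease
        else:
            rate += 1                      # additive increase
    return rate, queue, losses

# Small buffer: the sender sees loss early and oscillates near link capacity,
# with the standing queue (and hence latency) staying bounded.
print(simulate(buffer_limit=20))

# Bloated buffer: no loss signal ever arrives, so the sender keeps speeding
# up while the standing queue -- and the latency it represents -- grows.
print(simulate(buffer_limit=10_000))
```

With the bloated buffer the simulation reports zero losses even though the queue has grown to hundreds of packets: the very signal TCP needs has been buffered away.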
And the problem is even worse than this, as other critical network protocols now have to wait in long packet queues before being released on their merry way. DNS, ARP, DHCP, and NTP packets can be delayed by seconds, or lost altogether, and basic internet functions such as resolving website names or IP addresses, keeping system time synchronized, and receiving IP addresses from DHCP servers become less responsive or fail outright. All a result of bufferbloat!
What to do about bufferbloat? QoS and queue management
Of course, the bufferbloat problem is very much internet-wide, and bloated buffers at some unknown location upstream of your home or office router are going to cause you some headaches no matter what you do. However, some modern routers do allow for some QoS traffic shaping and/or congestion control, which allows you to customize and carve up your available bandwidth as you see fit. This technique is especially useful if your home internet connection is shared between several (or many) family members and devices or in corporate networks where internet bandwidth is a hotly contested commodity among users and applications.
A nice introduction to this topic can be found here, which discusses QoS in terms of prioritizing traffic into four classes.
For each traffic class, you specify a bandwidth reservation (in Kbps, Mbps, or a percentage of total bandwidth) that is applied whenever your router is under stress (near full bandwidth utilization) and a QoS rule exists. As long as your home network usage is relatively light, all traffic is passed equally. But if your network suddenly gets clogged (one kid streaming Netflix, another downloading torrents, a third updating a Windows computer, and perhaps the family dog shopping on Amazon for dog biscuits), then all of a sudden the work computer in your home office grinds to a halt. If you had established a QoS bandwidth reservation of 30%, the router would have capped all other internet traffic from your home at a maximum of 70% of your total available bandwidth, leaving your work computer with a dedicated chunk.
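The arithmetic behind that reservation is straightforward. The plan speed and percentage below are hypothetical numbers for illustration:

```python
total_mbps = 100        # hypothetical plan speed
reservation_pct = 30    # reservation for the work-computer class

# Guaranteed floor for the reserved class whenever the link is under load.
reserved_mbps = total_mbps * reservation_pct / 100

# Cap applied to all other traffic combined while the rule is active.
everyone_else_cap = total_mbps - reserved_mbps

print(reserved_mbps, everyone_else_cap)  # 30.0 70.0
```

Note that most QoS implementations apply the reservation only under load; when the work computer is idle, the other devices can typically use the full link.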
So, QoS can definitely help ensure that your home (or small office) network avoids bufferbloat – locally at least – and does not contribute any additional latency.
Active or smart queue management refers to various congestion control algorithms that manufacturers have implemented in their routers and other network devices. These algorithms monitor packet buffers and intelligently manage them, allowing high-priority, low-latency packets to pass through the queue more quickly, and deliberately dropping packets from queues that are getting too long. This type of queue management is beyond the capabilities and comfort levels of most casual internet users, but modern internet equipment vendors should specify what sort of AQM/SQM their devices use.
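As an illustration of the "deliberately drop before the buffer fills" idea, here is a simplified Random Early Detection (RED)-style drop decision. The thresholds and probability are hypothetical, and this is a sketch of the general technique, not the actual algorithm in any particular router:

```python
import random

def red_drop(avg_queue_len, min_th=5, max_th=15, max_p=0.1, rng=random.random):
    """Decide whether to drop an arriving packet, RED-style."""
    if avg_queue_len < min_th:
        return False                  # queue short: never drop
    if avg_queue_len >= max_th:
        return True                   # queue too long: always drop
    # Between the thresholds, drop probability ramps up linearly, nudging
    # senders to slow down *before* the buffer is completely full.
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return rng() < p
```

The early, probabilistic drops restore the loss signal that bloated buffers would otherwise hide from TCP, keeping standing queues (and latency) short.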
For business networks, next-generation SD-WAN solutions leverage advanced QoS, traffic shaping, dynamic bandwidth reservations, and SD-WAN overlays that can learn and adapt to your network conditions and traffic patterns, automating your network for you. These highly automated SD-WAN solutions are quite cost-effective and widely available to businesses from small mom-and-pop shops to large multinationals. Since they do not require networking know-how, they are ideal for SMBs.
We’ve discussed bufferbloat and now better understand what it is and how it negatively affects our internet performance. As mentioned above, bufferbloat exists throughout the internet, and bottlenecks occur routinely, causing bandwidth fluctuations and, more to the point, increased latency and jitter. While you have no control over these upstream bottlenecks, you may want to evaluate your home or small-office router and network to see whether you’re experiencing bufferbloat locally. If it’s a major issue for you, you may want to adjust your router’s queue management and/or QoS settings and see if you can find a satisfactory solution.
The website www.bufferbloat.net is an excellent resource and one I’ve quoted from several times for this blog. Check this link for a more comprehensive discussion of how to test for bufferbloat, or here for a nice discussion of specific steps you can take to address bufferbloat, as well as some specific routers that implement some form of queue management.
Rob Stone, Mushroom Networks, Inc.
Mushroom Networks is the provider of Broadband Bonding appliances that put your networks on auto-pilot. Application flows are intelligently routed around network problems such as latency, jitter and packet loss. Network problems are solved even before you can notice.
© 2004 – 2020 Mushroom Networks Inc. All rights reserved.