Traffic Shaping

Traffic shaping refers to manipulating network traffic in order to achieve particular performance characteristics. Typically this involves bandwidth management techniques that delay less critical traffic, or drop packets from a certain flow, in order to provide higher quality service for more important traffic. A classic example involves granting higher priority (more bandwidth) to latency-sensitive applications, such as VoIP, video streaming, and real-time gaming, where getting the required packets to their destination as quickly as possible is critical to how the application performs. If you think in terms of applications performing “services”, then traffic shaping can be said to improve the quality of a particular service.

Thus, QoS (quality of service) is closely tied to, and directly impacted by, proper traffic shaping. But quality of service often refers to higher-level, system-wide policies and implementations, while traffic shaping is simply one technique for delivering a specified QoS. While the terms are sometimes used interchangeably, QoS has many layers to it, from cataloging and prioritizing all traffic traversing a network, to toggling type-of-service (TOS) forwarding bits in packet headers. For more information, see our recent blog, “Egress and Ingress QoS”.
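As a concrete illustration of the TOS/DSCP marking mentioned above, here is a minimal Python sketch that sets the TOS byte on a UDP socket so that upstream routers can prioritize its packets. The value 0xB8 corresponds to DSCP “Expedited Forwarding”, the class commonly used for VoIP; this is a generic Linux socket-option example, not a description of any particular router’s implementation.

```python
import socket

# Mark outgoing packets with a DSCP/TOS value so that routers along the
# path can prioritize them. 0xB8 is DSCP "Expedited Forwarding" (46 << 2).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0xB8)

# Verify the option took effect on this socket.
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos))
sock.close()
```

Any packets subsequently sent on this socket carry the marking, which classifiers and shapers further along the path can act on.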

Traffic shaping is generally achieved by identifying which applications are traversing the network, and then enforcing the desired traffic profile by capping the rate of lower-priority flows, delaying and dropping some of their packets via a buffering queue. This is referred to as “application-based traffic shaping”. However, when the traffic is encrypted, the underlying application may be impossible to determine, so another approach, “route-based traffic shaping”, is used. In route-based traffic shaping, traffic is categorized according to some combination of source and destination IP addresses and ports, and the protocol being used. Modern Broadband Bonding routers may be able to apply traffic shaping to specific traffic within an encrypted tunnel if the TOS bits can be set prior to encryption. In either case, the actual traffic shaping is then performed using some variation of either the “leaky bucket” or the “token bucket” algorithm.
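Route-based classification can be sketched in a few lines of Python. The rule table and class names below are purely illustrative (not taken from any product); the point is that when payloads are encrypted, the classifier matches only on addressing information such as protocol and port:

```python
# Illustrative rule table: (protocol, destination port) -> traffic class.
RULES = [
    ("udp", 5060, "voip"),         # SIP signaling
    ("tcp", 443,  "web"),
    ("tcp", 22,   "interactive"),
]

def classify(protocol, dst_port):
    """Return the traffic class for a flow, or 'default' if no rule matches."""
    for proto, port, traffic_class in RULES:
        if protocol == proto and dst_port == port:
            return traffic_class
    return "default"

print(classify("udp", 5060))   # voip
print(classify("tcp", 8080))   # default
```

A real classifier would match on the full 5-tuple (both addresses, both ports, and protocol), but the lookup structure is the same.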

Leaky Bucket and Token Bucket Algorithms

Both algorithms use FIFO (first-in first-out) buffering queues to control the traffic flow, both in terms of burstiness and average rate.

For the leaky bucket, packets that need to be shaped are placed into a buffering queue, or bucket, before they are transmitted out of the network. The bucket “leaks” packets at a constant rate, determined by the traffic shaping policy, which is essentially the allowed bandwidth for the given traffic type. If the host tries to send packets above the allowed bandwidth, they are dumped into the leaky bucket and then “leak out” at the desired, fixed rate. Similarly, if the traffic is bursty, attempting to send many packets (or very large packets) in excess of the traffic profile, the leaky bucket averages out this burstiness and instead “drips” the packets at the desired rate.

The leaky bucket is a very simple and effective method of shaping traffic, but it does have a significant shortcoming – if data is being poured into the bucket faster than the bucket is leaking, then at some point the bucket (buffer) will fill up, and additional packets cannot be added, leaving them to be discarded. To address this shortcoming, the token bucket algorithm made a few changes.
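The leaky bucket behavior described above can be sketched in a few lines of Python. This is a simplified, packet-counting model (real implementations work in bytes and wall-clock time), but it shows both the constant drain rate and the drop-on-overflow shortcoming:

```python
from collections import deque

class LeakyBucket:
    """Sketch of the leaky bucket: packets queue up and drain at a
    fixed rate; arrivals that find the bucket full are dropped."""

    def __init__(self, capacity, leak_rate):
        self.queue = deque()          # the "bucket" (FIFO buffer)
        self.capacity = capacity      # max packets the bucket can hold
        self.leak_rate = leak_rate    # packets transmitted per tick

    def arrive(self, packet):
        """Return True if the packet was queued, False if it was dropped."""
        if len(self.queue) >= self.capacity:
            return False              # bucket full: discard
        self.queue.append(packet)
        return True

    def tick(self):
        """Leak up to leak_rate packets this tick, regardless of burstiness."""
        sent = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent

# A burst of 6 packets into a 4-packet bucket draining 2 per tick:
bucket = LeakyBucket(capacity=4, leak_rate=2)
dropped = [p for p in range(6) if not bucket.arrive(p)]
print(dropped)        # [4, 5] -- the overflow is discarded
print(bucket.tick())  # [0, 1] -- constant output rate
```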

The token bucket accumulates tokens at a fixed rate, with each token representing a given number of bytes of data. When the system is ready to send a packet that needs to be traffic shaped, it removes the requisite number of tokens from the bucket and sends the packet on its way. If there are insufficient tokens available, the packet may be held until enough tokens accumulate, it may be sent along with a flag marking it as non-conformant to the specified traffic profile (so that it can be delayed or dropped further downstream), or it may simply be dropped outright. The token bucket therefore allows for some burstiness: as tokens accumulate, the short-term bandwidth effectively increases in proportion to the number of tokens saved up.
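The token bucket can be sketched just as simply. Again this is a simplified tick-based model rather than a production implementation, but it shows the key difference from the leaky bucket: unused capacity is banked as tokens, permitting short bursts above the long-term rate:

```python
class TokenBucket:
    """Sketch of the token bucket: tokens accrue at a fixed rate up to a
    cap; a packet spends tokens equal to its size, so saved-up tokens
    permit short bursts above the long-term rate."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens (bytes) added per tick
        self.capacity = capacity      # bucket size: bounds the burst
        self.tokens = capacity        # start full

    def tick(self):
        # Tokens beyond capacity are simply not added; no packet is lost.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, packet_bytes):
        """Spend tokens and send if enough are available; otherwise the
        caller may hold, mark, or drop the packet."""
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate=100, capacity=300)
print(bucket.try_send(300))  # True: a full burst is allowed immediately
print(bucket.try_send(100))  # False: tokens exhausted, must wait
bucket.tick()
print(bucket.try_send(100))  # True: one tick refills 100 tokens
```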

So, to summarize these two techniques:

Leaky Bucket

  • Always outputs at a constant rate
  • Smooths burstiness
  • If bucket fills up, may result in dropped packets

Token Bucket

  • Output rate can vary below some predetermined max
  • Allows for some burstiness
  • If the bucket fills up, tokens are not added, and no packets are discarded as long as bursts are short

Note that in both cases, excessive ingestion of packets over a long period will cause packets to be dropped from the buffer without being transmitted over the wire. This, in turn, regulates the rate of the flow if the flow has application-level flow control or uses a transport protocol that implements rate control, such as TCP.

As long as these buffers behave reasonably, meaning they are able to function as designed and are not loaded to maximum capacity for extended periods (buffer bloat), and a sensible QoS policy has been implemented, the network should also behave fairly well. However, if you or your network suffers from buffer bloat, then please read this recent blog at once, Bufferbloat – What is it and why you (or your vendor) should care.

A Quick Look Under the Hood (Linux Traffic Control)

Implementing sophisticated traffic shaping techniques ultimately relies on the Linux kernel and IP stack, the technology underlying the vast majority of internet routers. Linux provides a rich assortment of tools and options in order to facilitate many kinds of traffic control. While the details are beyond the scope of this blog, let’s take a quick look at how Linux performs its traffic control. Much of this discussion will be familiar to network administrators and network engineers.

According to the Linux Documentation Project, “Traffic control is the name given to the sets of queuing systems and mechanisms by which packets are received and transmitted on a router.”

Note that a queue is simply a finite buffer, but Linux integrates multiple queues and sub-queues with mechanisms to prioritize, rearrange, delay and drop packets across them. Linux traffic control thus provides a suite of commands and parameters designed to implement a robust traffic control system.

Components of Linux Traffic Control

The Linux Documentation Project (LDP) defines the following 6 components as part of Linux traffic control:

  • Shaping
    • Shapers delay packets to meet a desired rate
  • Scheduling
    • Schedulers arrange and/or rearrange packets for output
  • Classifying
    • Classifiers sort or separate traffic into queues
  • Policing
    • Policers measure and limit traffic in a particular queue
  • Marking
    • Marking is a mechanism by which the packet is altered
  • Dropping
    • Dropping discards an entire packet, flow or classification

The LDP also summarizes common traffic control goals and solutions:

  • Limit total bandwidth to a known rate; TBF, or HTB with child class(es).
  • Limit the bandwidth of a particular user, service or client; HTB classes.
  • Maximize TCP throughput on an asymmetric link; prioritize transmission of ACK packets.
  • Reserve bandwidth for a particular application or user; HTB with child classes and classifying.
  • Prefer latency sensitive traffic; PRIO inside an HTB class.
  • Manage oversubscribed bandwidth; HTB with borrowing.
  • Allow equitable distribution of unreserved bandwidth; HTB with borrowing.
  • Ensure that a particular type of traffic is dropped; policer attached to a filter with a drop action.

Note that two of the most common techniques for Linux traffic control involve using the token bucket algorithm discussed earlier – TBF is the Token Bucket Filter, and HTB is the Hierarchical Token Bucket, which is essentially our standard token bucket on steroids – it implements class-based systems, filters, and a complex borrowing model to perform a wide variety of granular and sophisticated traffic control techniques.
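To make TBF and HTB concrete, the sketch below builds (but does not execute) the corresponding `tc` command lines. The device name and rates are placeholders chosen for illustration; to apply them you would run each command as root, or feed the strings to `subprocess.run`. The syntax follows the `tc(8)` family of man pages:

```python
# Build tc command strings for a TBF cap and a simple HTB hierarchy.
# "eth0" and all rates below are placeholder values.
DEV = "eth0"

# TBF: cap the whole interface with a single token bucket filter.
tbf = f"tc qdisc add dev {DEV} root tbf rate 1mbit burst 32kbit latency 400ms"

# HTB: a class hierarchy in which children are guaranteed their own
# rate and may borrow unused bandwidth from the parent up to "ceil".
htb = [
    f"tc qdisc add dev {DEV} root handle 1: htb default 20",
    f"tc class add dev {DEV} parent 1: classid 1:1 htb rate 1mbit",
    f"tc class add dev {DEV} parent 1:1 classid 1:10 htb rate 600kbit ceil 1mbit",
    f"tc class add dev {DEV} parent 1:1 classid 1:20 htb rate 400kbit ceil 1mbit",
]

print(tbf)
for cmd in htb:
    print(cmd)
```

Here class 1:10 is guaranteed 600 Kbps and class 1:20 is guaranteed 400 Kbps, but either may borrow up to the full 1 Mbps when the other is idle, which is the borrowing model referred to above.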

For an in-depth discussion of Linux traffic control, please visit Introduction to Linux Traffic Control by the Linux Documentation Project.

Traffic Shaping on SD-WAN Devices

Performing QoS by using Linux traffic control commands is tremendously powerful and flexible, but very tedious, error-prone, and time consuming. Fortunately, implementing QoS on a modern SD-WAN router is much, much easier and provides similar capabilities via a user-friendly web-based interface. Let’s take a quick look at how you would configure QoS and traffic shaping on the Mushroom Networks Truffle 4000 – one of the industry leading SD-WAN enterprise routers.

In order to use the QoS feature, a user must first create a WAN Shaper rule (“Quality of Service Shaper”). A rate that is 85% of the rate provided by the ISP is a good value to use for the shaper rule, both for uplink (egress) and downlink (ingress), as this allows for minor fluctuations in ISP performance that may be outside the user’s control.

In the example below, the ISP uplink rate is assumed to be 705 Kbps, so the egress shaper for WAN1 is set to 85% of the available bandwidth, or 600 Kbps:

Note that we also specify the rate reserved for unclassified traffic as 10% of the available bandwidth, and set the buffer that stores unclassified packets to 5 packets.

After creating the shaper rule for both ingress and egress, the application traffic bandwidth reservation and prioritization can be created using “Quality of Service Reservations”. The following example illustrates how to reserve 200 Kbps uplink for IPSec traffic by specifying the ESP (Encapsulating Security Payload) protocol – note that this bandwidth reservation applies to ESP traffic arriving from any source IP address and port, and destined for any destination IP address and port:

For prioritizing real time traffic, the “Priority” field can be changed from “Normal Priority” to “High Priority”. 

Mushroom Networks devices also support application-aware layer-7 traffic filtering for QoS reservations. Using this capability, one can reserve bandwidth and also assign a high priority for SIP traffic. The configuration example below assigns a 300 Kbps guaranteed rate and high priority to VoIP traffic:

Mushroom Networks devices also have automated QoS capabilities built into their overlay tunnels (such as the App Armor tunnel), with abilities such as automatically identifying traffic and applying advanced QoS algorithms per flow class. However, the details of those advanced capabilities will be covered in a future blog post, so stay tuned.


Traffic shaping is critical for networks to ensure a desired quality of service for given traffic types. Common uses of traffic shaping include:

  • Business critical traffic may be given priority over less critical traffic
  • Traffic that requires low latency, such as VoIP, video conferencing, real-time gaming may be given priority over traffic that is less time sensitive, such as bulk file transfers or backups
  • ISPs may shape traffic based on customer priority or usage; throttling users’ bandwidth is a common application of traffic shaping among ISPs

Any network edge device that regulates traffic flows into and out of your network should have sophisticated QoS and traffic shaping capabilities that are easy to understand and implement.

Rob Stone, Mushroom Networks, Inc. 

Mushroom Networks is the provider of Broadband Bonding appliances that put your networks on auto-pilot. Application flows are intelligently routed around network problems such as latency, jitter and packet loss. Network problems are solved even before you notice them.



© 2004 – 2024 Mushroom Networks Inc. All rights reserved.
