Understanding Link Load Balancing vs Server Load Balancing

Link load balancers and server load balancers both manage traffic, but in very different ways. "Load balancing" is a term used frequently in network and data-center design, and at times it means different things to different people. Link load balancing refers to managing traffic that originates within a local network and is destined to go out through one of a set of WAN connections. The link load balancer sits at the LAN-WAN boundary. A concrete example is a load-balancing firewall, which redirects requests coming from the LAN segment that are bound for the public Internet onto one of the available WAN connections, thereby distributing the network load across the WAN links. Legacy load-balancing algorithms usually rely on simple methods such as round-robin, weighted round-robin, or similar schemes.
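To make the idea concrete, here is a minimal sketch of weighted round-robin link selection. The link names and weights are illustrative assumptions, not taken from any particular product; a real device would track flows and link state rather than a simple rotation.

```python
from itertools import cycle

# Hypothetical WAN links with illustrative weights (assumptions for this
# sketch): a link with weight 3 receives three flows for every one flow
# sent over a weight-1 link.
LINKS = {"dsl": 1, "cable": 2, "fiber": 3}

def weighted_round_robin(links):
    """Yield link names in proportion to their weights, forever."""
    # Expand each link name according to its weight, then cycle the list.
    schedule = [name for name, weight in links.items() for _ in range(weight)]
    return cycle(schedule)

picker = weighted_round_robin(LINKS)
first_six = [next(picker) for _ in range(6)]
# Each cycle of six picks sends 1 flow via dsl, 2 via cable, 3 via fiber.
```

Note that this distributes whole flows, not packets: once a flow is assigned to a link, all of its packets follow that link, which is the coarse granularity that the packet-level approach below improves on.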

However, as the industry moved away from legacy load balancing toward more modern Broadband Bonding solutions, link load-balancing algorithms became more sophisticated, enabling packet-level load balancing, or true bonding. Packet-level link load balancers achieve finer-grained aggregation and therefore higher efficiency. Packet-level link load balancing also provides session continuity, protecting live sessions against WAN link failures and other problems. With both legacy load balancers and newer Broadband Bonding routers, the device acts as a traffic cop, directing requests generated within the local network and destined for the Internet.
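A rough sketch of the packet-level idea follows, under stated assumptions: link and packet names are hypothetical, and a real bonding router would also sequence and reorder packets at the far end, which this toy version omits. The key point it illustrates is that a single session's packets are spread across all healthy links, so a failed link is simply skipped and the session survives.

```python
def bond_packets(packets, links, link_up):
    """Assign each packet of one session to the next healthy WAN link.

    packets: ordered packet identifiers for a single session.
    links:   ordered list of WAN link names (illustrative).
    link_up: mapping of link name -> bool health status.
    """
    # Skip any links that are currently down; the session keeps flowing
    # over whatever links remain.
    healthy = [link for link in links if link_up[link]]
    if not healthy:
        raise RuntimeError("no WAN links available")
    # Round-robin individual packets (not whole flows) over healthy links.
    return {pkt: healthy[i % len(healthy)] for i, pkt in enumerate(packets)}
```

For example, with the "dsl" link down, the session's packets alternate over the remaining "cable" and "fiber" links instead of being dropped.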

Server load balancing, on the other hand, is something entirely different: it distributes inbound server requests arriving from the public Internet across the servers in a data center. In this use case, a data center typically has a number of servers in a server farm to service many simultaneous client requests coming from the Internet. A simple example is a web server farm serving the pages that users across the Internet request from their computers. The server load balancer, sitting in front of the server farm and connected to the WAN (possibly through a single WAN link), distributes the incoming requests across the available servers in the data center. The goal is to spread the workload between servers and avoid overwhelming any one server with the simultaneous requests arriving from the Internet.
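One common way a server load balancer avoids overwhelming any single server is a least-connections policy: each new request goes to the server currently handling the fewest requests. The sketch below assumes hypothetical server names and a simple in-memory connection count; real load balancers also factor in health checks, session persistence, and server capacity.

```python
def least_connections(servers):
    """Pick the server with the fewest in-flight requests.

    servers: mapping of server name -> current connection count
    (names are illustrative assumptions for this sketch).
    """
    return min(servers, key=servers.get)

def dispatch(requests, servers):
    """Assign each incoming request to a server and track its load."""
    assignment = {}
    for req in requests:
        target = least_connections(servers)
        servers[target] += 1  # request stays counted for this demo
        assignment[req] = target
    return assignment

# Three hypothetical web servers; web2 is already busier than the others,
# so new requests flow to web1 and web3 until the loads even out.
farm = {"web1": 0, "web2": 2, "web3": 0}
dispatch(["r1", "r2", "r3", "r4"], farm)
```

After these four requests, all three servers end up with equal load, which is exactly the balancing effect described above.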

So, both link load balancing and server load balancing distribute traffic, but their functions are fundamentally different. A link load balancer takes Internet-bound requests generated on the local LAN and distributes that load across multiple WAN links, whereas a server load balancer takes incoming server requests (possibly arriving over a single WAN link) and distributes that load across multiple servers.

Cahit Akin, CEO, Mushroom Networks, Inc. 

Mushroom Networks is a provider of Software Defined WAN and load-balancing solutions capable of Broadband Bonding, enabling self-healing WAN networks that route around network problems such as latency, jitter and packet loss.

