A variety of algorithms and techniques can be used to intelligently load balance client access requests across server pools. The technique chosen depends on the kind of application or service being served and on the state of the servers and the network at the time of the request. The strategies outlined below are often used in combination to determine the best server to handle new requests. The current level of requests arriving at the load balancers usually determines which method is used: when load is low, one of the simple load balancing strategies will suffice, while at high load more advanced strategies are used to ensure an even distribution of requests.
What are load balancing algorithms?
Effective load balancers determine which device is best able to process an incoming data packet. Doing so requires algorithms programmed to distribute load in a specific way.
Algorithms vary widely depending on whether load is distributed at the application layer or the network layer. Algorithm selection affects the effectiveness of the load distribution mechanism and, consequently, performance and business continuity.
Here we will discuss the advantages and disadvantages of the widely used algorithms found in both application layer and network layer load balancing solutions.
1. Static Algorithm
Static algorithms are designed for systems with very little variation in load. In a static algorithm, the total traffic is divided equally among the servers. This approach requires detailed knowledge of server resources, determined at the start of the implementation, in order to perform well.
However, the decision to shift load does not depend on the current state of the system. The major drawback of static load balancing is that assignments are fixed once they are made, so work cannot be redistributed to other devices at run time.
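As an illustration, the following minimal Python sketch shows what a purely static assignment could look like: a fixed server list and a hash of the client identifier decide the target, and the current state of the servers is never consulted. The addresses and hashing choice are illustrative assumptions, not part of any particular product.

```python
# Minimal sketch of a static assignment policy (illustrative only).
# Assumes a fixed, pre-configured server list; addresses are hypothetical.
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # capacities assumed equal

def pick_server(client_id: str) -> str:
    """Map a client to a server with a fixed hash; current load is ignored."""
    digest = int(hashlib.md5(client_id.encode()).hexdigest(), 16)
    return SERVERS[digest % len(SERVERS)]

print(pick_server("client-42"))  # the same client always lands on the same server
```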
Round Robin
In this simple approach to load balancing, multiple identical servers are set up to provide the same service. All are configured to use the same Internet domain name, but each has a unique IP address. A DNS server holds the list of all the unique IP addresses associated with the domain name, and when requests for the address of that domain name are received, the addresses are returned in rotating sequential order.
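A minimal sketch of the rotation itself, assuming the set of addresses behind the domain name is known in advance (the addresses below are placeholders):

```python
# Minimal sketch of round-robin selection (illustrative only).
from itertools import cycle

ADDRESSES = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
rotation = cycle(ADDRESSES)

def next_address() -> str:
    """Return addresses in rotating sequential order, like DNS round robin."""
    return next(rotation)

for _ in range(5):
    print(next_address())  # .10, .11, .12, .10, .11, ...
```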
2. Dynamic Algorithm
A dynamic algorithm first searches for the most lightly loaded server in the entire network and gives it preference when distributing load. This requires real-time communication with the network, which can itself add traffic to the system. Here, the current state of the system is used to manage the load.
The distinguishing feature of a dynamic algorithm is that load transfer decisions are based on the actual current system state. In such a system, processes are allowed to move from a heavily utilized machine to an underutilized machine in real time.
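A minimal sketch of this idea, assuming the per-server load figures are collected through some real-time polling mechanism (the servers and load values below are hypothetical):

```python
# Minimal sketch of a dynamic, load-aware selection policy (illustrative only).
# Load figures are assumed to come from real-time health/metrics polling.
current_load = {
    "10.0.0.1": 0.72,   # hypothetical CPU utilisation readings
    "10.0.0.2": 0.31,
    "10.0.0.3": 0.55,
}

def pick_lightest() -> str:
    """Prefer the most lightly loaded server based on the latest readings."""
    return min(current_load, key=current_load.get)

print(pick_lightest())  # -> "10.0.0.2"
```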
Least Connection Algorithm
This algorithm decides load distribution on the basis of the number of connections present on a node. The count increases when a new connection is established and decreases when a connection finishes or times out. The node with the fewest connections is selected first.
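A minimal sketch of least-connection selection, assuming the balancer keeps an in-memory count of active connections per node (node names and counts below are hypothetical):

```python
# Minimal sketch of least-connection selection (illustrative only).
connections = {"node-a": 12, "node-b": 4, "node-c": 9}

def pick_node() -> str:
    """Select the node with the fewest active connections."""
    node = min(connections, key=connections.get)
    connections[node] += 1          # a new connection is established
    return node

def finish(node: str) -> None:
    """A connection finished or timed out; decrease the count."""
    connections[node] -= 1

print(pick_node())  # -> "node-b"
```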
The following table (Table I) lists the parameters commonly used to evaluate load balancing algorithms, with a description of each:
TABLE I: LIST OF PARAMETERS
PARAMETER | DESCRIPTION
Nature | Determines the behavior of the algorithm, i.e. whether it is static or dynamic.
Overhead | The amount of overhead, such as task migration and inter-process communication, involved in running the algorithm. It should be as low as possible so that the algorithm can operate effectively.
Throughput | The number of tasks whose execution has been completed. It should be as high as possible for better algorithm performance.
Process Migration | The time taken to migrate a job from one node to another. It should be as short as possible to improve system performance.
Response Time | The time a load balancing algorithm takes to complete a task. It should be minimized.
Resource Utilization | A measure of how well resources are used. It should be optimized in order to achieve good performance.
Scalability | The ability of an algorithm to deliver an optimized result with any finite number of nodes.
Fault Tolerance | The ability of an algorithm to keep operating in the event of a failure; even a small failure can reduce an algorithm's performance.
Waiting Time | The time a task spends in the ready queue; the lower the waiting time, the better the performance.
Performance | The overall efficiency of the system.
3. Failover Mechanisms
● Priority group activation
Priority Group Activation is one of the most useful features available on the F5 BIG-IP appliance. It allows you to organize your pool members into different priority groups that are activated or deactivated automatically, based on the number of pool members online (servicing requests). In simpler terms, priority group activation lets you set up hot-standby pool members.
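The following is a simplified, hypothetical sketch of the activation logic, not F5's implementation: each member carries a priority and an online flag, and lower-priority (standby) members are brought in only when fewer than a minimum number of higher-priority members remain online.

```python
# Simplified sketch of priority-group activation logic (not F5 code; all
# names and values are hypothetical). Higher priority numbers are preferred,
# and lower groups activate only when too few higher-priority members are up.
def active_members(members, min_active):
    """members: list of (name, priority, online); min_active: activation threshold."""
    selected = []
    for prio in sorted({p for _, p, _ in members}, reverse=True):
        selected += [m for m in members if m[1] == prio and m[2]]
        if len(selected) >= min_active:
            break          # enough online members; lower groups stay on standby
    return [name for name, _, _ in selected]

pool = [("web1", 10, True), ("web2", 10, False), ("standby1", 5, True)]
print(active_members(pool, min_active=2))  # -> ['web1', 'standby1']
```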
● F5-Fallback Host
By using the Fallback Host function in BIG-IP, the BIG-IP can send an HTTP redirect to the client when all pool members go down. By pointing this redirect at a sorry server, you can notify the user that the service cannot be provided. A sorry server is a server that returns alternative content in its response, indicating that the service is unavailable due to server failure.
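A simplified sketch of this behavior, not the BIG-IP implementation, assuming a hypothetical sorry-server URL and a count of online pool members:

```python
# Simplified sketch of fallback-host behavior (illustrative only).
SORRY_SERVER = "http://sorry.example.com/maintenance.html"   # hypothetical URL

def handle_request(pool_members_online: int) -> dict:
    """Forward normally while members are up; redirect to the sorry server otherwise."""
    if pool_members_online == 0:
        return {"status": 302, "Location": SORRY_SERVER}   # HTTP redirect
    return {"status": 200, "body": "forwarded to a pool member"}

print(handle_request(pool_members_online=0))  # -> 302 redirect to the sorry page
```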
Thrilok Thallapelly is a senior network consultant who has dedicated his career to the field of networking. He completed a Bachelor's degree in Technology in Computer Science from a reputed university in the country. He has always been fascinated by the world of networking and pursued his passion by learning everything he could about routing and ...