SLB and Layer 3 switching refers to a class of high-performance switches optimized for the campus LAN or intranet, providing both wirespeed Ethernet routing and switching services as well as SLB.
You need SLB to handle the ever-increasing volume of visitors and data on your enterprise Web servers. These servers must provide secure and reliable Web and application hosting services to your Internet or intranet clients.
The simplest and most commonly deployed method for addressing increasing Web site traffic and reliability requirements involves multiple Web servers and SLB switches. Each server has identical Web site content and usually runs mirroring software to maintain the duplicate content across all the servers in the server farm.
The SLB switch distributes the requests, or hits, from clients evenly among all the servers in the server farm, achieving a balanced load for each server. In addition, all physical servers appear as one virtual server, so an entire server farm requires only a single IP address and a single uniform resource locator (URL). By distributing client requests across a server farm, SLB optimizes responsiveness and system capacity while ensuring scalability. SLB also dramatically improves site reliability by allowing individual servers to fail or be taken offline without impacting content flow to the users.
An SLB switch also performs the following three major functions:
Compared to other routers, SLB and Layer 3 switches process packets faster by using application-specific integrated circuit (ASIC) hardware instead of microprocessor-based engines. Layer 3 switch routers also improve network performance with two software functions: route processing and intelligent network services.
Figure 1-1 shows how you can use the Catalyst 4840G SLB switch in an enterprise network with server load balancing (SLB).

SLB intelligently load balances TCP/IP traffic across multiple servers. (See Figure 1-1.) It appears as one "virtual" server to the requesting clients. All traffic is directed toward a virtual IP address (virtual server) via Domain Name System (DNS). Those requests are distributed over a series of real IP addresses on servers (real servers). A virtual IP address is an address that is in DNS and most likely has a domain name. A real IP address is physically located on a real server behind SLB. SLB provides the following benefits:
For a detailed explanation of server load balancing, see "Server Load Balancing."
The Catalyst 4840G SLB switch uses the following interfaces:
This section lists the server load balancing (SLB) features.
Modes of Load Balancing
Server Load Balancing Algorithms
Note You can configure a real server with an optional weight relative to other real servers in the server farm.
SLB Features
Server/Application Availability Detection
Protocols Supported by Load Balancing
This section lists Catalyst 4840G SLB switch software features.
Layer 1 Features
Layer 2 Bridging Features
Virtual LAN (VLAN) Features
Layer 3 Routing, Switching, and Forwarding
Supported Routing Protocols
Fast EtherChannel (FEC) Features
Gigabit EtherChannel (GEC) Features
Additional Protocols and Features
This section lists the network management features.
Configuration-Related Feature
Performance Monitoring
Security Features
Management
SNMP MIB Support
This section briefly describes key features supported in the SLB switching software and includes the following sections:
Server load balancing addresses increasing Web site traffic and reliability requirements by using multiple real Web servers. Each SLB real server has identical Web site content and usually runs mirroring software to maintain the duplicate content across all the servers in the server farm. The SLB switch tracks network sessions and server load conditions in real time, directing each session to the most appropriate server. All physical servers appear as one virtual server, requiring only a single IP address and a single URL for an entire server farm.
The SLB switch can be configured to operate in one of two redirection modes: directed mode or dispatched mode. In directed mode, the virtual server can be assigned an IP address that is not known to any of the real servers. The SLB software translates packets exchanged between a client and a real server, mapping the virtual server IP address to a real server address through network address translation (NAT). In dispatched mode, the virtual server address is known to the real servers, and SLB redirects packets to the real servers at the MAC layer.
For SLB examples and configuration information, see "Server Load Balancing."
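As a rough illustration only, a directed-mode configuration built from IOS SLB commands might look like the following sketch; the server farm name WEBFARM, the virtual server name WEBVIP, and all IP addresses are placeholders rather than values from this guide:

    ip slb serverfarm WEBFARM
      nat server
      real 10.1.1.10
        inservice
      real 10.1.1.11
        inservice
    !
    ip slb vserver WEBVIP
      virtual 10.10.10.100 tcp www
      serverfarm WEBFARM
      inservice

Omitting the nat server command would correspond to dispatched mode, in which the real servers themselves must be configured with the virtual address.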
You can use either of two algorithms for server load balancing to determine which server receives a new connection request.
Note You can configure a real server with a weight relative to other real servers in the server farm, using the weight real server configuration command.
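For example, a server farm using a round-robin predictor with per-server weights might be sketched as follows; the predictor keyword, addresses, and weight values are illustrative assumptions:

    ip slb serverfarm WEBFARM
      predictor roundrobin
      real 10.1.1.10
        weight 16
        inservice
      real 10.1.1.11
        weight 8
        inservice

With these weights, the first real server would be offered roughly twice as many new connections as the second.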
SLB software uses the TCP handshake mechanism to identify the beginning and end of individual flows and can therefore make the appropriate decisions as the conversation progresses. This section describes this process.
The synchronize sequence number (SYN) flag is set in the first packet sent by the client to the IP address of the virtual server; in reality, this is the address the SLB switch presents on behalf of all the real servers, so the packet is delivered to the SLB software. The SLB software maintains tables of all active connections, and because this is a new connection (identified by source and destination IP addresses as well as port numbers), the software creates a connection entry for this flow. The next step is to decide which real server will service this connection request, based on the configured load-balancing algorithm. After this decision is made, the SLB software forwards the packet to the appropriate server by changing the destination IP or MAC address to that of the real server and sending the packet out to the server. The real server receives the packet, fetches the requested Web page, and sends the response back to the SLB software, which then forwards the packet back toward the original source workstation.
If the user session requests multiple pages, this process continues until the user requests no more data and asks for the TCP connection to be closed. At that point, the SLB switch flushes the connection entry for this particular session from its internal table and selects the server to receive the next new connection based on the load-balancing algorithm.
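On a working configuration, you can typically inspect the connection table and per-server state with show commands such as the following; exact output varies by software release, so this is only a sketch:

    show ip slb conns
    show ip slb reals
    show ip slb serverfarms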
Devices running HSRP detect a failure by sending and receiving multicast User Datagram Protocol (UDP) hello packets. When HSRP detects that the designated active router has failed, the selected backup router assumes control of the HSRP group MAC and IP addresses. (You can also select a new standby router at that time.)
SLB switching software supports HSRP over 10/100 Ethernet, Gigabit Ethernet, FEC, GEC, and BVI (Bridge-Group Virtual Interface) to ensure that traffic from the SLB servers toward the clients goes through the active device using the HSRP IP address as a default gateway.
To configure HSRP, see the "SLB Hot Standby Router Protocol" section.
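A minimal HSRP sketch for the server-side interface follows; the interface name, group number, priority, and addresses are assumptions rather than values from this guide:

    interface GigabitEthernet1
      ip address 10.10.10.2 255.255.255.0
      standby 10 ip 10.10.10.1
      standby 10 priority 110
      standby 10 preempt

The real servers would then use the HSRP virtual address, 10.10.10.1 in this sketch, as their default gateway.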
This enhancement provides SLB with a one-to-one stateful or idle backup scheme. Only one instance of SLB handles client or server traffic at a given time, and there is at most one backup platform for each active SLB switch. The state transfer between the primary and backup switches flows over the Content Aware Services Architecture (CASA) protocol, and the roles of primary and backup switches are controlled at a lower level by HSRP groups and priorities.
To configure stateful backup, see the "SLB Stateful Backup" section.
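Stateful backup is typically enabled per virtual server with a CASA replication command; the following is only a sketch with placeholder listening and remote addresses and an arbitrary port, and the exact syntax should be verified against your software release:

    ip slb vserver WEBVIP
      virtual 10.10.10.100 tcp www
      serverfarm WEBFARM
      replicate casa 10.10.11.1 10.10.11.2 4231
      inservice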
To configure the EtherChannel, see "Configuring EtherChannel."
Gigabit EtherChannel uses a source-destination IP address load-balancing scheme for the two Gigabit Ethernet ports in the channel group. Each channel group has its own IP address. When a packet is queued to exit the port-channel interface, the last two bits of the IP source and destination addresses determine which interface in the channel the packet takes.
As with all EtherChannel technologies, the traffic load is shared across all links within the bundled ports; convergence occurs within one second of a Gigabit EtherChannel failure.
To configure the EtherChannel, see "Configuring EtherChannel."
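A Gigabit EtherChannel sketch, assuming two Gigabit Ethernet interfaces and a placeholder subnet, might look like this (on some software releases the channel-group command accepts additional keywords):

    interface Port-channel1
      ip address 10.20.20.1 255.255.255.0
    !
    interface GigabitEthernet1
      no ip address
      channel-group 1
    !
    interface GigabitEthernet2
      no ip address
      channel-group 1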
Quality of service (QoS) includes technologies such as Resource Reservation Protocol (RSVP) and weighted round-robin (WRR), which help control bandwidth, network delay, jitter, and packet loss in networks that become congested. In the SLB switch, QoS-based forwarding sorts traffic into a small number of classes and marks the packets accordingly. The QoS identifier provides specific treatment to traffic in different classes, so that different quality of service is provided to each class.
Frame and packet scheduling and discarding policies are determined by the class to which the frames and packets belong. For example, the overall service given to frames and packets in the premium class will be better than that given to the standard class; the premium class is expected to experience lower loss rate or delay.
The SLB switch has QoS-based forwarding for IP traffic only. The implementation of QoS forwarding is based on local administrative policy and IP precedence. The mapping between the IP precedence field and the QoS field determines the delay priority of the packet.
SLB and Layer 3 switching software supports up to 255 VLANs per system. Because routing will take place, each VLAN is assumed to terminate at the SLB switch. Since this might not necessarily be the case, integrated routing and bridging (IRB) is also supported. To configure IRB, see the "About Integrated Routing and Bridging" section.
To configure VLANs, you define a subinterface on the interface, define a bridge group, and map a VLAN to the subinterface.
To configure VLANs, see the "About Virtual LANs" section.
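For example, mapping VLAN 10 to a subinterface and a bridge group might be sketched as follows; the interface name, VLAN number, and bridge-group number are placeholders, and ISL encapsulation is shown although IEEE 802.1Q is configured similarly:

    bridge 10 protocol ieee
    !
    interface FastEthernet1.10
      encapsulation isl 10
      bridge-group 10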
Cisco IOS software redundancy provides key network features, such as Hot Standby Router Protocol (HSRP) for both routing and SLB; routing protocol convergence with Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and Enhanced Interior Gateway Routing Protocol (EIGRP); Fast EtherChannel; and load sharing across equal-cost Layer 3 paths and spanning trees (for Layer 2-based networks).
SLB switching software supports the first four remote monitoring (RMON) groups.
RMON is a network management protocol for gathering network information and monitoring traffic data within remote LAN segments from a central location. RMON allows you to monitor all nodes and their interaction on a LAN segment. RMON, used in conjunction with the SNMP agent in the switch, allows you to view both the traffic that flows through the switch and segment traffic not necessarily destined for the switch. SLB switching software combines RMON alarms and events with existing MIBs so you can choose where monitoring will occur.
To configure RMON, refer to the Cisco IOS Configuration Fundamentals Configuration Guide.
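A minimal sketch of an RMON event and alarm pair follows; the monitored object, thresholds, and owner strings are illustrative assumptions:

    rmon event 1 log trap public description "High inbound traffic" owner config
    rmon alarm 10 ifInOctets.1 60 delta rising-threshold 1500000 1 falling-threshold 500000 1 owner config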
With IRB, local or unroutable traffic is bridged among the bridged interfaces in the same bridge group, while routable traffic is routed to other routed interfaces or bridge groups.
Layer 3 switching software supports IRB for IP only.
Here are some examples of when to use IRB:
To configure IRB, see the "Configuring IRB" section.
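A minimal IRB sketch, assuming bridge group 1, two bridged interfaces, and placeholder addressing, might look like this:

    bridge irb
    bridge 1 protocol ieee
    bridge 1 route ip
    !
    interface FastEthernet1
      no ip address
      bridge-group 1
    !
    interface FastEthernet2
      no ip address
      bridge-group 1
    !
    interface BVI1
      ip address 10.30.30.1 255.255.255.0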
Spanning-Tree Protocol is a standardized technique for maintaining a network of multiple bridges or switches. When the topology changes, Spanning-Tree Protocol transparently reconfigures bridges and switches to avoid the creation of loops by placing ports in a forwarding or blocking state. Each VLAN is treated as a separate bridge and a separate instance of Spanning-Tree Protocol is applied to each.
Spanning-Tree Protocol parameters are set for each VLAN. For each spanning-tree instance, you configure a set of global options with a set of port parameters. The port parameter list contains only ports that are members of a given VLAN. A maximum of 64 spanning-tree instances are supported, one for each VLAN.
To configure Spanning-Tree Protocol, see the Cisco IOS Bridging and IBM Networking Configuration Guide.
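For example, spanning-tree parameters are adjusted per bridge group (one bridge group per VLAN); the bridge-group number and timer values below are placeholders:

    bridge 10 protocol ieee
    bridge 10 priority 100
    bridge 10 hello-time 2
    bridge 10 forward-time 15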
Many of the Cisco IOS routing protocol features, such as route redistribution and load balancing over equal cost paths (for OSPF and EIGRP) are supported. Configuration of these routing protocols is identical to the configuration methods currently employed on all Cisco routers.
To configure network and routing protocols, see "Configuring Networking Protocols."
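A brief sketch of enabling OSPF and redistributing RIP routes follows; the process ID, network statements, and area are placeholders:

    router rip
      network 192.168.1.0
    !
    router ospf 1
      network 10.0.0.0 0.255.255.255 area 0
      redistribute rip subnets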
Cisco Discovery Protocol (CDP) is a device-discovery protocol that is both media and protocol independent. CDP is available on all Cisco products, including routers, switches, bridges, and access servers. Using CDP, a device can advertise its existence to other devices and receive information about other devices on the same LAN. CDP enables Cisco products to exchange information with each other regarding their MAC addresses, IP addresses, and outgoing interfaces. CDP runs over the data link layer only, thereby allowing two systems that support different network-layer protocols to learn about each other.
Each device configured for CDP sends periodic messages to a multicast address. Each device advertises at least one address at which it can receive Simple Network Management Protocol (SNMP) messages.
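CDP is enabled by default on most Cisco interfaces; the following sketch shows how the global timers can be tuned, with illustrative values:

    cdp run
    cdp timer 60
    cdp holdtime 180

You can then display discovered neighbors with the show cdp neighbors detail command.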
CEF manages route distribution and forwarding by distributing routing information from the route processor to the individual Ethernet interfaces. This technology, used within the intranet, provides scalability in large campus core networks. CEF provides Layer 3 forwarding based on a topology map of the entire network, resulting in high-speed routing table lookups and forwarding.
One of the key benefits of CEF in Layer 3 switching is its routing convergence. Since the forwarding information base (FIB) is distributed to all interfaces, whenever a route goes away or is added, the FIB updates that information and provides it to the interfaces. Thus, route processor interrupts are minimized. The interfaces receive the new topology very quickly and reconverge around a failed link based on the routing protocol being used.
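CEF is typically enabled globally; the following sketch assumes the platform supports distributed CEF:

    ip cef distributed

You can verify the forwarding information base with the show ip cef summary command.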