This chapter provides an overview of ATM traffic management in general and describes the related configurable features on the ATM switch router.
Note The information in this chapter is applicable to the Catalyst 8540 MSR, Catalyst 8510 MSR, and LightStream 1010 ATM switch. For detailed configuration information, refer to the ATM Switch Router Software Configuration Guide and the ATM Switch Router Command Reference publication.
This chapter includes the following sections:
The traffic management features of the ATM switch router provide the following capabilities:
The congestion control capabilities of the ATM switch router support the following goals:
Note The specific resource management capabilities of your ATM switch router are platform dependent. Consult the ATM Switch Router Software Configuration Guide for details.
Because ATM networks are designed to carry many different types of traffic, traffic characteristics and QoS requirements of each virtual connection must be described, and delivery of the contract must be guaranteed within the resource allocation policies defined for the network.
These requirements are carried out in three phases:
1. Define the traffic and service contract.
2. Find an acceptable path for the connections.
3. Use hardware resources to honor the terms of the contract for the life of the connection.
The first two of these steps can be considered the connection setup phase, while the third step represents the data flow phase. These three phases and their supporting mechanisms are discussed in the following sections.
Table 10-1 shows the traffic and QoS parameters used on the ATM switch router for the setup of connections in the ATM Forum service categories.
| Attribute | CBR | VBR-RT | VBR-NRT | UBR | ABR |
|---|---|---|---|---|---|
| PCR1, CDVT2 | yes | yes | yes | yes | yes |
| SCR3, MBS4 | n/a | yes | yes | n/a | n/a |
| MCR5 | n/a | n/a | n/a | optional (for UBR+) | yes |
| ppCDV6 | optional | optional | no | no | no |
| MCTD7 | optional | optional | no | no | no |
| CLR8 | optional | optional | optional | no | no |

1. Peak cell rate
2. Cell delay variation tolerance
3. Sustained cell rate
4. Maximum burst size
5. Minimum cell rate
6. Peak-to-peak cell delay variation
7. Maximum cell transfer delay
8. Cell loss ratio
When establishing the traffic and service contract, target values for QoS parameters can be used as criteria for the connection setup requirements. These values are either metrics (accumulated over multiple hops of a call) or attributes (gating criteria that are not accumulated, but are checked at each interface). Maximum cell transfer delay (MCTD) and peak-to-peak cell delay variation (ppCDV) are metrics, while cell loss ratio (CLR) is an attribute.
Following are the parameters you can configure to define the service contract:
Note The effect of the parameters you can configure with the connection traffic table depends upon the hardware model and feature card installed in your ATM switch router. Refer to the ATM Switch Router Software Configuration Guide for details.
Configured parameters in the CTT are ignored for tag switching virtual connections; see the "CTT Rows" section.
Specification of nondefault traffic for a PVC requires configuring a CTT row. Rows used for PVCs are called stable rows.
Requested traffic parameters for switched virtual connections (SVCs) are signaled in the setup and do not use the preconfigured values in the CTT. However, the CTT in an SVC setup provides a row identifier for use by the Simple Network Management Protocol (SNMP) or the user interface to read or display traffic parameters for SVCs. Thus, a CTT row index is dynamically created and stored in the connection-leg data structure for each flow of an SVC.
To make CTT management software more efficient, the CTT row-index space is split into rows allocated as a result of signaling (for SVCs) and rows allocated from the CLI and SNMP (for PVCs). Table 10-2 describes the row-index range for both.
| Allocated by | Row-Index Range |
|---|---|
| ATOMMIB Traffic Descriptor Table and CLI | 1 through 1,073,741,823 |
| Signaling for virtual path and virtual channel link creation | 1,073,741,824 through 2,147,483,647 |
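As a quick illustration of this split, the following C sketch classifies a CTT row index into the two ranges of Table 10-2. The boundary constants come from the table; the function and variable names are illustrative only and are not part of the switch software.

```c
#include <stdio.h>
#include <stdint.h>

/* Row-index boundaries from Table 10-2:
 *   1 .. 1,073,741,823               allocated via CLI and SNMP (PVCs)
 *   1,073,741,824 .. 2,147,483,647   allocated by signaling (SVCs)      */
#define CTT_PVC_MAX   1073741823u
#define CTT_SVC_MAX   2147483647u

static const char *ctt_row_owner(uint32_t row_index)
{
    if (row_index >= 1u && row_index <= CTT_PVC_MAX)
        return "CLI/SNMP (PVC) range";
    if (row_index > CTT_PVC_MAX && row_index <= CTT_SVC_MAX)
        return "signaling (SVC) range";
    return "invalid row index";
}

int main(void)
{
    printf("row 100        -> %s\n", ctt_row_owner(100u));
    printf("row 2000000000 -> %s\n", ctt_row_owner(2000000000u));
    return 0;
}
```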
The CTT contains a set of well-known, predefined ATM CTT rows, described in Table 10-3. These rows cannot be deleted.
| CTT Row Index | Service Category | PCR (CLP0+1) | SCR (CLP0+1) | CDVT | Use |
|---|---|---|---|---|---|
| 1 | UBR | 7113539 | | None | Default PVP/PVC row index |
| 2 | CBR | 424 kbps | | None | CBR tunnel well-known virtual connections |
| 3 | VBR-RT | 424 kbps | 424 kbps | 50 | Physical interface and VBR-RT tunnel well-known virtual connections |
| 4 | VBR-NRT | 424 kbps | 424 kbps | 50 | VBR-NRT tunnel well-known virtual connections |
| 5 | ABR | 424 kbps | | None | |
| 6 | UBR | 424 kbps | | None | UBR tunnel well-known virtual connections |
Configuration Overview
Configuring the CTT row requires specifying a row index with parameter values for each of the service categories:
The ATM switch router uses no default values for these objectives; rather, they are unspecified, as shown in Table 10-4, until defined. If undefined, the objective is not considered in connection setup.
| Service Category | MaxCTD (clp0+1) | ppCDV (clp0+1) | CLR (clp0) | CLR(clp0+1) |
|---|---|---|---|---|
| CBR | Undefined | Undefined | Undefined | Undefined |
| VBR-RT | Undefined | Undefined | Undefined | Undefined |
| VBR-NRT | | | Undefined | Undefined |
Configuration Overview
Configuring the default QoS objective table requires one or more of the following steps; each objective can have a defined or undefined value.
The default QoS objective table should be configured with the same values for an entire network.
If MBS or CDVT values are not explicitly specified in the CTT, the default values for those parameters on the interface are used in the contract. See the "Default CDVT and MBS" section.
Resource CAC (RCAC) uses the following information provided in the traffic contract for each direction of a requested connection:
1. Parameter values of the source traffic descriptor: PCR, SCR, MCR, MBS, and CDVT
2. The requested service category (which must be the same for both directions of a connection), the QoS parameters (CLR, CTD, CDV), or both
RCAC is based on a proprietary Equivalent Bandwidth algorithm in which equivalent bandwidths are used as real constant bandwidths in a statistical time-division multiplexing (STM) CAC environment for fast calculation. The equivalent bandwidth (sometimes called effective bandwidth) of a source is defined as the minimum capacity required to serve the traffic source while achieving a specified steady-state CLR, MCTD, and ppCDV for CBR/VBR connections and for the nonzero MCR portion of ABR/UBR+ connections.
Flexibility in the resource management framework is particularly important because it is not easy to fully anticipate the customer and service requirements of emerging applications on ATM internetworks. Controlled link sharing makes a key contribution to this flexibility. As discussed in the "Controlled Link Sharing" section, controlled link sharing configuration allows the user to place maximum limits and minimum guarantees on the interface bandwidth dedicated to service categories.
The RCAC algorithm provides a set of administratively configurable parameters (controlled link sharing) that specifies one or both of the following:
Interface parameters used by RCAC are defined as follows:
Parameters used in the algorithm are defined as follows:
Input:
    Traffic contract input: service_category, PCR, SCR, SCRMF, MBS, CDVT, MCR
    QoS input: tx_MaxCTD_obj, tx_MaxCTD_acc, tx_ppCDV_obj, tx_ppCDV_acc, CLR

Output:
    CAC_accept - true, connection is accepted
                 false, connection is rejected

Algorithm:
Based on the service category of the proposed connection:

service_category == CBR:
    if ((CLR >= R_CLR_CBR) &&
        (tx_MaxCTD_obj >= R_MaxCTD_CBR + tx_MaxCTD_acc) &&
        (tx_ppCDV_obj >= R_ppCDV_CBR + tx_ppCDV_acc) &&
        (PCR <= MAXCR) &&
        (PCR <= MAX_PCR_CBR) &&
        (CDVT <= MAX_CDVT_CBR)) {
            CBR_EQ_BW = PCR
            if (CBR_EQ_BW > ACR_CBR)
                CAC_accept = false
            else
                CAC_accept = true
    } else
        CAC_accept = false
service_category == VBR-RT:
    if ((CLR >= R_CLR_VBR) &&
        (tx_MaxCTD_obj >= R_MaxCTD_VBR + tx_MaxCTD_acc) &&
        (tx_ppCDV_obj >= R_ppCDV_VBR + tx_ppCDV_acc) &&
        (PCR <= MAXCR) &&
        (PCR <= MAX_PCR_VBR) &&
        (SCR <= MAXCR) &&
        (SCR <= MAX_SCR) &&
        (MBS <= MAX_MBS) &&
        (CDVT <= MAX_CDVT_VBR)) {
            SCRM = SCRMF * (PCR - SCR)    /* SCRMF = [0,...,1] with default = 0.01 */
            VBR_BW = SCR + SCRM
            if (VBR_BW > ACR_VBR)
                CAC_accept = false
            else
                CAC_accept = true
    } else
        CAC_accept = false
service_category == VBR-NRT:
    if ((CLR >= R_CLR_VBR) &&
        (PCR <= MAXCR) &&
        (PCR <= MAX_PCR_VBR) &&
        (SCR <= MAXCR) &&
        (SCR <= MAX_SCR) &&
        (MBS <= MAX_MBS) &&
        (CDVT <= MAX_CDVT_VBR)) {
            SCRM = SCRMF * (PCR - SCR)    /* SCRMF = [0,...,1] with default = 0.01 */
            VBR_BW = SCR + SCRM
            if (VBR_BW > ACR_VBR)
                CAC_accept = false
            else
                CAC_accept = true
    } else
        CAC_accept = false
service_category == ABR:
    if ((PCR <= MAXCR) &&
        (PCR <= MAX_PCR_ABR) &&
        (MCR <= MAXCR) &&
        (MCR <= MAX_MCR_ABR) &&
        (CDVT <= MAX_CDVT_ABR) &&
        (ABR_count + UBR_count + 1 <= MAX_BE_CONNS)) {
            ABR_EQ_BW = MCR
            if (ABR_EQ_BW > ACR_ABR)
                CAC_accept = false
            else
                CAC_accept = true
    } else
        CAC_accept = false
service_category == UBR:
    if ((PCR <= MAXCR) &&
        (PCR <= MAX_PCR_UBR) &&
        (MCR <= MAXCR) &&
        (MCR <= MAX_MCR_UBR) &&
        (CDVT <= MAX_CDVT_UBR) &&
        (ABR_count + UBR_count + 1 <= MAX_BE_CONNS)) {
            UBR_EQ_BW = MCR
            if (UBR_EQ_BW > ACR_UBR)
                CAC_accept = false
            else
                CAC_accept = true
    } else
        CAC_accept = false
Note that the above algorithm does not describe the derivation of the available cell rate (ACR) per service category. In the absence of controlled link sharing, the ACR for each direction is derived as follows:

ACR = 0.95 * MAXCR - (sum of equivalent bandwidth allocated to all connections on the interface)
If controlled link sharing is configured, it establishes limits on the ACR for all guaranteed bandwidth or for a service category (the maximum case), and limits on the encroachment of other service categories (the minimum case).
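As a rough worked example of the uncontrolled case, the following C sketch evaluates the ACR formula above. The interface rate and allocation figures are arbitrary, and the names are illustrative rather than taken from the switch software.

```c
#include <stdio.h>

/* Available cell rate per direction when controlled link sharing is not
 * configured: 95 percent of the interface MaxCR, less the equivalent
 * bandwidth already allocated to connections on the interface. */
static double available_cell_rate(double max_cr_kbps, double allocated_eq_bw_kbps)
{
    return 0.95 * max_cr_kbps - allocated_eq_bw_kbps;
}

int main(void)
{
    double max_cr    = 155520.0;  /* nominal OC-3c line rate, in kbps (illustrative) */
    double allocated = 60000.0;   /* equivalent bandwidth already admitted, in kbps  */
    printf("ACR = %.1f kbps\n", available_cell_rate(max_cr, allocated));
    return 0;
}
```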
Note CAC can also be affected by the threshold group configuration for SVCs. See the "Threshold Groups" section.
The sustained cell rate margin factor is configured globally. The remaining CAC-related features are configured on a per-interface basis.
Note CAC is bypassed for tag switching virtual connections; see the "Resource Management CAC" section.
bandwidth = (SCRMF * (PCR-SCR))/100 + SCR
Configuration Overview
You can change the default sustained cell rate margin factor for admitting VBR connections using a global configuration command. Configuring this value as 100 causes CAC to treat VBR like CBR.
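The following C sketch applies the bandwidth formula shown earlier in this section to illustrate how the margin factor (entered as a percentage) moves the admitted VBR bandwidth between SCR and PCR, including the SCRMF = 100 case that makes CAC treat VBR like CBR. The sample rates are arbitrary.

```c
#include <stdio.h>

/* Equivalent bandwidth admitted for a VBR connection, per the formula
 * bandwidth = (SCRMF * (PCR - SCR)) / 100 + SCR, with SCRMF in percent. */
static double vbr_equivalent_bw(double pcr_kbps, double scr_kbps, double scrmf_percent)
{
    return (scrmf_percent * (pcr_kbps - scr_kbps)) / 100.0 + scr_kbps;
}

int main(void)
{
    double pcr = 10000.0, scr = 2000.0;   /* illustrative rates in kbps */

    /* A margin factor of 1 percent keeps the allocation close to SCR. */
    printf("SCRMF =   1%%: %.1f kbps\n", vbr_equivalent_bw(pcr, scr, 1.0));

    /* SCRMF = 100 makes CAC treat VBR like CBR: the allocation equals PCR. */
    printf("SCRMF = 100%%: %.1f kbps\n", vbr_equivalent_bw(pcr, scr, 100.0));
    return 0;
}
```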
Controlled link sharing is a set of parameters used to specify a variety of minimum and maximum values for guaranteed bandwidth that can be allocated on an interface. These parameters allow fine-tuning of the CAC functions on a per-interface and per-direction (receive and transmit) basis. The relationship among these parameters, when defined, is shown in Table 10-5.
| Controlled Link Sharing Relationships |
|---|
| min(CBR) + min(VBR) + min(ABR) + min(UBR) <= 95 percent |
| min(CBR) <= max(CBR) <= 95 percent |
| min(VBR) <= max(VBR) <= 95 percent |
| min(CBR) <= max(AGG)1 <= 95 percent |
| min(VBR) <= max(AGG) <= 95 percent |
| max(CBR) <= max(AGG) <= 95 percent |
| max(VBR) <= max(AGG) <= 95 percent |
| min(ABR) <= max(ABR) <= 95 percent |
| min(UBR) <= max(UBR) <= 95 percent |
| min(ABR) <= max(AGG) <= 95 percent |
| min(UBR) <= max(AGG) <= 95 percent |
| max(ABR) <= max(AGG) <= 95 percent |
| max(UBR) <= max(AGG) <= 95 percent |

1. max(AGG) is the configured maximum for all guaranteed bandwidth.
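Read together, the relationships in Table 10-5 amount to a validity check on a proposed set of minimums and maximums. The C sketch below encodes a representative subset of those checks; the structure and function names are illustrative only and do not reflect the switch software.

```c
#include <stdio.h>
#include <stdbool.h>

/* Per-direction controlled link sharing settings, in percent of interface
 * bandwidth. A value of -1 means "not defined". */
struct link_sharing {
    int min_cbr, min_vbr, min_abr, min_ubr;
    int max_cbr, max_vbr, max_abr, max_ubr;
    int max_agg;                 /* maximum for all guaranteed bandwidth */
};

#define LIMIT 95                 /* every value is capped at 95 percent */

static bool defined(int v) { return v >= 0; }

/* Check a subset of the Table 10-5 relationships that involve defined values. */
static bool link_sharing_valid(const struct link_sharing *ls)
{
    int sum_min = 0;
    if (defined(ls->min_cbr)) sum_min += ls->min_cbr;
    if (defined(ls->min_vbr)) sum_min += ls->min_vbr;
    if (defined(ls->min_abr)) sum_min += ls->min_abr;
    if (defined(ls->min_ubr)) sum_min += ls->min_ubr;
    if (sum_min > LIMIT)
        return false;

    if (defined(ls->min_cbr) && defined(ls->max_cbr) && ls->min_cbr > ls->max_cbr)
        return false;
    if (defined(ls->min_vbr) && defined(ls->max_vbr) && ls->min_vbr > ls->max_vbr)
        return false;
    if (defined(ls->max_agg)) {
        if (ls->max_agg > LIMIT) return false;
        if (defined(ls->max_cbr) && ls->max_cbr > ls->max_agg) return false;
        if (defined(ls->max_vbr) && ls->max_vbr > ls->max_agg) return false;
    }
    return true;
}

int main(void)
{
    struct link_sharing ls = { 20, 30, 5, 5, 40, 60, -1, -1, 80 };
    printf("configuration %s\n", link_sharing_valid(&ls) ? "accepted" : "rejected");
    return 0;
}
```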
Configuration Overview
Configuring controlled link sharing on an interface requires the following steps:
Step 2 Take one or more of the following steps:
a. Specify a maximum percent of interface bandwidth to be used for guaranteed bandwidth connections. You can configure values for both the receive and transmit directions.
b. Specify the maximum percent of interface bandwidth to be used for any of the service categories. You can configure values individually for each service category and for both the receive and transmit directions.
c. Specify the minimum percent of interface bandwidth to be used for any of the service categories. You can configure values individually for each service category and for both the receive and transmit directions.
Changing the outbound link distance from its default of zero can affect the SVC requests accepted. For example, you might want to discourage use of a transcontinental link by configuring a higher propagation delay.
Configuration Overview
Configuring the outbound link distance requires the following steps:
Step 2 Specify an outbound link distance in kilometers.
Configuration Overview
Configuring the limits of best-effort connections requires the following steps:
Step 2 Specify the maximum number of best-effort connections to allow.
Configuration Overview
Configuring the maximum traffic parameters on an interface requires the following steps:
Step 2 Do one or more of the following steps for the receive direction, transmit direction, or both, on the interface:
a. Specify a maximum PCR value in kbps for any of the CBR, VBR, ABR, and UBR service categories.
b. Specify a maximum SCR value in kbps.
c. Specify a maximum CDVT value (expressed in 2.72 microsecond cell times; a conversion sketch follows this list) for any of the CBR, VBR, ABR, and UBR service categories.
d. Specify a maximum MBS value in number of cells.
e. Specify a maximum MCR value in kbps for the best-effort service categories (ABR and UBR).
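Because the maximum CDVT is entered in 2.72 microsecond cell times (one cell time at the 155.52 Mbps OC-3c line rate: 53 bytes of 8 bits each divided by the line rate), converting a tolerance expressed in microseconds is a single division, as in the following illustrative C sketch.

```c
#include <stdio.h>
#include <math.h>

/* One cell time at the OC-3c/STM-1 line rate of 155.52 Mbps:
 * 53 bytes * 8 bits / 155.52 Mbps = roughly 2.72 microseconds. */
#define CELL_TIME_US (53.0 * 8.0 / 155.52)

/* Convert a CDVT given in microseconds to the cell-time units used when
 * configuring a maximum CDVT on an interface. */
static long cdvt_in_cell_times(double cdvt_microseconds)
{
    return (long)ceil(cdvt_microseconds / CELL_TIME_US);
}

int main(void)
{
    printf("one cell time  = %.2f us\n", CELL_TIME_US);
    printf("CDVT of 500 us = %ld cell times\n", cdvt_in_cell_times(500.0));
    return 0;
}
```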
The transit VP, the VP that connects the tunnel across the service provider network, must also have a service category. Table 10-6 shows the service category of the shaped VP tunnel (always CBR), the service categories you can configure for transported virtual connections, and a suggested transit VP service category for the tunnel.
| Shaped VP Tunnel Service Category | VCC Service Category | Suggested Transit VP Service Category |
|---|---|---|
| CBR | CBR | CBR |
| CBR | VBR | CBR or VBR |
| CBR | ABR1 | CBR or VBR |
| CBR | UBR | Any service category |

1. We recommend ABR only if the transit VP is set up so that congestion occurs at the shaped tunnel, not in the transit VP.
The default for physical interfaces and hierarchical VP tunnels is to allow virtual connections of any service category to transit the interface. However, interface service category configuration can be used explicitly to allow or prevent virtual connections of specified service categories to migrate across the interface.
Configuration Overview
The restrictions that apply to interface service category support are summarized as follows:
Configuring interface service category support requires the following steps:
Step 2 Specify which traffic categories to deny on the interface.
Step 3 Specify which traffic categories to permit on the interface.
You can deny or permit any of the CBR, VBR-RT, VBR-NRT, UBR, and ABR traffic categories.
The interface overbooking feature allows the available equivalent bandwidth of an interface to exceed the maximum cell rate (MaxCR), or physical line rate, on ATM and inverse multiplexing for ATM (IMA) interfaces. By default, the available equivalent bandwidth is limited by the MaxCR. Increasing the available equivalent bandwidth beyond the MaxCR allows more connections to be configured on an interface than its physical bandwidth would otherwise allow. Overbooking provides flexibility when the actual traffic over the interface is expected to be less than the MaxCR.
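A minimal sketch of that arithmetic, assuming the overbooking value is applied as a simple percentage of MaxCR (the variable names and sample rate are illustrative, not taken from the switch software):

```c
#include <stdio.h>

/* Available equivalent bandwidth on an interface when overbooking is
 * configured as a percentage of the maximum equivalent bandwidth.
 * 100 percent corresponds to the default behavior (capped at MaxCR);
 * 200 percent lets CAC admit twice the physical bandwidth. */
static double overbooked_eq_bw(double max_cr_kbps, unsigned overbook_percent)
{
    return max_cr_kbps * (double)overbook_percent / 100.0;
}

int main(void)
{
    double max_cr = 100000.0;   /* illustrative interface MaxCR, in kbps */
    printf("no overbooking : %.0f kbps\n", overbooked_eq_bw(max_cr, 100));
    printf("200%% overbooked: %.0f kbps\n", overbooked_eq_bw(max_cr, 200));
    return 0;
}
```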
The following restrictions apply to interface overbooking:
Caution Overbooking can cause interface traffic to exceed the guaranteed bandwidth that the switch can provide.
Note Interface overbooking configuration is not supported on systems with FC-PCQ installed.
Configuration Overview
Configuring interface overbooking requires the following steps:
Step 2 Shut down the interface.
Step 3 Configure interface overbooking for CAC as a percentage of the maximum equivalent bandwidth.
Step 4 Reenable the interface.
When framing overhead is considered, MaxCR is less than the unframed rate, and some previously configured connections might not be established. The MaxCR differs by interface type and framing mode, and whether framing overhead is configured. Refer to the ATM Switch Router Software Configuration Guide for details.
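For example, on an OC-3c interface the 155.52 Mbps line rate leaves roughly 149.76 Mbps for cells once standard SONET transport and path overhead are removed. The C sketch below shows that arithmetic using these standard SONET figures rather than values from the configuration guide.

```c
#include <stdio.h>

/* Effect of framing overhead on the usable cell rate of an OC-3c/STM-1
 * interface. Standard SONET numbers: 155.52 Mbps line rate, about
 * 149.76 Mbps left for ATM cells after transport and path overhead. */
int main(void)
{
    double line_rate_bps    = 155.52e6;
    double payload_rate_bps = 149.76e6;
    double bits_per_cell    = 53.0 * 8.0;

    printf("unframed cell rate: %.1f cells/s\n", line_rate_bps / bits_per_cell);
    printf("framed cell rate  : %.1f cells/s\n", payload_rate_bps / bits_per_cell);
    printf("MaxCR reduction   : %.1f%%\n",
           100.0 * (1.0 - payload_rate_bps / line_rate_bps));
    return 0;
}
```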
Configuration Overview
To configure framing overhead for CAC you enter a single command to enable the feature. In some circumstances enabling framing overhead might reduce the maximum guaranteed service bandwidth supported on a direction of an interface to below the current allocation. In that event an option is available to force the configuration to take effect.
Figure 10-1 shows the relationship between these mechanisms, which are discussed in the following sections.

For most applications, when one or more cells are dropped by the network, the corresponding packet becomes corrupted and useless. This results in the need to retransmit the many cells that comprise that packet, and leads to exacerbated congestion. For example, loss of a cell from an IP over ATM packet (RFC 1577) might require resending 192 ATM cells, given an MTU of 9 KB.
To maximize the number of complete delivered packets, the ATM switch router implements a unique tail packet discard and early packet discard (TPD/EPD) scheme that intelligently and selectively discards cells belonging to the same packet. These congestion control mechanisms reduce the effects of fragmentation and make the ATM switch router essentially emulate a packet switch, which discards entire packets.
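The 192-cell figure above follows from AAL5 segmentation: the payload plus the 8-byte AAL5 trailer is carried in whole 48-byte cell payloads. A short C sketch of that arithmetic, assuming the common 9180-byte MTU for classical IP over ATM:

```c
#include <stdio.h>

/* Number of ATM cells needed to carry one AAL5 frame: the payload plus the
 * 8-byte AAL5 trailer is padded out to a whole number of 48-byte cell
 * payloads. */
static unsigned cells_per_packet(unsigned payload_bytes)
{
    unsigned aal5_bytes = payload_bytes + 8;   /* add the AAL5 trailer   */
    return (aal5_bytes + 47) / 48;             /* round up to full cells */
}

int main(void)
{
    /* 9180 bytes is the default MTU for classical IP over ATM (RFC 1577). */
    printf("9180-byte packet -> %u cells\n", cells_per_packet(9180));
    printf("  64-byte packet -> %u cells\n", cells_per_packet(64));
    return 0;
}
```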
UPC on the ATM switch router checks the following parameters:
On systems equipped with hardware to support the dual leaky bucket, there are two policers for VBR connections. One policer uses PCR and CDVT, while the other policer uses SCR and MBS.
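Conformance checking on such a dual leaky bucket is conventionally described with the ATM Forum generic cell rate algorithm (GCRA): one instance runs with increment 1/PCR and limit CDVT, the other with increment 1/SCR and a burst tolerance derived from MBS. The C sketch below is a generic virtual-scheduling GCRA for illustration, not the switch's actual policer implementation.

```c
#include <stdio.h>
#include <stdbool.h>

/* Generic cell rate algorithm (virtual-scheduling form). I is the increment
 * between conforming cells (1/rate) and L the limit (CDVT for the PCR test,
 * burst tolerance for the SCR test). Times are in arbitrary cell-time units. */
struct gcra {
    double tat;   /* theoretical arrival time of the next conforming cell */
    double I;     /* increment */
    double L;     /* limit */
};

static bool gcra_conforming(const struct gcra *g, double arrival)
{
    return arrival >= g->tat - g->L;
}

static void gcra_update(struct gcra *g, double arrival)
{
    g->tat = (arrival > g->tat ? arrival : g->tat) + g->I;
}

int main(void)
{
    /* Dual leaky bucket for a VBR connection: a PCR/CDVT test and an
     * SCR/burst-tolerance test; a cell must pass both to be conforming. */
    struct gcra pcr_test = { 0.0, 1.0, 0.5 };    /* I = 1/PCR, L = CDVT */
    struct gcra scr_test = { 0.0, 4.0, 12.0 };   /* I = 1/SCR, L = BT   */

    double arrivals[] = { 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 20.0 };
    for (unsigned i = 0; i < sizeof(arrivals) / sizeof(arrivals[0]); i++) {
        double t = arrivals[i];
        bool ok = gcra_conforming(&pcr_test, t) && gcra_conforming(&scr_test, t);
        if (ok) {                       /* only conforming cells advance state */
            gcra_update(&pcr_test, t);
            gcra_update(&scr_test, t);
        }
        printf("cell at t=%5.1f : %s\n", t, ok ? "conforming" : "nonconforming");
    }
    return 0;
}
```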
When a cell is found to be nonconforming, one of the following actions can be triggered:
The CLP bit in the ATM cell header can be used to generate different priority cell flows within a virtual connection. When UPC sets CLP = 1, the cell is more likely to experience loss during congestion. This allows a selective cell discarding scheme to be implemented to deal with network congestion.
Configuration Overview
Configuring the default UPC behavior requires the following steps:
Step 2 For SVCs, select an interface, enter interface configuration mode, and specify tag or drop.
You can specify CDVT or MBS for PVCs through a connection traffic table row. If no CDVT or MBS is specified in the row, then a per-interface, per-service category default is applied for purposes of UPC on the connection.
Note CDVT cannot be signaled. Therefore, the defaults specified on the interface apply for SVCs and the destination leg of a soft PVC.
Configuration Overview
Configuring the default CDVT and MBS on an interface requires the following steps:
Step 2 Specify a CDVT default value for a service category. You can repeat this step for additional service categories you want to configure.
Step 3 Specify an MBS default value for a service category. You can repeat this step for additional service categories you want to configure.
The specific features available depend upon the hardware:
The size of the VBR-NRT queue and ABR/UBR queues is determined by using the oversubscription factor (OSF) in the following formula:
size(vbr-nrt) = 0.25 * ((osf * 2048) - DefaultSize(cbr) - DefaultSize(vbr-rt))
size(abr-ubr) = 0.75 * ((osf * 2048) - DefaultSize(cbr) - DefaultSize(vbr-rt))
When you configure the oversubscription factor, you are changing the default values. Refer to the ATM Switch Router Command Reference publication for details.
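Plugging sample numbers into the formula shows how raising the oversubscription factor enlarges both queues. The default CBR and VBR-RT queue sizes used below are placeholders; the actual defaults are documented in the command reference.

```c
#include <stdio.h>

/* Output queue sizing from the oversubscription factor (OSF):
 *   size(vbr-nrt) = 0.25 * ((osf * 2048) - DefaultSize(cbr) - DefaultSize(vbr-rt))
 *   size(abr-ubr) = 0.75 * ((osf * 2048) - DefaultSize(cbr) - DefaultSize(vbr-rt))
 * The CBR and VBR-RT default sizes below are placeholders for illustration. */
int main(void)
{
    int default_cbr = 256, default_vbr_rt = 256;   /* placeholder defaults, in cells */
    for (int osf = 1; osf <= 4; osf *= 2) {
        int pool = osf * 2048 - default_cbr - default_vbr_rt;
        printf("osf=%d: vbr-nrt queue = %4d cells, abr/ubr queue = %4d cells\n",
               osf, pool / 4, (3 * pool) / 4);
    }
    return 0;
}
```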
Configuration Overview
Configuring the oversubscription factor requires the following steps:
Step 2 To have the change take effect and resize the queues, save the running configuration to the startup configuration and restart the ATM switch router.
When you configure the service category limit requirements, you are specifying a new value to use rather than the default. To do so requires just one global configuration command in which you specify the limit in number of cells for a queue type. You repeat this command for each additional queue type for which you want to configure a maximum.
Configuration Overview
Configuring the maximum queue size for an interface requires the following steps:
Step 2 Specify an output queue maximum size for a service category. Repeat for additional service categories you want to configure.
The queue thresholds can be configured for the service categories on each interface queue.
The following queue thresholds can be configured per interface queue:
Configuration Overview
Configuring the interface queue threshold per service category requires the following steps:
Step 2 Specify an output percent for the EFCI marking threshold for a service category. You can repeat this step for additional service categories on the interface.
Step 3 Specify an output percent for the discard threshold for a service category. You can repeat this step for additional service categories on the interface.
Step 4 Specify an output threshold percent for RR marking on ABR connections.
Note The threshold groups feature depends upon the hardware model and feature card installed in the ATM switch router. In addition, the total number of threshold groups available on the ATM switch router is platform dependent. For details, refer to the ATM Switch Router Software Configuration Guide.
Each threshold group has a set of eight regions, and each region has a set of thresholds. When these thresholds are exceeded, cells are dropped to maintain the integrity of the shared memory resource.
The initial default configuration of per-VCC queuing on the ATM switch router has all the connections of a service category assigned to one threshold group. However, the assignment of service categories to threshold groups is configurable, with the following restrictions:
Note The configuration of threshold groups is static, not dynamic.
Table 10-7 lists the threshold group configuration parameters.
| Group | Maximum Cells | Maximum Queue Limit | Minimum Queue Limit | Mark Threshold | Discard Threshold | Use |
|---|---|---|---|---|---|---|
| 1 | 65535 | 63 | 63 | 25% | 87% | CBR |
| 2 | 65535 | 127 | 127 | 25% | 87% | VBR-RT |
| 3 | 65535 | 511 | 31 | 25% | 87% | VBR-NRT |
| 4 | 65535 | 511 | 31 | 25% | 87% | ABR |
| 5 | 65535 | 511 | 31 | 25% | 87% | UBR |
| 6 | 65535 | 1023 | 1023 | 25% | 87% | Well-known virtual connections |
Note If the max- and min-queue-limits are equal, the queue size does not reduce as the group congests.
When congestion is in the range of 0 cells (uncongested) to 1/8th full, the connection queues are limited to max-queue-size. When congestion is in the range of 7/8ths full to completely full, the connection queues are limited to min-queue-size.
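One way to picture the eight regions is as a per-connection queue limit that steps from the maximum down to the minimum as the group fills. The C sketch below uses the group 3 defaults from Table 10-7 and assumes a simple linear step between regions purely for illustration; the endpoints match the behavior described above, but the intermediate stepping is an assumption, not a documented value.

```c
#include <stdio.h>

/* Illustrative per-connection queue limit as a threshold group fills.
 * The group's fill level (in eighths) selects one of eight regions.
 * The first eighth allows max_limit cells, the last eighth allows
 * min_limit cells, and the regions in between step down linearly
 * (the linear stepping is an assumption for illustration only). */
static int per_conn_limit(int fill_eighths, int max_limit, int min_limit)
{
    if (fill_eighths <= 0) return max_limit;
    if (fill_eighths >= 7) return min_limit;
    return max_limit - (max_limit - min_limit) * fill_eighths / 7;
}

int main(void)
{
    /* Threshold group 3 defaults from Table 10-7: max 511, min 31 cells. */
    for (int f = 0; f <= 8; f++)
        printf("group %d/8 full -> per-connection limit %d cells\n",
               f, per_conn_limit(f, 511, 31));
    return 0;
}
```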
Configuration Overview
Configuring the threshold group parameters requires the following steps:
Step 2 Specify the maximum number of cells queued for all connections that are members of the threshold group.
Step 3 Specify the percent at which the per-connection queue is to be considered full for purposes of CLP discard and EPD.
Step 4 Specify the maximum per-connection queue limit (in number of cells) for the threshold group.
Step 5 Specify the minimum per-connection queue limit (in number of cells) for the threshold group.
Step 6 Specify a name to associate with the threshold group (optional).
Step 7 Specify the percent at which the per-connection queue is considered full for EFCI (on all connections) and RR marking (on ABR connections).
Note RR marking is not supported on all platforms.
You can repeat these steps for any additional service category thresholds you want to configure.
The ATM switch router implements two methods to indicate and control congestion:
EFCI and RR marking involve two important functions: detecting incipient congestion and providing selective feedback to the source. As with any feedback mechanism, congestion control schemes operate best when the latency of the feedback path is minimized. Thus RR mode, because of its ability to use backward RM cells to send the congestion indicator rather than relying on the destination end system to reflect it back, can greatly reduce feedback delays and deliver better performance than the EFCI mode.
The two modes can be used independently or in combination to support ABR traffic, and thresholds can be set to indicate when EFCI or RR marking should occur.
The ABR congestion notification mode is used to change the type of notification used on ABR connections to alert the end station of congestion. ABR mode configuration determines whether ABR uses EFCI marking, RR marking, or both, for forward and backward RM cells used to control ABR congestion. On systems that support RR mode, the ATM switch router uses that mode by default.
Note The ABR congestion notification mode feature depends upon the feature card installed in the ATM switch router. Systems that do not support this feature use only the EFCI mode.
Configuration Overview
When you configure the ABR congestion notification mode you affect all ABR connections. To do so requires just one global configuration command.
Output scheduling determines which queued cell is chosen to be transmitted out an interface during a cell time; the cell time depends upon the characteristics of the physical interface. The goal of output scheduling is to ensure that bandwidth guarantees are met and that extra bandwidth is fairly shared among connections.
Note No per-VCC or per-VPC shaping is performed on systems equipped with FC-PCQ. On systems equipped with FC-PFQ, as well as the Catalyst 8540 MSR, all transit CBR VCCs and VPCs are shaped.
An additional benefit of output scheduling is the ability to shape traffic. This capability can be important when connecting across a public UNI to a public network, because many such networks base their tariffs on the maximum aggregate bandwidth. Traffic shaping and traffic policing are complementary functions, as illustrated in Figure 10-2.

Traffic shaping is a feature of shaped and hierarchical VP tunnels. See the "VP Tunnels" section.
The following configurable features on the ATM switch router are used for output scheduling:
Configuration Overview
Interface output pacing is disabled by default on the ATM switch router. Configuring the interface output pacing requires the following steps:
Note There are restrictions on which interface types can be configured for output pacing; refer to the ATM Switch Router Command Reference publication for details.
Step 2 Specify the output pacing limit as a bit rate in kbps.
This configuration does not take effect if the amount of bandwidth allocated to CBR and VBR connections in the transmit direction on the interface is greater than the configured output pacing value.
In scheduling the next cell to be transmitted from a port, the rate scheduler (RS) has first call on supplying an eligible cell. If the RS does not have one, the WRR scheduler chooses a service class with an output virtual connection ready to transmit, and finally a virtual connection within the service class is selected.
Note Scheduler and service class configuration depends upon the feature card installed in the ATM switch router. On systems that do not support scheduler and service class configuration, output scheduling is done on a strict priority basis; refer to the ATM Switch Router Software Configuration Guide for details.
On the ATM switch router, the ATM service categories are mapped statically to service classes, as shown in Table 10-8; service class 2 has the highest scheduling priority.
| Service Category | Service Class |
|---|---|
| VBR-RT | 2 |
| VBR-NRT | 3 |
| ABR | 4 |
| UBR | 5 |
A different set of service classes is used for tag switching virtual connections; see the "Tag Switching CoS" section.
The first scheduling decision is made based on whether any rate-scheduled cell is ready (as decided by the timewheel rate scheduler for an interface). Whether a virtual connection uses the rate scheduler is not user-configurable.
Table 10-9 lists the cell rates that are guaranteed by the rate scheduler for each service category.
| Service Category | Cell Rate Guaranteed |
|---|---|
| CBR | PCR |
| VBR-RT | SCR |
| VBR-NRT | SCR |
| ABR | nonzero MCR (if specified) |
| UBR | MCR (if specified) |
If the timewheel RS does not have an output virtual circuit ready to transmit, the WRR scheduler becomes active to pick out a virtual circuit to transmit a cell. The WRR scheduler uses the interface bandwidth left over after guaranteed cell service to transmit cells. Thus, an output virtual circuit of a service category other than CBR can be serviced by both the rate scheduler and the WRR scheduler. A CBR output virtual circuit cannot be serviced by the WRR scheduler because its PCR is already guaranteed by the rate scheduler. (Any additional cell transmission by the WRR scheduler out of that output virtual circuit is likely to arrive too soon at the next switch and might be policed.)
The following service categories can be serviced by the WRR scheduler:
The combined result of the two schedulers is illustrated in Figure 10-3.

Each service class is assigned a weight. These weights are configurable, in the range of 1 to 15. The default weighting is {15,2,2,2} for classes {2,3,4,5}, respectively. The weighting is not modified dynamically.
Within service classes, individual output virtual circuits are also weighted, again in the range of 1 to 15. A standard weight (2) is assigned to all PVCs in a service class. Optionally, PVCs can be configured with a specific weight per half-leg (applying to the transmit output virtual circuit weight). SVCs take the value 2.
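With weighted round-robin, each active service class receives leftover bandwidth roughly in proportion to its weight. The C sketch below computes those proportions for the default weights; it illustrates the arithmetic only and is not the scheduler implementation.

```c
#include <stdio.h>

/* Proportion of leftover bandwidth each service class receives under
 * weighted round-robin, given the default weights {15, 2, 2, 2} for
 * classes {2, 3, 4, 5}. Only classes with cells to send compete. */
int main(void)
{
    int class_id[] = { 2, 3, 4, 5 };
    int weight[]   = { 15, 2, 2, 2 };
    int n = 4, sum = 0;

    for (int i = 0; i < n; i++)
        sum += weight[i];
    for (int i = 0; i < n; i++)
        printf("service class %d: weight %2d -> %.1f%% of leftover bandwidth\n",
               class_id[i], weight[i], 100.0 * weight[i] / sum);
    return 0;
}
```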
Configuration Overview
Configuring the service class weights on an interface requires the following steps:
Step 2 Specify a service class and a weight value. You can repeat this step for additional service classes.