

ATM Soft Permanent Virtual Circuits

This chapter describes ATM Soft Permanent Virtual Circuits (SPVCs) as implemented by the SES node.


Note   Before adding an ATM SPVC to the network, you must configure ATM signaling. Refer to Chapter 3, "ATM Signaling and Switched Virtual Circuits."

Overview

SPVC connection provisioning can be done through an external network management system (NMS), such as Cisco WAN Manager, or through the command line interface. Only two-end provisioning is supported in this release. When provisioning is done from an NMS, an SNMP set request is sent to both SPVC endpoints terminating at the CPE ports.

SPVC provisioning includes:

Figure 5-1 shows an example of an end-to-end SPVC connection provisioning and establishment.


Figure 5-1: SPVC Provisioning


The end-points are reserved on the service modules prior to routing. The connection request is forwarded to the PNNI Controller. The SPVC is routed from the master endpoint to the slave endpoint.

The provisioning of multi-service SPVC on a switch and controller is performed as follows:

    1. NMS sends an SNMP Set to the PNNI Node to add a slave SPVC end-point.

    2. PNNI controller sends the SPVC provisioning request to the SPVC Manager.

    3. SPVC Manager allocates the PVC leg of the SPVC cross-connect, and adds it to the connection manager on the PNNI controller.

    4. PNNI controller stores the SPVC connection in the SPVC connection database and updates the standby controller.

    5. SPVC Manager sends an ack/nak response to NMS by SNMP set response.

    6. NMS receives `y' (the destination AESA address and VPI/VCI) from the slave end-point, and sends an SNMP set to add a master SPVC end-point with `y' and `p', the routing parameters (Figure 5-1).

    7. At the master end-point, the SPVC Manager initiates a call setup to the destination node.

For a more detailed description of provisioning a multiservice SPVC on a switch and controller, see "Configuring ATM SVCs, PNNI Routing, and SPVCs."
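The slave-then-master ordering described above can be sketched as follows (the function names and field layout are hypothetical, not the actual SNMP MIB objects):

```python
# Hypothetical sketch of two-end SPVC provisioning; field names are
# illustrative, not actual SNMP MIB objects.
def add_slave_endpoint(node, vpi, vci):
    """Reserve the slave end-point; the SNMP set response returns its
    routing reference `y` (AESA address + VPI + VCI)."""
    return {"aesa": node["aesa"], "vpi": vpi, "vci": vci}

def add_master_endpoint(node, vpi, vci, y, routing_params):
    """Add the master end-point with destination reference `y` and
    routing parameters `p`; this triggers the SVC call setup."""
    return {"local": {"aesa": node["aesa"], "vpi": vpi, "vci": vci},
            "remote": y,
            "params": routing_params,
            "state": "routing"}

# Slave end first; then the master end uses the returned reference.
slave_node = {"aesa": "47.0091.8100.0000.0000.0000.0001"}
master_node = {"aesa": "47.0091.8100.0000.0000.0000.0002"}
y = add_slave_endpoint(slave_node, vpi=10, vci=100)
conn = add_master_endpoint(master_node, vpi=20, vci=200,
                           y=y, routing_params={"aw": 5040})
```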

SPVC Features

Table 5-1 describes the SPVC features supported for BPX-SES.


Table 5-1: BPX-SES SPVC Features
Feature Description

SPVC Connection

    • Supports point-to-point SPVC and SPVP connections

Service Modules Supported

    • BXM (T3, E3, OC12)

Connection Type Per Interface

    • Co-existence of AR and SPVC

    • Co-existence of MPLS and SPVC

    • Co-existence of SVC and SPVC

    • Limitation to SVC only or SPVC only is configurable per interface.

SPVC provisioning

    • Administered by CLI and NMS at both the master and slave endpoint of the SPVC

    • Supports endpoint on UNI port or IISP trunk

SPVC routing

    • from the master to the slave endpoint

    • trunk signalling—ATM Forum PNNI 1.0 Specification

    • administrative weight based PNNI dynamic routing

    • static routing over IISP

    • SPVC resiliency: re-route after a network failure

Endpoint Management

    • endpoint persistency—support generation of RDI and AIS at the service module

    • endpoint management—support LMI, ILMI and OAM at the service module

Operation

    • SPVC maintenance command—upping or downing of SPVC is supported through CLI command and NMS operation at the master endpoint

    • force re-route of SPVC is supported through CLI command and NMS operation at the service module of the master endpoint

    • route optimization commands to re-route an SPVC to a path with better cumulative administrative weight.

    • connection modification through connection re-establishment with new parameters

    • connection statistics are supported by the service modules.

    • connection and path trace

    • connectivity verification via tstconseg/tstdelay.

Reliability, Availability, and Serviceability

    • support hitless upgrade / downgrade

    • support operation and exception events logging

    • Hitless PNNI controller switchover

    • stable connections maintained during a switchover.

    • SPVC recovery after system rebuild

Capacity

    • 50,000 SPVC/SVC connections per node

SPVC Endpoint Address and Default Traffic Parameters

The following sections discuss the provisioning of SPVC Endpoint Addresses and default traffic parameter templates for service categories.

SPVC Endpoint Address

The SPVC end-point routing references (referred to as x and y in Figure 5-2), returned by the connection provisioning modules after provisioning, are expressed as an ATM AESA address, VPI, and VCI.


Figure 5-2: SPVC Routing Address

The x and y SPVC endpoints are expressed as (NSAP address + VPI + VCI), as shown in Figure 5-2 above. They are assigned by the connection provisioning module for cross-reference with the end-points (ep1, ep2).

The default SPVC prefix is set to be the default PNNI node prefix. The SPVC prefix is initially set to 47.0091.81000000, but can be changed with the cnfspvcprfx command.
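For illustration, an AESA is a 20-octet address: a 13-octet prefix, a 6-octet end system identifier (ESI), and a 1-octet selector. A minimal sketch of composing an end-point address from a configured prefix (the helper and example values are hypothetical):

```python
def make_aesa(prefix_hex, esi_hex, sel_hex="00"):
    """Compose a 20-octet ATM End System Address (AESA) from a
    13-octet prefix, a 6-octet end system identifier, and a
    1-octet selector."""
    prefix = bytes.fromhex(prefix_hex.replace(".", ""))
    esi = bytes.fromhex(esi_hex)
    sel = bytes.fromhex(sel_hex)
    assert len(prefix) == 13 and len(esi) == 6 and len(sel) == 1
    return (prefix + esi + sel).hex()

# The default 47.0091.81000000 prefix covers only the first 7 octets;
# the remaining prefix octets are shown as zeros purely for illustration.
addr = make_aesa("47009181000000" + "00" * 6, "00107b123456")
```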

Default Traffic Parameter Templates for Service Categories

The SPVC commands do not contain all the fields needed for ABR service categories. The PNNI controller provides templates with default values for all service categories on every interface. You can modify these values as needed with the cnfcon command; the values are used for PNNI routing.

Some parameters that PNNI 1.0 signaling requires, such as CDVT or MBS, are not provided by the networking parameters; they are configured on the PNNI controller per interface. Refer to the SPVC CLI for setting default templates.


Table 5-2: ABR Parameters

ABR Configurable Parameter   Description                       Valid Range                  Default Value

icr                          Initial Cell Rate                 0 to PCR                     PCR

rif                          Rate Increase Factor              1/32768 to 1                 7

rdf                          Rate Decrease Factor              1/32768 to 1                 4

tbe                          Transient Buffer Exposure         0 to 16777215 cells          1048320

nrm                          Maximum number of cells per       2 to 256 cells               5
                             forward RM-cell

trm                          Time between RM-cells             100(2^-7) to 100(2^0) msec   8

adtf                         ACR Decrease Time Factor          1 to 1023 sec                50

cdf                          Cutoff Decrease Factor            1/64 to 1                    7

fsd                          Fixed Source Delay                0 to 167.77215 sec           0
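The template behavior above can be sketched as a simple merge, where fields supplied on the command override the per-interface defaults (the function and field names below are illustrative, not actual software interfaces):

```python
# Illustrative ABR defaults drawn from Table 5-2 (per-interface template).
ABR_TEMPLATE = {"rif": 7, "rdf": 4, "tbe": 1048320, "nrm": 5,
                "trm": 8, "adtf": 50, "cdf": 7, "fsd": 0}

def resolve_abr_params(user_params, pcr, template=ABR_TEMPLATE):
    """Fill unspecified ABR fields from the template; icr defaults
    to the connection's PCR."""
    params = dict(template)
    params["icr"] = pcr          # default icr is the PCR
    params.update(user_params)   # user-supplied fields win
    return params

p = resolve_abr_params({"rif": 3}, pcr=353208)
```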

SPVC Provisioning and Operation

The following sections describe SPVC provisioning and operation:

Adding/Deleting an SPVC

Use NMS or the command line interface to add and delete an SPVC.

To add an SPVC, perform the following steps:


Step 1   Use the addcon command to provision the slave endpoint on the SPVC terminating service module.

Step 2   Use the addcon command to provision the master endpoint on the SPVC initiating service module.

Once the master endpoint is added, an SVC call setup establishes the SPVC.


To delete an SPVC, follow these steps:


Step 1   Use the delcon command to delete the master endpoint of the SPVC.

Step 2   Use the delcon command to delete the slave endpoint of the SPVC.



Note   If you delete the slave endpoint before deleting the master endpoint, the master endpoint will keep trying to re-establish the connection until it, too, is removed from the service module.


Note   For a DAX SPVC connection, deleting the master endpoint also deletes the slave endpoint. An SPVC can be deleted even while the SPVC is in operation.

Downing/Upping an SPVC

Use the NMS or the dncon and upcon commands to down or up connections.

When an SPVC is down, the SVC portion of the connection is released. The connection remains down until it is upped again by the user.

When an up request is received for a downed connection, the master endpoint will attempt to re-establish the connection.

The up and down requests are applicable only to master endpoints. The request is rejected if issued to a slave endpoint.

Connection Modification

Use the NMS or the cnfcon command to modify the connection parameters of an SPVC. The SPVC connection is released and re-established with the new parameters. Depending on bandwidth availability, no new path may be found, or the newly established path may differ from the original path.

Possible Provisioning Errors

The PNNI controller generates the following Provisioning Response error strings for SPVC connection addition failure:

Error Code   Error Message

41           Local interface not existent or operational
42           Remote interface not existent or operational
43           Specified vpi/vci not available
44           Remote address is required
45           Fail to allocate endpoint, out of memory
46           Fail to add connection, out of memory
47           Error in traffic parameters
48           Slave endpoint does not exist for this daxcon
49           Slave endpoint not available for this daxcon
50           Endpoint does not exist
51           Not a persistent endpoint
52           Could not delete endpoint
53           Could not modify endpoint
54           Could not admin up endpoint
55           Could not admin down endpoint
56           Could not reroute connection
57           Operation not applicable to this endpoint
58           Connection is already in UP state
59           Connection is already in DOWN state
60           Mismatch in parameters with slave endpoint
61           Traffic parameters cannot be modified at slave endpoint of daxcon
62           Network busy, try later
63           Reroute not applicable to daxcon
64           Interface is operationally down
65           SPVC is not allowed on this partition
66           SPVC Call Blocking is enabled on this interface
67           SPVC is not allowed on this interface
68           Delete master end before deleting slave end for dax spvc connection
69           Connection doesn't exist to delete
70           Port does not support requested serviceType
71           lscr not allowed to exceed lpcr
72           rscr not allowed to exceed rpcr
73           lpcr must be defined for cbr serviceType
74           rpcr must be defined for cbr serviceType
75           lpcr and lscr must be defined for vbr serviceType
76           rpcr and rscr must be defined for vbr serviceType
77           lpcr must be defined for abr/ubr serviceType
78           rpcr must be defined for abr/ubr serviceType
79           Requested rcdv is too low
80           Requested rctd is too low
81           Requested max cell loss ratio (clr) is too high
82           Requested cell rate (lscr/lpcr) is too high
83           Requested cell rate (rscr/rpcr) is too high
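On the NMS side, these response strings can be decoded from a lookup table keyed by error code. A minimal sketch (the dictionary transcribes a few of the codes above; the helper name is hypothetical):

```python
# Transcription of a few Provisioning Response error strings from the
# table above; the helper name is hypothetical.
SPVC_PROVISIONING_ERRORS = {
    41: "Local interface not existent or operational",
    42: "Remote interface not existent or operational",
    43: "Specified vpi/vci not available",
    62: "Network busy, try later",
    69: "Connection doesn't exist to delete",
}

def describe_error(code):
    """Return the provisioning error string for a code, or a fallback."""
    return SPVC_PROVISIONING_ERRORS.get(code, f"Unknown error code {code}")
```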

Route/Re-route Retry

The master endpoint establishes the SPVC after it is provisioned. The first route attempt is immediate. If the first route attempt fails, subsequent retries are controlled by the "Fast Retry Interval Base" and the "Slow Retry Interval." Retries are separated by the following interval until it grows larger than the slow retry interval:

(Fast Retry Interval Base * (2 ^ (# of attempts - 1)))

Subsequent retries then occur once every slow retry interval. There is no limit on the number of retries; connections that are not established are retried until they are successfully routed. This retry algorithm also applies to SPVCs that are released due to a network failure: they are re-established following the same retry policy.
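The retry schedule above can be sketched as follows (the parameter values in the example are illustrative; the actual base and interval are configuration items):

```python
def retry_delay(attempt, fast_base, slow_interval):
    """Delay before retry number `attempt` (the first route attempt is
    immediate).  Fast retries back off exponentially until the computed
    interval exceeds the slow retry interval; after that, every retry
    waits one slow retry interval.  There is no retry limit."""
    delay = fast_base * (2 ** (attempt - 1))
    return min(delay, slow_interval)

# Example with a 10 s fast base and a 60 s slow interval:
# retries wait 10, 20, 40, then 60 s thereafter.
schedule = [retry_delay(n, 10, 60) for n in range(1, 6)]
```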

To ensure fairness in connection routing, unrouted connections are selected in round-robin fashion. To control congestion, a throttling scheme manages routing. These values can be configured on PNNI as nodal parameters through the cnfnodalcongth command.

Manual Reroute of SPVC

Use the NMS or the rrtcon command to manually reroute an SPVC. The SPVC connection will be re-routed to the best available path. The reroute request is only applicable to the master endpoints.


Note   The request is rejected if it is issued to a slave endpoint.

Force De-route/Re-route of SPVC

Use the dnpnport and uppnport commands to down a port. The deroute and reroute behavior depends on whether the downed port is a UNI or NNI as follows:

SPVC Call Blocking

Use the NMS or the cnfpnportcc command to enable or disable the SPVC Call Blocking option. If SPVC Call Blocking is enabled on a port, the port accepts no new SPVC provisioning requests. SPVC calls that are already added/established are not affected.

Connection Trace

Use the NMS or the conntrace command to trace the established path for an SPVC.

Connectivity Verification

Use the NMS or the tstdelay or tstconseg commands to request a continuity test between two endpoints of an SPVC.

Route Optimization

In the PNNI network, SPVC connections are established over the best path available at the time the connections are routed. Upon a network failure, SPVC connections are re-routed to an alternate path; this newly selected path may not be the optimal path for the connection. When the network failure is recovered, the SPVC connections can be re-routed to optimize network usage. Route optimization is a background SPVC management option; once enabled, it tries to find a better path for the SPVCs specified by the user. If a better path is found, the SPVC is released from its current path and re-routed to the better path. A better path is one whose administrative weight is less than that of the current path by a percentage specified by the user.
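The better-path test described above can be sketched as (names and the boundary behavior are illustrative):

```python
def is_better_path(current_aw, candidate_aw, threshold_pct):
    """A candidate path qualifies as 'better' only if its cumulative
    administrative weight is below the current path's weight by at
    least threshold_pct percent (the percentage the user configures)."""
    return candidate_aw <= current_aw * (1 - threshold_pct / 100.0)

# With a 10% threshold: AW 89 beats a current AW of 100; AW 95 does not.
```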

Route Optimization Commands

Use the following commands to optimize the paths of SPVC connections:
cnfrteopt

Enable/disable route optimization on a port.

optrte

Kick off route optimization immediately on an SPVC, a range of SPVCs, or all SPVCs on a port.

cnfrteoptthld

Specify the percentage reduction in the administrative weight of the existing path required to trigger route optimization.

dsprteoptcnf

Display the route optimization configuration for a specific port or all ports.

dsprteoptstat

Display the optimization status for a specific port or all ports.

The user can query the route optimization status while it is in progress.


Note   The route optimization commands can only be executed at the command line interface on the controller.

SVC/SPVC Co-Existence

Currently, SVC and SPVC connections share the same pool of VPI/VCI resources on a port. Therefore, the VPI/VCI requested for an SPVC may already be used by an SVC. If this happens, the SPVC provisioning request is rejected by the service module or by the PNNI controller, depending on which module detects the collision. To avoid this problem, check the following:
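A sketch of the collision check on the shared VPI/VCI pool (the in-use set here is illustrative; in practice the resource state is split between the service module and the controller):

```python
def reserve_vpi_vci(in_use, vpi, vci):
    """Reject the SPVC provisioning request if the VPI/VCI pair is
    already held by an SVC (or another SPVC) on the port."""
    if (vpi, vci) in in_use:
        raise ValueError("Specified vpi/vci not available")  # cf. error code 43
    in_use.add((vpi, vci))
    return (vpi, vci)

port_in_use = {(0, 100)}              # e.g. an SVC already holds 0/100
reserve_vpi_vci(port_in_use, 0, 101)  # a free pair succeeds
```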

Event Logging

Operation and exception events dealing with SPVCs are logged to the event log files. Each event, along with the time it occurred, is entered into the log.

Considering the performance impact of event logging and the potential for large numbers of SPVC-related events occurring at one time, a CLI command is provided to enable or disable the logging of SPVC routing and status events.

SPVC Redundancy

Persistent Endpoints

The PNNI controller maintains the persistency of the routing parameters of the SPVC endpoint. The endpoint-specific configuration is maintained in the endpoint database of the SPVC Manager (SPVCM).

Connection Configuration/Route

The routing parameters and endpoint database of the SPVC connection are stored redundantly in non-volatile storage as well as on the standby processor. The platform software provides the interface for storing the redundant data.

The path information of the SPVC connection is transient, and is removed from the call database once the call becomes active.

Rebuild

The controller re-establishes (re-routes) connections after a processor reset.

Switchover

A switchover of the processor does not disrupt the connectivity or state of the SPVCs. The status of the endpoints is re-synchronized between the controller and the service modules, and the state of the SPVC connections is re-synchronized with the neighbor (PNNI trunk) interfaces.

Connection Alarm Management

Connection alarm management provides the interface between the PVC aspects of an SPVC and the SVC aspects of an SPVC. Connection state must be reported across this interface and signalled across the network between the SPVC endpoints in both directions as shown in Figure 5-3. Connection status is reported to the CPE via the ILMI/LMI signalling or by using OAM flows. ILMI/LMI agents and OAM agents are resident on the line cards.


Figure 5-3: SPVC Status Reporting

An SPVC connection's status can change for a number of reasons, as stated below:

    1. A physical layer Interface failure occurs at the edge of the network. All SPVCs going through that interface are deemed failed.

    2. An ILMI or LMI failure occurs at the edge of the network (for example, a port communication failure). All SPVCs going through that interface are deemed failed.

    3. A physical layer interface failure occurs within the network (for example, a trunk failure). As long as no other route can be found for an SPVC flowing through that trunk interface, that SPVC is deemed failed.

    4. A trunk communication failure occurs within the network. As long as no other route can be found for an SPVC flowing through that trunk interface, that SPVC is deemed failed.

    5. A failure occurs in an external network segment and AIS cells flow into the network on an SPVC. On detecting AIS on that SPVC at the port-endpoint, that SPVC is deemed failed.

    6. A failure occurs in an external network segment, connected via a feeder trunk and AIS flow into the network on an SPVC. On detecting AIS/Abit on that SPVC at the port-endpoint, that SPVC is deemed failed.

    7. When a port card is pulled out, all SPVCs going through that port card are deemed failed.

For failure cases 1 and 2, AIS cells must be generated on all affected SPVCs into the network from the point of failure; this is done by the service module.

The SM in the BPX does the following (Figure 5-4 illustrates this case).


Figure 5-4: Interface Failure at Network Edge and AIS Generation Into the Network


For failure cases 3 and 4, at the moment of failure all the affected SPVCs go into the de-routed state, and as long as alternate routes are unavailable, the port-endpoints are made to generate AIS out the ports at the edge of the network. Four steps are involved, as illustrated in Figure 5-5. Note that entities on either side of the trunk detect the failure and execute the following:

    1. The SM detects a physical layer failure, or the controller detects a trunk communication failure.

    2. If a physical layer failure is detected, the SM reports the interface failure to the controller via an interface trap.

    3. The controller de-routes the SPVCs going through the failing trunk and programs the SMs at the network-end nodes to generate AIS out the port, toward the CPE/external segment, on all affected SPVCs.

    4. The CPE generates RDI in response, which flows through the network in the unaffected direction.

If re-routing is possible for any SPVCs, the controllers re-route those SPVCs, which automatically stops the AIS flows (and consequently the RDI flows) on those SPVCs.


Figure 5-5: Interface Failure at a Trunk and AIS Generation Prior to Re-route

For failure case 5, if there is a feeder trunk or a CPE running LMI/ILMI at the edge of the network, SPVC failures (and their clearing) on the network side must be reported to that equipment using LMI STATUS_UPDATE messages or ILMI connection status traps. SPVC failures from the network side, and their clearing, are detected based on the presence or absence of AIS cells flowing through the SPVC toward the CPE/feeder. The AIS state of an SPVC is detected at the ingress of the trunk-endpoint of the SM in the BPX. The controller learns of the SPVC failures through bulk connection state traps from the SM in the BXM card that hosts the trunk-endpoint, as shown in Figure 5-6.

The AIS state changes and SPVC status updates to the CPE/remote segment are done in three steps, as shown in Figure 5-6:

    1. The trunk-endpoint detects the AIS set/clear condition on SPVCs and sends a bulk conn trap indicating the state changes to the SES controller.

    2. The controller sends the SPVC status change, via a bulk conn state set message, to the SMs that host the port-endpoints of the SPVCs whose state has changed. A single bulk conn state set message can include the state changes for multiple SPVCs.

    3. The SMs hosting the port-endpoints generate LMI STATUS_UPDATE messages or ILMI connection state traps toward the remote segment/CPE.
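Step 2 above can be sketched as grouping state changes per hosting service module so that one bulk message carries many SPVC updates (the message shape is illustrative):

```python
from collections import defaultdict

def build_bulk_conn_state_sets(state_changes):
    """Group (hosting_sm, spvc_id, new_state) tuples into one bulk
    conn state set message per service module."""
    by_sm = defaultdict(list)
    for sm, spvc_id, new_state in state_changes:
        by_sm[sm].append((spvc_id, new_state))
    return dict(by_sm)

# Two changes for SM-3 travel in a single bulk message.
msgs = build_bulk_conn_state_sets([
    ("SM-3", "spvc-1", "FAIL"),
    ("SM-3", "spvc-2", "FAIL"),
    ("SM-7", "spvc-9", "ACTIVE"),
])
```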


Figure 5-6: Detection of AIS and SPVC Status Update to CPE/Remote Segment


For failure case 6, when an SPVC changes state (FAIL to ACTIVE or ACTIVE to FAIL) in an external segment connected via a feeder trunk (or an interface running LMI/ILMI), the change is indicated through A-bit changes in the LMI STATUS_UPDATE messages (or ILMI connection state traps in the case of ILMI). The interface at the edge of the network must transport the remote-segment SPVC state changes to the other end. This can be implemented by injecting AIS cells into the network on the SPVC experiencing the A-bit failure. At the other edge of the network, the AIS cells are detected and the corresponding A-bit status is generated. When the A-bit failure clears at the local end, the AIS generation stops and the remote end indicates A-bit clear status. This mechanism is illustrated in Figure 5-7.

Figure 5-7: Transporting Remote Segment Failure Through AIS in the Network

For failure case 7, AIS cells must be generated on all affected SPVCs into the network from the point of failure; this is done by the SM and the controller as follows:

    1. The SM generates an interface trap indicating the failure to the SES controller.

    2. The controller sends the SPVC status change, via a bulk conn state set message, to the SMs that host the trunk-endpoints of the SPVCs whose state has changed. A single bulk conn state set message can include the state changes for multiple SPVCs.

Figure 5-8 illustrates this case.


Figure 5-8: Port Card Failure at the Network Edge and AIS Generation Into the Network


Reporting Mechanism

There are two kinds of connection traps:

Connection traps are reported when:

When an alarm summary count trap is sent to the NMS, the NMS queries for a list of connections in alarm and their exact status.

Table 5-3 describes supported connection traps.
Table 5-3: Connection Traps

TRAP_CHAN_ADDED (60301)
Generated when a new connection is added.
Severity: Information
Information provided: (a) ifIndex; (b) VPI and VCI, if applicable; (c) upload configuration counter

TRAP_CHAN_DELETED (60302)
Generated when a connection is deleted.
Severity: Information
Information provided: (a) ifIndex; (b) VPI (and VCI, if applicable); (c) upload configuration counter

TRAP_CHAN_ACTIVE (60303)
Generated when a connection comes out of an alarm condition.
Severity: Information
Information provided: (a) ifIndex; (b) VPI (and VCI, if applicable); (c) upload configuration counter

TRAP_CHAN_MODIFIED (60305)
Generated when a connection is modified.
Severity: Information
Information provided: (a) ifIndex; (b) VPI (and VCI, if applicable); (c) upload configuration counter

TRAP_CHAN_FAILED (60304)
Generated when a connection transitions from the cleared state to the failed state.
Severity: Information
Information provided: (a) ifIndex; (b) VPI (and VCI, if applicable); (c) alarm status

TRAP_CHAN_SUM_COUNT (60306)
Generated when the number of connection traps exceeds the maximum traps per reporting interval.
Severity: Information
Information provided: (a) summary connection alarm count

TRAP_CHAN_DOWNED (60307)
Generated when a connection is administratively downed.
Severity: Information
Information provided: (a) ifIndex; (b) VPI (and VCI, if applicable); (c) alarm status

Trap numbers 60301-60307 are reserved for BPX SPVC traps.

Alarm throttling

An alarm is generated when an interface or a connection fails. Connection alarms usually occur in bursts, and when they do, the system should not buckle under the instantaneous load. Hence, several measures are adopted to throttle alarm reporting:

    1. The first level of throttling occurs at the BXM slave. This throttling prevents flooding of the VSI interface, and is accomplished by reporting a bulk connection trap.

    2. The second level of throttling occurs at the SES. This throttling prevents flooding of the SNMP interface with alarm traps, and is accomplished by buffering the traps in a FIFO. The FIFO evens out bursts of trap-generation load in the system and spaces the traps over a period of time.

    3. The third level is an alarm hierarchy, in which connection alarms are suppressed when a port or line failure occurs. When an interface fails, only the interface failure alarm is reported to the NMS; no connection alarms are reported.
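The FIFO buffering and the alarm hierarchy can be sketched together (the structure is illustrative, not the actual SES implementation):

```python
from collections import deque

class TrapThrottle:
    """FIFO-buffer traps, and suppress connection alarms for interfaces
    that already have an interface-level failure reported."""
    def __init__(self):
        self.fifo = deque()
        self.failed_ifs = set()

    def report(self, kind, if_index, detail=None):
        if kind == "interface_failure":
            self.failed_ifs.add(if_index)
            self.fifo.append((kind, if_index, detail))
        elif kind == "connection_alarm" and if_index in self.failed_ifs:
            return  # suppressed: the interface alarm already covers it
        else:
            self.fifo.append((kind, if_index, detail))

    def drain_one(self):
        """Send one buffered trap at a time to even out bursts."""
        return self.fifo.popleft() if self.fifo else None

t = TrapThrottle()
t.report("interface_failure", 5)
t.report("connection_alarm", 5, "spvc-1")   # suppressed by hierarchy
t.report("connection_alarm", 6, "spvc-2")   # queued normally
```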

SPVC Functionality on a Feeder Trunk

SPVC connections on a feeder trunk are supported in SES Release 1.0.10. The feeder nodes that support SPVC connections on a feeder trunk are the MGX8850 R1 and the MGX8220. An example of an SPVC connection on a feeder trunk is shown in Figure 5-9.


Figure 5-9: SPVC Feeder Connection


The SPVC feeder connection can be provisioned via Cisco WAN Manager (CWM) or via the command line interface (CLI). With the SPVC feeder supported on BPX/SES, the BPX can support AutoRoute PVC connections and PNNI SPVC connections on the same feeder trunk. The three-segment PVC-SPVC-PVC connection is transparent to end users and is managed as a single connection.

To provision an SPVC feeder connection with the CLI, a user must provision three connection segments and manually map the PVC vpi/vci to the SPVC vpi/vci at the segment junctions. Note that the same method was used for AutoRoute PVC feeder connections. When CWM is used to provision an SPVC feeder connection, the CWM connection manager sets up an end-to-end connection, so the user does not have to set up multiple connection segments and match the vpi/vci at every segment.

Provisioning SPVC connections on a feeder requires partitioning the feeder trunk for PNNI and AutoRoute. Note that all of the ports on the feeder shelf can simultaneously support both PVC and SPVC feeder connection terminations, because the ports/cards on the feeder shelf know whether they are connected to AutoRoute PVCs or PNNI SPVCs on the feeder trunk. Therefore, no partitioning of the feeder ports is necessary.

SPVC Feeder Connection Alarm Management

In a PNNI network, when an SPVC fails due to a trunk failure or a CPE endpoint failure, an AIS alarm message is sent to both SPVC CPE endpoints. A similar mechanism, called the A-bit alarm message, is used by AutoRoute to carry PVC failure alarms. To support end-to-end alarm messages on an SPVC feeder connection, LMI/Annex G must be enabled on the BXM hosting the feeder trunk. This is needed for the BXM to perform AIS and A-bit conversion for both AutoRoute PVC and SPVC connections terminating on the same feeder trunk.

SPVC Feeder Connection OAM Functionality

Both PNNI SPVC and AutoRoute PVC support using OAM loopback to test connection data-path integrity. tstdelay is used for OAM loopback toward the network, and tstconseg is used for OAM loopback toward the CPE. To support end-to-end OAM loopback on an SPVC feeder connection, the SPVC end point on the feeder trunk must be programmed as a "no segment end point" so that OAM cells are not terminated on the feeder trunk and continue on to the other segments of the connection.

SPVC Stats Collection

The collected SPVC statistics allow you to properly engineer and overbook the network. SPVC statistics also provide accounting functionality for the SPVC feature. Figure 5-10 provides an architectural overview of the BPX SPVC statistics system.


Figure 5-10: BPX/SES SPVC Stats Collection Architecture


The stats collection operates as follows:

    1. The CCB configures the defined statistics levels (0-3) on BXM via CLI.

    2. At each bucket interval (5 minutes), the BXM Stat File Manager collects all the required statistics on a per-SPVC-connection basis. These are aggregated within a file interval. At the end of the file interval (15 minutes), after all the statistics are aggregated, the BXM informs the BPX Stat Upload Manager in the PXM, via VSI passthrough, of the file-interval creation.

    3. The PXM sends a file creation trap to NMS to relay the VSI passthrough.

    4. The NMS uploads the aggregated stat file using FTP via the LAN port on the PXM. The BPX Remote File Receiver in the PXM performs an FTP relay function from the BXM to the NMS. The stat file upload from the BXM to the NMS via the PXM goes through a dedicated VC in the SES uplink.
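Step 2 above can be sketched as summing three 5-minute buckets into one 15-minute file-interval record per connection (the record shape is illustrative):

```python
def aggregate_file_interval(buckets):
    """Sum per-connection counters across 5-minute buckets to produce
    one 15-minute file-interval record (3 buckets per file interval)."""
    assert len(buckets) == 3, "a file interval spans three 5-minute buckets"
    record = {}
    for bucket in buckets:
        for conn_id, cells in bucket.items():
            record[conn_id] = record.get(conn_id, 0) + cells
    return record

# Three bucket samples for two SPVCs (cell counts are made up).
rec = aggregate_file_interval([
    {"spvc-1": 1000, "spvc-2": 50},
    {"spvc-1": 1200},
    {"spvc-1": 900, "spvc-2": 75},
])
```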


Posted: Thu Sep 28 15:18:30 PDT 2000
Copyright © 1989-2000 Cisco Systems, Inc.