ATM Soft Permanent Virtual Circuits (SPVCs), as implemented by the SES node, are described in the following topics:
Note Prior to adding an ATM SPVC to the network, ATM signaling must be configured. Refer to Chapter 3, "ATM Signaling and Switched Virtual Circuits".
SPVC connection provisioning can be done through an external network management system (NMS), such as Cisco WAN Manager, or through the command line interface. Only two-ended provisioning is supported in this release. When provisioning is done on an NMS, an SNMP set request is sent to both SPVC endpoints terminating at the CPE ports.
SPVC provisioning includes:
Figure 5-1 shows an example of end-to-end SPVC connection provisioning and establishment.

The end-points are reserved on the service modules prior to routing. The connection request is forwarded to the PNNI Controller. The SPVC is routed from the master endpoint to the slave endpoint.
The provisioning of a multiservice SPVC on a switch and controller is performed as follows:
1. NMS sends an SNMP Set to the PNNI Node to add a slave SPVC end-point.
2. PNNI controller sends the SPVC provisioning request to the SPVC Manager.
3. SPVC Manager allocates the PVC leg of the SPVC cross-connect, and adds it to the connection manager on the PNNI controller.
4. PNNI controller stores the SPVC connection in the SPVC connection database and updates the standby controller.
5. SPVC Manager sends an ack/nak response to the NMS via an SNMP set response.
6. NMS receives `y' (the destination AESA address and VCI2) from the slave endpoint, and sends an SNMP set to add a master SPVC endpoint with `y' and `p', the routing parameters (Figure 5-1).
7. At the master end-point, the SPVC Manager initiates a call setup to the destination node.
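The seven-step provisioning sequence above can be sketched in Python. This is a hypothetical model for illustration only; the class and method names are not actual SES software interfaces.

```python
# Hypothetical sketch of two-ended SPVC provisioning: the slave endpoint is
# added first and returns a routing reference, which the NMS then supplies
# when adding the master endpoint. Names are illustrative, not SES APIs.

class SpvcManager:
    """Models the SPVC Manager on a PNNI controller node."""

    def __init__(self):
        self.connections = {}   # SPVC connection database
        self.next_vci = 100     # illustrative VCI allocator

    def add_slave_endpoint(self, aesa):
        # Steps 2-5: allocate the PVC leg and store it in the database.
        vci = self.next_vci
        self.next_vci += 1
        ref = (aesa, 0, vci)    # routing reference: (AESA, VPI, VCI)
        self.connections[ref] = {"role": "slave", "state": "provisioned"}
        return ref              # returned to the NMS in the SNMP set response

    def add_master_endpoint(self, aesa, slave_ref):
        # Steps 6-7: the master is provisioned with the slave's reference
        # and immediately initiates an SVC call setup toward it.
        ref = (aesa, 0, self.next_vci)
        self.next_vci += 1
        self.connections[ref] = {"role": "master", "remote": slave_ref,
                                 "state": "setup-initiated"}
        return ref

# Usage: the NMS provisions the slave first, then the master with the
# slave's routing reference (addresses below are placeholders).
mgr = SpvcManager()
slave = mgr.add_slave_endpoint("47.0091.8100.0000.0000.0000.0002")
master = mgr.add_master_endpoint("47.0091.8100.0000.0000.0000.0001", slave)
```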
For a more detailed description of provisioning a multiservice SPVC on a switch and controller, see Chapter 10, "Configuring ATM SVCs, PNNI Routing, and SPVCs".
Table 5-1 describes the SPVC features supported for BPX-SES.
| Feature | Description |
|---|---|
| SPVC Connection | |
| Service Modules Supported | |
| Connection Type Per Interface | |
| SPVC provisioning | |
| SPVC routing | |
| Endpoint Management | |
| Operation | |
| Reliability, Availability, and Serviceability | |
| Capacity | |
The following sections discuss the provisioning of SPVC Endpoint Addresses and default traffic parameter templates for service categories.
The SPVC endpoint routing references (referred to as x and y in Figure 5-2), returned by the connection provisioning modules after provisioning, are expressed as an ATM AESA address plus VPI and VCI.

The x and y SPVC endpoints are expressed as (NSAP address + VPI + VCI), as shown in Figure 5-2 above. They are assigned by the connection provisioning module for cross-reference with the endpoints (ep1, ep2).
The default SPVC prefix is set to be the default PNNI node prefix. The SPVC prefix is initially set to 47.0091.81000000, but can be changed with the cnfspvcprfx command.
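As a sketch, the endpoint routing reference described above can be modeled as an (AESA address, VPI, VCI) triple, with the address built from the configurable SPVC prefix. The function name and address formatting are illustrative assumptions; a real AESA is a 20-byte NSAP address, not a dotted string.

```python
# Illustrative model only; this is not how the SES software represents
# addresses internally.

DEFAULT_SPVC_PREFIX = "47.0091.81000000"   # changeable with cnfspvcprfx

def make_endpoint_ref(vpi, vci, suffix, prefix=DEFAULT_SPVC_PREFIX):
    """Return the (AESA address, VPI, VCI) routing reference for an endpoint."""
    address = f"{prefix}.{suffix}"          # prefix + node-specific part
    return (address, vpi, vci)

ref = make_endpoint_ref(0, 100, "0000.0000.0000.01")
```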
The SPVC commands do not contain all the fields needed for the ABR service category. The PNNI controller provides templates with default values for all service categories on every interface. The user can modify these values as needed with the cnfcon command, and the values are used for PNNI routing.
Some parameters that PNNI 1.0 signaling requires, such as CDVT and MBS, are not provided by the networking parameters; these are configured on the PNNI controller per interface. Refer to the SPVC CLI for setting default templates.
| ABR Configurable Parameter | Description | Valid Range | Default Value |
|---|---|---|---|
| icr | Initial Cell Rate | 0 to PCR | PCR |
| rif | Rate Increase Factor | 1/32768 to 1 | 7 |
| rdf | Rate Decrease Factor | 1/32768 to 1 | 4 |
| tbe | Transient Buffer Exposure | 0 to 16777215 cells | 1048320 |
| nrm | Maximum number of cells per forward RM-cell | 2 to 256 cells | 5 |
| trm | Maximum time between forward RM-cells | 100(2^-7) to 100(2^0) msec | 8 |
| adtf | ACR Decrease Time Factor | 1 to 1023 sec | 50 |
| cdf | Cutoff Decrease Factor | 1/64 to 1 | 7 |
| fsd | Fixed Source Delay | 0 to 167.77215 sec | 0 |
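The template-with-overrides behavior can be sketched as follows. The dictionary keys and default values mirror the table above, but the merge function itself is an illustrative assumption, not the controller's actual implementation (note that rif, rdf, trm, and cdf defaults are stored encodings, reproduced here as given).

```python
# Per-interface default ABR template, using the defaults from the table above.
ABR_DEFAULTS = {"icr": "PCR", "rif": 7, "rdf": 4, "tbe": 1048320,
                "nrm": 5, "trm": 8, "adtf": 50, "cdf": 7, "fsd": 0}

def build_abr_params(overrides=None):
    """Merge user-supplied overrides (as with cnfcon) onto the interface
    defaults; the merged values are what PNNI routing would then use."""
    params = dict(ABR_DEFAULTS)      # start from the default template
    params.update(overrides or {})   # apply any per-connection overrides
    return params
```

For example, overriding only adtf leaves every other parameter at its template default.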
The following sections describe SPVC provisioning and operation:
Use NMS or the command line interface to add and delete an SPVC.
To add an SPVC, perform the following steps:
Step 1 Use the addcon command to provision the slave endpoint on the SPVC terminating service module.
Step 2 Use the addcon command to provision the master endpoint on the SPVC initiating service module.
Once the master endpoint is added, an SVC call setup establishes the SPVC.
To delete an SPVC, follow these steps:
Step 1 Use the delcon command to delete the master endpoint of the SPVC.
Step 2 Use the delcon command to delete the slave endpoint of the SPVC.
Note If you delete the slave endpoint before deleting the master endpoint, the master endpoint will try to re-establish the connection until the master endpoint is also removed from the service module.
Note For a DAX SPVC connection, deleting the master endpoint also deletes the slave endpoint. An SPVC can be deleted even while the SPVC is in operation mode.
Use the NMS or the dncon and upcon commands to down or up connections.
When an SPVC is down, the SVC portion of the connection is released. The connection remains down until it is upped again by the user.
When an up request is received for a downed connection, the master endpoint will attempt to re-establish the connection.
The up and down requests are only applicable to master endpoints. The request is rejected if issued to a slave endpoint.
Use the NMS or the cnfcon command to modify connection parameters of an SPVC. The SPVC connection is released and re-established. Depending on bandwidth availability, no new path may be found, or the newly established path may differ from the original path.
The PNNI controller generates the following Provisioning Response error strings for SPVC connection addition failure:
| Error Message | Error Code |
|---|---|
| Local interface not existent or operational | 41 |
| Remote interface not existent or operational | 42 |
| Specified vpi/vci not available | 43 |
| Remote address is required | 44 |
| Fail to allocate endpoint, out of memory | 45 |
| Fail to add connection, out of memory | 46 |
| Error in traffic parameters | 47 |
| Slave endpoint does not exist for this daxcon | 48 |
| Slave endpoint not available for this daxcon | 49 |
| Endpoint does not exist | 50 |
| Not a persistent endpoint | 51 |
| Could not delete endpoint | 52 |
| Could not modify endpoint | 53 |
| Could not admin up endpoint | 54 |
| Could not admin down endpoint | 55 |
| Could not reroute connection | 56 |
| Operation not applicable to this endpoint | 57 |
| Connection is already in UP state | 58 |
| Connection is already in DOWN state | 59 |
| Mismatch in parameters with slave endpoint | 60 |
| Traffic parameters cannot be modified at slave endpoint of daxcon | 61 |
| Network busy, try later | 62 |
| Reroute not applicable to daxcon | 63 |
| Interface is operationally down | 64 |
| SPVC is not allowed on this partition | 65 |
| SPVC Call Blocking is enabled on this interface | 66 |
| SPVC is not allowed on this interface | 67 |
| Delete master end before deleting slave end for dax spvc connection | 68 |
| Connection doesn't exist to delete | 69 |
| Port does not support requested serviceType | 70 |
| lscr not allowed to exceed lpcr | 71 |
| rscr not allowed to exceed rpcr | 72 |
| lpcr must be defined for cbr serviceType | 73 |
| rpcr must be defined for cbr serviceType | 74 |
| lpcr and lscr must be defined for vbr serviceType | 75 |
| rpcr and rscr must be defined for vbr serviceType | 76 |
| lpcr must be defined for abr/ubr serviceType | 77 |
| rpcr must be defined for abr/ubr serviceType | 78 |
| Requested rcdv is too low | 79 |
| Requested rctd is too low | 80 |
| Requested max cell loss ratio (clr) is too high | 81 |
| Requested cell rate (lscr/lpcr) is too high | 82 |
| Requested cell rate (rscr/rpcr) is too high | 83 |
The master endpoint establishes the SPVC after it is provisioned. The first route attempt is immediate. If the first route attempt fails, subsequent retries are controlled by the Fast Retry Interval Base and the Slow Retry Interval. Retries are separated by the following interval until the computed fast retry interval exceeds the slow retry interval:
(Fast Retry Interval Base * (2 ^ (# of attempts - 1)))
Succeeding retries then occur at every slow retry interval. There is no limit on the number of retries; connections that are not established are retried until they are successfully routed. This retry algorithm also applies to SPVCs that are released due to a network failure; such SPVCs are re-established following the same retry policy.
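The retry schedule can be computed directly from the formula above. In this sketch, the values 10 seconds (Fast Retry Interval Base) and 60 seconds (Slow Retry Interval) are hypothetical example settings, not documented defaults.

```python
def retry_intervals(fast_base, slow_interval, count):
    """Return the wait before each of the first `count` retries:
    fast_base * 2^(attempt-1), capped at slow_interval once the
    exponential fast interval exceeds the slow retry interval."""
    intervals = []
    attempt = 1
    while len(intervals) < count:
        fast = fast_base * (2 ** (attempt - 1))   # exponential fast retry
        intervals.append(min(fast, slow_interval))  # cap at slow interval
        attempt += 1
    return intervals

# Example (hypothetical settings): fast base 10 s, slow interval 60 s.
schedule = retry_intervals(10, 60, 5)
```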
To ensure fairness in connection routing, unrouted connections are selected in a round robin fashion. To control congestion, a throttling scheme will be used to manage routing. These values can be configured on PNNI as nodal parameters through the cnfnodalcongth command.
Use the NMS or the rrtcon command to manually reroute an SPVC. The SPVC connection will be re-routed to the best available path. The reroute request is only applicable to the master endpoints.
Note The request is rejected if it is issued to a slave endpoint.
Use the dnpnport and uppnport commands to down or up a port. The deroute and reroute behavior depends on whether the downed port is a UNI or an NNI, as follows:
Use the NMS or the cnfpnportcc command to enable or disable the SPVC Call Blocking option. If SPVC Call Blocking is enabled on a port, the port accepts no new SPVC provisioning requests. SPVC calls that are already added or established are not affected.
Use the NMS or the conntrace command to trace the established path for an SPVC.
Use the NMS or the tstdelay or tstconseg commands to request a continuity test between two endpoints of an SPVC.
In a PNNI network, SPVC connections are established over the best available path at the time they are routed. Upon a network failure, SPVC connections are re-routed to an alternate path; this newly selected path may not be optimal for the connection. When the network failure is repaired, the SPVC connections can be re-routed to optimize network usage. Route optimization is a background SPVC management option: once enabled, it tries to find a better path for the SPVCs specified by the user. If a better path is found, the SPVC is released from its current path and re-routed to the better path. A better path is one whose administrative weight is lower than that of the current path by a percentage specified by the user.
Use the following commands to optimize the paths of SPVC connections:
| cnfrteopt | Enable/disable route optimization on a port. |
|---|---|
| optrte | Kick off route optimization immediately on an SPVC, a range of SPVCs, or all SPVCs on a port. |
| cnfrteoptthld | Specify the percentage reduction in the administrative weight of the existing path required to trigger route optimization. |
| dsprteoptcnf | Display the route optimization configuration for a specific port or all ports. |
| dsprteoptstat | Display the route optimization status for a specific port or all ports. |
The user can query the route optimization status while it is in progress.
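The threshold test that decides whether a candidate path is "better" can be sketched as follows. The function name is illustrative; only the comparison rule (administrative weight lower by the cnfrteoptthld percentage) comes from the text above.

```python
def should_reroute(current_aw, candidate_aw, threshold_pct):
    """Return True if the candidate path's administrative weight is lower
    than the current path's by at least threshold_pct percent (the value
    configured with cnfrteoptthld), which triggers route optimization."""
    return candidate_aw < current_aw * (1 - threshold_pct / 100.0)
```

For instance, with a 20% threshold, a candidate path of weight 70 against a current path of weight 100 triggers a reroute, but a candidate of weight 85 does not.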
Note The route optimization commands can only be executed at the command line interface on the controller.
Currently, SVC and SPVC connections share the same pool of VPI resources on a port. Therefore, the VPI/VCI requested for an SPVC may already be in use by an SVC. If this happens, the SPVC provisioning request is rejected by the service module or by the PNNI controller, depending on which module detects the collision. To avoid this problem, check the following:
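The collision check itself amounts to a set-membership test against the VPI/VCI values already held on the port. This sketch is an assumption about the logic, not the actual module code; the error string and code 43 are taken from the provisioning-error table above.

```python
def check_vpi_vci_available(requested, in_use):
    """Reject an SPVC provisioning request whose (VPI, VCI) pair is already
    held by an SVC or another SPVC on the port, since both connection types
    draw from the same VPI resource pool."""
    if requested in in_use:
        return (False, "Specified vpi/vci not available")  # error code 43
    return (True, None)

# Usage: (0, 100) is already taken by an SVC on this port.
in_use = {(0, 100)}
ok, err = check_vpi_vci_available((0, 100), in_use)
```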
Operation and exception events dealing with SPVCs are logged into the event log files. The event as well as the time it took place are entered into the log.
Considering the performance impacts of event logging and the potential for large numbers of SPVC related events occurring at one time, a CLI command is supported to enable or disable the logging of SPVC routing and status events.
The PNNI controller maintains the persistence of the routing parameters of the SPVC endpoint. The endpoint-specific configuration is maintained in the endpoint database of the SPVC Manager (SPVCM).
The routing parameters and endpoint database of the SPVC connection are stored redundantly in non-volatile storage as well as on the standby processor. The platform software provides the interface for storing the redundant data.
The path information of the SPVC connection is transient, and is removed from the call database once the call becomes active.
The controller re-establishes (re-routes) connections after a processor reset.
A processor switchover does not disrupt the connectivity or state of the SPVCs. The status of the endpoints must be re-synchronized between the controller and the service modules, and the state of the SPVC connections is re-synchronized with the neighbor (PNNI trunk) interfaces.
Connection alarm management provides the interface between the PVC aspects of an SPVC and the SVC aspects of an SPVC. Connection state must be reported across this interface and signalled across the network between the SPVC endpoints in both directions as shown in Figure 5-3. Connection status is reported to the CPE via the ILMI/LMI signalling or by using OAM flows. ILMI/LMI agents and OAM agents are resident on the line cards.

An SPVC connection's status can change for a number of reasons:
1. A physical layer interface failure occurs at the edge of the network. All SPVCs going through that interface are deemed failed.
2. An ILMI or LMI failure occurs at the edge of the network (for example, a port communication failure). All SPVCs going through that interface are deemed failed.
3. A physical layer interface failure occurs within the network (for example, a trunk failure). As long as no other route can be found for an SPVC flowing through that trunk interface, that SPVC is deemed failed.
4. A trunk communication failure occurs within the network. As long as no other route can be found for an SPVC flowing through that trunk interface, that SPVC is deemed failed.
5. A failure occurs in an external network segment, and AIS cells flow into the network on an SPVC. On detecting AIS on that SPVC at the port-endpoint, that SPVC is deemed failed.
6. A failure occurs in an external network segment connected via a feeder trunk, and AIS flows into the network on an SPVC. On detecting AIS/A-bit on that SPVC at the port-endpoint, that SPVC is deemed failed.
7. When a port card is pulled out, all SPVCs going through that port card are deemed failed.
For failure cases 1 and 2, AIS cells must be generated on all affected SPVCs into the network from the point of failure; this is done by the service module.
The SM in BPX will do the following. (Figure 5-4 illustrates this case.)

For failure cases 3 and 4, at the moment of failure all affected SPVCs go into the derouted state; as long as alternate routes are unavailable, the port-endpoints generate AIS out the ports at the edge of the network. Four steps are involved, as illustrated in Figure 5-5. Note that entities on either side of the trunk detect the failure and execute the following:
1. The SM detects a physical layer failure, or the controller detects a trunk communication failure.
2. If a physical layer failure is detected, the SM reports the interface failure via an interface trap to the controller.
3. The controller de-routes the SPVCs going through the failing trunk and programs the SMs at the network-end nodes to generate AIS out the port toward the CPE/external segment on all affected SPVCs.
4. The CPE generates RDI in response, which flows through the network in the unaffected direction.
If re-routing is possible for any SPVCs, the controllers re-route them, which automatically stops the AIS flows (and consequently the RDI flows) on those SPVCs.
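The deroute-then-AIS behavior for failure cases 3 and 4 can be sketched as a small state model. The dictionaries and function names are illustrative assumptions; only the state transitions (deroute, AIS out the port, AIS cleared on successful reroute) come from the text above.

```python
def handle_trunk_failure(spvcs_on_trunk):
    """Failure cases 3 and 4: deroute every SPVC on the failed trunk and
    mark its port-endpoint as generating AIS toward the CPE."""
    events = []
    for spvc in spvcs_on_trunk:
        spvc["state"] = "derouted"
        spvc["ais_out_port"] = True          # SM generates AIS toward CPE
        events.append((spvc["id"], "AIS"))
    return events

def handle_reroute_success(spvc):
    """A successful reroute automatically stops the AIS flow (and hence
    the CPE's RDI response) on that SPVC."""
    spvc["state"] = "routed"
    spvc["ais_out_port"] = False

# Usage: two SPVCs ride the failed trunk; one later finds an alternate path.
spvcs = [{"id": 1}, {"id": 2}]
events = handle_trunk_failure(spvcs)
handle_reroute_success(spvcs[0])
```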

For failure case 5, if a feeder trunk or a CPE running LMI/ILMI is at the edge of the network, SPVC failures (and their clearing) from the network side must be reported to that equipment using LMI STATUS_UPDATE messages or ILMI connection status traps. SPVC failures from the network side, and their clearing, are detected based on the presence or absence of AIS cells flowing through the SPVC toward the CPE/feeder. The AIS state of an SPVC is detected at the ingress of the trunk-endpoint of the SM in the BPX. The controller learns of SPVC failures through bulk connection state traps from the SM in the BXM card that hosts the trunk-endpoint, as shown in Figure 5-6.
The AIS state changes and SPVC status updates to the CPE/remote segment are done in three steps, as shown in Figure 5-6:
1. The trunk-endpoint detects an AIS set/clear condition on SPVCs and sends a bulk connection trap indicating the state changes to the SES controller.
2. The controller sends the SPVC status change, via a bulk connection state set message, to the SMs that host the port-endpoints of the SPVCs whose state changed. A single bulk connection state set message can include the state changes for multiple SPVCs.
3. The SMs hosting the port-endpoints generate LMI STATUS_UPDATE messages or ILMI connection state traps toward the remote segment/CPE.


For failure case 7, AIS cells must be generated on all affected SPVCs into the network from the point of failure; this is done by the SM and controller as follows:
1. The SM generates an interface trap indicating the failure to the SES controller.
2. The controller sends the SPVC status change, via a bulk connection state set message, to the SMs that host the trunk-endpoints of the SPVCs whose state changed. A single bulk connection state set message can include the state changes for multiple SPVCs.
Figure 5-8 illustrates this case.

There are two kinds of connection traps:
Connection traps are reported when:
Table 5-3 describes supported connection traps.
| TRAP Literal1 | Trap Description | Severity | Information Provided |
|---|---|---|---|
| TRAP_CHAN_ADDED | Generated when a new connection is added | Information | (a) ifIndex (b) VPI (and VCI, if applicable) (c) Upload configuration counter |
| TRAP_CHAN_DELETED | Generated when a connection is deleted | Information | (a) ifIndex (b) VPI (and VCI, if applicable) (c) Upload configuration counter |
| TRAP_CHAN_ACTIVE 60302 | Generated when a connection comes out of an alarm condition | Information | (a) ifIndex (b) VPI (and VCI, if applicable) (c) Upload configuration counter |
| TRAP_CHAN_MODIFIED | Generated when a connection is modified | Information | (a) ifIndex (b) VPI (and VCI, if applicable) (c) Upload configuration counter |
| TRAP_CHAN_FAILED 60304 | Generated when a connection transitions from the cleared state to the failed state | Information | (a) ifIndex (b) VPI (and VCI, if applicable) (c) Alarm Status |
| TRAP_CHAN_SUM_COUNT 60306 | Generated when the number of connection traps exceeds the maximum traps per reporting interval | Information | (a) Summary connection alarm count |
| TRAP_CHAN_DOWNED 60307 | Generated when a connection is administratively downed | Information | (a) ifIndex (b) VPI (and VCI, if applicable) (c) Alarm Status |

1. Trap numbers 60301-60307 are reserved for BPX SPVC traps.
An alarm is generated when an interface or a connection fails. Connection alarms usually occur in bursts, and the system must not buckle under the instantaneous load. Several measures are therefore adopted to throttle alarm reporting:
1. The first level of throttling occurs at the BXM slave. This prevents flooding of the VSI interface and is accomplished by reporting a bulk connection trap.
2. The second level of throttling occurs at the SES. This prevents flooding of the SNMP interface with alarm traps and is accomplished by buffering the traps in a FIFO, which evens out bursts of trap generation and spaces them over a period of time.
3. An alarm hierarchy suppresses connection alarms when a port or line failure occurs: when an interface fails, only the interface failure alarm is reported to the NMS, and no connection alarms are reported.
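The FIFO buffering and alarm-hierarchy suppression can be sketched together as follows. This is a minimal model under stated assumptions; the class, its limit parameter, and the drain behavior are illustrative, not the SES implementation.

```python
from collections import deque

class TrapThrottle:
    """Sketch of the throttling measures above: buffer connection traps in
    a FIFO to smooth bursts, and suppress connection alarms entirely while
    the parent interface is already in failure (alarm hierarchy)."""

    def __init__(self, max_per_interval):
        self.fifo = deque()
        self.max_per_interval = max_per_interval
        self.failed_interfaces = set()

    def report(self, if_index, conn_id):
        if if_index in self.failed_interfaces:
            return                     # interface alarm already covers this
        self.fifo.append((if_index, conn_id))

    def drain_interval(self):
        """Emit at most max_per_interval traps this reporting interval;
        the rest stay queued (a summary count, as with TRAP_CHAN_SUM_COUNT,
        could be reported for the excess)."""
        n = min(self.max_per_interval, len(self.fifo))
        emitted = [self.fifo.popleft() for _ in range(n)]
        return emitted, len(self.fifo)

# Usage: interface 7 is already failed, so its connection alarm is suppressed.
t = TrapThrottle(max_per_interval=2)
t.failed_interfaces.add(7)
t.report(7, 1)
t.report(3, 2); t.report(3, 3); t.report(3, 4)
emitted, remaining = t.drain_interval()
```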
The collected SPVC statistics allow you to properly engineer and overbook the network. SPVC statistics also provide accounting functionality for the SPVC feature. The following figure provides the architectural overview of the PXM BPX SPVC Statistics system.

The stats collection operates as follows:
1. The CCB configures the defined statistics levels (0-3) on the BXM via the CLI.
2. At each bucket interval (5 minutes), the BXM Stat File Manager collects all required statistics on a per-SPVC-connection basis. These are aggregated within a file interval. At the end of the interval (15 minutes), after all statistics are aggregated, the BXM informs the BPX Stat Upload Manager in the PXM, via VSI passthrough, of the file interval creation.
3. The PXM sends a file creation trap to the NMS to relay the VSI passthrough notification.
4. The NMS uploads the aggregated stat file using FTP via the LAN port on the PXM. The BPX Remote File Receiver in the PXM performs an FTP relay function from the BXM to the NMS. The stat file upload from the BXM to the NMS via the PXM goes through a dedicated VC in the SES uplink.
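The bucket-to-file aggregation in step 2 can be sketched as follows. Only the interval sizes (5-minute buckets, 15-minute file intervals) come from the text; the per-connection counter layout is an illustrative assumption.

```python
BUCKET_INTERVAL_MIN = 5    # per-bucket collection interval (step 2)
FILE_INTERVAL_MIN = 15     # file (aggregation) interval (step 2)

def aggregate_file_interval(buckets):
    """Aggregate per-SPVC statistics buckets into one file interval:
    three 5-minute buckets make one 15-minute file interval."""
    assert len(buckets) == FILE_INTERVAL_MIN // BUCKET_INTERVAL_MIN
    totals = {}
    for bucket in buckets:                      # one dict per 5-min bucket
        for conn_id, cells in bucket.items():   # hypothetical cell counters
            totals[conn_id] = totals.get(conn_id, 0) + cells
    return totals

# Usage: three consecutive buckets for two SPVC connections.
b1 = {"spvc1": 10, "spvc2": 5}
b2 = {"spvc1": 7}
b3 = {"spvc2": 1}
file_stats = aggregate_file_interval([b1, b2, b3])
```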
Posted: Thu Jul 27 19:56:19 PDT 2000
Copyright © 1989-2000 Cisco Systems, Inc.