Cisco multiservice management tools are standards-based and fully compatible with multivendor network environments and existing management systems. They are delivered in a layered, modular framework, with open interfaces at each layer.
Cisco's management framework is consistent with the Telecommunication Management Network (TMN) model developed by the ITU. Like TMN, Cisco uses a five-layer model that defines both the logical division and the communication between areas of the service provider's business operations and management processes. Consistent with this architecture, Cisco has developed a suite of service, network, and element management solutions.
Cisco service management solutions integrate with network and element management solutions to enable continuous management and control of the entire network or individual elements, such as hubs, routers, switches, probes, and data collection devices.
Cisco's enhanced TMN-based management architecture allows service providers to readily support standards as they are defined and adopted. The architecture uses the protocol best suited to the application's needs. For example, Simple Network Management Protocol (SNMP) represents and performs operations on management objects; Trivial File Transfer Protocol (TFTP) transfers large volumes of data; and Telnet provides direct access to and control of network elements via the command-line interface.
At the element layer, equipment and device management functions are performed by CiscoView, a GUI-based application that provides dynamic status, statistics, and comprehensive configuration information for Cisco internetworking products (switches, routers, concentrators, and adapters). CiscoView provides the following core functions:
Configuration on the PXM will be done through an SNMP manager or CLI interface. All configuration information will be kept in a Management Information Base (MIB). SNMP will be used between an external management system and the platform to retrieve configuration data, provision logical and physical interfaces, distribute alarm information, gather real-time counters, and invoke diagnostic functions.
There are multiple network controller software modules (i.e., MPLS, PNNI) that communicate with the platform software via the Virtual Switch Interface (VSI) protocol to handle their respective network topology, routing, and signaling. Every connection added draws from the edge concentrator's pool of limited resources (i.e., total number of connections, bandwidth, or connection IDs). The resources that these controllers vie for include:
In parallel with this concept, two types of resource partitioning can be performed: card resource partitioning and port resource partitioning.
When a card is first brought up, the card partition consists of each controller sharing the maximum number of connections for the card. These values are enforced as the maximum number of connections against the card for that particular controller. These values are also inherited by the port resource partition when a port is created.
When a port is added, the port partition contains the connection identifier, bandwidth, and number of connections space per controller. By default, the port resources are again fully shared among the controllers and the connection space values are inherited from the card partition. The values specified in the port partition are advertised to the controllers and they are bound to these limits when adding connections for that port.
Customers who have multiple controller types should perform both card and port resource partitioning. Card resource partitioning must be performed first. Port resource partitioning is then performed to divide the port resources and, if desired, to further subdivide the card resources at the port level.
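The inheritance and enforcement behavior described above can be sketched in a short model. This is an illustrative abstraction only: the class names, controller labels, and limit values are hypothetical, not actual MGX 8250 software structures.

```python
# Illustrative model of card- and port-level resource partitioning.
# Controller names and limits are hypothetical, not MGX 8250 values.

class CardPartition:
    def __init__(self, max_connections, controllers):
        # By default, every controller shares the card's full connection space.
        self.limits = {ctrl: max_connections for ctrl in controllers}
        self.used = {ctrl: 0 for ctrl in controllers}

    def repartition(self, ctrl, max_connections):
        # Card-level partitioning: cap one controller's share explicitly.
        self.limits[ctrl] = max_connections

class PortPartition:
    def __init__(self, card):
        # A newly created port inherits its limits from the card partition.
        self.limits = dict(card.limits)
        self.used = {ctrl: 0 for ctrl in card.limits}
        self.card = card

    def add_connection(self, ctrl):
        # Controllers are bound to the advertised port (and card) limits.
        if self.used[ctrl] >= self.limits[ctrl]:
            return False  # controller has exhausted its port partition
        if self.card.used[ctrl] >= self.card.limits[ctrl]:
            return False  # card-level limit reached
        self.used[ctrl] += 1
        self.card.used[ctrl] += 1
        return True

card = CardPartition(max_connections=4, controllers=["PNNI", "MPLS"])
card.repartition("MPLS", 1)        # card resource partitioning first
port = PortPartition(card)         # port inherits the card limits
assert port.add_connection("MPLS") is True
assert port.add_connection("MPLS") is False   # MPLS share exhausted
assert port.add_connection("PNNI") is True
```

The key point the sketch illustrates is the ordering requirement: the port partition copies its limits from the card partition at port-creation time, which is why card partitioning must happen first.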
In a feeder mode, the MGX 8250 interfaces with a BPX ATM backbone network and acts as a shelf of the BPX. In a standalone mode, the MGX 8250 interfaces with a third party ATM network.
To add an end-to-end connection, from local feeder to remote feeder, one would have to add three connections that span three segments. To add an end-to-end connection, from local feeder to a routing node, one would have to add two connections that span two segments. To add an end-to-end connection in a standalone scenario, one would first need to add the local connection from the service module to the outbound user port. From that point, you would need to add a connection in the third-party network to the desired terminating device.
The number of steps required to add an end-to-end connection can be dramatically reduced by using Cisco WAN Manager. CWM allows the user to specify the originating end, the terminating end, and the connection parameters through a GUI; all segment connections are added transparently.
The Connection Manager on CWM can be used to create and maintain end-to-end connections or Permanent Virtual Circuits (PVCs). A connection consists of a source (localEnd), a destination (remoteEnd) and a set of connection parameters required for the routing.
The Connection Template Manager feature is used to define a set of parameters so that they can be reused in Connection Manager to define connections. Templates can be saved to files and then used to create or modify connections.
The Multiple Users Security feature, in which each CWM user has their own access profile, is used to determine whether you have the rights to use each option in the CWM Connection Manager. The security mapping for CWM Connection Manager is:
The Cisco WAN solution is a distributed intelligent system design. All edge concentrators run independently of the Cisco WAN Manager. Should the Cisco WAN Manager become disabled, there is no effect on the network and its operations. The Cisco WAN Manager is essentially a window to the network and not the controlling "mind" of it. You can back up gateway configurations and store them on the Cisco WAN Manager.
The MGX 8250 service transparently supports end-to-end F4 and F5 OAM flows.
OAM cell processing uses a combination of hardware and software functionality within the MGX 8250. As the current standards for OAM cell processing are enhanced, Cisco plans to support new types of OAM cells with an upgrade to the software baseline.
There are several types of OAM cells implemented by the MGX 8250 itself that are used in the detection and restoration process combined with distributed network intelligence.
The MGX 8250 supports F1 flows for ATM over DS3/E3.
Connection-failure handling means supporting AIS (Alarm Indication Signal), RDI (Remote Defect Indication), A-bit Alarm, test delay, and the cc mechanism (continuity checking).
When there is a failure, AIS is generated by the CPE or by the service module (SM). If AIS is generated by the CPE, the SM forwards AIS to the network (to the other end). When the other CPE receives AIS, it sends RDI, which serves as the AIS acknowledgment.
For some SMs, such as the FRSM, the communication between the CPE and the SM is A-bit rather than AIS. In this case, if the SM receives an A-bit alarm from the CPE, it generates AIS toward the network. If there is an interface failure, the SM generates an A-bit alarm and sends it to the CPE.
The cc mechanism is used to indicate to the far end that the connection is still alive. If the SM supports cc, connection failure is detected at the card level rather than at the controller level (for example, the SM can send an OAM cell every second; if it stops receiving OAM cells, it fails the connection and generates AIS or A-bit toward the CPE). The cc OAM cell is not the OAM loopback cell.
OAM loopback is configurable at the card level and is supported on the RPM/B, which polls all connections. The timer and polling frequency are configurable per connection. By default, the RPM/B sends an OAM loopback cell every second and detects a far-end failure within 3 seconds, the same as the cc timer and frequency.
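The timing relationship described above (a cell every second, failure declared after roughly 3 seconds of silence) can be sketched as a small state machine. The 1-second and 3-second values are the defaults mentioned in the text; the class and method names are illustrative, not actual firmware interfaces.

```python
# Sketch of continuity-check timing: the card sends an OAM cc cell every
# second and declares the connection failed if no cell arrives within the
# detection window, at which point the SM would generate AIS (or A-bit)
# toward the CPE. Names and structure are illustrative only.

class ContinuityCheck:
    SEND_INTERVAL = 1.0      # seconds between transmitted cc cells (default)
    DETECT_WINDOW = 3.0      # seconds of silence before declaring failure

    def __init__(self, now=0.0):
        self.last_rx = now
        self.failed = False

    def cell_received(self, now):
        # A cc cell from the far end proves the connection is still alive.
        self.last_rx = now
        self.failed = False

    def tick(self, now):
        # Called periodically; fails the connection after 3 s of silence.
        if now - self.last_rx >= self.DETECT_WINDOW:
            self.failed = True
        return self.failed

cc = ContinuityCheck(now=0.0)
cc.cell_received(1.0)
assert cc.tick(2.0) is False      # cells still arriving
assert cc.tick(4.5) is True       # 3.5 s of silence: connection failed
```

Because detection happens at the card level, failure is declared without any controller involvement, which is the distinction the text draws.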
Test delay is a CLI command that is used to test the continuity of the connection. It uses an OAM loopback cell.
Table 6-1 summarizes cc feature support for each card:
| Card | CC (Continuity Check) Support |
|---|---|
| FRSM | Supports cc through OAM loopback cells |
| AUSM | Does not support cc |
| CESM | N/A (if it does not receive traffic, it declares the failure) |
| VISM | Does not support cc; will support AIS/RDI in Release 1.5 |
Table 6-2 summarizes the connection failure handling for the different service modules:
The tstcon command uses an end-to-end OAM loopback cell, so any ATM switches in between simply pass the cell unchanged to their neighbors. (This OAM cell is turned around at the router if the ATM connection extends to the router.) If the connection is between two VISMs using VoAAL2, the terminating (far-side) VISM loops the OAM loopback cell back.
In short, for a VoIP connection, the router, being the CPE device for that ATM link, terminates the OAM loopback cell at its CPE interface. (The other side is an IP cloud over some physical medium: Ethernet, Token Ring, Frame Relay, or ATM.) This is how the ATM Forum has defined the OAM F5 flow. Operations can use this facility to troubleshoot the ATM connection up to the router.
All alarms and events generated by the applications are converted into traps and funneled through the central SNMP agent interface. The central alarm collector creates the actual SNMP Trap PDU, logs the alarms, and forwards them to all registered SNMP managers. As part of the robust trap mechanism, the alarm distributor assigns a sequence number to each trap and saves the traps in a circular buffer. A manager that receives a trap out of sequence has the option of retrieving the missing traps from the circular buffer using SNMP.
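The sequence-number-plus-circular-buffer scheme can be sketched in a few lines. The buffer size, trap names, and data layout here are hypothetical; the point is the gap-recovery behavior, not the actual PDU format.

```python
# Illustrative model of the robust trap mechanism: each trap gets a
# sequence number and is kept in a circular buffer, so a manager that
# notices a sequence gap can ask for the missing traps. The buffer size
# and alarm names are made up for the example.

from collections import deque

class TrapDistributor:
    def __init__(self, buffer_size=100):
        self.next_seq = 1
        self.buffer = deque(maxlen=buffer_size)  # oldest traps drop off

    def send(self, alarm):
        trap = {"seq": self.next_seq, "alarm": alarm}
        self.next_seq += 1
        self.buffer.append(trap)
        return trap  # would be forwarded to all registered SNMP managers

    def retrieve(self, seq):
        # A manager that noticed a gap polls back for the missing trap.
        for trap in self.buffer:
            if trap["seq"] == seq:
                return trap
        return None  # trap has aged out of the circular buffer

dist = TrapDistributor(buffer_size=2)
dist.send("lineDown")
dist.send("cardFailed")
dist.send("lineUp")               # oldest entry (seq 1) is overwritten
assert dist.retrieve(1) is None   # aged out: recovery no longer possible
assert dist.retrieve(3)["alarm"] == "lineUp"
```

As the example shows, robustness is bounded by the buffer depth: a trap that has been overwritten can no longer be recovered, so managers must poll promptly after detecting a gap.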
Traps are generated by applications on the PXM cards. Traps are also generated by legacy service modules. Traps are recorded on PXM in the Robust Trap Mechanism MIB, which is used to provide a concept of robustness to SNMP Traps. Certain types of traps are also classified as alarm events. An alarm has a trigger trap that leads to the alarm state and a corresponding clear trap that exits the alarm state. Alarms are logged as events and a hierarchy of them is maintained to reflect the current highest alarm at the shelf level and card level.
To support VISM and RPM cards, the proxy task has been modified to process and forward legacy-style traps generated by those service modules.
Fault detection covers hardware and software malfunctions in all the MGX 8250 components, including service modules and common equipment. In addition, the edge concentrator provides a detailed alarm presentation, indicating alarm severity, type, date and time, source, cause, and state.
The edge concentrator records all log activity and maintains a historical log. For user-requested commands, alarm messages and status messages, the disk on the switch will hold 72 hours of information under normal conditions. In addition, the information can be downloaded to the Cisco WAN Manager workstation on an as-needed basis. Performance information is stored in user-definable buckets of 5, 10, 15, 30, or 60 minutes. These performance counters can be aggregated by the Cisco WAN Manager for report generation and analysis.
Alarm data is stored in a circular buffer. If the buffer fills, the oldest entries will be overwritten first. The alarm data file may be manually transferred via TFTP to a designated workstation. This capability could be automated by writing appropriate scripts on a UNIX station.
The MGX 8250 hardware faults are identified by a combination of network element name, shelf number, slot number, port number and front or back cards. This helps the operators or craft personnel to easily locate the hardware unit and to perform diagnostics or hardware replacement actions.
Alarms and problem reports are sent to the following:
The switch fabric provides real-time counters for performance monitoring as well as debugging. The real-time statistics are collected for four object types:
For each Object there can be several sub-objects (types of lines, ports, etc.), and for each sub-object type, there are several statistics.
The following counters are provided for PXM1:
The following counters are provided for SRM-3T3/B:
The following counters are provided for high-speed FRSM cards:
The following counters are provided for FRSM-T1E1 cards:
The following counters are provided for AUSM/B:
The following counters are provided for CESM-T1E1:
The following counters are provided for CESM-T3E3:
The MGX 8250 is capable of transmitting status reports to the Element Management layer on CWM/CiscoView. All the information about the element can be maintained in the status reports including information on switching matrix, modules, interfaces, and utilization of the element.
The MGX 8250 can send a full inventory report to the EM layer concerning the modules that make up its structure, including the hardware revisions and serial numbers, and the associated operating software.
The MGX 8250 supports scheduling of the performance counters in 5-, 10-, 15-, 30-, and 60-minute intervals for data collection. The data-collection intervals (commonly known as polling cycles) can be 15, 30, or 60 minutes long. These performance data counters can be aggregated at the Cisco WAN Manager to generate daily reports.
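The roll-up from fixed-interval counter buckets into coarser reporting periods can be sketched as a simple aggregation. The counter values and interval choices below are hypothetical; only the bucketing arithmetic reflects the behavior described above.

```python
# Sketch of rolling bucketed performance counters into larger reporting
# periods, as CWM does when generating daily reports. Counter values and
# bucket lengths are illustrative.

def aggregate(buckets, bucket_minutes, report_minutes):
    """Sum fixed-interval counter buckets into coarser reporting intervals."""
    per_report = report_minutes // bucket_minutes
    return [sum(buckets[i:i + per_report])
            for i in range(0, len(buckets), per_report)]

# Four 15-minute cell counters rolled up into one 60-minute total:
assert aggregate([100, 250, 175, 75], 15, 60) == [600]
# 5-minute buckets rolled into 15-minute polling cycles:
assert aggregate([10, 20, 30, 40, 50, 60], 5, 15) == [60, 150]
```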
When the user logs into an MGX 8250 node, he or she is required to supply the user ID, the password, and the slot to direct input to. When adding a new user, the operator must specify the user ID and the access level. The choices for the privilege level are GROUP1, GROUP2, GROUP3, GROUP4, GROUP5, or ANYUSER.
Each telnet session may be terminated by the user or by a timer, whose timer value is determined when the session is established. The timer signals the telnet connection to be terminated if the user does not provide any input for a certain period of time.
In contrast to every other service module in the MGX 8250, the RPM is driven by the IOS CLI. The RPM also requires the user to log in again, possibly with a different user ID and password. This IOS-style authentication provides initial entry to the router; further authentication is required if the user needs access to more privileged commands.
On the MGX 8250 platform, the user ID and password are stored in the disk database. Currently, they are not encrypted.
Statistics are collected by the MGX 8250 periodically. Cisco's WAN Manager allows usage data collection from network connections and interfaces for innovative usage-based billing to customers.
The MGX 8250 maintains CDRs for PVCs. The following information is contained in the CDRs:
This section is divided into two subsections. They are:
The Cisco WAN Manager, which integrates with HPOV, provides a complete and robust SNMP network management platform with a graphical user interface (GUI).
The WAN Manager Event Log displays descriptions of network- and operator-generated occurrences. Internally, event descriptions are generated from the trap information exchanged between the network management system and the network agents. Simple Network Management Protocol (SNMP) processes control these traps.
An SNMP agent is software that is capable of answering valid queries from an SNMP station (such as the Cisco WAN Manager workstation), about information defined in the Management Information Base (MIB). A network device that provides information about the MIB to Cisco WAN Manager has an SNMP agent. Cisco WAN Manager and the SNMP agents exchange messages over the network's transport layer protocol.
The MGX 8250 Control Point Software provides a single and integrated point of control for managing the platform. It provides full-shelf and interface management for all hardware modules, service provisioning, and fault finding/diagnostic support for the complete shelf.
The preferred tools for configuring, monitoring, and controlling an MGX 8250 edge concentrator are the CiscoView and Cisco WAN Manager applications for equipment management and connection management, respectively.
The command line interface (CLI) is highly applicable during initial installation, troubleshooting, and any situation where low-level control is useful.
Each command falls into a range of command privilege levels. When a user ID is created, it is assigned a privilege level and can issue commands allowed by that level only.
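The privilege model above can be sketched as a simple level check. The command names, numeric levels, and the assumption that lower numbers are more privileged are all illustrative, not the actual MGX 8250 assignments.

```python
# Minimal sketch of CLI privilege checking: each command carries a
# privilege level, and a user ID created at a given level may only issue
# commands at that level or below. Command names, levels, and the
# "lower number = more privileged" convention are assumptions.

COMMAND_LEVELS = {
    "dspcds":  5,   # display command: least privileged
    "addcon":  3,   # provisioning
    "adduser": 1,   # administration: most privileged
}

def can_issue(user_level, command):
    # A user may run any command whose required level is >= their own.
    return COMMAND_LEVELS[command] >= user_level

assert can_issue(3, "dspcds") is True
assert can_issue(3, "addcon") is True
assert can_issue(3, "adduser") is False
```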
The MGX 8250 provides the following CLI features:
The standard telnet command that is available from both the HPOV's topology map and the CWM topology map supports telnet access to MGX 8250. The telnet session will give the user access to the PXM card. From the PXM card, the user will be able to navigate to the desired service module by issuing the cc command.
This section is divided into the following four subsections.
CiscoView is a GUI-based device management software application that provides dynamic status, real-time counters, and comprehensive configuration information for Cisco Systems' internetworking products (switches, routers, concentrators, and adapters). CiscoView graphically displays a real-time physical view of Cisco devices. Additionally, this SNMP-based network management tool provides monitoring functions and offers basic troubleshooting capabilities.
Using CiscoView, users can more easily understand the tremendous volume of management data available for internetworking devices, because CiscoView organizes it into graphical device representations presented in a clear, consistent format.
CiscoView will be used as the element management tool for the MGX 8250. CiscoView interacts directly with the edge concentrator agent.
CiscoView software can be integrated with several of the leading SNMP-based network management platforms, providing a seamless, powerful network view. It is also included within CW2000. CiscoView software can also be run on UNIX workstations as a fully functional, independent management application.
The key functions are:
Cisco WAN Manager (earlier known as Strataview Plus) is an SNMP-based multiprotocol management software package designed specifically for wide-area multiservice networks. It provides integrated service management and process automation to simplify the management of even the most complex networks. The Cisco WAN Manager allows you to easily monitor usage, provision connections, detect faults, configure devices, and track network statistics.
Cisco WAN Manager is designed to address the significant demands of managing and operating next-generation wide-area multiservice networks. The multiservice environment is more complex, with a greater number of connections and wider variety of services, making the administration of the network a potentially impossible task without the right tools.
Based on a robust, scalable architecture, Cisco WAN Manager not only meets today's business requirements for the control and operation, but also integrates with other Cisco network management products to provide end-to-end service management of wide-area multiservice networks.
The following features are available with Cisco WAN Manager:
Layer 2/Layer 3 Connection Management will be enhanced to support the RPM as one of the ATM endpoints in end-to-end connection setup. CMProxy (Service Agent) will be enhanced to support this functionality. RPM port provisioning will not be supported via PortProxy.
Cisco Info Center is a real-time, high-performance service-level monitoring and diagnostic tool that provides network fault monitoring, trouble isolation, and real-time service-level management for multitechnology, multivendor networks. Cisco Info Center is designed to help operators focus on important network events, offering a combination of filtering, alarm reduction rules, flexible alarm viewing, and partitioning. It enables service levels to be specified and monitored, providing valuable insight into SLA conformance. Customer, VPN, and administrative partitioning and distribution of information are also supported by Cisco Info Center, further enhancing service providers' ability to manage the network and extend SLA monitoring capabilities to their customers. For example, a fault on an ATM trunk or change in an ATM grooming parameter may affect an IP VPN service. Using Cisco Info Center, a network operator is able to quickly focus on service-affecting alarms and understand both the services and customers affected by the fault. Service providers can also use Cisco Info Center's information partitioning capabilities to make this information available to their customers via the Web as an added service dimension.
The key benefits of the Cisco Info Center are:
The Cisco Provisioning Center makes delivering services to subscribers quick and easy with a rapid, error-free means of provisioning the network infrastructure. By integrating with a service order management system, Cisco Provisioning Center dramatically reduces the costs and time-to-market issues associated with service deployment by using flow-through service provisioning. For example, Cisco Provisioning Center provides powerful capabilities to automatically map a multitechnology VPN service to various underlying QoS parameters including Weighted Fair Queuing (WFQ) and Committed Access Rate (CAR) in Layer 3 and Available Bit Rate (ABR) and Constant Bit Rate (CBR) services in Layer 2 ATM.
Another unique feature of Cisco Provisioning Center is a service validation step that is tightly integrated into the multiphase commit process. This automated step ensures that a requested service such as premium Internet access can be provided by the network prior to committing it to deployment. This reduces rollbacks and ensures the operational integrity of the service provisioning process while enabling rapid, error-free service deployment. This automated step is essential for "self-service" provisioning by customers through a Web interface.
Automated, integrated provisioning with CPC offers several key benefits, including:
As an integrated L2/L3 tool, CPC supports not only provisioning of Cisco equipment end to end but also supports third-party blades for Newbridge and Ascend/Lucent. A blade is a generic interface between CPC and element managers. The support and acquisition of other vendor blades are attainable through Cisco's partner Syndesis, Ltd. Flow-through APIs enable integration with existing service OSS and CNM systems for order management, billing, and capacity planning for lower time to market and reduced cost of service. Operators have a choice of defining a service in technology and equipment-neutral terms for transparent deployment across a variety of equipment, or equipment-specific terms for services that take full advantage of specific element features.
CPC offers customer-extensible service. Each service offered by a provider is represented by a unique service object. Service objects allow operators to view the network in terms of end-user or subscriber services, or by a traditional set of nodes, ports, and circuits. Complex configuration changes are grouped into simple units that align with subscriber service orders. This grouping simplifies and accelerates order processing and improves order consistency.
CPC supports the rapid customization of new services, so providers can quickly develop and deploy new kinds of service by defining new classes of service object. Service objects can be added, deleted, and modified in single global operations which CPC breaks down into elementary actions on individual subnets or equipment. Decisions about how a service should be laid in are made by CPC and can be viewed by network operators or OSS applications. CPC ensures that the operation is applied successfully to all elements of the network in a coordinated manner. If any elementary action fails, then the entire operation is automatically rolled back and the original configurations are restored.
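The all-or-nothing deployment behavior described above can be sketched as a transaction with compensating actions. The element names, action functions, and data shapes are hypothetical; the sketch only illustrates the rollback discipline, not the actual CPC engine.

```python
# Sketch of CPC-style change deployment: a global change is broken into
# elementary actions, and if any action fails, the actions already applied
# are rolled back so the original configurations are restored. All names
# and structures here are illustrative.

class ChangeRequest:
    def __init__(self):
        self.applied = []

    def deploy(self, actions):
        """Apply (element, action, undo) triples; roll back on any failure."""
        for element, action, undo in actions:
            try:
                action(element)
                self.applied.append((element, undo))
            except Exception:
                self.rollback()
                return False
        return True

    def rollback(self):
        # Undo in reverse order so earlier actions are restored last.
        while self.applied:
            element, undo = self.applied.pop()
            undo(element)

config = {"nodeA": "old", "nodeB": "old"}

def set_new(node): config[node] = "new"
def restore(node): config[node] = "old"
def fail(node): raise RuntimeError("element unreachable")

cr = ChangeRequest()
ok = cr.deploy([("nodeA", set_new, restore), ("nodeB", fail, restore)])
assert ok is False
assert config == {"nodeA": "old", "nodeB": "old"}   # original state restored
```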
CPC is based on a client/server architecture to support distributed computing through relational database systems. CPC runs under UNIX (Solaris 2.5.1 and above) with Informix Version 7 and above. The distributed architecture allows CPC to address a full range of service provider capacity and throughput requirements.
The CPC database contains both the current state of the network configuration plus pending changes in the process of being deployed. A CPC administrator can view these events and decide when to upload topology information, or automated scripts can automatically upload the information.
CPC has two primary interfaces. The GUI allows operators to directly interact with service objects through a visual interface.
However, automated configuration is available using the flow-through interface, which allows provisioning and order processing applications to make high-level calls for configuration services. CPC can communicate with other applications via the flow-through interface using UNIX shell scripts, Java applets, or CORBA middleware.
The flow-through interface allows CPC to become an integral component of a service provider's total service creation and management system. Orders can flow directly from an existing order processing or customer care system into CPC for immediate service activation. Operators can view services, components of services, network connections, transactions, network elements, change requests, and logs.
CPC is based on advanced change management features that provide unprecedented reliability and control over service activation. All configuration changes associated with the same change to a service are applied in a single, network-wide transaction. Each change begins as a change request (CR) and includes an associated audit log.
The CPC database tracks resource allocation so that other system components always know what is available. When a service is created, service threaders use the resource and topology information to find the optimal end-to-end path through the network that satisfies a specified QoS level. Using this generic functionality, CPC-based systems can support features such as load sharing among Network-to-Network Interface (NNI) links and failure recovery based on the subscribed class of service (CoS).
As an application, CPC sits in the network and service management layers of the TMN model. Element managers are used by CPC through blades, which take advantage of the element manager as a configuration delivery mechanism.
Element managers such as Cisco WAN Manager, Cisco IP Manager, and Cisco Access Manager provide access to a specific type of equipment such as a suite of switching nodes from a particular vendor. Also called blades, element managers encapsulate specific knowledge about the equipment and translate it into an equipment-neutral representation. A blade can support a specific product, a subset of a vendor's entire product set, or the entire product set of a vendor. It can make use of other products such as the equipment manufacturer's own provisioning server to access network elements. CPC third-party blades for Newbridge and Cascade are attainable through Cisco partner Syndesis, Ltd.
To create a complete and working application, blades enable the CPC engine to configure all of the network elements that participate in providing a service. Services that span multiple equipment types require more than one blade.
When blades are installed in a system, subnetwork resources are published to the CPC database so that threaders can construct end-to-end services based on network policies. Threaders choose the best path after considering variables such as QoS requirements, total bandwidth consumption, underutilized internetwork links, and lowest overall cost.
Posted: Mon Oct 2 17:02:53 PDT 2000
Copyright © 1989-2000 Cisco Systems, Inc.