Additional System Information

This appendix presents additional information about the Cisco 6100 system capacities and ViewRunner for HP OpenView hardware recommendations.

A.1 Cisco 6100 System Capacities

The following table describes the Cisco 6100 system capacities in Release 2.3.5.


Table A-1: System Capacities

Capability                                              Capacity
Subscribers per Cisco 6100                              400
VCs per Subscriber                                      4
Subscriber-side VCs per Cisco 6100 (locally attached)   1600
Subtended subscriber VCCs (each subtending port)        4,000
Network-side VCs per Cisco 6100                         12,400
Network-side VPs per Cisco 6100                         28 (1)
ATU-C Modem Ports per Cisco 6100                        64
LIM Ports per Cisco 6100                                400
POTS Ports per Cisco 6100                               400
Physical Pools per Cisco 6100                           2
Logical Pools per Cisco 6100                            6 (3 in each physical pool)

(1) The full ATM UNI 3.1 implementation of 256 VPCs per network interface is supported. However, the current software release restricts the available range of VPs to 0-27 (28 total). The remaining 228 VPs are reserved for subtending.

A.2 ViewRunner Hardware Recommendations

This section details the recommended hardware configuration for ViewRunner for HP OpenView Release 2.3.5. As with all hardware, memory, and storage recommendations, actual customer requirements may vary based on the number of users, other processes running on the workstation(s), network configuration, and so on. These recommendations should be treated as guidelines rather than as hard-and-fast configuration rules.

A.2.1 Third-Party Software Requirements

ViewRunner for HP OpenView 2.3.5 depends on three third-party software packages: HP OpenView Network Node Manager, Oracle7, and the Solaris 2.5.1 operating system.

A.2.2 Hardware Platform Recommendations

ViewRunner for HP OpenView 2.3.5 is a scalable client-server application that can run on a variety of Sun UltraSPARC hardware. Several variables affect sizing, including the number of Cisco 6100s to be managed (number of objects), the number of simultaneous operator logins and their map privileges (read-only and read-write), the trap throughput generated by the network, and the throughput of the network between the server and the clients.


Note A fully configured Cisco 6100 is defined as one that contains 400 subscribers with four PVCs per subscriber. If a network is composed of more lightly configured systems (such as systems with 64 subscribers and one PVC per subscriber), the total number of managed Cisco 6100s would increase.

For small networks of fewer than 100 Cisco 6100s with a small number of operators (fewer than five), a single Sun UltraSPARC 1/2/10/30/60 workstation should be sufficient to manage the entire network. For somewhat larger networks of 200 or 250 nodes with up to 15 operators, a multiprocessor Sun UltraSPARC 2/60/250/450 should be used as the server. Ideally, as the number of operators increases, client workstations should be added to offload computing requirements from the server. Each Sun UltraSPARC 1/2/10/30/60/250 client workstation should be able to support up to five operators. Multiprocessor Sun UltraSPARC 2/60/250/450 client workstations should be able to handle up to 10 operators.
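The server-selection rule of thumb above can be sketched as a simple decision function. This is a guideline only, not a product formula; the thresholds come straight from the text, and the function name is illustrative:

```python
# Rough server-sizing guideline from this section. Assumes fully
# configured Cisco 6100 nodes (400 subscribers, 4 PVCs each).

def recommend_server(nodes: int, operators: int) -> str:
    """Return a rough server-class recommendation (guideline only)."""
    if nodes < 100 and operators < 5:
        # Small network: a single workstation manages everything.
        return "single-CPU Sun UltraSPARC 1/2/10/30/60"
    if nodes <= 250 and operators <= 15:
        # Somewhat larger network: multiprocessor server,
        # with client workstations added as operators increase.
        return "multiprocessor Sun UltraSPARC 2/60/250/450"
    # Larger networks need an Enterprise-class server.
    return "Enterprise-class Sun UltraSPARC (e.g. Ultra Enterprise 450)"

print(recommend_server(80, 3))
```

Real deployments should also weigh trap throughput and network bandwidth between server and clients, which this simple lookup ignores.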

The number of operator logins also affects server sizing because of the additional Oracle connections required. Each operator requires one Oracle connection per client session (that is, one for View Map and one for each running View LoopRunner session). A two-CPU system should be able to handle server-side connections for up to 15 operators, and a four-CPU system should be able to handle up to 30 operators.
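The connection count per operator works out as follows; the two-LoopRunner-session default is the assumption used in the memory discussion later in this appendix, and the function name is illustrative:

```python
def oracle_connections(operators: int, looprunner_per_operator: int = 2) -> int:
    # One Oracle connection for View Map plus one per running
    # View LoopRunner session, per operator.
    return operators * (1 + looprunner_per_operator)

# A two-CPU server is sized for up to 15 operators:
print(oracle_connections(15))  # 45 connections
```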

Larger networks of more than 250 nodes with more operators require an Enterprise-class Sun UltraSPARC server. A Sun Ultra Enterprise 450 with four processors should be capable of managing 300 nodes with up to 30 operator logins, with external client workstations deployed to handle the additional operators.

A.2.3 Memory Requirements

The following table shows the physical memory required to run the server applications.


Table A-2: Physical Memory Requirements for Server Applications

Process                                   Number of Instances     Memory Required
View Process Monitor (vrProcessMon)       1                       10MB
View Alarm Formatter (vrAlarmFormatter)   1                       10MB
View Data Collector (vrDataCollect)       1                       7MB
View Network Monitor (vrNetMon)           1 per 100 Cisco 6100s   32MB
View Alarm Synchronizer (vrAlarmSync)     varies                  5MB
Entry Network Node Manager                1                       32MB
Full Network Node Manager                 1                       64MB

Therefore, the ViewRunner servers will require 64MB to 128MB of RAM for networks up to 300 nodes, plus an additional 32MB to 64MB of RAM for HP OpenView. The Solaris 2.5.1 operating system is assumed to require an additional 32MB.

The Oracle7 system is optimized for 512MB of RAM per CPU but in general requires that the system global area be entirely contained in physical memory for optimum performance. Where possible, the entire database should fit in memory as well, but this may be cost-prohibitive. Therefore, a target of 50% of database size in physical memory is recommended.

Based on the defaults provided in the ViewRunner 2.3.5 server installation package, a small network would need approximately 400MB of disk space for the database, including 32MB of system global area. This translates to a requirement of 32MB to 400MB of RAM for Oracle.

A medium network would require 1.2GB of disk space, including 64MB of system global area. A large network would require 3.8GB of disk space and 128MB of system global area. The target Oracle memory size for the medium network would be at least 640MB. Rounding up and accounting for the optimal 512MB of RAM per CPU, the medium network should have a dual-processor system with 1GB of RAM. The large network should have at least four processors and 2GB of RAM. If more processors are used, 512MB of RAM should be added per CPU.
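The sizing arithmetic above can be sketched as follows. The 50% database target and the 512MB-per-CPU increment come from this section; the function names are illustrative:

```python
import math

def oracle_memory_target_mb(db_size_mb: int, sga_mb: int) -> int:
    # Target: the system global area fully resident, plus
    # roughly 50% of the database held in physical memory.
    return db_size_mb // 2 + sga_mb

def system_ram_mb(target_mb: int, per_cpu_mb: int = 512) -> int:
    # Round up to whole 512MB-per-CPU increments; the increment
    # count also suggests the minimum CPU count.
    cpus = math.ceil(target_mb / per_cpu_mb)
    return cpus * per_cpu_mb

# Medium network: 1.2GB database, 64MB SGA.
medium = oracle_memory_target_mb(1200, 64)
print(system_ram_mb(medium))   # rounds up to 1GB (dual processor)

# Large network: 3.8GB database, 128MB SGA.
large = oracle_memory_target_mb(3800, 128)
print(system_ram_mb(large))    # rounds up to 2GB (four processors)
```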

The following table shows the physical memory required to run the client applications.


Table A-3: Physical Memory Requirements for Client Applications

Process                            Number of Instances   Memory Required
ovw                                1                     10MB
xnmevents                          1                     8MB
ipmap                              1                     7MB
ovwnavigator                       1                     4MB
View Map (viewMap)                 1                     16MB
View LoopRunner (viewLoopRunner)   varies                17MB
View Admin (viewAdmin)             0-1                   10MB

To start HP OpenView with View Map enabled requires approximately 55MB, not including the operating system or windowing system requirements. Each View LoopRunner instance requires an additional 17MB of RAM, and View Admin requires 10MB. Other HP OpenView utilities use additional resources if they are executed. Assuming that each operator has two concurrent View LoopRunner sessions open at all times, each logged-in operator requires approximately 89MB of memory. Five operators on a single-CPU UltraSPARC require 512MB of memory to keep the majority of the process space in RAM.
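The per-operator arithmetic above can be sketched as follows; the figures come from this section and Table A-3, and the names are illustrative:

```python
BASE_OPENVIEW_VIEWMAP_MB = 55   # HP OpenView started with View Map enabled
LOOPRUNNER_MB = 17              # per View LoopRunner instance

def operator_memory_mb(looprunner_sessions: int = 2) -> int:
    # Approximate memory per logged-in operator, excluding the
    # operating system and windowing system.
    return BASE_OPENVIEW_VIEWMAP_MB + looprunner_sessions * LOOPRUNNER_MB

print(operator_memory_mb())        # 89MB per operator
print(5 * operator_memory_mb())    # 445MB for five operators -> 512MB RAM
```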

A.2.4 Disk Space Requirements

Disk space requirements depend on four factors.

The ViewRunner server and client packages require approximately 70MB of disk space each. If both are installed on the same workstation, as in small network configurations, they do not have to be on the same file system. OpenView requires approximately 200MB of disk space, 150MB of which is typically in the /opt file system, and 50MB of which is typically in the /var file system.


Note The installation of the required patches copies approximately 90MB of data to the /system directory. Make sure there is sufficient space in that file system to handle this.

The Oracle7 installation requires 30MB or more of disk space. The small database configuration requires an additional 400MB. A medium configuration requires 1.2GB, and a large configuration requires 3.8GB. For best performance and reliability, the database, indices, redo logs, control files, and so on, should be distributed (and striped) across multiple disk drives. Oracle disk requirements are very complex and are beyond the scope of what Cisco can recommend. An experienced database administrator is the best source for recommendations as to the distribution of the database across available disks.
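Adding up the per-component figures from this section gives a rough baseline for server disk sizing. This is a sketch only; the component constants come from the paragraphs above, and the function name is illustrative:

```python
def server_disk_mb(db_size_mb: int, colocated_client: bool = False) -> int:
    # Rough per-component disk figures from this section (guideline only).
    viewrunner_server = 70
    viewrunner_client = 70 if colocated_client else 0   # same box in small nets
    openview = 200          # ~150MB in /opt, ~50MB in /var
    patches = 90            # required patch data
    oracle_install = 30     # Oracle7 installation, before the database itself
    return (viewrunner_server + viewrunner_client + openview
            + patches + oracle_install + db_size_mb)

# Small network: 400MB database, client on the same workstation.
print(server_disk_mb(400, colocated_client=True))  # 860MB baseline
```

Log files, redo logs, indices, and striping across multiple drives add to this baseline, as the surrounding text notes.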

ViewRunner maintains log files for each server process. Each process has an associated trace level that dictates how much information is written to these logs. At the highest trace level, file sizes may reach hundreds of megabytes within a few minutes. Normally, the servers run at the lowest trace level. Because files growing without bound could cause space problems, the user can specify a threshold size in the vrTrace utility to truncate a log file when it reaches that threshold. Additionally, the root crontab contains a script that, when enabled, automatically purges old log files (those not currently in use) weekly.

Finally, swap space should be configured to three times the physical memory size.

A.2.5 Sizing a System

The following table gives guidelines for selecting a system for your network size and number of operators. However, no two networks perform identically, and users' tolerance for performance degradation is not something Cisco can predict. Therefore, treat these as general guidelines.


Table A-4: Server Platform

Network   Network   Local Client   Total Client   Server               CPUs   RAM                    Disk
Size      Nodes     Sessions       Sessions       Platform
Small     < 100     Up to 5        Up to 10       Ultra 1/10/30        1      256MB-512MB            2.1GB-4.3GB
Medium    < 500     Up to 5        10 to 30       Ultra 2/60/250/450   2-4    512MB-1GB              4.3GB + optional 2.1GB x 6 disk array
Large     < 1000    Up to 5        < 30           Ultra 450/3000/+     < 4    2GB+ (512MB per CPU)   4.3GB + optional 4.3GB x 6 disk array


Note ViewRunner installation scripts allow the selection of small, medium, and large installation configurations. Even though ViewRunner 2.3.5 is limited to supporting 300 nodes, the large configuration can be installed and could result in improved performance.


Table A-5: Client Platform

Local Client Sessions   Client Platform   CPUs   RAM           Disk
Up to 5                 Ultra 1/10/30     1      256MB-512MB   4.3GB
Up to 10                Ultra 2/60/250    2      512MB-1GB     4.3GB

The number of supported nodes will continue to increase. The indicated database sizing allows for growth to two to four times the current number of supported nodes, or more.

A.3 Additional Information

For additional information, there are a number of Oracle performance tuning guides available. Hewlett-Packard also includes a scalability guide in the documentation set for HP OpenView Network Node Manager.


Posted: Fri Oct 8 13:11:32 PDT 1999
Copyright © 1989-1999 Cisco Systems, Inc.