IBM's networking architecture has evolved considerably as computing in general has moved away from domination by centralized solutions toward peer-based computing. Today, IBM Systems Network Architecture (SNA) routing involves two distinct kinds of environments, although a number of key concepts are common to all SNA routing situations. This chapter addresses the functions and services that make both SNA subarea routing and Advanced Peer-to-Peer Networking (APPN) routing possible. Topics covered include session connectors, transmission groups, explicit and virtual routes, and class of service (CoS). Refer to "IBM Systems Network Architecture (SNA) Protocols" for general information about traditional IBM SNA and APPN. Figure 37-1 illustrates the concepts addressed in this chapter in the context of a traditional SNA environment.

IBM SNA session connectors are used to bridge address spaces when sessions traverse multiple address spaces. Three types of session connectors exist: boundary functions, SNA network interconnection (SNI) gateways, and APPN intermediate routing functions. Boundary functions reside in subarea nodes and map between subarea and peripheral address spaces. SNI gateways act as bridges between SNA networks, accepting data from one network and transmitting it to the appropriate destination in another network. SNI gateways are transparent to the endpoint network addressable units (NAUs). APPN intermediate nodes perform intermediate routing within APPN networks. Refer to Figure 37-1 for the relative position of a session connector in a traditional SNA environment.
IBM SNA transmission groups (TGs) are logical connections formed between adjacent IBM SNA nodes and used to carry SNA session traffic. A TG comprises one or more SNA links and their assigned transmission priorities. Multilink TGs, which provide added reliability and bandwidth, bundle multiple physical links into a single logical SNA link; they are supported only between T4 nodes. TG sequence numbers are used to resequence out-of-order messages at each hop. Each transmission group supports four transmission priorities: low, medium, high, and network-service traffic (the highest priority). Refer to Figure 37-1 for an illustration of the relationship of TGs to other common SNA routing components within a subarea routing environment.
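The hop-by-hop resequencing just described can be pictured with a small sketch. The following Python fragment (all class and variable names are purely illustrative, not part of any SNA product) shows how a receiving node might use TG sequence numbers to restore message order across the physical links of a multilink TG.

```python
import heapq

class TGResequencer:
    """Illustrative only: restore message order on a multilink TG."""

    def __init__(self):
        self.next_expected = 0   # next TG sequence number to deliver
        self.pending = []        # out-of-order arrivals, ordered by sequence number

    def receive(self, seq_num, message):
        """Accept a message from any link in the TG; return messages now deliverable in order."""
        heapq.heappush(self.pending, (seq_num, message))
        delivered = []
        while self.pending and self.pending[0][0] == self.next_expected:
            _, msg = heapq.heappop(self.pending)
            delivered.append(msg)
            self.next_expected += 1
        return delivered

tg = TGResequencer()
print(tg.receive(1, "PIU-1"))   # [] -- still waiting for sequence number 0
print(tg.receive(0, "PIU-0"))   # ['PIU-0', 'PIU-1']
```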
Routes between subareas take either an explicit-route or a virtual-route form. An explicit route is the physical connection between two subarea nodes, defined as an ordered sequence of subareas and the transmission groups that connect them. Explicit routes are unidirectional, so two explicit routes are required to create a full-duplex path. Virtual routes are two-way logical connections formed between two subarea nodes. A virtual route flows over an explicit route and a reverse explicit route that follows the same physical path. Virtual routes do not cross network boundaries; instead, an SNA network interconnection session connector bridges two virtual routes. Each virtual route carries values defining its transmission priority and global flow control. Flow control is provided by pacing: a receiver with sufficient buffer space grants pacing windows to the sender, and each pacing window enables the sender to transmit a certain amount of information before it must request the next window. Refer to Figure 37-1 for an illustration of the relationship between explicit routes and virtual routes, and their relative position in an SNA subarea routing environment.
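Pacing is easiest to see as a credit scheme. The sketch below, in Python with hypothetical names, shows a sender that may transmit only while it holds credits from the most recently granted pacing window; real SNA pacing (fixed or adaptive) is considerably richer.

```python
class PacingSender:
    """Illustrative sketch: a sender may transmit one pacing window of messages,
    then must wait until the receiver grants the next window."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.credits = 0                     # messages still allowed in the current window

    def grant_window(self):
        """Called when a pacing response arrives from the receiver."""
        self.credits += self.window_size

    def send(self, message, link):
        if self.credits == 0:
            raise RuntimeError("pacing window exhausted; wait for the next pacing response")
        self.credits -= 1
        link.append(message)

link = []
sender = PacingSender(window_size=2)
sender.grant_window()
sender.send("RU-1", link)
sender.send("RU-2", link)
# A third send would raise until grant_window() is called again.
```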
The IBM SNA class-of-service function designates the transport-network characteristics of a given session. Depending on user requirements, different CoSs can be specified in an SNA network. CoS provides the mechanism used to determine SNA routes and describes the acceptable service levels for a session, including such characteristics as response time, security, and availability. A CoS can be established automatically at logon or manually by the user when the session is initiated. Each CoS name is associated with a list of virtual routes that meet the desired service-level requirements. Relevant information for a given session is captured in subarea and APPN CoS tables. The differences between CoS implementation in subarea and APPN routing are summarized in the following sections.
In subarea routing, the user defines CoS support required for a particular session. Specific virtual routes are mapped to identified services, while CoS characteristics are associated with the underlying explicit routes. The System Services Control Point (SSCP) uses the CoS table to provide information on virtual routes and transmission priority to the path-control function. Path control in turn selects a virtual route and transmission priority for use in a session. Figure 37-2 illustrates the subarea routing CoS table-entry format.

Subarea routing CoS table entries include the CoS name, virtual-route number (VRN), and subarea transmission priority (TPRI).
The CoS name is a standard name, such as SEC3, that is agreed upon by convention.
The VRN identifies a specific route between subareas. Up to eight virtual-route numbers can be assigned between two subarea nodes. Each virtual route can be assigned up to three different transmission priorities, so up to 24 virtual routes (8 route numbers × 3 priorities) are possible between two subareas.
TPRI identifies the priority of logical unit-to-logical unit (LU-to-LU) session data flowing over an explicit route. Users can select one of three priorities for each virtual route: 0 (lowest), 1, or 2 (highest).
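Assuming the table-entry format just described, one plausible way to represent a subarea CoS table row is sketched below in Python; the names, table contents, and lookup helper are illustrative only, not an actual SSCP data structure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SubareaCosEntry:
    """One hypothetical row of a subarea CoS table."""
    cos_name: str                   # e.g. "SEC3"
    routes: List[Tuple[int, int]]   # (virtual-route number 0-7, transmission priority 0-2)

# Hypothetical table the SSCP might consult for a session's CoS name.
cos_table = {
    "SEC3":  SubareaCosEntry("SEC3",  routes=[(0, 2), (3, 1)]),
    "BATCH": SubareaCosEntry("BATCH", routes=[(5, 0)]),
}

def candidate_routes(cos_name):
    """Return the ordered (VRN, TPRI) pairs that path control may try for this CoS."""
    return cos_table[cos_name].routes

print(candidate_routes("SEC3"))   # [(0, 2), (3, 1)]
```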
CoS in APPN is defined explicitly with CoS table parameters and is more granular than in subarea SNA. In particular, APPN CoS allows a route to be defined based on capacity, cost, security, propagation delay, and user-defined characteristics. It extends service to end nodes (ENs) and is not limited to communications controllers, as in subarea SNA. APPN CoS permits the topology database to maintain a routing tree for every CoS that tracks all routes and their costs, and it provides a configuration option to control the memory dedicated to CoS trees. Figure 37-3 illustrates the APPN routing CoS table-entry format.

APPN routing CoS table entries include the CoS name, index, APPN transmission priority (TPRI), node and transmission group characteristics, and the APPN CoS weight field (WF).
The CoS name is a standard name, such as SEC3, that is agreed upon by convention.
The index field entry enables computed weight values for route components to be stored and retrieved. This entry points to the entry in the CoS weight array where the weights for the CoS are stored.
Node and transmission group (TG) characteristics consist of a user-specified list of characteristics acceptable for the identified CoS. Each row defines either a set of node characteristics or a set of TG characteristics. Entries can include security, cost per connect time, and available capacity. The field representing a characteristic contains a range of acceptable values.
The APPN CoS WF enables route-selection services (RSS) to assign a weight to a given possible route component (node or TG). The WF is used by RSS to determine the relative desirability of a particular route component; it can contain a constant or the name of a function that RSS uses in the weight calculation.
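As a rough illustration of how RSS might apply the characteristic ranges and the WF, the following Python sketch rejects a TG whose characteristics fall outside the acceptable ranges for a CoS row and otherwise returns the row's weight. The field names, ranges, and constant weight are hypothetical, and a real WF may instead name a weight-calculation function.

```python
def tg_weight(tg, cos_row):
    """Return the weight of a candidate TG for one CoS row, or 'infinity'
    if any characteristic falls outside the row's acceptable range."""
    for name, (low, high) in cos_row["ranges"].items():
        if not low <= tg[name] <= high:
            return float("inf")    # unacceptable for this CoS
    return cos_row["weight"]       # a real WF may instead be a function name

# Hypothetical CoS row favouring high-capacity, low-delay, reasonably secure links.
interactive_row = {
    "ranges": {"capacity_kbps": (56, 10_000), "delay_ms": (0, 50), "security": (3, 7)},
    "weight": 30,
}

tg_a = {"capacity_kbps": 1544, "delay_ms": 10,  "security": 5}
tg_b = {"capacity_kbps": 9.6,  "delay_ms": 200, "security": 1}
print(tg_weight(tg_a, interactive_row))   # 30
print(tg_weight(tg_b, interactive_row))   # inf
```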
SNA networks are divided into logical areas: subareas and domains. Subareas consist of a subarea node and its attached peripherals. Domains consist of a system services control point (SSCP) and the network resources that it can control. SSCPs in different domains can cooperate with one another to compensate for host processor failures. Figure 37-4 illustrates the relationship between subareas and domains in the context of SNA subarea routing.
Node addresses are categorized as subarea- and peripheral-node addresses. Subarea-node addresses are global and must be unique within the entire network. These addresses are assigned to NAUs when activated. Subarea-node addresses generally consist of a subarea portion and an element portion. All NAUs within a given subarea share the same subarea address but have different element addresses.
Peripheral-node addresses, which are considered local addresses, differ depending on whether the node is T2 or T2.1. T2 addresses refer to NAUs and are statically assigned, while T2.1 addresses are dynamically assigned for the duration of a session and identify the session rather than the NAU. Peripheral-node addresses are also referred to as local-form session identifiers (LFSIDs).
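The subarea/element split of a subarea-node address can be pictured as simple bit packing. The Python sketch below assumes an 8-bit subarea portion and a 15-bit element portion purely for illustration; actual SNA releases allocated these bits differently.

```python
SUBAREA_BITS = 8     # assumed width of the subarea portion (illustration only)
ELEMENT_BITS = 15    # assumed width of the element portion (illustration only)

def make_network_address(subarea, element):
    """Pack a subarea number and an element number into one network address."""
    assert 0 <= subarea < (1 << SUBAREA_BITS) and 0 <= element < (1 << ELEMENT_BITS)
    return (subarea << ELEMENT_BITS) | element

def split_network_address(address):
    """Recover the (subarea, element) pair from a packed network address."""
    return address >> ELEMENT_BITS, address & ((1 << ELEMENT_BITS) - 1)

# Two NAUs in subarea 5 share the subarea portion but have distinct element addresses.
print(split_network_address(make_network_address(5, 1)))    # (5, 1)
print(split_network_address(make_network_address(5, 42)))   # (5, 42)
```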

APPN routing is dynamic and is based on a least-weight path calculated from input received from all APPN network nodes. Each APPN network node is responsible for reporting changes in its local topology (that is, the node itself and the attached links). Topology information is passed until all APPN nodes receive it. When a node receives data it already has, it stops forwarding the data to other nodes. Duplicate information is recognized via a check of update sequence numbers. Figure 37-5 illustrates where APPN network nodes fit into the general scheme of an APPN environment with ENs and low-entry network (LEN) nodes.
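The duplicate check on update sequence numbers is what stops the topology broadcast from circulating forever. A minimal Python sketch of this behavior follows; the class, resource naming, and two-node topology are illustrative, not an actual APPN topology database.

```python
class NetworkNode:
    """Illustrative sketch of APPN topology broadcast with duplicate suppression."""

    def __init__(self, name):
        self.name = name
        self.neighbours = []     # other NetworkNode objects
        self.known = {}          # resource name -> latest update sequence number seen

    def receive_update(self, resource, seq_num):
        if self.known.get(resource, -1) >= seq_num:
            return                                   # duplicate or stale: stop forwarding
        self.known[resource] = seq_num               # record the newer information
        for node in self.neighbours:
            node.receive_update(resource, seq_num)   # flood to adjacent network nodes

nn_a, nn_b = NetworkNode("NNA"), NetworkNode("NNB")
nn_a.neighbours.append(nn_b)
nn_b.neighbours.append(nn_a)
nn_a.receive_update("TG(NNA-NNB)", seq_num=7)   # NNB forwards back; NNA drops the duplicate
print(nn_b.known)                               # {'TG(NNA-NNB)': 7}
```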

Several underlying functions and capabilities enable APPN routing. These include Node Type 2.1 routing, Intermediate Session Routing (ISR), High-Performance Routing (HPR), Dependent Logical-Unit Requester/Server (DLUR/S) routing, connection networks, and border nodes.
Intermediate Session Routing (ISR) retains subarea SNA features, including node-to-node error and flow-control processing, as well as session switching around network failures. In APPN, node-to-node error and flow-control processing are considered redundant and unnecessary because these processes reduce end-to-end throughput.
The High-Performance Routing (HPR) protocol, an alternative to ISR, is based on two key components: Rapid-Transport Protocol (RTP) and Automatic Network Routing (ANR). RTP is a reliable, connection-oriented protocol that ensures delivery and manages end-to-end network error and flow control. RTP creates new routes following a network failure. ANR is a connectionless service that is responsible for node-to-node source-routed service.
The RTP layer is invoked only at the edges of an APPN network. In intermediate nodes, only the ANR layer is invoked. RTP nodes establish RTP connections to carry session data. All traffic for a single session flows over the same RTP-to-RTP connection and is multiplexed with traffic from other sessions using the same connection. Figure 37-6 illustrates the overall architecture of an HPR-based routing environment.

A typical HPR routing process involves several stages. First, a route is selected, using the same route-selection process as ISR. To establish a connection between the edge RTP nodes, either an existing RTP-to-RTP connection is used or a Route-Services Request (RSR) is sent. The returned Route-Services Reply (RSP) carries information describing the forward and reverse paths through the network.
The paths are represented as forward and reverse port lists that include the port identifier used at each ANR node. These lists are carried in every message, eliminating the need for routing tables or session connectors in the ANR nodes.
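Because the remaining port list travels in the packet itself, an intermediate ANR node needs only to consume the first label and forward the packet out of the corresponding port. The Python sketch below illustrates this idea with hypothetical label and link names.

```python
def anr_forward(packet, local_ports):
    """Forward a packet at an intermediate ANR node: consume the first label in
    the packet's port list and hand the packet to the corresponding link.
    No routing table or per-session state is consulted."""
    next_label, *remaining = packet["anr_labels"]
    packet["anr_labels"] = remaining
    return local_ports[next_label], packet          # (outgoing link, updated packet)

# Hypothetical node with two attached links, keyed by locally assigned ANR labels.
node_ports = {"80": "link-to-NN2", "81": "link-to-EN5"}
packet = {"anr_labels": ["81", "C5"], "payload": "session data"}
link, packet = anr_forward(packet, node_ports)
print(link, packet["anr_labels"])                   # link-to-EN5 ['C5']
```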
HPR provides for link-failure recovery. If a link fails and an alternate path exists between the RTP endpoints for a particular CoS, a new RTP-to-RTP connection can be selected and the session can be moved without disruption. If a connection does not already exist along the new path, RSR and RSP messages are sent to obtain the new port lists. Sending a new BIND is not required because the session is not disrupted.
Flow control in an HPR environment uses a technique called adaptive rate-based (ARB) flow control. ARB flow control monitors and controls the amount of traffic introduced into the network. Under ARB flow control, the sending and receiving RTP nodes exchange messages at regular intervals, and the traffic introduced into the network is adjusted to adapt to network conditions.
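A highly simplified picture of this adaptation is sketched below in Python: the sending endpoint nudges its rate upward while periodic feedback is clean and backs off when congestion is reported. The rates, increment, and back-off factor are invented for illustration and do not reflect the actual ARB algorithm.

```python
class ArbSender:
    """Illustrative sketch: the sending RTP endpoint raises its send rate while
    periodic feedback is clean and backs off when congestion is reported."""

    def __init__(self, rate_kbps=64, floor_kbps=8, ceiling_kbps=1544):
        self.rate_kbps = rate_kbps
        self.floor_kbps, self.ceiling_kbps = floor_kbps, ceiling_kbps

    def on_rate_reply(self, congested):
        """Called at each regular ARB message exchange with the receiving endpoint."""
        if congested:
            self.rate_kbps = max(self.floor_kbps, self.rate_kbps * 0.75)   # back off
        else:
            self.rate_kbps = min(self.ceiling_kbps, self.rate_kbps + 8)    # probe upward

sender = ArbSender()
for congested in (False, False, True, False):
    sender.on_rate_reply(congested)
print(sender.rate_kbps)   # 68.0 with these invented parameters
```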
The Dependent Logical-Unit Requester/Server (DLUR/S) is an APPN feature that allows legacy SNA traffic to flow on an APPN network.
Under DLUR/S, a client/server relationship is established between a Dependent Logical-Unit Server (DLUS) and a Dependent Logical-Unit Requester (DLUR). A DLUS is typically an ACF/VTAM 4.2 entity, while a DLUR is typically a router. A pair of LU 6.2 sessions is established between the DLUR and the DLUS, and these sessions transport legacy SNA control messages. Messages not recognized in an APPN environment are encapsulated on the LU 6.2 sessions, de-encapsulated by the DLUR, and passed to the legacy SNA LU. The dependent LU's session-initiation request is then passed to the DLUS and processed by the DLUS as legacy traffic. The DLUS sends a message to the application host, and the application host sends the BIND. Finally, legacy SNA data flows natively with APPN traffic.
An IBM APPN connection network is a logical construct used to provide direct connectivity between APPN ENs without the configuration overhead of defining direct connections between every pair of ENs. In general, the process of creating a connection network starts when a LOCATE request is received from an EN.
A network node (NN) then is used to locate the destination specified in the LOCATE request. If the NN sees that the two ENs (source and destination) are attached to the same transport medium (such as Token Ring), a virtual node (VN) is used to connect the two end points and form a connection network. The NN defines the session path as a direct connection from EN1 to VN to EN2, and then traffic is permitted to flow.
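One way to picture the NN's decision is sketched below in Python: if both end nodes report an attachment to the same virtual node, the session path becomes EN1 -> VN -> EN2; otherwise a normal network-node route would be computed. All names are illustrative.

```python
def select_session_path(en1, en2, attachments):
    """If both end nodes attach to the same virtual node (shared medium), route
    the session directly EN1 -> VN -> EN2; otherwise fall back to a route
    computed through network nodes (represented here by a placeholder)."""
    shared = attachments[en1] & attachments[en2]
    if shared:
        return [en1, sorted(shared)[0], en2]
    return [en1, "NN-computed route", en2]

attachments = {"EN1": {"VN.TOKENRING"}, "EN2": {"VN.TOKENRING"}, "EN3": {"VN.OTHER"}}
print(select_session_path("EN1", "EN2", attachments))   # ['EN1', 'VN.TOKENRING', 'EN2']
print(select_session_path("EN1", "EN3", attachments))   # ['EN1', 'NN-computed route', 'EN3']
```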
A border node is an APPN entity that enables multiple APPN networks to interconnect. Presently, border nodes are implemented only in ACF/VTAM and OS/400. Border nodes are responsible for tying directories and topology databases together for connected networks and rebuilding BIND requests to show separate routes in each network.
With border nodes, the topology and directory databases in NNs are reduced to the sizes required for the individual subnetworks rather than the composite network. In addition, cross-network sessions are routed through the border node. Figure 37-7 illustrates the position of border nodes (ACF/VTAM and OS/400 devices) in a multinetwork APPN environment.

Posted: Thu Jun 17 16:20:28 PDT 1999
Copyright © 1989-1999 Cisco Systems, Inc.