Cisco Switched Internetworks

Chris Lewis


Chapter 3

ATM Operation and WAN Switching

Objectives

Chapter 1 gave an overview of ATM, introducing the fixed length cell as the foundation of ATM networking. ATM was defined as a connection oriented protocol that asynchronously multiplexes multiple traffic streams onto one physical cable, equally accommodating traffic with variable bandwidth requirements and traffic with fixed latency requirements. The basics of the User-Network Interface and the Network-Network Interface were discussed, and the operation of the Integrated Local Management Interface was outlined.

This chapter covers the operation of ATM in a number of environments, including Classic IP Over ATM (CIOA), the Next Hop Resolution Protocol (NHRP), ATM and frame relay interworking, LAN Emulation (LANE), Multi Protocol Over ATM (MPOA), and call setup procedures. We will also cover the concepts of Tag Switching as it relates to switching performed in the WAN.

Chapter 6 covers the device setups needed to implement all these technologies in real world networks.

Introduction to UNI Signaling

User-Network Interface (UNI) signaling has simple goals, but the issues can get complicated in implementation. ATM UNI signaling procedures exist to support the dynamic provision of ATM connections within a single network. The important word here is dynamic. In technologies like frame relay, the signaling is simpler because the network is designed to provide static connections at the user interface, represented by DLCI (Data Link Connection Identifier) numbers. Although traffic within the frame relay network may be dynamically re-routed, at the user interface each destination is presented with the same permanent identifier. At each access point to a frame relay cloud, each DLCI (which has local significance only) always represents the same specific destination, as represented in figure 3-1.

In ATM networking, the UNI is there to dynamically allocate connection IDs to connections that are continually being established and torn down. In fact, UNI IDs are re-used after a connection is terminated: an ID used for one connection at the UNI can lead to a totally different location when used for a different connection.

ATM does support Permanent Virtual Circuits (PVCs) as well as the Switched Virtual Circuits (SVCs) that require a UNI ID to be allocated each time a connection is established. PVCs, however, are usually set up via manual commands outside of the UNI’s automated processes.

UNI Signaling Procedures

Within the specifications, there are ATM endpoints (in our scenario that is typically a Catalyst) and both private and public ATM networks (an ATM network requires one or more switches, which equates to one or more Lightstream 1010 switches). With these entities, there are a number of possible interconnections via ATM links, which are listed below and shown graphically in figure 3-2.

End point to End point (Catalyst to Catalyst) uses UNI signaling

End point to either a public or private ATM network uses UNI signaling

Private ATM network to public ATM network uses UNI signaling

Private ATM network to private ATM network uses PNNI

Public ATM network to public ATM network uses B-ICI (Broadband Inter Carrier Interface), which is outside the scope of this book.

The UNI signaling that we will be discussing is based on the current revision, the ATM Forum’s UNI 4.0, which is closely aligned with the ITU Q.2931 standard. These UNI specifications bear an uncanny resemblance to the Q.931 signaling of ISDN. This is a positive, as the designers of ATM excel at making sensible use of existing, proven technology.

Specifically, these signaling processes are based on connection requests and responses. Common functions that ATM signaling deals with are the identification of call initiated, call proceeding and call released states. The whole signaling process exists to generate a connection and its identifier, thus bypassing the need to carry ATM addresses in the cells used to transmit end user data. This is a good thing when you look at the size of ATM addresses, as illustrated in figure 3-3.

Because ATM is a connection oriented protocol, connection request messages must be answered with an acknowledgement from the destination before a connection can commence. The following two sections look at how point to point connections and point to multipoint connections are established.

Point to Point ATM Connections

The process described below is illustrated in figure 3-4. In this explanation of the call setup procedure, the calling and called users equate to Catalysts on the ends of an ATM link, and the network refers to one or more ATM switches, such as the Lightstream 1010.

· The calling user sends a call setup message to the network, which passes this request on to the called user.

· A call proceeding message is sent from the called user to the network, indicating that until the present call setup request is serviced, it will accept no more call requests.

· Assuming all is well, the call proceeding message is rapidly followed by a call connect message, sent from the called user to the network, which is passed back to the calling user. This indicates acceptance of the call by the called user.

· Finally, a connect acknowledge notifies the network and the called user that the setup has been completed and the transfer can begin.
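The four-step exchange above can be modeled as a simple message trace. This is a sketch for illustration only, not Cisco software or actual Q.2931 message encoding; the party and message names simply follow the description above.

```python
# A minimal sketch of the UNI point to point call setup exchange
# described above. Message names follow the text, not the wire format.

def uni_call_setup(called_user_accepts=True):
    """Return the ordered message trace for one UNI call setup attempt."""
    trace = [("calling user", "network", "SETUP"),
             ("network", "called user", "SETUP"),
             # The called user signals that the request is being serviced.
             ("called user", "network", "CALL PROCEEDING")]
    if called_user_accepts:
        # Acceptance of the call flows back to the calling user...
        trace.append(("called user", "network", "CONNECT"))
        trace.append(("network", "calling user", "CONNECT"))
        # ...and the acknowledgment completes the setup on both sides.
        trace.append(("calling user", "network", "CONNECT ACKNOWLEDGE"))
        trace.append(("network", "called user", "CONNECT ACKNOWLEDGE"))
    return trace

for src, dst, msg in uni_call_setup():
    print(f"{src:>12} -> {dst:<12} {msg}")
```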

Knowledge of this procedure is not essential to set up a simple ATM connection, but just as knowing about the three way handshake within TCP is beneficial when designing and troubleshooting TCP/IP networks, so is knowledge of the ATM call setup process. Connection oriented protocols like TCP and ATM allow firewalls and other security procedures to be put in place that allow connections to be originated from within the internal network, but not from an external network.

Point to Multipoint Connections

At first glance, point to multipoint communication for a connection oriented protocol is difficult to perceive. The whole idea behind connection oriented protocols is that there is a call setup sequence that checks to see if the recipient is ready, willing and able to receive. What happens in a point to multipoint communication if only one of the intended recipients is unable to respond? Should no packets be sent? Or should only packets to those that are able to receive be transmitted? If so, given that the sender is sending to one multicast address, how will it know which of the potential recipients is unable to receive?

It was these sorts of issues that made TCP point to multipoint communications unworkable. Fortunately, the ATM design bypasses these problems and presents a framework that does allow point to multipoint communication with a connection oriented protocol. Point to multipoint communication in an ATM network is illustrated in figure 3-5.

As can be seen, one stream of traffic is sent from the sender across the ATM network. Once the stream reaches the location where multiple receivers of that stream are resident, multiple point to point connections are established to deliver the multipoint data. At first this may seem to generate much more traffic than current multicast or broadcast systems, but in fact that is not true. With current multicast systems, a packet is sent to a specific multicast address and all end stations registered to receive that multicast get it. In the days of 10Base2 LANs, it was true that a multicast or broadcast packet would travel only once on the network cable for multiple end stations to receive it. However, as soon as we moved to 10BaseT and each end station had its own cable back to a hub, multicast or broadcast packets were generated for each end station receiving them. This is just as true with switches: one multicast or broadcast packet comes in to the switch, and an identical packet is generated out of each interface that has an end station requiring that data. Viewed in that light, the ATM method of dealing with multicast and broadcast packets is no less efficient than the methods we employ today.

As you can imagine, the call setup procedure is a little more complicated in the point to multipoint environment and there are some special names associated with the various parties involved in an ATM point to multipoint connection. Each of these terms is identified in figure 3-6.

These elements are defined below.

· Root is the sender.

· PMP Node is a switch in the path from sender to receiver that has just one incoming stream and one outgoing stream for the multipoint connection.

· Last Common Node (LCN) is the first location in the ATM network where one incoming stream leads to more than one outgoing stream for the multipoint connection.

· Single Party Node (SPN) is the node that connects the LCN to leaf nodes.

The LCN and the SPN may be the same device, which is the case in figure 3-5. Figure 3-6 shows a situation when the LCN and SPN are different.

To support point to multipoint, all leaf end stations need to be added to the connection, which can either be initiated by the root or the leaf stations themselves. One potential call setup sequence for a point to multipoint connection is illustrated in figure 3-7 and described below.

Let’s say the root is aware it needs to add an end station to receive the multicast (this awareness will have come from a leaf end station, possibly as the result of a leaf setup request message). The first thing that happens is that an add party message is sent out by the root and forwarded by any PMP nodes in the path to the leaf nodes. Once the LCN for that multicast stream is reached, the add party call is changed to a regular point to point setup call, which is forwarded to the SPN (if any) and on to the leaf nodes. Both the SPN and the leaf node respond with call proceeding messages followed by connect messages. The LCN passes the connect message back via any PMP nodes in the connection to the root in the form of an add party acknowledgment. The LCN relays to the SPN, and the SPN replies to the leaf node with a connection acknowledgment.

This rather complex process is performed only at the initiation (or cancellation) of a leaf node joining a multipoint connection. After the connection has been made, a multicast packet is sent once by the root, replicated only when it reaches the LCN, and potentially replicated further by any SPNs in the delivery path to the leaf nodes.

ATM Addresses

ATM addresses are what the network uses to identify devices and end stations within the network, similar to IP addresses within an IP network. As we showed in chapter 1, the cell used in ATM communications does not carry any ATM addresses. The only time ATM addresses are used on the network is during call setup. Once a call is set up, all transfers take place via the VPI/VCI number assigned for the duration of that connection.

For private ATM networks, ATM addresses use the 20 octet OSI format. When we come to configure an ATM network in chapter 6, we will be illustrating this 20 octet format for private networks. This private network addressing scheme splits the address in two, the initial domain part (IDP) and the domain specific part (DSP). The structure of ATM private network addresses is shown in figure 3-3.

The IDP contains the Authority and Format Identifier and the Initial Domain Identifier, which can be an International Code Designator (assigned by the British Standards Institute), a Data Country Code (assigned by the ISO) or an E.164 address (assigned by the ITU).

The DSP contains address information that is supplied by the domain administrator, plus an End System Identifier (commonly a 6 byte MAC address) and a one byte Selector field. For a private network, the domain administrator will be the network administrator.

These addresses are supplied to ATM end stations via ILMI from an attached switch that is the first hop in the ATM network. As previously discussed, the cell has no address information and carries data between source and destination purely by the VPI/VCI number assigned for the duration of the established connection.
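The 20 octet private address layout described above (13 byte network prefix, 6 byte ESI, 1 byte Selector) can be sketched with a small parsing function. The sample address below is invented purely for illustration, not taken from the book's figures.

```python
# A sketch of how the 20 octet private ATM address breaks down into its
# three fields. The sample address is invented for illustration only.

def split_atm_address(hex40: str) -> dict:
    """Split a 40 hex digit ATM address into prefix, ESI and Selector."""
    if len(hex40) != 40:
        raise ValueError("private ATM addresses are 20 octets (40 hex digits)")
    return {
        "network_prefix": hex40[:26],  # 13 octets, supplied via ILMI by the switch
        "esi": hex40[26:38],           # 6 octets, commonly a MAC address
        "selector": hex40[38:],        # 1 octet, often denotes the sub-interface
    }

sample = "47009181000000000060701999" + "00603E5ABC01" + "00"
fields = split_atm_address(sample)
for name, value in fields.items():
    print(f"{name:>14}: {value}")
```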

NNI Signaling

NNI signaling covers two main areas: first, ATM routing, which determines how ATM networks choose the route that traffic will take through an ATM network; second, the signaling used between networks when establishing connections.

ATM routing has to provide the same functions found in IP routing: there has to be some database of address locations, there has to be a way to select the best route for a connection, and the network must react to topology changes such as links going down.

The ATM Forum PNNI protocol is an open protocol that will be implemented by all the major ATM switch manufacturers. The lure of PNNI is that it will allow switches from different vendors to inter-operate, much as using OSPF allows Cisco and non-Cisco routers to share topology and routing information, whereas using IGRP restricts you to Cisco equipment only. PNNI is in fact based on OSPF link state routing principles.

The basis of PNNI deployment, and the key to its ability to support very large ATM networks, is its ability to logically divide the network into interconnected multilevel peer groups, as illustrated in figure 3-8.

The scalability comes from route aggregation: some form of common addressing is assigned to each group, and the uniqueness of addresses is contained within the group. This is the same concept as CIDR (Classless Inter Domain Routing) or the route aggregation property of OSPF.

IP route aggregation reduces the number of entries a router needs to have in its routing table in a subnetted network. Suppose a router receives advertisements for 10 consecutive remote subnets that are all reachable via the same next hop router. Instead of placing 10 entries in the routing table, route aggregation allows the router to place just one entry that points to all ten subnets. CIDR takes this concept a step further by allowing a router, for example, to place one entry in its routing table for all class C networks that have 194 as the first octet. This of course only works when all the 194.x.x.x class C networks are reachable via the same next hop router, and hence requires all the 194 networks to be located in the same area of the network.
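The aggregation arithmetic can be sketched with Python's standard ipaddress module (the addresses are invented for illustration). One nuance: ten consecutive /24s do not align to a single power-of-two block, so they collapse to two summary routes rather than one.

```python
# Collapse ten consecutive /24 subnets, all reachable via the same next
# hop, into the fewest covering routes. Addresses are illustrative only.
import ipaddress

subnets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(10)]
aggregates = list(ipaddress.collapse_addresses(subnets))

for net in aggregates:
    print(net)
# Ten table entries become two: 10.1.0.0/21 (covering .0 through .7)
# and 10.1.8.0/23 (covering .8 and .9).
```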

Referring to figure 3-8, each of the address areas A through G will have its own common prefix within the ATM address scheme. This reduces the number of entries each ATM device needs to keep in its routing table to be able to communicate with all other devices.

Essentially the routing side of PNNI makes use of link state routing, based on a hierarchical implementation and source routing to select specific routes for connections. Link state routing was preferred to distance vector routing due to its better scalability. Source routing enables a switch to define the intermediate nodes to pass through so that there is no possibility of a connection request getting caught in a loop. Additionally, source routing helps with ensuring connections travel through the nodes that have committed to provide them with the requested quality of service.

Constructing ATM Address Hierarchy

The key to understanding ATM routing tables (referred to as a topology database) and address hierarchy is that the ATM address structure, as outlined in figure 3-3, consists of a 13 byte network prefix and a 7 byte end station specific ID (the 6 byte ESI and one byte Selector). An end station constructs its address from the ESI and SEL (which it knows) and the network prefix that it gets from a switch. The ESI is generally a MAC address for LAN attached end stations, and the Selector byte generally relates to the sub-interface in use on the end station.

Just as IP networking has a netmask that defines hierarchy within a given network address, ATM addresses use a level indicator, which can range in length from 0 to 104 bits (0 to 13 octets). Each bit represents a level, so there are 105 possible levels within PNNI.

To discuss this area fully and to illustrate the concepts with a few examples, we need to define a few terms. The first term is a peer group, which is simply a group of nodes at the same level in the hierarchy, such as nodes 1 and 2 in area A in figure 3-8. This group is identified by a combination of the prefix (the first 13 bytes of the address) and the level, which defines how much of the prefix is reserved for use by this group. The level indicator is one byte in length, ranging from 0 to 104 in value, and is manually configured on the switch. All members of the peer group share the same topology database and hence have the same view of the network.

Once a peer group is established, a peer group leader is elected, somewhat like a Designated Router in an OSPF area (OSPF is discussed more fully in the Cisco TCP/IP Routing Professional Reference). The peer group leader is elected based on priority, with the ATM address acting as a tie-break, and is responsible for communicating route information to the rest of the ATM network. Typically the peer group leader will summarize topology information from groups lower in the hierarchy and pass that information up, and pass information from higher in the hierarchy down. Peer group leaders are also known as logical group nodes (LGNs).

Note that there is a Routing Control Channel on VPI=0, VCI=18, on which nodes exchange routing information.

In chapter 6 we will go through assigning ATM addresses and prefix level identifiers, and looking at topology databases on Lightstream equipment. For now, though, we need to be sure we understand how this works on theoretical ATM addresses.

Take the ATM address 11223344556677889900, which has the expected 20 bytes (in this shorthand notation, each character represents 8 bits of data, or one byte). Let’s say that for our network we want to define the first 10 bytes as the network prefix. To achieve this, we have to give the level indicator a value of 80. This is a similar concept to setting a netmask of 255.255.255.0 on a class B network address to split the available address space between subnet and host portions. In the ATM world, we are assigning the first 10 bytes to identify this peer group, with the remaining 3 bytes of the network prefix left for groups lower down in the hierarchy.

So the following illustrates the effects we have been discussing:

ATM address 11223344556677889900

Level Indicator 72

Peer group identifier 112233445XXXX

The peer group identifier is always 13 bytes long (the length of the network prefix); the level indicator, however, tells us that the last four bytes can take any value and we will still reach the destination ATM address via this peer group. Whether the destination ATM address is within this peer group, or in one lower in the hierarchy below it, depends on the value of those last four digits.
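The masking just described can be sketched in a few lines, using the chapter's 20-character shorthand (one character per byte); the function name is our own, for illustration.

```python
# Derive a peer group identifier from an ATM address (in the chapter's
# one-character-per-byte shorthand) and a level indicator given in bits.

def peer_group_id(address: str, level_bits: int) -> str:
    """Keep level_bits/8 characters of the 13 byte prefix; mask the rest."""
    prefix = address[:13]              # the network prefix portion
    fixed = level_bits // 8            # bytes significant at this level
    return prefix[:fixed] + "X" * (13 - fixed)

address = "11223344556677889900"
print(peer_group_id(address, 72))      # -> 112233445XXXX
print(peer_group_id(address, 80))      # -> 1122334455XXX
```

Note how raising the level indicator from 72 to 80 fixes one more byte of the prefix, identifying a more specific (lower) position in the hierarchy.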

Just as with netmasks in IP there are acceptable values for the netmask (255.255.255.240, 255.255.255.224, 255.255.255.192 and so on) based on what the netmask represents in binary, so there are acceptable values for the level indicator. The level indicator gives the number of bits of the network prefix that are associated with peers at a particular level within the hierarchy, and in the byte-aligned examples used here it is therefore always a multiple of 8.

Clearly, the higher the value of the level indicator, the lower in the hierarchy the identified peer group sits. Put another way, the higher the value of the level indicator, the more specific the location in the hierarchy you are specifying. This may be clearer with a look at figure 3-9.

This figure equates a netmask in a link state routed network to the level indicator in an ATM network. It is important to use a link state routed network for this comparison, as regular distance vector routing protocols like RIP and IGRP do not support Variable Length Subnet Masks (VLSM). If you are unfamiliar with VLSM, please refer to the Cisco TCP/IP Routing Professional Reference.

In figure 3-9, the different values for the netmask within the IP network identify subnetworks at different levels within the address hierarchy. This type of hierarchy allows a router in a different class B network to have one entry in its routing table to direct all traffic to any end station within the 172.8.0.0 class B address space. Routers within the 172.8.0.0 address space, however, will have many entries for all the 172.8.0.0 subnets listed in their routing tables.

The level indicator performs a similar function in the ATM network in figure 3-9. Here we are looking at the 13 byte network prefix: the initial level 56 indicator identifies all the network prefixes that start with 1234567 as belonging to the same level in the hierarchy. The level 72 indicator looks at the first 9 bytes and will therefore use 123456789 to place network prefixes in the hierarchy. As you can see, the higher the value of the level indicator, the more specifically (or lower down) we are identifying a network prefix in the ATM hierarchy. Just as a router in the top level of the IP network in figure 3-9 can advertise reachability to the whole 172.8.0.0 network to routers in other class B networks, an ATM switch in the top level of the ATM network in figure 3-9 can advertise reachability to all network prefixes that start with 1234567 to other ATM switches.

Whereas a router’s view of the network is contained within the routing table, an ATM device’s view is kept within a topology database. Before we look at what one of those topology databases looks like on a Lightstream 1010, we need to discuss just a few more concepts.

ATM Topology Database Updates

Just as OSPF uses the hello protocol for neighbor discovery and as a keepalive, so does PNNI within ATM. Hello runs on all active interfaces on an ATM switch and is used only to communicate with directly connected neighbors, even if there are multiple links to a neighbor. Once hello packets are exchanged, adjacent nodes determine whether they belong to a common peer group. If they do, they synchronize their topology databases. Unlike link state routing protocols, a node's view of the network (the routing table or topology database) is not sent unless the content of that view has changed. Hello packets do not regularly advertise topology database information; that is left to other mechanisms.

PNNI Packets

As well as the Hello packet type, PNNI supports the PNNI Topology State Packet (PTSP), the PNNI Topology State Element (PTSE), the Database Summary packet and the PTSE request. In essence, a PTSP contains multiple PTSEs. Consider a PTSE the basic element of data within a PNNI network. Each PTSE is a self-contained piece of data that describes reachable ATM addresses, horizontal and uplink information and link resource utilization.

Database summary packets are used during the initial connection of peers to synchronize the peers’ topology databases. Initially, one peer does not send all its topology information to the other; all that is exchanged is summary information. The receiving peer uses specific PTSE requests to get the full information on PTSEs that it does not know about.

Convergence within an ATM peer group is achieved through a flooding technique. (Note that the goal of PNNI is to converge topology databases within a peer group, not across the whole network, which is one of the features that enables ATM to scale so well.) After two nodes synchronize, either node that has updated its PTSE information with newer data as a result of the database synchronization will flood those newer PTSEs to all its neighbors within the peer group. The speed of convergence is therefore dependent on the number of nodes within a peer group. It is expected that the most common maximum number of nodes within a peer group will be 50 (this refers to switching nodes, not end stations).
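The flooding rule can be sketched as a toy model, under the simplifying assumption (ours, for illustration) that newness is judged purely by a sequence number: a node installs a PTSE only if it is newer than its stored copy, then re-floods it to its other neighbors in the peer group.

```python
# A toy sketch of PTSE flooding within one peer group. Structures and
# names are illustrative only, not the PNNI packet formats.

class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.db = {}          # ptse_id -> highest sequence number seen

    def receive(self, ptse_id, seq, sender=None):
        if seq <= self.db.get(ptse_id, 0):
            return            # stale or duplicate: install nothing, re-flood nothing
        self.db[ptse_id] = seq
        for n in self.neighbors:
            if n is not sender:
                n.receive(ptse_id, seq, sender=self)  # flood onward

a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors = [b]; b.neighbors = [a, c]; c.neighbors = [b]

a.receive("reachability-A", seq=1)   # A originates; B and C learn it by flooding
print(b.db, c.db)
```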

PNNI Topology Databases

Now we can take a first look at the PNNI topology database, an example of which can be displayed on a Lightstream 1010 by issuing the command show atm pnni database detail, as shown in figure 3-10. This figure leaves out some of the display for clarity at this stage.

The important points to note from the display at this stage are:

· Each PTSE is sequentially numbered by the node (that is, PTSEs are identified starting from number 1 onwards).

· ATM addresses are represented by 40 digit numbers, two hexadecimal digits per byte of the 20 byte address, and each PTSE lists the level indicator (in this case 56). Previously in this text we wrote ATM addresses with 20 digits; that was a shorthand notation used for convenience. The real world convention is to represent each four bits with one hexadecimal character.

· Internal and external addresses (with respect to the peer group) are identified via different PTSE types, as are horizontal links if any. Horizontal links are links between peer groups that are at the same level in the address hierarchy.

· Node information, such as the ATM node address, is also listed. It is worth noting that the network prefix is associated with the switch, the ESI identifies the individual interface, and the SEL byte identifies the sub-interface, if any. In this respect, ATM addressing mimics IP addressing in that an address belongs to an interface rather than to a switch as a whole.
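As a quick illustration of the 40 digit display convention noted above (the address bytes here are entirely invented):

```python
# Each octet of a 20 byte ATM address is displayed as two hexadecimal
# digits, giving 40 digits in all. The address bytes are made up.
address_bytes = bytes(range(1, 21))        # an invented 20 octet address
display = address_bytes.hex().upper()
print(display)        # the 40 hexadecimal digit form seen in the output
print(len(display))   # -> 40
```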

In IP systems that use OSPF as the routing protocol, the topology database forms the basis for selecting routes to enter into the routing table. The topology database here is used for the initiation of connections that will have a VPI/VCI number, leading to the desired ATM address. Connections are initiated through PNNI signaling, which uses a form of source routing to define the connection path through the network. The end station initiating the connection computes the source route information based on the available PTSEs in the topology database.

Next we will look at the PNNI signaling used to establish connections and connection identifiers between PNNI nodes and networks.

PNNI Signaling

PNNI signaling is used to set up connections between two private ATM networks or two private ATM network nodes. The call setup computation has two elements: first, the route (in terms of nodes and links) to use to get from source to destination; second, each node along the route must commit to providing the type and quality of service requested by the connection.

Some perspective is required here: connections are only established as the result of two hosts needing to communicate. Given that connections are initiated from hosts, PNNI connections that carry user data are always initiated as the result of a UNI connection request. The first stage of PNNI connection setup is route selection, which leads to the creation of Designated Transit Lists (DTLs). A DTL is the ATM networking name for a source route descriptor that lists the nodes and links to use from source to destination. Unlike IP, where the return path can be very different from the outbound path, ATM connections use a symmetric approach: both directions of the connection are serviced through the same route. A DTL only has significance within the peer group in which it was created. As the connection setup progresses from peer group to peer group, a new DTL is computed that takes the connection through the current peer group to the next peer group in sequence. This process is illustrated in figure 3-11.

In figure 3-11, host A wants to contact host B. When host A sends data to Catalyst 1 that is destined for host B, Catalyst 1 initiates a connection with LS1 in peer group 1 via UNI signaling, requesting a connection to Catalyst 2. Once LS1 knows that a connection to Catalyst 2 is required, it examines its topology database. From this it determines that LS6 will provide access to the destination Catalyst and that LS6 is in peer group 2. LS1 will also find from its topology database that LS3 is the border node for peer group 1. At this stage a DTL is generated for the peer group that lists LS1>LS3, as well as a next level DTL that lists PG1>PG2. When LS5 in peer group 2 gets the connection setup request, it realizes it is in the final destination peer group and sets up the DTL to be LS5>LS6.
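The peer-group-by-peer-group nature of DTLs can be sketched as follows. In real PNNI, each peer group computes its own DTL as the setup arrives; purely for illustration, we flatten that into one function here. The node and group names mirror the figure 3-11 example, but the mapping is our own assumption.

```python
# Split an end-to-end node path into one DTL per peer group traversed.
# Names follow the figure 3-11 example; the function is illustrative only.

def build_dtls(path, group_of):
    """Return (peer_group, hop_list) pairs, one per group along the path."""
    dtls, current, current_group = [], [], group_of[path[0]]
    for node in path:
        if group_of[node] != current_group:
            dtls.append((current_group, current))   # close out this group's DTL
            current, current_group = [], group_of[node]
        current.append(node)
    dtls.append((current_group, current))
    return dtls

groups = {"LS1": "PG1", "LS3": "PG1", "LS5": "PG2", "LS6": "PG2"}
print(build_dtls(["LS1", "LS3", "LS5", "LS6"], groups))
# -> [('PG1', ['LS1', 'LS3']), ('PG2', ['LS5', 'LS6'])]
```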

Each border node saves the DTLs so that, in the event of a link or other network component failure, an alternate path can be selected should that be necessary. In PNNI, this is referred to as crankback.

The reason crankback is useful is that the DTL (i.e. the selected path through the ATM network from source to destination), based on the topology database common to all peer group members, is calculated at the time of connection initiation. There is the potential for an event to occur elsewhere within the peer group, with a node defining the DTL before its topology database has been synchronized with the other members of the peer group. This may mean the selected path does not have the resources available to meet the demands of the connection. If this occurs, the originating node will try to find an alternate path. If that is not possible, the call is returned back to the UNI with an indication that the call setup was unsuccessful.

The benefit of using the DTL for the crankback process is that the call does not have to go all the way back to the device originating the connection; it merely has to go back to the source of the DTL for that peer group.

Assuming that there is physical connectivity between the source and destination at the time of call setup, a good DTL will be generated. That is stage one; stage two is to check that the selected route has the required resources to meet the demands of the connection. ATM can support the following types of service:

· Constant Bit Rate

· Real-time variable bit rate

· Non-real time variable bit rate

· Unspecified bit rate

· Available bit rate

The call setup procedure does not explicitly state which of these types of service it wishes to set up. Instead, values for specific Information Elements (IEs) are set that relate to one of these types of service. These IEs are based on the type of carrier, a traffic descriptor and QoS requirements.

ATM And Frame Relay

It is unlikely that end to end ATM networks will form much of the corporate network environment for many years, if ever. However, ATM does have a place in the WAN, and is already popular when used as a corporate WAN backbone with frame relay used to connect to branch sites. This is illustrated in figure 3-12.

The interworking function (IWF) is responsible for encapsulating frame relay packets in AAL5, then cutting these up into ATM cells for transmission over the ATM backbone. The reverse happens for communication in the other direction. There are generally two generic forms of interworking, one being encapsulation (as used here) and the other translation (as used in some Ethernet to FDDI switches).

Frame Relay to ATM Connectivity Issues

It is not worth going through a full explanation of frame relay here; the Cisco TCP/IP Routing Professional Reference covers frame relay implementations on Cisco equipment in some detail. What is worth considering is how ATM and frame relay should operate together, given that in the environment described above, both will be used to transport information from source to destination. We will, however, review the relevant features of frame relay below.

The ITU (International Telecommunications Union) and the ATM Forum have defined two types of interworking for ATM and Frame Relay, which are Network Interworking and Service Interworking.

Frame Relay Review

Frame Relay basically sets up Permanent Virtual Circuits (PVCs) that are identified by DLCI numbers to a device connecting to the frame relay cloud. Frame Relay works on the basis of guaranteeing a certain amount of throughput, and making more available to an end station if the network can support it.

The key elements of this process are the CIR (Committed Information Rate), DE (Discard Eligible), FECN (Forward Explicit Congestion Notification), BECN (Backward Explicit Congestion Notification), Bc (Committed Burst Size), Be (Excess Burst Size) and the Committed Time Interval.

The most common specification network engineers have to make when buying a frame relay connection from a carrier is to size the CIR. Typically a frame relay connection will be bought with something like a 128K link, having a CIR of 64Kbit/sec or so. The concept is that the purchaser will always be able to get at least 64 Kbit/sec and occasionally something more.

Whether this whole concept is real or not is a matter of some debate. It is extremely difficult for frame relay providers to monitor each client's bandwidth utilization in real time and guarantee that each will get its CIR, particularly during periods of severe congestion. What is more usual now is that the CIR is guaranteed during "normal" operation and as an average over the committed time interval (discussed next).

The second important service specification relates to the Bc and Be parameters. The Bc defines the number of bits that can be sent during a committed time interval, based on the CIR. In reality, the Bc merely states that for a specific time, you will get the maximum throughput your CIR allows. For example, with a Bc of 384K and a CIR of 64Kbit/sec, you get 6 seconds (calculated as 384/64) at Bc guaranteed; this 6 second value is the committed time interval. The Be is a separate number of bits, which specifies the maximum amount of uncommitted data in excess of Bc that the frame relay network can attempt to deliver during the committed time interval.

In practice this means that you are only guaranteed the CIR as an average over the length of your committed time interval, and that you may burst above that, but that burst data may not get through. So with a committed time interval of 6 seconds, for any 6 second window you may get 30Kbit/sec for the first two seconds, followed by more than 64Kbit/sec for the next four seconds, so that over the 6 seconds you average 64Kbit/sec throughput, leaving the CIR intact.
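The arithmetic above can be checked with a short sketch; the parameter values are the ones from the example, and the per-second throughput samples are a hypothetical traffic pattern, not measured data:

```python
# Frame relay burst parameters from the example above (illustrative values).
CIR = 64_000      # Committed Information Rate, bits/sec
Bc = 384_000      # Committed Burst Size, bits per committed time interval

# The committed time interval Tc is derived from Bc and CIR.
Tc = Bc / CIR     # 384000 / 64000 = 6 seconds

def average_rate(samples_bps):
    """Average throughput over one committed time interval.

    samples_bps: per-second throughput readings covering Tc seconds.
    The CIR guarantee applies to this average, not to every second.
    """
    return sum(samples_bps) / len(samples_bps)

# Two seconds at 30 kbit/s followed by four seconds at 81 kbit/s
# still averages exactly 64 kbit/s over the 6-second interval.
samples = [30_000, 30_000, 81_000, 81_000, 81_000, 81_000]
assert len(samples) == Tc
print(Tc, average_rate(samples))
```

Note that a sustained dip below the CIR in one part of the interval must be matched by a burst above it elsewhere for the average to hold.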

The primary mechanism that leads to frames being discarded on a frame relay network during times of heavy load is the DE (Discard Eligible) bit. This is normally set on the frame relay network access device for specific types of traffic, so one could define all web traffic as discard eligible, but not in-house application traffic. In addition, frames in the Be size range are normally marked as DE.

The DE bit really comes into play when FECN and BECN start to kick in. The FECN bit is set by a switch in the frame relay cloud that is experiencing congestion. This FECN bit is set in packets sent in the direction of the data flow, toward the recipient of the data. Once the recipient of the data flow receives FECN bits, it sets a BECN bit in packets going back to the source of the data, telling it to back off. If congestion continues even with the setting of FECN and BECN bits, packets marked with the DE bit will be discarded first. If congestion still persists, regular packets will be dropped also. This is illustrated in figure 3-13.
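The discard ordering described above can be modeled as a simple drop policy. This is an illustrative sketch, not how any particular switch implements congestion management; the congestion levels are an assumption made for the example:

```python
def frames_to_drop(frames, congestion_level):
    """Pick which frames a congested switch discards first.

    frames: list of dicts, each with a 'de' flag (Discard Eligible).
    congestion_level: 0 = none, 1 = moderate, 2 = severe.
    Returns the frames that would be dropped.
    """
    if congestion_level == 0:
        return []                       # no congestion: deliver everything
    if congestion_level == 1:
        # Moderate congestion: only DE-marked frames are discarded.
        return [f for f in frames if f["de"]]
    # Severe congestion: regular frames are dropped as well.
    return list(frames)

frames = [{"id": 1, "de": False}, {"id": 2, "de": True}, {"id": 3, "de": False}]
print(frames_to_drop(frames, 1))   # only the DE-marked frame is dropped
```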

This is all very unpalatable to the ATM world, which is geared toward producing guaranteed end to end delivery. Next we'll look at Network Interworking and Service Interworking, which have to mesh together these disparate technologies.

Network Inter Working

Network interworking enables a frame relay end station to communicate with an ATM end station. In this model, ATM is really just used as a transport mechanism, with the real data being carried within frames. Of course the frames are encapsulated within AAL5 and segmented into cells for transmission over the ATM network, but the receiving end station still has to decode frames. This requires the receiving ATM end station to be able to understand frames. Essentially, the Interworking Function and the ATM end station have frame relay protocols running on top of the ATM and ATM segmentation/re-assembly layers.

The detail of this operation is contained within the ITU-T recommendation I.555 and the ATM Forum implementation FRF.5. These specifications mandate the use of AAL5 adaptation, the mapping of a DLCI to one VPI/VCI ATM connection, the mapping of DE to the ATM Cell Loss Priority (CLP) bit and the mapping of FECN/BECN to the ATM EFCI congestion field.
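The mandated field mappings can be summarized in code. The DLCI-to-VPI/VCI table and the field names are hypothetical stand-ins for illustration; real interworking functions carry these values in the actual frame and cell headers per I.555/FRF.5:

```python
# One frame relay DLCI maps to one ATM VPI/VCI pair (hypothetical table).
DLCI_TO_VC = {100: (0, 32), 101: (0, 33)}

def map_frame_to_cell_header(dlci, de, fecn):
    """Network interworking field mappings, as described above.

    DE maps to the ATM Cell Loss Priority (CLP) bit; the FECN/BECN
    congestion indication maps to the ATM EFCI congestion field.
    """
    vpi, vci = DLCI_TO_VC[dlci]
    return {"vpi": vpi, "vci": vci, "clp": int(de), "efci": int(fecn)}

print(map_frame_to_cell_header(100, de=True, fecn=False))
```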

Service Inter Working

This is a more complete function than network inter-working. In Service Inter Working neither the frame relay end station, nor the ATM end station knows anything about being connected to an end station of the other persuasion. In this case the Service Inter Working Function handles all conversions between the two technologies, delivering full translation between the protocols. The following outlines the main features of service inter working.

· Cell Loss Priority (CLP) and DE are translated. For traffic travelling from Frame Relay to ATM, the DE is either mapped to CLP if present, or all cells are given a pre-set CLP value.

· Congestion is again handled by a FECN to EFCI mapping and vice versa in the reverse direction.

· DLCIs are mapped one to one to VPI/VCI numbers.

As can be seen, Service Inter Working is basically bi-directional protocol translation performed within the ATM network, whereas Network Inter Working is frame relay frames encapsulated within ATM and requires ATM end stations to reassemble frame relay frames from cells and pass those frames up to higher protocol layers.

Frame Based UNI (FUNI) and DXI

For either of the Interworking Functions described above, the frame relay end station knows nothing of ATM communications. With Frame Based UNI (FUNI) however, a frame based end station may make use of an ATM network via the regular ATM UNI interface, as a FUNI enabled end system is ATM aware. Although a FUNI end system uses frames instead of cells, it is still able to access the key ATM-UNI functions. A FUNI end station can assign VPI/VCIs and make use of UNI signaling, but it has a limited address range and does not support all types of ATM traffic (no CBR or ABR types).

Within the FUNI header, 10 bits are used for the frame address, 4 bits for the VPI and 6 bits for the VCI. The FUNI frame carries both a Congestion Notification field, which is used to relay the ATM EFCI bit value, and the usual CLP, which is copied from the FUNI frame to the ATM cell directly.

A similar interface is the Data Exchange Interface, or DXI for short. DXI specifies the connection between a DTE, such as a Cisco router serial interface, and a circuit termination device, such as an ATM DSU. The benefit is that the router interface can send DXI format variable length packets at slower speeds, such as T-1 speeds, and the ATM DSU can then cut up the frames into cells.

The reason this is beneficial for slower speed links is the percentage protocol overhead of variable length frames compared to fixed length cells. With ATM cells, the 5 byte header is 9.4% of the 53 bytes transmitted, which is acceptable when you have plenty of available bandwidth, but on slower links bandwidth is at more of a premium and the cell overhead is too great. With variable length frames, the header overhead is generally far less. DXI and FUNI are contrasted in figure 3-14.
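The overhead comparison is easy to verify. The 5-of-53 cell figure comes from the ATM cell format itself; the 8-byte header on a 1500-byte frame below is an assumed example of a variable length frame, chosen only to show the contrast:

```python
def overhead_pct(header_bytes, total_bytes):
    """Header overhead as a percentage of the bytes transmitted."""
    return 100.0 * header_bytes / total_bytes

# ATM cell: 5-byte header out of 53 bytes on the wire.
cell = overhead_pct(5, 53)          # roughly 9.4%

# A variable length frame, e.g. an assumed 8-byte header on a
# 1500-byte payload, carries far less relative overhead.
frame = overhead_pct(8, 1508)       # well under 1%

print(round(cell, 1), round(frame, 1))
```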

DXI is currently more widely deployed than FUNI, as the latter is a newer specification and there are far more DXI capable devices available. As ATM becomes more widely deployed, it is expected that FUNI will replace DXI.

ATM and IP Integration

The most popular network layer protocol at the present time, and for the immediate future, is the TCP/IP suite of protocols. When ATM networks are deployed, the likelihood is that they will have to transport IP datagrams. We then have to consider how we mesh these two technologies, which have considerable overlap and in many ways opposing design goals. As an example, whose routing mechanism do we use, IP with OSPF and IGRP protocols, or ATM with its own OSPF derived mechanism? Also, how do we support the ARP (Address Resolution Protocol) of IP across an ATM network? There are various solutions to all the issues that face using both IP and ATM within the same network. Some are simple, some are complex. In the next few sections we'll cover the theory necessary to understand these options and in chapter 6, we'll show the Cisco device configurations necessary to support them.

Review of IP Over ATM Issues

We have already hinted at some of the issues with running TCP/IP and ATM together. The problems exist because the protocols overlap in what they are trying to do. The picture is complicated further when you consider that the IP model is connectionless, whereas ATM is connection oriented. True, TCP is a connection oriented protocol, but establishing TCP connections relies on the IP layer, which is connectionless and relies on broadcast mechanisms like ARP (Address Resolution Protocol) to locate host addresses. ATM is a Non Broadcast Multiple Access (NBMA) network like Frame Relay (the operation of NBMA networks was discussed in the Cisco TCP/IP Routing Professional Reference). As such, one of the key integration issues is how to provide support for these connectionless, broadcast reliant mechanisms within the connection oriented ATM network.

We also have the opportunity to utilize ATM’s ability to establish direct connections between devices that are on different IP subnets, without recourse to a router.

The most common way of integrating IP and ATM is termed Classic IP Over ATM (CIOA) and maintains the IP rules of communicating between subnets via a router. This method reduces ATM to a link layer protocol (albeit a potentially very fast one) and does not enable end stations to take advantage of the sophisticated features of ATM as a network protocol. We will look at CIOA next.

Classic IP Over ATM

CIOA is formally defined in RFC 1577 and was developed by the IETF so that ATM technology could be introduced to existing routed networks with as little disruption to the existing operation as possible. Proceeding with the CIOA approach enables network engineers to gain familiarity with ATM operation and equipment, without changing their view of their network too dramatically. Once familiarity with CIOA is gained, operations staff can move on to look at utilizing enhancements like the Next Hop Resolution Protocol (to be discussed next) and gradually migrate to using more and more ATM facilities.

As we have said, CIOA uses ATM as a link layer protocol within the overall IP communications scheme, however, this is done for unicast (i.e. point to point) communications only. As such, hosts on the subnet are allocated an IP address and an ATM address. Therefore, when a host communicates with another host on the same subnet (as illustrated in figure 3-15), an address resolution has to be performed to resolve the destination IP address to a destination ATM address. With this communications method, the router is making all the routing and filter/security decisions based on the IP rules of communication.

To discuss CIOA further we need to introduce a new term, the Logical IP Subnet (LIS). This refers to the collection of ATM interfaces that are assigned to the same IP subnet in the network and communicate with each other directly using ATM. In figure 3-15, examples are interfaces A, B and C. The other key points to note in figure 3-15 are that each router has a physical connection into two subnets and each subnet has its own ATMARP function. In this mode, if host 3 wishes to contact host 1, the path for communication routes via R3 and R2, even though there is a direct ATM path from host 3 to host 1.

There are two types of virtual circuits within ATM, a switched virtual circuit (SVC) and a permanent virtual circuit (PVC). The ATMARP service is used for SVC connections, whereas Inverse ATMARP is used for PVC connections.

SVC operation with ATMARP works in much the same way that ARP does in a LAN environment. In the LAN environment, if an IP end station wants to contact another IP end station within the same subnet but does not know its MAC address, it issues a broadcast ARP packet, asking the end station with the desired IP address to reply with its MAC address. The end station that possesses the destination IP address picks up the broadcast and replies with its MAC address, and connectionless point to point unicasts can then proceed on the LAN between the source and destination end stations.

The only difference between how this operates and how ATMARP operates when establishing an SVC connection is that instead of the source node issuing an ARP broadcast, it makes a request of the ATMARP server for the ATM address of the destination IP node. This assumes of course that the ATMARP server has a complete map of ATM to IP address for all nodes within that LIS. The ATMARP server may be defined on a router and not need a separate device to support this function.
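A minimal model of the ATMARP exchange follows. The addresses are made up for illustration, and the dict stands in for the server's address table; a real implementation uses the ATMARP packet formats defined in RFC 1577:

```python
class ATMARPServer:
    """Holds the IP-to-ATM address map for one Logical IP Subnet (LIS)."""

    def __init__(self):
        self.table = {}

    def register(self, ip, atm):
        # Pairs are learned via Inverse ARP when a client connects.
        self.table[ip] = atm

    def resolve(self, ip):
        """ATMARP request: return the ATM address for an IP in this LIS."""
        return self.table.get(ip)

# Hosts in the LIS register their pairs, then resolve each other
# before signaling an SVC to the returned ATM address.
server = ATMARPServer()
server.register("10.1.1.1", "47.0091.0000.0001")
server.register("10.1.1.2", "47.0091.0000.0002")
print(server.resolve("10.1.1.2"))
```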

In multicast environments, the ATMARP principle is extended by a MARS, a Multicast Address Resolution Server. The most significant difference between a MARS and an ATMARP server is that the MARS contains a table which maps IP group addresses (the class D multicast addresses) to the list of ATM addresses of the devices registered for each multicast group. MARS is discussed further in chapter 6.

There is no mechanism by which a client (such as host 1 in figure 3-15) can dynamically learn the ATMARP server address; each device in the LIS needs to be manually configured with the address of the ATMARP server. When each client starts up, it will use the pre-configured ATMARP server address to signal for a connection. The ATMARP server then uses Inverse ARP to obtain the ATM/IP address pair from the new client and inserts that pair in its server table. Each client sends periodic updates to the ATMARP server to refresh its address pair in the server tables.

So, with CIOA, we are using ATM as a potentially high speed link layer protocol, with the classic IP routing between subnets intact. To do this, routers need to connect to the ATM network using something like an ATM Interface Processor to support the higher speeds of the ATM network. With this model, the ATM network looks like just another interface on the router that provides point to point communication.

In a PVC network, the connection across the ATM network is already defined, and what is needed is to determine the protocol (IP) address that exists at the other end of that PVC. This is essentially how frame relay operates, with the DLCI providing a local connection identifier for access to a specific IP address. With a PVC network, the client does not need to contact an ATMARP server; all it does is issue an Inverse ATMARP request down each PVC to obtain the IP address of the remote connection.
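The PVC case can be sketched the other way around: the client walks its configured PVCs and asks each far end for its IP address. The PVC identifiers and responses here are hypothetical, and the callable stands in for the on-the-wire Inverse ATMARP exchange:

```python
def inverse_atmarp(pvcs, query_remote_ip):
    """Build an IP-to-PVC map by querying each PVC's far end.

    pvcs: iterable of (vpi, vci) pairs configured on the interface.
    query_remote_ip: callable that sends an Inverse ATMARP request
    down one PVC and returns the remote end's IP address.
    """
    table = {}
    for vpi, vci in pvcs:
        remote_ip = query_remote_ip(vpi, vci)
        table[remote_ip] = (vpi, vci)
    return table

# Stand-in for the on-the-wire exchange (hypothetical answers).
answers = {(0, 40): "10.1.2.1", (0, 41): "10.1.2.2"}
table = inverse_atmarp(answers.keys(), lambda vpi, vci: answers[(vpi, vci)])
print(table["10.1.2.2"])
```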

CIOA does provide access to the higher transmission speeds of ATM and may be useful for a small campus network, possibly where high resolution video images need to be transferred between workgroups in separate buildings. With a small number of subnets, this mode of operation can provide the benefits of ATM transmission speeds without network operations staff facing a wholesale change in network operation. It does, however, have problems retaining optimal operation and scaling to a network that contains large numbers of hosts and subnets. It is clear to see, even from the small network in figure 3-15, that as host 3 is directly connected to the ATM network, it would be advantageous to take advantage of ATM's ability to connect directly to host 1 should it wish to, rather than be forced to route through router 3 and router 1. The mechanism developed to enable this type of communication is the Next Hop Resolution Protocol (NHRP), the next stage in IP/ATM integration. The key point about NHRP is that it breaks the normal rules of communication between IP subnets, by allowing direct communication between devices that are not on the same subnet.


NHRP, The Next Hop Resolution Protocol

The key difference between CIOA and NHRP networks is the method used to resolve a destination IP address to an ATM address. In CIOA, the ATMARP server delivers ATM addresses for hosts within one subnet. NHRP, however, allows hosts to get destination ATM addresses across multiple subnets. As such, NHRP is an address resolution protocol, rather than another form of routing protocol. However, as the ATM address resolved for the next hop is often outside the IP subnet of the originating host, the end stations need modified stacks to work with NHRP.

As NHRP allows address resolution across multiple subnets, it can be considered a superset of ATMARP. The network we will use to describe NHRP is illustrated in figure 3-16.

At first glance, it may appear that all we have done is replace the ATMARP server per LIS with a Next Hop Server (NHS) process per LIS. In physical terms that is all that has changed, but the operation of the NHS is radically different to that of the ATMARP server. Whereas the ATMARP server provided ATM addresses for IP interfaces within the one subnet, the NHS obtains the ATM address of the final destination, not just the next hop router.

An NHS has two modes of operation. The first is server mode, which supports manual configuration of ATM-IP address pairs from remote subnets. The second is called fabric mode, which enables an NHS to retrieve information about remote subnets automatically, by querying routing tables generated by intra and inter domain routing protocols (like IGRP and BGP respectively).

The following describes how the NHS (Next Hop Server) works in figure 3-16. Supposing host 1 wants to contact host 3, the shortest route is for router 1 to establish a direct connection with router 3; however, router 1 does not have an interface connected to LIS3. So router 1 makes a request to NHS1 for the ATM address of the next hop machine to get to router 3. If NHS1 recognizes the destination IP address of router 3, the corresponding ATM address is returned to router 1 immediately. If the address pair is unknown, NHS1 will consult other NHS servers via the NHRP request/reply path shown, which will pass the request on until an NHS with the required information is found. The information is passed back along the path it came from, so that all NHSs on the path can update their tables with the new information. Once router 1 has the destination ATM address of router 3, a direct connection is established as shown.
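The resolution walk just described can be modeled as a chain of NHS tables with caching along the return path. This is a simplified sketch with made-up addresses; real NHRP uses request/reply packets and next-hop forwarding as specified in RFC 2332:

```python
def nhrp_resolve(ip, nhs_chain):
    """Walk a chain of Next Hop Servers to resolve ip to an ATM address.

    nhs_chain: list of dicts, each one NHS's IP-to-ATM table, ordered
    along the NHRP request/reply path. When an answer is found, the
    reply travels back along the path and every intermediate NHS
    caches the new address pair, as described above.
    """
    for hop, nhs in enumerate(nhs_chain):
        if ip in nhs:
            atm = nhs[ip]
            # Reply passes back along the path it came from; servers
            # on the way update their tables with the new pair.
            for earlier in nhs_chain[:hop]:
                earlier[ip] = atm
            return atm
    return None

nhs1, nhs2, nhs3 = {}, {}, {"10.3.3.1": "47.0091.0000.0003"}
atm = nhrp_resolve("10.3.3.1", [nhs1, nhs2, nhs3])
print(atm, "10.3.3.1" in nhs1)   # NHS1 now holds the cached pair
```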

So we have moved from using ATM as just a link layer protocol with CIOA, to making use of ATM’s ability to cut across subnet boundaries with NHRP. The next level of using ATM within a network is LAN Emulation to generate emulated LANs across an ATM switching fabric.

LANE

LANE was designed to connect remote LANs together over an ATM backbone. A significant advantage of this approach is that LAN applications can run unchanged over a LANE connection and have no idea they are running over ATM. With several hundred megabits per second available, it is possible to emulate full LAN bandwidth and connectivity across the wide area. The wide area links that use LANE to extend LANs across the wide area are termed Emulated LANs, and provide LAN services for both Ethernet and Token Ring. It should be noted that LANE does not support connection of an ethernet to a token ring segment, nor does it support FDDI. Additionally, an end station equipped with an ATM NIC can become a member of either a token ring or ethernet emulated LAN.

LANE operates exclusively at the data link layer and as such, ATM stations on the LANE connection appear as if they were on the same LAN as the associated ethernet LANs on the ends of the LANE connection. The emulated LAN that extends across the ATM network and the two LANs it joins together all form one subnet and one broadcast domain, in effect as if they were all part of the same VLAN. As with VLANs, it is necessary to connect via a router for two ELANs to communicate.

Compared to CIOA or even NHRP, LANE is complex. Understanding this technology is further hindered by the arcane and overlapping terminology used in the specifications. So, to start with, we will define the terms in order that LANE’s operation can be discussed.

LANE Terminology

LEC: This is the LAN Emulation client, a LANE end station where the LANE protocols run. Most commonly this is implemented in a Catalyst LANE module that is optimized for that purpose. The LANE client connects directly to the ATM switch, which in the Cisco environment is a Lightstream 1010.

LECS: This term does not refer to a collection of LAN Emulation Clients, that term is LECs, the plural of LEC. LECS refers to the LAN Emulation Configuration Server, which is a server that contains configuration information for several ELANs. One LECS per domain, or collection of ELANs, has to be configured. The sort of information a LECS contains is the ATM address of the LES.

LES: This is the LAN Emulation Server, which provides address resolution services within the LANE environment. The resolution is supplied to an end station that knows the MAC address of the device it wishes to contact, but does not know the ATM address. This service uses LE-ARP, LAN Emulation ARP, which is different to IP ARP. IP ARP resolves IP (layer 3) addresses to MAC (layer 2) addresses. LE-ARP resolves MAC to ATM addresses, which in this model are both at layer 2.

BUS: The Broadcast and Unknown Server is used when a LEC wants to send a broadcast or contact an as yet unknown ATM address. The BUS handles sending data to multiple locations, thus emulating the effect of broadcasts. In an emulated LAN environment, the LES and BUS functions are mostly implemented on one unit, referred to as the LES/BUS, which is configured on one Catalyst.

Overview of LANE Communications

Figure 3-17 illustrates what we are trying to achieve with the LANE protocols, in terms of trying to extend an ethernet or token ring LAN across ATM. Each ELAN is considered a separate broadcast domain and in the Cisco implementation each ELAN is referred to by name rather than number in device configurations. ELANs are connected to each other via router functions just as VLANs were and as such LANE conforms to the normal IP subnet rules.

Figure 3-17 shows both an emulated ethernet and token ring being implemented across an ATM network. An ATM network consists of ATM switches, such as the LightStream 1010, represented by LS1 through 4. The device that accesses the ATM network from the local ethernet or token ring segment is generally a Catalyst, or a router ATM Interface Processor in the Cisco environment. So in figure 3-17, Cat 1 uses its fast ethernet interface to connect to local stations, and the LANE module for transmission through the ATM network across the emulated ethernet LAN. There is one LECS per LANE domain, which is implemented in Cat 4, and each emulated LAN has its own LES/BUS. As can be seen, each Catalyst runs its own LAN Emulation Client software to communicate with the ATM network. In fact, if there were several VLANs on Cat1, each would need its own LEC configured for it to communicate across its associated ELAN.

As stated initially, LAN applications do not know that they are using ATM and the whole business of LANE communication connections is hidden from them. So, for host 1 to communicate with host 2, the first hop is to Cat1. We now have to work out how Cat1 will forward data to Cat2 for the data to reach host 2. At this stage, host1 thinks that host 2 is on the same LAN, and therefore sends out the ethernet packet with the destination IP and MAC address of host 2.

Cat1 will be configured to associate the VLAN that host 1 is on with the appropriate ELAN to get the packet to the destination LAN. Through the LANE processes that we will describe, Cat1 will setup a direct connection with Cat2 and send the data to Cat2, which will deliver the packet to host2.

The exact process for the Cat1 to Cat2 connection to be made across LANE is a lengthy and complex one, so it may help to break it down into two phases. First we'll look at achieving the initial state for Cat1, whereby all the configuration information is available to the devices that need it and control connections are established. Second is using the LANE facilities to establish the data transmission connection between Cat1 and Cat2.

For the first phase of reaching the initialized state, let's summarize the main points of what is going to happen before considering them in detail.

· LEC (in this case Cat1) Connects to LECS and finds the LES address.

· LEC joins ELAN

· MAC address to ATM address map is registered with the LES.

· A connection to the BUS is established to support broadcast and multicast.
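The four steps above can be sketched as an initialization sequence. The server objects, dictionary keys and ATM addresses below are hypothetical stand-ins for the LANE control connections; in the Cisco implementation the BUS address is obtained by an LE-ARP to the LES, which is modeled here as a table lookup:

```python
def lec_initialize(lecs_table, les, lec_macs):
    """Bring a LAN Emulation Client to its initialized state.

    lecs_table: the LECS's configuration (ELAN name -> LES address).
    les: the LES for the ELAN, modeled as a dict of MAC -> ATM pairs
    plus a "BUS" entry answered via LE-ARP.
    lec_macs: the MAC/ATM pairs this LEC registers (its own address
    plus any proxied end stations).
    """
    les_addr = lecs_table["elan1"]       # 1. contact LECS, find the LES
    les.update(lec_macs)                 # 2-3. join ELAN, register pairs
    bus_addr = les["BUS"]                # 4. LE-ARP for the BUS, connect
    return {"les": les_addr, "bus": bus_addr}

lecs_table = {"elan1": "ATM-LES-1"}      # hypothetical addresses
les = {"BUS": "ATM-BUS-1"}
state = lec_initialize(lecs_table, les, {"00:e0:1e:aa:bb:cc": "ATM-CAT1"})
print(state, les["00:e0:1e:aa:bb:cc"])
```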

The first thing a LEC (in this instance Cat1) does when it comes up on the ATM network, is contact the LECS to find out the address of its LES. Once the LES has been found, the LEC uses the LES to resolve all MAC addresses to ATM addresses through LAN Emulation ARP (LE_ARP).

The first question is how the LEC knows the address of the LECS. There are several options. Given that the attached LightStream has the address of the LECS for that LANE domain configured, each attached Catalyst can use ILMI to retrieve the LECS address, or use connection 0/17 (VPI=0, VCI=17), which connects directly to the LECS. Failing that, the LEC will try a well-known address for the LECS, which is in effect a default ATM address for the LECS that all LECs know about. Of course, your LECS will have to be configured with this specific address (4700790000000000000000000000A03E00000100). Cisco also provides a proprietary option, which is to configure the LECS address directly into the LEC.

If we assume that one of the methods above works for the LEC to contact the LECS, the LEC will get the address of the LES for its ELAN and connect to it. This SVC once established is referred to as the configure direct VCC (Virtual Circuit Connection). The LEC will then move on to the join ELAN phase where the LEC registers with the LES. As part of this join phase, the LEC may register one or more MAC addresses and the corresponding ATM address with the LES. In the case where the LEC registers multiple MAC addresses, it is registering MAC addresses on behalf of attached end stations. This would occur if Cat 1 were registering MAC addresses with the LES for host 1 and host 5 in figure 3-17. When the addresses of host 1 and host 5 are registered with the LES, they are registered as being via a proxy. That is, the LES knows that to get to them it has to connect via Cat 1.

Next, the LEC will send an LE-ARP request to the LES, to obtain the ATM address of the BUS (in Cisco, this will be the same address as the LES). Once the BUS address is determined, the LEC establishes a connection with the BUS (known as the multicast send VCC), which connects the newly joined LEC to the point to multi-point broadcast channel (the multicast forward VCC), thus emulating a LAN’s broadcast capabilities.

The effect is that the ATM ELAN looks like an extension to the ethernet LAN to the hosts on the physical ethernet network. In essence, host 1 and host 2 will be configured for the same IP subnet. Figure 3-18 shows how VLANs and ATM ELANs are connected on a LANE network.

We are now ready to move on to the second phase, which consists of Cat1 establishing an ATM connection to Cat2, to support host 1 communicating with host 2. Given that host 1 knows the MAC address of host 2, the job of Cat1 is to get the ATM address that corresponds to this MAC address and set up a connection to support data transfer between Cat1 and the destination ATM address. First, Cat1 issues an LE_ARP request to the LES to determine the ATM address to use to reach the destination MAC address (the MAC of host 2). While Cat1 is waiting for this reply, it also sends frames destined for host 2 to the BUS. The BUS then forwards these frames to all members of the ELAN. If the LES knows the ATM address for the required MAC address, it replies fairly quickly to the Cat1 LE_ARP. If the destination MAC address is unknown, the LE_ARP request is sent to all ELAN members that registered as proxies, to see if the MAC address is known by them. Ultimately Cat1 will get a reply to its LE_ARP request and have the ATM address to send to.

Armed with the destination ATM address, Cat1 can now establish a data direct VCC to Cat2. We now have a potential problem: the frames that were delivered by Cat1 to the BUS and passed on to Cat2 may be received out of sequence with the frames being sent directly to Cat2. This is a problem because a LAN does not deliver out of sequence frames (remember the lengths we went to with the spanning tree protocol to ensure a single loop free path) and we have to emulate the operation of a LAN here. The way we get around this problem in LANE is to make use of the Flush protocol.

The operation of the Flush protocol is as follows. When Cat1 establishes the data direct VCC with Cat2, it also sends a special flush packet to the BUS and stops sending the BUS any more data. When Cat2 receives the flush packet, it returns it to Cat1, which in effect tells Cat1 that all the packets it sent via the BUS to Cat2 have been received by Cat2. Now that Cat1 has a connection to Cat2 and all the frames passing through the BUS have been flushed, full speed communication between Cat1 and Cat2 can now take place.
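The flush sequence can be modeled to show why ordering is preserved. Queues stand in for the BUS path and the data direct VCC, and the flush packet is a plain marker string; this is an illustration of the mechanism, not the LANE flush message format:

```python
from collections import deque

def send_with_flush(frames_via_bus, frames_direct):
    """Deliver frames in order across the BUS-to-direct-VCC switchover.

    The sender stops using the BUS, sends a flush packet down the BUS
    path, and holds direct-VCC frames until the flush is echoed back.
    This guarantees no frame on the new path overtakes one on the old.
    """
    delivered = []
    bus_path = deque(frames_via_bus)
    bus_path.append("FLUSH")             # flush marker follows the data
    while bus_path:
        frame = bus_path.popleft()
        if frame == "FLUSH":
            break                        # receiver echoes flush: BUS path drained
        delivered.append(frame)
    delivered.extend(frames_direct)      # now safe to use the direct VCC
    return delivered

print(send_with_flush(["f1", "f2"], ["f3", "f4"]))
```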

Once packets destined for host 2 are received by Cat2, Cat2 forwards them on to host 2 as it would any other packet received from any other segment belonging to that VLAN.

Implementing LANE and using an ELAN to connect two parts of a VLAN that are physically separated across an ATM network can be thought of as a form of VLAN trunking. LANE, however, provides more functionality than trunk links like ISL that we covered earlier. In particular, LANE allows an ethernet host to communicate directly with an ATM host on the associated ELAN; ISL did not allow any host connectivity on the VLAN trunk. With ISL, the VLAN tag was kept in the frame for transmission across the trunk, which is not the case with LANE. Essentially with LANE, we are giving all the ATM devices MAC addresses that are addressable directly from the associated VLAN.

In the example above depicted in figure 3-17, host 1 will address packets to host 2 via MAC address, which Cat1 will forward on to the ELAN, knowing that the MAC address of host 2 does not reside on the same LAN as host 1. If host 1 needs to resolve a destination IP address to a destination MAC address prior to commencing communication, it will issue the usual IP ARP broadcast, which will be forwarded by Cat1 to the BUS, which will send it on to all ELAN members.

LANE has some definite advantages, but is extremely complex by comparison to other ATM technologies and is still bound by the normal IP subnet rules. In addition, because the BUS maintains a separate connection for all members of the VLAN, the scalability of LANE to large numbers of hosts per ELAN has to be in question.

Just as with CIOA we had NHRP to make full use of ATM's ability to cut across subnet boundaries, with LANE there is MPOA (Multi Protocol Over ATM), which combines LANE and NHRP to take advantage of LANE and allow it to operate more efficiently across the ATM network.

MPOA

Multi Protocol Over ATM represents the highest level of complexity currently available for deploying an ATM network. Because this standard is still very new and quite complex, we will cover specific implementations of this protocol only in overview in the later practicals. However, a brief theoretical overview is appropriate here.

As with nearly all the different ways of implementing ATM on a network, there are some new and unique terms that need to be defined.

· MPC, the Multi Protocol Client, which takes a place in the network equivalent to that of the LEC in LANE.

· MPS is the Multi Protocol Server, which is resident on the same machine as the NHS, the Next Hop Server (remember I said MPOA is basically LANE with NHRP). The MPS accepts connections from the MPC and provides ATM address resolution services. When drawing an MPOA network, the MPS can be regarded as taking the place of an NHS in an NHRP network.

· IASG is the Internetwork Address Sub Group, which is a fancy name for an IP subnet.

· Edge Device, a physical device that connects legacy networks to ATM networks. It is referred to as an edge device as it is placed on the logical edge of the ATM network cloud.

· Virtual Router, an MPOA term that refers to router functions implemented in software that may be distributed throughout the ATM network. A virtual router provides path computation and packet forwarding services. In effect, a virtual router performs path computation and routing table maintenance in one location (typically an ATM switch), then uses the ATM network as the router bus to transport packets, with the router ports defined on ATM edge devices.

The main MPOA system components are illustrated in figure 3-19.

So far we have considered intra-subnet (meaning within the one subnet) ATM address resolution using ATMARP, and inter-subnet address resolution (between subnets) using NHRP, which allows ATM devices to communicate directly across subnet boundaries. With inter-subnet address resolution, the ATM address supplied will be that of the ultimate destination node, not an interim router, even if the source and destination nodes are assigned to different IP subnets. As MPOA utilizes NHRP, it provides cut-through capabilities and does not conform to the standard IP subnet rules. Thus the MPOA facilities will allow a host on IASG1 in figure 3-19 to communicate with hosts on IASG2 via a connection between LS1010a and LS1010b, without having to route via LS1010c.

MPOA Operation

In MPOA operation, physical routers and other edge devices use LANE to connect to an ELAN on the network. So, an MPC will go through the normal LANE initialization procedures when it first joins an ELAN. Within the MPC, however, is a layer 3 forwarding function which operates under the normal IP subnet rules. It is at the MPS (implemented in a router that contains an NHS function) where we provide the cut-through functionality for ATM connected edge devices to establish connections across subnet boundaries. Essentially, the MPS routing function (running either in a physical or virtual router) will use the NHS to obtain the cut-through ATM address to establish a direct connection from source to ultimate destination without passing through a router.

A Word on RFC1483

In all of the discussions of NHRP, LANE, MPOA and CIOA (RFC1577) we have made no mention of how the frames generated by legacy ethernet or token ring hosts are encapsulated for transmission over the cell based ATM network. In fact they all use a common encapsulation technique that is specified in RFC 1483. RFC 1483 is termed Multi-Protocol Encapsulation Over ATM, not to be confused with the ATM Forum’s specification for MPOA.

RFC 1483 actually describes two methods for connectionless protocols to communicate over an ATM network: first the LLC/SNAP encapsulation, then Virtual Connection based multiplexing. We will only consider the LLC/SNAP encapsulation, as this is the one used by CIOA and ARP over ATM. This specification can be thought of as a precursor to CIOA, in that it is static (no ATMARP) and it also conforms to the usual IP subnet rules.

RFC 1483 calls for local virtual circuits to be associated with destination IP addresses, much the same as a DLCI does in a frame relay network. With PVC connections, these associations need to be manually applied. With SVCs, the associations are supplied by whatever signaling procedures are in place.

With LLC/SNAP encapsulation, the layer 2 DSAP, SSAP, CTRL and SNAP header fields are encapsulated along with the IP packet into an AAL5 PDU, which is padded to bring it up to a length that allows the whole to be split into cells without any leftover.
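The padding arithmetic can be sketched as follows. The eight LLC/SNAP header bytes are the standard values for routed IPv4; the trailer fields are zeroed here for simplicity (a real implementation fills in the length and CRC-32), and the function names are our own.

```python
def aal5_pad_length(body_len: int, trailer_len: int = 8) -> int:
    """Bytes of padding so body + pad + trailer fills whole
    48-byte ATM cell payloads with nothing left over."""
    return (-(body_len + trailer_len)) % 48

def encapsulate_llc_snap(ip_packet: bytes) -> bytes:
    # LLC/SNAP for routed IPv4: DSAP/SSAP/CTRL = AA-AA-03,
    # OUI = 00-00-00, EtherType = 0x0800 -- 8 bytes in total.
    header = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00])
    body = header + ip_packet
    pad = bytes(aal5_pad_length(len(body)))
    trailer = bytes(8)  # UU, CPI, Length and CRC-32 fields, zeroed here
    return body + pad + trailer
```

For example, a 100-byte IP packet becomes a 108-byte body, which needs 28 bytes of padding so that body, pad and trailer together make 144 bytes, exactly three 48-byte cell payloads.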

Plain RFC 1483 encapsulation by itself is no longer promoted for use in many places, as it requires a lot of manual configuration and reduces the functionality of an ATM link to that of a fast leased line. However, this encapsulation is still important as it forms the basis for the enhanced techniques such as CIOA and LANE that followed.

Tag Switching

Tags and tag switching are another layer 2 feature that promises to deliver faster switching performance to routed networks. As we shall see, although I classify tag switching as a layer 2 feature, it does rely upon a layer 3 being present and actually works by combining layer 2 and layer 3 functionality. The idea behind this technology is that a database of IP address destinations mapped to tag IDs is maintained within a switch/router, allowing the switch/router to use the tags rather than resort to layer 3 mechanisms to forward the frame. By swapping tags, which are small data elements, and only performing a single lookup for the tag, performance in forwarding frames is enhanced. The importance of this idea to ATM is that within ATM we already have tags assigned for destinations in the form of VPI/VCI numbers. Thus it is perceived that tag switching will become very important for the future of high speed packet forwarding between IP and ATM networks. The benefit of going to all this trouble with tags is that, particularly for large internetworks, layer 3 scalability through hierarchy is maintained and combined with very simple, and therefore speedy, switching logic.

From our knowledge of legacy networks (by this I mean a classic IP network based on routers) and of VLANs, we can understand the two most important features of tag switching. First, in legacy networks, we know that an ARP table is used to re-assign source and destination MAC addresses to a packet each time it traverses a router. We also know that VLAN IDs (which can be thought of as a kind of tag) are appended to a frame as it enters the VLAN switching cloud (the cloud consists of a single switch if there are no trunks defined). These VLAN IDs are then stripped off as the packet exits the cloud on its way to its final destination. These two operations summarize how tags are used. Tags are assigned and changed at each passage through a switch/router in the tag switching cloud as the tag switch/router refers to its tag database and looks up what tag it should apply to the packet to move it on to the next hop in its journey. These tags are assigned at the entry point to the cloud and stripped off at the exit point of the tag switching cloud and are therefore transparent to end stations that are communicating over the tag switch cloud.

Tag Switching Elements

As with all technologies examined in the switching arena, tag switching has its own terms that need definition before we can look at the protocol in more detail.

The first term is a Tag Edge Router (TER). These devices are routers that participate fully in the routing mechanism of the legacy IP network that they are attached to, using routing protocols like EIGRP or OSPF to generate routing tables. The TER assigns a tag to each packet it sends into the tag switches. This assignment of tags is made by reference to a tag database (we'll discuss how this is generated and maintained by the TER and tag switches soon). The TER is also responsible for exchanging tag information with the tag switches it is connected to. For packets exiting the tag switch portion of the network, TERs remove tags so that the legacy devices can understand the frame.

Next, we need to examine tag switches a little further. These devices forward packets based on tag IDs rather than layer 3 network address. Performing switching based on short tags (also referred to as labels within the literature, why use one term when you can use two?) allows the switching logic to be simple and therefore economically implemented in specialized and therefore fast hardware chips. Tag switches must also maintain their own tag and route information and exchange tag information with other tag switches and TERs. In the Cisco implementation, all tag switches are in effect tag switch/routers rather than straight tag switches as they do need to maintain some routing functionality.

Finally, we need to introduce the Tag Distribution Protocol TDP. The TDP is a topology driven protocol, in that a change or discovery of network topology is required before an update to the tag assignments via the TDP is effected. TDP uses point to point communication between routers to maintain tag associations.

As seen in figure 3-20, tag switch/routers form the core of the network and assist in scaling internetworks to support very large numbers of nodes in an efficient manner. Data flows from many source locations can be switched using the one destination tag ID in a very efficient manner. To understand the mechanics of this, let's look at tag allocation in a bit more detail and review a theoretical example of how tags are used to get a packet from one legacy network to another via a core tag switched network.

Tag Allocation

Tag allocation mechanisms actually depend to some extent on the underlying networks that are attached to them. If, for example, we use tag switching in a totally routed network purely as a mechanism to speed up the packet forwarding process (as may be done within some of the higher end Cisco routers), tags are assigned based on destination IP address. However, if the TER uses an ATM interface to connect to an ATM switch (which forms part of the tag switched network), the value of the tag assigned can be the VPI/VCI value for the connection established between switches participating in the tag switch procedures.

Tags can be allocated by either downstream or upstream devices. In downstream allocation, every tag switch generates an incoming tag for each route in its routing table, and these tags are then advertised to tag aware neighbors. In upstream allocation, each tag switch generates an outgoing tag for each route in its routing table, which is then advertised to all tag aware neighbors. The net effect is the same; it just requires consistent operation within the tag aware network.
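Downstream allocation can be sketched as a simple walk over the routing table, generating one local tag per route; the resulting bindings are what gets advertised to neighbors. The class and starting tag value are illustrative only.

```python
from itertools import count

class TagSwitch:
    """Sketch of downstream tag allocation (illustrative names)."""
    def __init__(self):
        self._tags = count(start=16)   # arbitrary local tag space
        self.binding = {}              # destination prefix -> local tag

    def allocate(self, routing_table):
        # One incoming tag per route; existing bindings are kept so
        # repeated topology updates do not renumber stable routes.
        for prefix in routing_table:
            if prefix not in self.binding:
                self.binding[prefix] = next(self._tags)
        return dict(self.binding)      # the advertisement payload
```

Because tags have local significance only, each tag switch can number its bindings independently; only the bindings it advertises to a given neighbor need to be consistent on that link.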

In the Cisco implementations of ATM switches that perform tag switching, there is a critical difference between straight ATM Forum operation and tag switching using an ATM interface. Tag switching uses standard IP routing tables and TDP to generate and distribute tag information. The benefit here is that ATM switches with tag switching capability have no call setup overhead when they come to transport IP traffic over ATM. In practice this has two implications, one for the tag switch and one for the TER. First, the tag switch (for example a Lightstream 1010) will implement a standard layer 3 routing protocol as well as TDP, so that the routing table is maintained by a routing process on the 1010. Secondly, in the situation where a TER is connecting a legacy network to an ATM network, the TER will place the tag in the VPI/VCI field of the ATM cell. This enables subsequent ATM switches in the path from source to destination to use VPI/VCI values to transport the cell. The concepts described above are best considered in the context of an example, which we will examine now.

Tag Switch Example

Let’s start by looking at tag switch operation in the steady state, before we consider how the Tag Distribution Protocol works in practice. Figure 3-21 shows a simple network that could form part of a tag switch core of a large internetwork. A regular routing table will list the destination subnet, the IP address of the next hop router through which it is reachable, and the interface through which to reach the next hop router: three pieces of information that we can refer to as the destination triple. On routers that are tag switch enabled, an additional entry is present in the routing table, the tag, which represents this destination triple.

So, for figure 3-21 we see a normal IP packet coming in to interface 1, destined for 10.1.1.1, which happens to be reachable via the core tag switch section of the internetwork. Assuming no subnet masks, the router will examine its routing table and see that network 10.0.0.0 is reachable via the router with address 174.8.3.2, which is on the same segment as the router’s interface 4. Given that this router is performing tag switching, the router will append tag 20 (an arbitrary figure chosen for illustrative purposes) as the identifier for future forwarding of the packet. Once the packet enters the tag switching core, it has the tag appended and will use that tag exclusively for forwarding decisions. After the tag switching core has used the tag to switch the packet to the desired destination, the tag is taken off the packet for delivery to the host on a non-tag edge network.

The next figure 3-22 illustrates what happens when this packet reaches the next hop router in the tag switch core.

In this example, the packet destined for 10.1.1.1 that was prepended with the tag 20 when exiting interface 4 is arriving at interface 1 (presumably this interface has IP address 174.8.3.2 as indicated by the routing table in figure 3-21). The first thing that happens when this packet arrives at interface 1 is that the switching table is examined and the router sees that an incoming tag of 20 should be switched for an outgoing tag of 40 and the packet forwarded out interface 2. As stated previously, this switching of tags can be referred to as label swapping. In fact, the IETF is looking to standardize Cisco’s proprietary tag switching under the name of Multi-Protocol Label Switching (MPLS).

Once the new tag has been applied at the incoming interface, the router uses the new tag to switch the packet out interface 2, without resorting to routing table lookups. The speed benefit this delivers is that all the time consuming table lookups have been performed prior to a packet needing to be switched and the results stored in a switching table that uses much smaller data elements than the routing table.
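The single-lookup forwarding step can be sketched as follows; the table contents mirror the hypothetical tag values from figures 3-21 and 3-22.

```python
# Hypothetical switching table, keyed by incoming tag:
# incoming tag -> (outgoing tag, outgoing interface)
switching_table = {20: (40, "interface 2")}

def tag_switch(incoming_tag, packet):
    """One table lookup replaces the routing table walk: swap the
    tag and pick the outgoing interface in a single step."""
    outgoing_tag, out_interface = switching_table[incoming_tag]
    return outgoing_tag, out_interface, packet
```

The layer 3 destination address is never consulted here; all the routing intelligence was spent earlier, when the switching table was built.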

It should be noted that tags are of local significance only and therefore the same tag value may appear on each router interface. This could give rise to the situation where a packet has the same tag value both in-bound and out-bound from a router. For the sake of clarity, this was not done in the example just discussed. Having discussed the case where all the destinations already have tags applied, let’s look at an example of the Tag Distribution Protocol to see how tag associations are initially made.

TDP is, as we have stated before, a topology driven protocol. Tag values are assigned when the topology is first discovered or when it changes. This differs from some other schemes (like MPOA) that only assign labels when traffic appears for a given destination. So, let’s use the network in figure 3-21 as the base for our examination of how tags are applied to a newly discovered destination network. Suppose the router in figure 3-21 receives an IGRP routing update notifying it of network 120.4.0.0, a previously unknown destination network. In this instance, 120.4.0.0 will get entered into the routing table with a blank tag entry. TDP is now tasked with obtaining a tag for this new destination. Typically, this router will use TDP to ask the router that informed it of the new network to assign a tag value for the new destination network. Note that as tags only have local significance, it does not matter if a tag value is supplied that is already in use on another interface.
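The blank-entry-then-request sequence can be sketched as follows. The class and function names are our own stand-ins, not TDP message names, and the starting tag value is arbitrary.

```python
class TagNeighbor:
    """Stand-in for the router that advertised the new network."""
    def __init__(self, first_tag=50):
        self._next = first_tag     # arbitrary local tag pool

    def bind_tag(self, prefix):
        tag = self._next
        self._next += 1
        return tag

def handle_new_route(routing_table, tag_table, prefix, neighbor):
    routing_table[prefix] = neighbor
    tag_table[prefix] = None       # blank tag entry at first
    # TDP asks the advertising neighbor to assign a tag; tags are
    # locally significant, so reuse on other interfaces is harmless.
    tag_table[prefix] = neighbor.bind_tag(prefix)
```

Note that the tag request is triggered by the routing update itself, not by user traffic, which is what makes TDP topology driven rather than traffic driven.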

We can see by this example that tags are only present within packets travelling in the tag switch core, change on a hop by hop basis and are only of significance to the interfaces they are received upon. I’ll conclude this discussion of tag switching by reviewing some of the practical aspects of tag switch implementation on an ATM core network.

Tag Switching in an ATM Core

The conclusion of the discussion on tag switching brings together the points we have raised in the general discussion and applies them to what will most likely be one of the most common network configurations, classic IP edge networks interconnected via an ATM tag switch core. In this configuration, we need to define in a little more detail what the tag edge router and the tag switch router consist of.

In figure 3-23, tag edge routers are implemented as regular routers with an ATM interface. On the ethernet side of these TERs, there will be connections to other routers, and route information will be exchanged in the normal way using routing protocols like IGRP. On the ATM interface, both regular routing updates and TDP information are exchanged with the tag switch routers. From the TER point of view, it views a tag switch router as just another router that it will exchange routing information with.

The tag switch router also has its own routing table and runs a routing protocol, and therefore each ATM switch will require a routing module. Between a TER and a tag switch router, the tag assignment for each destination network will be the VPI/VCI values for each destination location. This is an important point: it means that for a TER to assign tags for all entries in its routing table, it must have a corresponding VCC to each possible destination for there to be VPI/VCI values present. The implication is that each TER will create a VCC to all the other TERs in the network, creating a potential scaling problem as the number of TERs per network grows.
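The scale of this full-mesh requirement is easy to quantify: with one bidirectional VCC per TER pair, the VCC count grows with the square of the number of TERs. A minimal sketch of the arithmetic:

```python
def full_mesh_vccs(n_ters: int) -> int:
    """VCCs needed when every TER keeps a connection to every
    other TER, counting one bidirectional VCC per TER pair."""
    return n_ters * (n_ters - 1) // 2
```

Ten TERs need 45 VCCs, but one hundred TERs need 4,950, which illustrates why the per-destination VCC model limits how large the tag switched edge can grow.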

Summary

This chapter focused on the theory of ATM and, specifically, the theory of IP over ATM. We discussed the UNI and NNI interfaces in more depth and covered how the Private NNI protocol provides services similar to the link state routing protocols that are used in classic IP networks. We discussed how ATM initiates connections for both point to point and point to multi point connections.

ATM, like IP, requires effective address hierarchy to maintain efficient routing, by keeping topology database size to a minimum. ATM addresses were discussed as having sections relating to domain membership and a section that identifies the node within a specific domain. The level indicator was introduced as an ATM version of the IP subnet mask.

The first method of integrating ATM and IP was presented as Classic IP Over ATM, which reduces ATM to a fast link layer protocol. CIOA uses an ATM network to establish point to point connections between devices that are configured for the same IP subnet. For a CIOA configured device to access a host on another IP subnet, the connection needs to be routed through an intermediate router.

NHRP, the Next Hop Resolution Protocol was presented as a way of improving efficiency over straight CIOA. NHRP allows an ATM connection to be established directly between the two end points that need to communicate whether they are on the same IP subnet or not. NHRP is therefore using the ATM network in a more efficient manner.

LANE was discussed as a way to extend local area networks over an ATM cloud. LANE provides the means to extend VLANs over what are termed Emulated LANs in the ATM environment. The LANE components of LEC, LECS and LES/BUS were described, and their role in enabling legacy LAN end stations to communicate with ATM hosts and with LAN end stations on the other side of an ATM cloud was also discussed. The complexity of LANE is essential in emulating the broadcast capability of a LAN in what is the connection oriented medium of an ATM network. LANE provides the trunking capability of ATM when operating in a VLAN environment.

Just as CIOA was bound by the normal IP subnet rules, so is LANE. MPOA was introduced as a means of bringing NHRP functionality to LANE, but we left MPOA aside as it is still early in its deployment.

RFC 1483 was identified as an important document that specifies how all the ATM methods above encapsulate IP traffic in an adaptation layer and segment that encapsulation in to cells.

ATM operation with frame relay networks was discussed and the various methods of interworking these two technologies covered. Issues of inter-operation between frame relay congestion notifications and frame loss procedures and the ATM equivalents were discussed.

A discussion of the two frame based accesses to ATM was also delivered, those being FUNI and DXI. These methods allow variable length frames to be input to ATM DSUs, which then generate ATM cells for transmission on the ATM network.

Finally we covered tag switching and its application to both standard IP networks and those that incorporate an ATM core.
