Enhanced flow control in ATM edge switches

Multiplex communications – Communication techniques for information carried in plural... – Adaptive

Details

C370S395210, C370S395410, C370S230000, C370S235000, C709S229000, C709S232000, C710S029000, C710S060000

Reexamination Certificate

active

06633585

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to an improved method and system for managing communications networks. In particular, the present invention relates to a method and system within a telecommunications network for efficiently adapting a pause interval flow control mechanism to a rate-based flow control mechanism. More particularly, the present invention relates to a method and system for allocating bandwidth among a plurality of sessions that utilize both pause interval and rate-based flow control.
2. Description of the Related Art
Traditional Ethernet utilizes the carrier sense multiple access with collision detection (CSMA/CD) network protocol to ensure that no more than a single station transmits on a segment at any instant. This contention resolution algorithm has inherent flow control capability; therefore, in a traditional Ethernet network, congestion can typically occur only at bridges. Conventionally, slow network speeds and the flow control provided by higher-layer protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP) meant that flow control at switches was not an issue. However, the recent increase in network speeds up to gigabits per second, coupled with full-duplex transmission capabilities, has created the need for a flow control mechanism whose goals include efficiency and fairness. To this end, the IEEE 802.3 working group has standardized a mechanism for asymmetric flow control for full-duplex Ethernet LANs. The draft standard IEEE 802.3x defines a hop-by-hop flow control method wherein a switch experiencing congestion delivers a frame to its upstream switch, forcing the upstream switch to pause all transmission from a designated port for a time period specified in the frame. The draft standard IEEE 802.3x makes implementation of these mechanisms mandatory for Gigabit Ethernet (1000 Mb/s) and 100 Mb/s Ethernet, while leaving it optional for slower networks with 10 Mb/s links.
The standard IEEE 802.3x specifies a pause operation as follows. The purpose of a pause operation is to inhibit transmission of data frames from another station for a period of time. A Media Access Control (MAC) station that wants to inhibit transmission from another station generates a 64-byte pause control frame containing several information fields. One such field is the 6-byte (48-bit) MAC address of the station requesting the pause. Another is the destination address, a 48-bit multicast address globally assigned by the IEEE for the flow control function. Other fields include a type/length field indicating that the frame is a control frame, a PAUSE opcode field, a pause interval field (the length of time of the requested pause), and a PAD field.
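A minimal sketch, in Python, of assembling the pause control frame described above. The specific constants (the reserved flow-control multicast address 01-80-C2-00-00-01, the MAC Control type value 0x8808, and the PAUSE opcode 0x0001) come from IEEE 802.3x rather than from the passage itself, and the helper name is illustrative only.

```python
def build_pause_frame(src_mac: bytes, pause_time: int) -> bytes:
    """Assemble the 64-byte PAUSE frame described above (the trailing 4-byte
    frame check sequence is assumed to be appended by the MAC hardware)."""
    assert len(src_mac) == 6 and 0 <= pause_time <= 0xFFFF
    frame = bytearray()
    frame += bytes.fromhex("0180c2000001")   # reserved multicast address for flow control
    frame += src_mac                         # 48-bit MAC address of the pausing station
    frame += (0x8808).to_bytes(2, "big")     # type/length field: MAC Control frame
    frame += (0x0001).to_bytes(2, "big")     # PAUSE opcode field
    frame += pause_time.to_bytes(2, "big")   # pause interval field, in time-slot units
    frame += bytes(42)                       # PAD field up to the 60-byte minimum before the FCS
    return bytes(frame)

assert len(build_pause_frame(bytes(6), 0xFFFF)) == 60   # 60 bytes + 4-byte FCS = 64 bytes on the wire
```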
The timespan of a pause interval is measured and assigned in units of time slots. Currently, a time slot is defined as the amount of time required to transmit a 64-byte packet on the given medium. For example, the time slot for 10 Mb/s Ethernet is 51.2 microseconds.
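As a worked example of the arithmetic above (assuming, as the text states, that one time slot equals the transmission time of 64 bytes on the given medium):

```python
def pause_quantum_seconds(link_rate_bps: float) -> float:
    """One time slot: the time to transmit 64 bytes (512 bits) at the link rate."""
    return (64 * 8) / link_rate_bps

def pause_duration_seconds(pause_time: int, link_rate_bps: float) -> float:
    """Total requested pause: the pause interval field times one time slot."""
    return pause_time * pause_quantum_seconds(link_rate_bps)

# 10 Mb/s Ethernet: 512 bits / 10,000,000 bits per second = 51.2 microseconds per slot.
assert abs(pause_quantum_seconds(10e6) - 51.2e-6) < 1e-12
```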
A bridge that conforms to the IEEE 802.1D standard (ISO/IEC Final CD 15802-3, Information technology—Telecommunications and information exchange between systems—Local and metropolitan area networks—Common specifications—Part 3: Media Access Control (MAC) bridges, IEEE P802.1D/D15) will recognize frames bearing the well-known multicast address and will not forward such frames.
When a station receives a MAC control frame having a destination address equal to the reserved multicast address, or with the destination address equal to the station's own unique MAC address, and with the PAUSE opcode, the compliant station is required to start a timer for the value indicated by the pause time field in the frame. The new pause time will overwrite any pause value that may be in progress, whether shorter or longer. Note that although the receiving station is required to process and not forward any such frame that includes either the reserved multicast address or its own address, generating a pause frame with any destination address other than the reserved multicast address violates the current standard. Having a station react to both addresses was done to allow future expansion of the flow control protocol to support half duplex links or end-to-end flow control.
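The receiver behaviour just described can be sketched as follows. This is a simplified model rather than the standard's state machine, and the reserved multicast address value is taken from IEEE 802.3x rather than from the passage.

```python
RESERVED_MULTICAST = bytes.fromhex("0180c2000001")   # flow-control multicast address
MAC_CONTROL_TYPE = 0x8808
PAUSE_OPCODE = 0x0001

class PauseReceiver:
    """Pause handling for one full-duplex port, per the behaviour described above."""

    def __init__(self, own_mac: bytes):
        self.own_mac = own_mac
        self.pause_quanta_left = 0   # remaining pause interval, in time-slot units

    def on_control_frame(self, dst: bytes, frame_type: int, opcode: int, pause_time: int) -> None:
        if frame_type != MAC_CONTROL_TYPE or opcode != PAUSE_OPCODE:
            return
        if dst == RESERVED_MULTICAST or dst == self.own_mac:
            # The new value overwrites whatever pause is in progress,
            # whether the remainder was shorter or longer.
            self.pause_quanta_left = pause_time

    def may_send_data(self) -> bool:
        # Data transmission resumes only once the pause timer has expired.
        return self.pause_quanta_left == 0

    def tick(self, quanta_elapsed: int) -> None:
        self.pause_quanta_left = max(0, self.pause_quanta_left - quanta_elapsed)
```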
The notion of fairness in allocation of network resources is very difficult to quantify and may be defined in a number of ways. One of the simplest definitions of fair allocation is that which provides equal distribution of network resources among all sessions. Every network link has a capacity. Every session has a source node, a destination node, a single path connecting the source node to the destination node, and an offered send rate. Simply put, the goal of max-min fairness is to arrive at send rates for all sessions so that no send rate can be increased without causing some other send rate to decrease due to resulting congestion in some link.
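Stated slightly more formally (this is the usual textbook formulation, as in the Bertsekas and Gallager reference cited below, added here for precision rather than quoted from the passage): with r_s the send rate of session s, P_s its path, and C_l the capacity of link l,

```latex
\[
  \text{feasibility: } \sum_{s \,:\, \ell \in P_s} r_s \;\le\; C_\ell
  \quad \text{for every link } \ell ,
\]
\[
  \text{max-min fairness: any feasible increase of } r_s
  \text{ forces a decrease of some } r_t \text{ with } r_t \le r_s .
\]
```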
Consider, for instance, the scenario depicted in FIG. 1. As illustrated in FIG. 1, the bottleneck link for sessions 110, 112, and 114 is link 116. Since link 116 has a capacity of 1, each of sessions 110, 112, and 114 is expected to be provided a rate of ⅓. The second link 118 also has a capacity of 1 and is shared by sessions 114 and 122. Since session 114 has already been limited to a flow rate of ⅓, session 122 may be allocated a rate of ⅔. Finally, the capacity of the third link 120 is 2 units of bandwidth. Therefore link 120 will have 1 unit of unutilized bandwidth due to the upstream bottlenecks of sessions 114 and 122.
The example in FIG. 1 illustrates the concept of max-min fairness, in which the goal is to maximize the minimum capacity allocated to any session. For a formal treatment of max-min fairness and for an algebraic algorithm for computing the max-min fair share, reference may be made to D. Bertsekas and R. Gallager, Data Networks, Prentice-Hall, Englewood Cliffs, N.J., 1992, incorporated herein by reference. The algebraic algorithm explained therein requires global, simultaneous knowledge of all session demands and all link capacities.
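A minimal sketch of the water-filling procedure for computing max-min fair shares, under the assumption that the cited algorithm follows the usual form (the passage only names the reference). The link numbers and session paths below mirror the FIG. 1 description above, and the computed rates reproduce its ⅓, ⅓, ⅓, ⅔ allocation.

```python
from fractions import Fraction

def max_min_fair_rates(capacity, paths):
    """capacity: {link: capacity}; paths: {session: set of links on its path}.
    Repeatedly find the most constrained link, fix the rates of the sessions
    crossing it at that link's equal share, and remove them from play."""
    remaining = {l: Fraction(c) for l, c in capacity.items()}
    unfixed = set(paths)
    rates = {}
    while unfixed:
        # equal share each link could still offer its not-yet-fixed sessions
        shares = {}
        for link, cap in remaining.items():
            users = [s for s in unfixed if link in paths[s]]
            if users:
                shares[link] = cap / len(users)
        if not shares:            # no remaining link constrains the leftover sessions
            break
        bottleneck = min(shares, key=shares.get)
        share = shares[bottleneck]
        for s in [s for s in unfixed if bottleneck in paths[s]]:
            rates[s] = share
            unfixed.remove(s)
            for link in paths[s]:
                remaining[link] -= share
        del remaining[bottleneck]
    return rates

# FIG. 1 as described above: link 116 (capacity 1) carries sessions 110, 112, 114;
# link 118 (capacity 1) carries 114 and 122; link 120 (capacity 2) carries 114 and 122.
rates = max_min_fair_rates(
    capacity={116: 1, 118: 1, 120: 2},
    paths={110: {116}, 112: {116}, 114: {116, 118, 120}, 122: {118, 120}},
)
print(sorted(rates.items()))
# [(110, Fraction(1, 3)), (112, Fraction(1, 3)), (114, Fraction(1, 3)), (122, Fraction(2, 3))]
```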
A problematic consequence of max-min fairness appears in FIG. 2. The configuration illustrated in FIG. 2 is adapted from M. Molle and G. Watson, 100BaseT/IEEE 802.12/Packet Switching, IEEE Communications Magazine, August 1996. Here a switch, SW2 204, experiences congestion on its output port into a link 210 of capacity 2. The response of SW2 204 is to require the traffic on each of its input links 206 and 212, which feed link 210, to reduce its flow to 1. Thus both Session 0 218 and Session 1 220 are required to reduce their rates to 1, as is Session 2 222. It should be noted, however, that an optimal max-min allocation would allow rates of 1 for Session 0 218 and Session 2 222, but 9 for Session 1 220.
Asynchronous Transfer Mode (ATM) is a rapidly developing network technology capable of providing real-time transmission of data, video, and voice traffic. ATM is connection-oriented and utilizes cell-switching technology that offers high speed and low latency in support of data, voice, and video traffic. ATM provides for the automatic and guaranteed assignment of bandwidth to meet the specific needs of applications, making it ideally suited for supporting multimedia as well as for interconnecting local area networks (LANs).
ATM serves a broad range of applications very efficiently by allowing an appropriate Quality of Service (QoS) to be specified for differing applications. Various service categories have been developed to help characterize network traffic including: Constant Bit Rate (CBR), Variable Bit Rate (VBR), Unspecified Bit Rate (UBR), and Available Bit Rate (ABR).
Standardized mechanisms for rate-based flow control have been developed for the ABR class of service. ABR is a best effort service class for non-real-time applications such as file transfer and e-mail. An amount of bandwidth termed minimum cell rate (MCR) is reserved for each session. Each session then gets an additio
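The passage is cut off at this point. A common reading of ABR allocation is that, beyond its reserved MCR, each session also receives a share of the bandwidth left over on the link; the equal-split policy in the sketch below is purely an assumption for illustration, since the original text does not say how the excess is divided.

```python
def abr_rates(link_capacity_cps: float, mcr_cps: dict) -> dict:
    """Hypothetical ABR allocation: every session keeps its reserved MCR, and the
    leftover link capacity is split equally among the sessions (the equal split
    is an assumption; the source text is truncated). Rates are in cells/second."""
    reserved = sum(mcr_cps.values())
    if reserved > link_capacity_cps:
        raise ValueError("MCR reservations exceed the link capacity")
    extra = (link_capacity_cps - reserved) / len(mcr_cps)
    return {session: mcr + extra for session, mcr in mcr_cps.items()}

# Example: a 100,000 cells/s link with three sessions reserving 10k, 20k, and 30k.
print(abr_rates(100_000, {"s1": 10_000, "s2": 20_000, "s3": 30_000}))
# roughly {'s1': 23333.3, 's2': 33333.3, 's3': 43333.3}
```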
