Title: Method and apparatus for fair and efficient scheduling of...
Patent number: 06185221
Type: Reexamination Certificate
Status: active
Filed: 1998-11-09
Issued: 2001-02-06
Examiner: Pham, Chi H. (Department: 2731)
Classification: Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Other classes: C370S416000, C709S240000, C710S240000
ABSTRACT:
TECHNICAL FIELD
The invention relates generally to the scheduling of packets in a high-bandwidth input-buffered multipoint switch, for instance as used in gigabit ethernet networks. More particularly, the invention relates to a non-blocking scheduler that utilizes a parallel multilevel arbitration method.
BACKGROUND OF THE INVENTION
Networks are widely used to transfer voice, video, and data between various network devices such as telephones, televisions, and computers. Data transmitted through a network is typically segmented into packets, and under some network protocols data is segmented into fixed-length cells. For example, Asynchronous Transfer Mode (ATM) protocol requires 53-byte cells, with 5 bytes of each cell designated for a header and 48 bytes of each cell designated for payload. Other network protocols, such as ethernet or Internet protocol, carry data in variable-size packets.
Switches are integral parts of most networks. Switches receive packets from input channels and direct the packets to the appropriate output channels of the switch. Typical switches have three components: a physical switch fabric to provide the connections from input channels to output channels, a scheduling mechanism to direct traffic when multiple packets arrive on different input channels destined for the same output channel, and a buffering or queuing mechanism at the switch input or output to accommodate traffic fluctuations without undue packet loss.
FIG. 1 is a diagram of a prior art switch 10 that has four input channels 12, 14, 16 and 18 and four output channels 20, 22, 24 and 26. The switch has a serial input queue 28, 30, 32, and 36 for each input channel, a crossbar physical switch 38, and a crossbar scheduler 40. The crossbar scheduler receives a signal, referred to as a request, from an input queue. The request dictates the output channel or channels that will receive the queued packet. The scheduler arbitrates between competing requests and sends a signal, referred to as a grant, back to the input buffers that have been selected to deliver a packet.
In switches such as the switch 10 described in reference to FIG. 1, each input queue 28-36 provides requests to the scheduler 40, one at a time, on a first-in-first-out (FIFO) basis, and the scheduler arbitrates among the four requests received from the four input queues, with a goal of maximizing utilization of the input channels 12-18 and output channels 20-26 of the switch. As a grant is issued to a particular input channel to access a target output channel or channels, a new request becomes accessible to the scheduler in place of the granted request.
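The request/grant cycle just described can be pictured with a short sketch. This is only an illustration of the FIG. 1 arrangement, not code from the patent; the tie-break by lowest-numbered input, the function name arbitrate, and the use of Python sets to represent requests are assumptions made for the sketch.

```python
from collections import deque

def arbitrate(input_queues, num_outputs):
    """One scheduling cycle over FIFO input queues, in the style of FIG. 1.

    Each queue exposes only its head-of-line request, modeled here as a set
    of output channels. A request is granted only if every requested output
    is still free this cycle; ties are broken by lowest input number, an
    assumption made for this sketch.
    """
    grants = []
    free_outputs = set(range(num_outputs))
    for channel, queue in enumerate(input_queues):
        if not queue:
            continue
        requested = queue[0]                 # only the head of line is visible
        if requested <= free_outputs:        # every requested output still free?
            grants.append(channel)
            free_outputs -= requested
    for channel in grants:                   # granted queues expose a new request
        input_queues[channel].popleft()
    return grants

# Four input channels, each a FIFO of requests (each request is a set of outputs).
queues = [deque([{0}]), deque([{0, 1}]), deque([{2}]), deque([{3}])]
print(arbitrate(queues, 4))   # [0, 2, 3] -- channel 1 must wait for output 0
```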
A problem known as head-of-line (HOL) blocking arises when the request at the head of an input queue is for an output channel that is not available. HOL blocking is common when a multicast request is made, because it is less likely that all of the output channels named in the request will be available at the same time. When a request from a particular input channel is forced to wait until all of its requested output channels are available, every packet queued behind it on that input channel is also forced to wait, thereby slowing the transfer of data from that input channel.
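A minimal, self-contained trace of HOL blocking under the same illustrative assumptions (the queue contents and channel numbering are invented for the example):

```python
from collections import deque

# Head-of-line blocking in a single scheduling cycle (illustrative values).
# Channel 1's head-of-line request is a multicast for outputs {1, 2}; output 2
# is granted to channel 0 first, so the multicast waits, and the unicast
# request for idle output 3 queued behind it is forced to wait as well.
queues = [deque([{2}]), deque([{1, 2}, {3}])]
busy, grants = set(), []
for channel, queue in enumerate(queues):
    head = queue[0]
    if not head & busy:          # grant only if every requested output is free
        grants.append(channel)
        busy |= head
for channel in grants:           # granted queues advance to their next request
    queues[channel].popleft()
print(grants)                    # [0] -- channel 1 is blocked at the head of line
print(list(queues[1]))           # [{1, 2}, {3}] -- output 3's request still waits
```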
As one remedy for HOL blocking, parallel input queues have been implemented. Parallel input queues provide a separate FIFO queue for each output channel of the switch, with each queue providing a corresponding request to the scheduler. Referring to FIG. 2, an N input channel by N output channel switch requires N input queues 46 for each input channel, for a total of N² input queues. With an N² scaling factor, the number of input queues connected to the crossbar scheduler 50 may be very high. For example, in a 16×16 switch, 256 separate queues are required. In spite of the added complexity, the advantage that the parallel design provides is that, with respect to any one of the input channels, a series of requests for available output channels is not held up by a single request for in-use output channels.
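A sketch of this output-distributed (per-output) queue arrangement, again with illustrative names and structure rather than anything taken from the patent or FIG. 2 itself:

```python
from collections import deque

N = 4  # an N x N switch needs N per-output queues at each input: N * N in all

# voq[i][j] holds cells waiting at input channel i for output channel j.
voq = [[deque() for _ in range(N)] for _ in range(N)]

def pending_requests(voq):
    """One request per non-empty per-output queue, so the scheduler can see
    up to N * N requests at once instead of one head-of-line request per
    input channel; a blocked output no longer hides requests for free ones."""
    return [(i, j) for i, row in enumerate(voq) for j, q in enumerate(row) if q]

voq[0][1].append("cell destined for a busy output")
voq[0][3].append("cell destined for a free output")   # still visible below
print(pending_requests(voq))         # [(0, 1), (0, 3)]
print(sum(len(row) for row in voq))  # 16 queues for N = 4; 256 for a 16 x 16 switch
```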
A variety of arbitration techniques can be used with parallel input channels to provide efficient throughput through a switch. For example, maximum matching algorithms attempt to assign output channels to input channels in such a way that a maximum number of transfers occur simultaneously. However, under heavy load conditions, maximum matching algorithms can prevent some requests from being granted, creating a new blocking problem. For example, referring to FIG. 3, input channel 1 is represented as requesting a transfer of cells from its output-distributed queue 54 to output channel 1 only, while input channel 2 is requesting a transfer of cells from its output-distributed queue 56 to output channels 1 and 2. Under a maximum matching approach, input channel 1 transmits cells to output channel 1 and input channel 2 transmits cells to output channel 2. However, input channel 2 will be blocked from transferring cells destined for output channel 1, since this would require the cell transfer from input channel 1 to output channel 1 to stop, and as a result, only output channel 1 would be utilized. As shown in FIG. 4, sending cells from input channel 2 to output channel 1 causes input channel 1 and output channel 2 to remain idle and does not achieve maximum matching.
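The FIG. 3/FIG. 4 situation can be restated with a small brute-force matching sketch. The exhaustive search and the function name are assumptions made for illustration; this is not the maximum matching algorithm of any cited patent.

```python
from itertools import permutations

# requests[i] is the set of output channels that input channel i has cells
# for, numbered as in FIG. 3: input 1 wants output 1 only, input 2 wants
# outputs 1 and 2.
requests = {1: {1}, 2: {1, 2}}

def best_matching(requests, outputs=(1, 2)):
    """Brute-force maximum matching: try every one-to-one assignment of
    outputs to inputs and keep an assignment that matches the most pairs."""
    best = {}
    for perm in permutations(outputs):
        matching = {i: o for i, o in zip(sorted(requests), perm)
                    if o in requests[i]}
        if len(matching) > len(best):
            best = matching
    return best

print(best_matching(requests))  # {1: 1, 2: 2} -- input 2 never reaches output 1
# Granting input 2 -> output 1 instead would leave input 1 and output 2 idle
# and match only one pair: the FIG. 4 situation, which is not maximum matching.
```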
Arbitration methods developed to optimize the performance of high-speed switches utilizing parallel input queues are disclosed in U.S. Pat. No. 5,500,858, entitled “Method and Apparatus for Switching Cells in an Input-Queued Switch,” issued to McKeown, and in U.S. Pat. No. 5,517,495, entitled “Fair Prioritized Scheduling in an Input-Buffered Switch,” issued to Lund et al. Although these arbitration approaches are effective for their intended purpose, they both require that an N×N switch have N² distinct FIFO input queues. Since there are N² distinct FIFO input queues, there will also be N² requests delivered to the scheduler. As the number of input and output channels increases, the complexity of providing N² input queues and sending N² requests to the scheduler becomes costly and difficult to implement.
In addition to the problem of added complexity, the output-distributed queue architecture does not easily support multicast requests, which are more common in network protocols such as ethernet than in network protocols such as ATM. For example, in order to utilize the output-distributed architecture of FIG. 2 to satisfy a multicast request, the cell that is to be multicast must either be replicated into all of the output channel queues indicated by the request, or a separate multicast queue must be established in addition to the N² queues already present.
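A sketch of the replication option just mentioned, using the same illustrative per-output queue structure as before (the names and values are assumed for the example):

```python
from collections import deque

N = 4
# Per-input, per-output queues in the style of FIG. 2 (illustrative structure).
voq = [[deque() for _ in range(N)] for _ in range(N)]

def enqueue_multicast(voq, input_channel, cell, destinations):
    """Replicate a multicast cell into the queue of every destination output,
    so each copy raises its own request and is scheduled independently. The
    alternative noted above -- a separate multicast queue per input -- avoids
    the copies but adds yet another queue beyond the N * N unicast queues."""
    for output_channel in destinations:
        voq[input_channel][output_channel].append(cell)

enqueue_multicast(voq, input_channel=0, cell="cell-A", destinations=[1, 2, 3])
print([(i, j) for i in range(N) for j in range(N) if voq[i][j]])
# [(0, 1), (0, 2), (0, 3)] -- three independent requests for one multicast cell
```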
A third concern is that output-distributed queue architectures do not readily enable a switch to provide quality of service (QoS) guarantees for packet transfers across the switch. For example, in an enterprise network there are many different packet flows having different sources, destinations, and packet protocols. Because of user-specific needs, some packet flows are deemed more critical to the enterprise than others and can be prioritized accordingly. For example, a packet flow related to a mission-critical application (e.g., SAP, PeopleSoft, Baan, or custom client/server applications) can be assigned a different QoS priority than HTTP-based Internet traffic. A QoS priority can then be tied to specific forwarding rules or transfer rate limits across the network switch to provide different levels of service. In the queue arrangements of FIGS. 1 and 2, packets, or their related requests, are not differentiated by QoS priority and, as a result, a high-priority packet is forwarded through the switch in the same manner as a lower-priority packet. In an enterprise network environment, for example, non-prioritized queue arrangements may mean that packets carrying time-critical video teleconferencing…
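One way to picture the QoS differentiation described above is to tag requests with a priority level and serve higher levels first. The three priority levels and the queue-per-level structure below are assumptions made for this sketch, not the mechanism of the invention.

```python
from collections import deque

# Assumed priority levels for the sketch: 0 = mission-critical (e.g. SAP),
# 1 = normal business traffic, 2 = best-effort HTTP. Requests are queued per
# level and the scheduler always serves the highest non-empty level first.
prioritized = {0: deque(), 1: deque(), 2: deque()}

def next_request(prioritized):
    """Return the next request to arbitrate, preferring higher QoS levels so
    a time-critical flow is not stuck behind bulk Internet traffic."""
    for level in sorted(prioritized):
        if prioritized[level]:
            return prioritized[level].popleft()
    return None

prioritized[2].append("HTTP request")
prioritized[0].append("video-conference request")
print(next_request(prioritized))   # 'video-conference request' is served first
```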
Assignee: Cabletron Systems Inc.
Law firm: Law Offices of Mark A. Wilson
Inventor: Nguyen, Steven
Examiner: Pham, Chi H.
Agent: Wilson, Mark A.