Scheduling techniques for data cells in a data switch

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate

Details

C370S395430

active

06229812

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to digital data networks. More particularly, the present invention relates to apparatus and methods for improving communication among devices that are coupled to Asynchronous Transfer Mode (ATM) digital data networks.
Asynchronous Transfer Mode (ATM) is an emerging technology in the fields of telecommunication and computer networking. ATM permits different types of digital information (e.g., computer data, voice, video, and the like) to intermix and transmit over the same physical medium (i.e., copper wires, fiber optics, wireless transmission medium, and the like). ATM works well with data networks, e.g., the Internet, wherein digital data from a plurality of communication devices such as video cameras, telephones, television sets, facsimile machines, computers, printers, and the like, may be exchanged.
To facilitate discussion, prior art FIG. 1 illustrates a data network 100, including an ATM switch 102 and a plurality of communication devices 104, 106, 108, 110, and 112. ATM switch 102 may represent a digital switch for coupling, for either bidirectional or unidirectional transmission, two or more of the communication devices together for communication purposes, and may represent a data network such as a local area network (LAN), a wide area network (WAN), or the global data network popularly known as the Internet. Each of communication devices 104, 106, 108, 110, and 112 is coupled to ATM switch 102 via a respective ATM port 104(p), 106(p), 108(p), 110(p), and 112(p). Each ATM port may include circuitry to translate data from its communication device into an ATM data format for transmission via ATM switch 102, and to translate ATM data transmitted via ATM switch 102 into a data format compatible with that communication device.
Irrespective of the source, data is transformed into an ATM data format prior to being transmitted via an ATM-enabled network. As is well known, a typical ATM data cell includes a header portion and a data portion. The cell header portion may include information regarding the type of information being encapsulated in the ATM data cell, the destination for that information, and the like. The cell data portion typically includes the information being sent. By standardizing the format of the ATM cells, information from different communication devices may be readily intermixed and transmitted irrespective of its original format.
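To make the cell format concrete, the following minimal sketch (Python, purely illustrative; the AtmCell class and its field names are not part of the patent) models a fixed-size cell with header fields identifying the connection and a fixed-length data portion. Standard ATM cells are 53 bytes long: a 5-byte header followed by a 48-byte payload.
```python
from dataclasses import dataclass

HEADER_BYTES = 5     # standard ATM header size
PAYLOAD_BYTES = 48   # standard ATM payload size (53-byte cell in total)

@dataclass
class AtmCell:
    """Illustrative model of an ATM cell: header fields plus a fixed payload."""
    vpi: int        # Virtual Path Identifier (header field)
    vci: int        # Virtual Channel Identifier (header field identifying the connection)
    payload: bytes  # fixed-length data portion carrying the encapsulated information

    def __post_init__(self) -> None:
        if len(self.payload) != PAYLOAD_BYTES:
            raise ValueError("ATM cells carry a fixed 48-byte payload")

# Example: a cell belonging to the connection identified by VPI=0, VCI=100.
cell = AtmCell(vpi=0, vci=100, payload=bytes(PAYLOAD_BYTES))
```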
In the implementation of ATM technology in a data network, the challenge has been to improve the efficiency with which ATM switch 102 handles multiple simultaneous connections among the multiple communication devices. For peak efficiency, it is generally desirable to have an ATM switch that can handle a very large number of simultaneous connections while switching ATM data cells with minimal delay and maximum data integrity. Unfortunately, the high bandwidth demanded by such a design generally results in a prohibitively expensive ATM switch.
In the prior art, many ATM switch architectures have been proposed in an attempt to balance switching capability against cost. In the FIGS. that follow, a convention has been adopted for ease of illustration and understanding. It is assumed herein that ATM ports on the left side of a depicted ATM switch represent ATM input ports. Conversely, ATM ports illustrated on the right side of a depicted ATM switch represent ATM output ports. In reality, most ATM ports are bidirectional and may be disposed at any location relative to the ATM switch. Furthermore, although only a few ATM ports are shown herein, the number of ATM ports coupled to a given ATM switch is theoretically unlimited. Accordingly, the convention is employed to facilitate discussion only and is not intended to be limiting in any way.
FIG. 2A is a prior art illustration depicting an ATM switch architecture known as an input buffer switch. Input buffer switch 200 of FIG. 2A typically includes a switch matrix 202, which may represent a memory-less switching matrix for coupling data paths from one of input buffers 104(q), 106(q), and 108(q) to one of ATM output ports 110(p) and 112(p). Input buffers 104(q), 106(q), and 108(q) represent the memory structures for temporarily buffering ATM data cells from respective ATM input ports 104(p), 106(p), and 108(p). ATM ports 104(p)-112(p) were discussed in connection with FIG. 1 above.
To reduce implementation cost, switch matrix 202 is typically a low bandwidth switch and can typically handle only a single data connection to a given output port at any given point in time. Consequently, when both ATM input ports 104(p) and 108(p) need to be coupled to ATM output port 110(p), switch matrix 202 typically needs to arbitrate according to some predefined arbitration scheme to decide which of the two data paths, 104(p)/110(p) or 108(p)/110(p), may be switched first. For discussion purposes, assume that switch matrix 202 is arbitrated to ATM input port 104(p), thereby coupling it to ATM output port 110(p). In this case, ATM cells are transmitted from ATM input port 104(p) to ATM output port 110(p). ATM cells at ATM input port 108(p) are buffered in input buffer 108(q) while waiting for ATM input port 108(p) to be coupled to ATM output port 110(p). The buffered ATM cells are shown representatively in input buffer 108(q) as cells 204 and 206.
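A minimal sketch of this arbitration, assuming one FIFO per input port and a simple first-come-first-served grant of each output port per cell time (the queue and port names echo the reference numerals above but are otherwise hypothetical), is shown below.
```python
from collections import deque

# One FIFO of (cell, destination output port) entries per ATM input port.
input_buffers = {
    "104(q)": deque(),
    "106(q)": deque(),
    "108(q)": deque(),
}

def arbitrate_one_cell_time(buffers):
    """Grant each output port to at most one input per cell time.

    The low-bandwidth switch matrix sustains only a single connection to a
    given output port, so contention is resolved here in dictionary order; a
    real switch could use round-robin, priority, or another arbitration scheme.
    """
    granted_outputs = set()
    switched = []
    for port, fifo in buffers.items():
        if not fifo:
            continue
        cell, dest = fifo[0]              # only the head-of-line cell is eligible
        if dest not in granted_outputs:   # output port still free this cell time
            granted_outputs.add(dest)
            fifo.popleft()
            switched.append((port, dest, cell))
        # otherwise the head cell (and everything queued behind it) waits
    return switched

# Both 104(q) and 108(q) contend for output port 110(p); only one is switched.
input_buffers["104(q)"].append(("cell A", "110(p)"))
input_buffers["108(q)"].append(("cell 204", "110(p)"))
print(arbitrate_one_cell_time(input_buffers))
# [('104(q)', '110(p)', 'cell A')] -- cell 204 stays buffered in 108(q)
```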
It has been found that the performance of input buffer switch 200 suffers from a phenomenon called “head-of-the-line blocking.” To explain this phenomenon, assume that ATM cell 204 needs to be delivered to ATM output port 110(p) and therefore must wait until switch matrix 202 can couple ATM input port 108(p) with ATM output port 110(p). ATM cell 206, however, is destined for ATM output port 112(p). Nevertheless, ATM cell 206 is blocked by ATM cell 204, and must also wait until ATM cell 204 is first delivered to ATM output port 110(p). ATM cell 206 must wait even though it is not destined for ATM output port 110(p). Head-of-the-line blocking occurs when data buffering is performed on a per-input-port basis, i.e., ATM cells from a given input port are queued together prior to being switched irrespective of the final destinations of the individual ATM cells. A high degree of head-of-the-line blocking is detrimental to the performance of input buffer switch 200 since it limits the throughput of ATM cells through the ATM switch.
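Continuing the same hypothetical FIFO-per-input model, the blocking falls out directly: cell 206 is destined for the idle output port 112(p), yet it cannot be switched because only the head of the queue, cell 204, is eligible.
```python
from collections import deque

# Single FIFO for input port 108(p): cells are queued in arrival order,
# irrespective of their individual destinations (per-input-port buffering).
buffer_108q = deque([
    ("cell 204", "110(p)"),   # head of line, waiting for a busy output port
    ("cell 206", "112(p)"),   # destined for an idle output, but queued behind 204
])

busy_outputs = {"110(p)"}     # 110(p) is currently granted to input port 104(p)

head_cell, head_dest = buffer_108q[0]
if head_dest in busy_outputs:
    # Nothing from 108(q) can be switched this cell time, even though output
    # port 112(p) is idle: cell 206 is blocked by the cell ahead of it.
    print(f"{head_cell} waits for {head_dest}; cell 206 is head-of-line blocked")
```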
Output buffer switch 230 of FIG. 2B represents another prior art ATM switch architecture in which performance is maximized, albeit at a high cost. Output buffer switch 230 has output buffers 110(q) and 112(q) coupled to respective ATM output ports 110(p) and 112(p) for buffering the ATM cells output by switch matrix 232. For maximum performance, switch matrix 232 may represent a high bandwidth switch matrix capable of coupling multiple input ports to a single output port. For example, switch matrix 232 may couple ATM data from all three ATM input ports 104(p), 106(p), and 108(p) to output buffer 110(q) and output port 110(p). In other words, switch matrix 232 is capable of making N connections simultaneously to a single output port, where N represents the number of ATM input ports (i.e., 3 in the example of FIG. 2B). Compared to switch matrix 202 of FIG. 2A, switch matrix 232 of FIG. 2B typically requires N times the bandwidth to handle N simultaneous connections.
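The difference can be sketched in the same illustrative style, assuming a fabric able to deliver up to N cells (one from each input port) to the same output buffer within a single cell time; the function and names below are hypothetical.
```python
from collections import deque

# One buffer per ATM output port, written by the high-bandwidth switch matrix.
output_buffers = {"110(q)": deque(), "112(q)": deque()}

def switch_one_cell_time(arrivals, buffers):
    """Deliver every arriving cell to its output buffer within one cell time.

    Unlike the input-buffered matrix, this fabric can make up to N simultaneous
    connections to a single output port, so no arriving cell is blocked; the
    price is N times the matrix (and buffer write) bandwidth.
    """
    for input_port, cell, dest_buffer in arrivals:
        buffers[dest_buffer].append((input_port, cell))

# All three input ports send a cell toward output port 110(p) in one cell time.
switch_one_cell_time(
    [("104(p)", "cell X", "110(q)"),
     ("106(p)", "cell Y", "110(q)"),
     ("108(p)", "cell Z", "110(q)")],
    output_buffers,
)
print(len(output_buffers["110(q)"]))  # 3 -- all delivered, none blocked
```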
Output buffers, as mentioned, buffer ATM cells output by switch matrix 232. Since an output buffer, e.g., output buffer 110(q), may accept data from multiple different sources simultaneously via switch matrix 232, it is typically provided with N times the bandwidth of an analogous input queue, e.g., input queue 104(q) of FIG. 2A. Although output buffer switch 230 suffers no performance degradation due to head-of-the-line blocking, the required N-fold increase in switch matrix and buffer bandwidth renders this architecture expensive to implement.
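As a back-of-the-envelope illustration of that requirement (the 155.52 Mb/s OC-3 line rate is an assumed figure, not taken from the patent): with N input ports each running at line rate R, a single output buffer must sustain up to N·R of write bandwidth while draining at R toward its output port.
```python
N_INPUTS = 3                # ATM input ports in the FIG. 2B example
LINE_RATE_MBPS = 155.52     # assumed per-port line rate (OC-3); illustrative only

# Worst case: every input port sends to the same output in the same cell time,
# so the output buffer must absorb N cells per cell time while draining one.
write_bandwidth = N_INPUTS * LINE_RATE_MBPS
read_bandwidth = LINE_RATE_MBPS

print(f"output buffer write bandwidth: {write_bandwidth:.2f} Mb/s")  # 466.56
print(f"output buffer read bandwidth:  {read_bandwidth:.2f} Mb/s")   # 155.52
```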
