Apparatus and method for sharing memory using a single ring...

Multiplex communications – Pathfinding or routing – Store and forward

Reexamination Certificate

Details

C370S376000, C370S473000

active

06771654

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to computer network interfacing and switching, and more particularly, to an apparatus and method for efficiently storing and forwarding data frames within a “daisy chain” configuration of multiple multiport network switches.
BACKGROUND ART
A multiport network switch in a packet switching network is coupled to stations on the network through its multiple ports. Data sent by one station on the network to one or more other stations on the network are sent through the network switch. The data is provided to the network switch over a shared access medium according to, for example, an Ethernet protocol (IEEE Std. 802.3). The network switch, which receives a data frame at one of its multiple ports, determines a destination network station for the data frame from information contained in the data frame header. Subsequently, the network switch transmits the data from the port or ports connected to the destination network station or stations.
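The lookup-and-forward behavior described above can be sketched as follows. This is an illustrative model, not code from the patent: the table contents and function names are hypothetical; only the use of the destination address from the frame header reflects the text.

```python
# Hypothetical sketch of a switch's forwarding decision: parse the
# destination address from the frame header and look it up in a table
# mapping station addresses to output ports.

FORWARDING_TABLE = {
    "00:1a:2b:3c:4d:5e": 3,   # destination station reachable via port 3
    "00:1a:2b:3c:4d:5f": 7,
}

def forward(frame: bytes):
    """Return the output port for an Ethernet frame, or None if unknown."""
    # The first 6 octets of an Ethernet frame are the destination MAC address.
    dest_mac = ":".join(f"{b:02x}" for b in frame[:6])
    return FORWARDING_TABLE.get(dest_mac)

frame = bytes.fromhex("001a2b3c4d5e") + b"\x00" * 58  # minimal 64-byte frame
assert forward(frame) == 3
```

A real switch would learn this table from observed source addresses and flood unknown destinations; the static dictionary here stands in for that machinery.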
A single Ethernet network switch may have a number of 10/100 Mb/s ports, equaling, for example, 12 ports. The number of end stations connected to the single network switch is limited by the number of ports (i.e., port density) of the network switch. However, users of networking devices demand flexibility and scalability in their networks. To address this need, modular architectures have been developed that enable cascading of identical networking devices or network switch modules. By cascading these devices (or components) in a loop, port density can be readily increased without redesign or development of costly interfaces.
Unfortunately, as the number of cascaded switches increases, so does the system latency (i.e., the aggregate processing delay of the switches). System latency is attributable, in part, to the manner in which the switches store and retrieve the data frames in memory. One traditional memory architecture employs individual, local memories for each cascaded switch, as shown in FIG. 1. In this example, three multiport switches 12a, 12b, and 12c are cascaded together to permit the exchange of data frames received by any one of the switches and subsequent forwarding of the data frames out of a different multiport switch. Each of these switches 12a, 12b, and 12c has a memory interface, 44a, 44b, and 44c, respectively. These memory interfaces 44a, 44b, and 44c enable switches 12a, 12b, and 12c to access their respective memories 601a, 601b, and 601c to write and read the data frames.
For explanation purposes, it is assumed that a data frame is received at a port (i.e., receive port) on switch 12a and that the data frame destination is a node attached to a port on a different switch 12c. Switch 12a first stores the received data frame in memory 601a, and then determines whether to forward the received data frame out of its own port or send it to the next switch in sequence. Because the data frame is not destined to any port of switch 12a, the data frame is retrieved from memory 601a and sent to the next switch 12b via the cascade port (i.e., the port to which the neighboring switches are connected) of switch 12a. Upon receiving the data frame, switch 12b stores the data frame in memory 601b. Switch 12b then examines the data frame and determines that it should be forwarded to switch 12c. Accordingly, switch 12b forwards the data frame to switch 12c by reading the stored received data frame from memory 601b and sending the data frame out its cascade port. When the data frame arrives at switch 12c, switch 12c writes the data frame into its memory 601c, in similar fashion as the other switches 12a and 12b. At this point, however, switch 12c determines that the data frame should be forwarded out one of its ports, which is connected to the destination node. Hence, switch 12c reads the stored data frame and forwards it out the appropriate port. As evident from this example, the data frame, as it is transferred from switch to switch, is stored and read numerous times into the memories of the respective switches. This series of write and read operations disadvantageously imposes costly delay in the switching system.
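A small model of the walkthrough above (the class and function names are illustrative, not from the patent) makes the cost visible: every switch in the chain performs one memory write and one memory read per frame, so the operation count, and hence the latency, grows linearly with the number of cascaded switches.

```python
# Toy model of FIG. 1's store-and-forward path: each cascaded switch writes
# the frame to its local memory, then reads it back before forwarding it.

class Switch:
    def __init__(self, name):
        self.name = name
        self.memory = {}
        self.writes = 0
        self.reads = 0

    def store(self, frame_id, frame):
        self.memory[frame_id] = frame
        self.writes += 1

    def retrieve(self, frame_id):
        self.reads += 1
        return self.memory[frame_id]

def cascade_forward(switches, frame_id, frame):
    """Store-and-forward a frame through every switch up to the egress switch."""
    for sw in switches:
        sw.store(frame_id, frame)        # every switch writes the frame...
        frame = sw.retrieve(frame_id)    # ...and reads it back to forward it
    return frame

chain = [Switch(n) for n in ("12a", "12b", "12c")]
out = cascade_forward(chain, 1, b"payload")
assert out == b"payload"
assert sum(sw.writes for sw in chain) == 3   # three writes...
assert sum(sw.reads for sw in chain) == 3    # ...and three reads per frame
```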
To address this latency problem, one conventional approach is to employ a common memory among the various switches. FIG. 2 illustrates such a system in which switches 12a, 12b, and 12c share memory 701 via memory interfaces 44a, 44b, and 44c, respectively. Under this approach, the interfaces 44a, 44b, and 44c are required to have a wider data bus to maintain the speed of read and write accesses as compared to the individual memory arrangement of FIG. 1. For example, the bus width of the memory interfaces 44a, 44b, and 44c may need to increase to 128 bits. The main drawback with this common memory implementation is that the increase in memory bandwidth also results in a proportionate increase in the pin count. An increase in the number of pins disadvantageously requires more area on the circuit board, resulting in greater package cost.
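A back-of-the-envelope illustration of the proportionality claim follows; only the 128-bit shared-bus width comes from the text, while the 32-bit local bus width is an assumed figure for comparison.

```python
# Data pins scale directly with data-bus width, so widening the bus from an
# assumed 32-bit local interface to the 128-bit shared-memory interface cited
# above quadruples the number of data pins on each memory interface.

LOCAL_BUS_BITS = 32      # assumed width of one switch's local memory bus
SHARED_BUS_BITS = 128    # shared-memory bus width cited in the text

growth = SHARED_BUS_BITS // LOCAL_BUS_BITS
assert growth == 4       # four times the data pins per interface
```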
SUMMARY OF THE INVENTION
There is a need for an arrangement to connect two or more multiport network switches together to increase port density, without increasing the memory bandwidth and a corresponding proportionate increase in pin count.
This and other needs are met by embodiments of the present invention, which provide a multiport network switch arrangement having a plurality of multiport network switches, each having a corresponding local buffer memory. The network switches in the arrangement are configured to segment each data frame received at an input port into equal unitary segments so that data frames may be divided and stored equally among the local buffer memories, thus, in essence, creating a “shared memory” arrangement.
One aspect of the present invention provides a network switch arrangement having a plurality of multiport network switches. Included in the arrangement is a plurality of local buffer memories, each of the plurality of local buffer memories being coupled with a corresponding multiport network switch. A unidirectional data bus ring is connected to each of the plurality of network switches such that the switches are connected to each other in a concatenated fashion by the data bus ring. In this arrangement, a received data frame is segmented into equal length segments by a particular multiport network switch receiving the data frame. The particular switch transmits at least one of the equal length segments to at least one other multiport network switch over the unidirectional data bus ring for storage in the local buffer memory of the at least one other multiport network switch.
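The segmentation-and-distribution idea can be sketched as follows. The segment size, function names, and round-robin placement rule are illustrative assumptions; only the equal-length segmentation and the spreading of segments across the ring's local buffer memories come from the text.

```python
# Sketch: the receiving switch cuts a frame into fixed-size segments and
# distributes them round-robin over the ring, spreading storage evenly
# across all local buffer memories.

SEGMENT_BYTES = 16   # assumed unitary segment size

def segment(frame: bytes):
    """Split a frame into equal-length segments (last one zero-padded)."""
    padded = frame + b"\x00" * (-len(frame) % SEGMENT_BYTES)
    return [padded[i:i + SEGMENT_BYTES]
            for i in range(0, len(padded), SEGMENT_BYTES)]

def distribute(segments, num_switches, ingress):
    """Assign segment i to switch (ingress + i) % num_switches on the ring."""
    memories = [[] for _ in range(num_switches)]
    for i, seg in enumerate(segments):
        memories[(ingress + i) % num_switches].append(seg)
    return memories

# A 48-byte frame received at switch 0 of a 3-switch ring yields three equal
# segments, one per local buffer memory.
mems = distribute(segment(b"A" * 48), num_switches=3, ingress=0)
assert [len(m) for m in mems] == [1, 1, 1]
```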
Transmitting at least some of the data frame segments to other network switches allows the storage of the data frame to be distributed equally over all the local buffer memories. Hence, the bandwidth required of each local buffer memory is minimized.
Another aspect of the invention provides a method for receiving and transmitting data frames in a network switch arrangement. The method includes receiving a data frame at a first switch of a plurality of switches. The data frame is segmented as it is being received into a plurality of equal unitary segments. A first segment of the plurality of equal unitary segments is held in the first switch of the plurality of switches during a first time slot. A second segment is transferred to a second switch of the plurality of switches via a unidirectional bus ring connecting the plurality of switches, and the second segment is then held in the second switch during a second time slot. During a third time slot, the second segment is transferred to a third switch of the plurality of switches via the bus ring, and a third segment is transferred to the second switch via the bus ring and held in the second switch during the third time slot. At the end of the third time slot, each of the first, second and third segments is then stored in a respective memory corresponding to each of the plurality of switches.
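One possible reading of the time-slot sequence above can be simulated as follows. The injection rule and hop timing are assumptions made for illustration, not details taken from the patent: the first segment stays at the receiving switch, each later segment enters the ring one slot after the previous one, and in-flight segments advance one hop per slot.

```python
# Illustrative simulation of the time-slotted distribution over the
# unidirectional ring, under the assumptions stated above.

def ring_distribute(num_switches=3):
    """Return {segment: switch} occupancy after num_switches time slots."""
    position = {1: 1}                  # slot 1: segment 1 held at switch 1
    for slot in range(2, num_switches + 1):
        for seg in list(position):
            if seg != 1:               # in-flight segments advance one hop
                position[seg] += 1
        position[slot] = 2             # a new segment arrives at switch 2
    return position

# Three switches: segment 1 rests at switch 1, segment 2 ends at switch 3,
# segment 3 ends at switch 2 -- one segment per local buffer memory.
assert ring_distribute() == {1: 1, 2: 3, 3: 2}
```

The point of the pipeline is that the ring carries one segment per hop per slot, so no single local memory, and no single memory interface, ever has to absorb the whole frame at line rate.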
