Multiplex communications – Pathfinding or routing – Store and forward
Reexamination Certificate
1999-09-23
2004-04-20
Vanderpuye, Kenneth (Department: 2661)
C370S412000
active
06724769
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates to computer network interfacing and switching, and more particularly, to an apparatus and method for cascading multiple multiport network switches to increase the number of ports in a network switching arrangement.
BACKGROUND ART
A multiport network switch in a packet switching network is coupled to stations on the network through its multiple ports. Data sent by one station on the network to one or more other stations is sent through the network switch. The data is provided to the network switch over a shared access medium according to, for example, an Ethernet protocol. The network switch, which receives the data at one of its multiple ports, determines the destination of the data frame from the data frame header. The network switch then transmits the data out of the appropriate port, i.e., the port to which the destination network station is connected.
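To make the forwarding step concrete, the following is a minimal sketch, not taken from the patent, of how a switch might map the destination address in a frame header to an output port; the table contents, addresses, and port numbers are assumptions chosen only for illustration.

```python
# Minimal sketch of destination-port lookup in a multiport switch.
# The MAC-to-port table and the example frame are illustrative
# assumptions, not part of the patent text.

forwarding_table = {
    "00:1a:2b:3c:4d:5e": 3,   # station previously learned on port 3
    "00:1a:2b:3c:4d:5f": 7,   # station previously learned on port 7
}

def forward(frame):
    """Return the output port for a received frame, or None to flood."""
    return forwarding_table.get(frame["dst_mac"])

frame = {"dst_mac": "00:1a:2b:3c:4d:5e", "payload": b"..."}
port = forward(frame)
print(f"forward out port {port}" if port is not None else "flood to all ports")
```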
A single Ethernet network switch may have a number of 10/100 Mb/s ports, for example, 12 ports. The number of end stations that can be connected to a single network switch is limited by the number of ports (i.e., the port density) of the network switch. However, today's users of networking devices demand flexibility and scalability without such constraints. To address this need, manufacturers have developed modular architectures that enable cascading of identical networking devices or network switch modules. By cascading such equipment (or components) in a loop, port density can be readily increased without redesign or development of costly interfaces.
Unfortunately, as the number of cascaded switches increases, so does the system latency (i.e., the aggregate processing delay of the switches). This system latency is attributable in part to the manner in which the switches store and retrieve the data frames in memory. One traditional memory architecture employs individual, local memories for each cascaded switch, as shown in FIG. 1. In this example, three multiport switches 12a, 12b, and 12c are cascaded together to permit the exchange of data frames received by any one of the switches and subsequent forwarding of the data frames out of a different multiport switch. These switches 12a, 12b, and 12c have memory interfaces 44a, 44b, and 44c, respectively. These memory interfaces 44a, 44b, and 44c enable switches 12a, 12b, and 12c to access their respective memories 601a, 601b, and 601c to write and read the data frames.
For purposes of explanation, it is assumed that a data frame is received at a port (i.e., a receive port) on switch 12a and that the data frame is destined for a node attached to a port on a different switch 12c. Switch 12a first stores the received data frame in memory 600a, and then determines whether to forward the received data frame out of its own port or send it to the next switch in sequence. Because the data frame is not destined for any port of switch 12a, the data frame is retrieved from memory 600a and sent to the next switch 12b via switch 12a's cascade port (i.e., the port to which the neighboring switch is connected). Upon receiving the data frame, switch 12b stores the data frame in memory 600b. Next, switch 12b examines the data frame and determines that it should be forwarded to switch 12c. Switch 12b forwards the data frame to switch 12c by reading the stored data frame from memory 600b and sending it out its cascade port. When the data frame arrives at switch 12c, switch 12c writes the data frame into its memory 600c, in similar fashion to the other switches 12a and 12b. At this point, however, switch 12c determines that the data frame should be forwarded out one of its own ports, which is connected to the destination node. Accordingly, switch 12c reads the stored data frame and forwards it out the appropriate port. As is evident from this example, the data frame, as it is transferred from switch to switch, is written to and read from the memories of the respective switches numerous times. This series of write and read operations imposes costly delay in the switching system.
Hence, the delay in the switching system may leave the switch unable to process data packets fast enough to keep up with network traffic, creating congestion conditions. In other words, the switch is no longer a non-blocking switch.
To address this latency problem, one proposed solution is to employ a common memory among the various switches.
FIG. 2 illustrates such a system, in which switches 12a, 12b, and 12c share memory 701 via memory interfaces 44a, 44b, and 44c, respectively. Under this approach, each of the interfaces 44a, 44b, and 44c is required to have a wider data bus to maintain the speed of read and write accesses as compared to the individual memory arrangement of FIG. 1. For example, the bus width of the memory interfaces 44a, 44b, and 44c may need to increase to 128 bits. The main drawback with this common memory implementation is that the increase in memory bandwidth also results in a proportionate increase in the number of pins of the switches. An increase in the number of pins disadvantageously requires more area on the circuit board, resulting in greater package cost.
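The pin-count penalty follows directly from the wider bus: for a synchronous memory interface, usable bandwidth scales roughly with bus width times clock rate, so recovering bandwidth by widening the bus multiplies the data pins on every switch package. The clock rate and baseline width below are assumptions chosen only to show the scaling; they do not come from the patent.

```python
# Rough scaling sketch: widening a shared-memory bus to recover bandwidth
# raises the per-switch data-pin count roughly in proportion.
# Clock rate and baseline bus width are illustrative assumptions.

clock_hz = 100e6                # assumed memory interface clock
baseline_width_bits = 32        # assumed per-switch bus width with local memory
shared_width_bits = 128         # widened bus suggested for the shared memory

def bandwidth_bps(width_bits, clock):
    return width_bits * clock   # peak bits per second on the data bus

print(f"baseline: {bandwidth_bps(baseline_width_bits, clock_hz)/1e9:.1f} Gb/s "
      f"on {baseline_width_bits} data pins")
print(f"shared:   {bandwidth_bps(shared_width_bits, clock_hz)/1e9:.1f} Gb/s "
      f"on {shared_width_bits} data pins")
# The 4x bandwidth gain costs roughly 4x the data pins on every switch package.
```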
SUMMARY OF THE INVENTION
There is a need for cascading a plurality of multiport switches to increase port density while minimizing system latency. There is also a need to increase the memory bandwidth of the cascaded switch arrangement without increasing the pin count.
These and other needs are met by the present invention, where a plurality of switch modules transfer the frame data of a corresponding received frame as data units. A memory interface enables the transfer of data units between the multiport switch modules and a shared memory system, increasing the overall bandwidth between the memory system and the multiport switch modules through the simultaneous access of multiple memories for the transfer of multiple data units of a respective packet.
One aspect of the present invention provides a switching system. The switching system includes a plurality of buffer memories and a plurality of multiport switch modules. Each multiport switch module includes a memory interface configured for outputting a data unit of a corresponding data frame being received, either to a corresponding one of the buffer memories or to another one of the multiport switch modules. The multiport switch modules are configured for supplying a group of the data units to the plurality of buffer memories simultaneously during each memory access cycle, according to a prescribed access protocol.
Since each of the multiport switch modules supplies the data units of its corresponding received data frame to the plurality of buffer memories, each buffer memory may store frame data for different multiport switch modules. Moreover, the transfer of the data units according to the prescribed access protocol enables simultaneous access of all the buffer memories, yielding a higher overall effective memory bandwidth between the multiport switch modules and the plurality of buffer memories. One exemplary embodiment of this aspect involves transfer of the data units between memory interfaces according to the prescribed access protocol, enabling the switch modules to fully optimize data transfer between the multiport switch modules and the buffer memories. Another exemplary embodiment of this aspect uses a distributed memory interface, which receives the data units from each of the multiport switch modules and stores the data units in the buffer memories according to the prescribed access protocol. Hence, the memory bandwidth is substantially increased without increasing the pin count of the switch modules.
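One way to picture this arrangement is as striping: each received frame is broken into fixed-size data units, and in a given memory access cycle each unit is written into a different buffer memory, so all of the memories are busy at once. The sketch below is an interpretation under that assumption; the memory count, unit size, and placement rule are illustrative and not taken from the patent.

```python
# Illustrative striping of a frame's data units across several buffer
# memories, so that every memory can be accessed in the same cycle.
# Memory count, unit size, and placement rule are assumptions.

NUM_MEMORIES = 3
UNIT_BYTES = 16

def split_into_units(frame):
    return [frame[i:i + UNIT_BYTES] for i in range(0, len(frame), UNIT_BYTES)]

def store_frame(frame, receiving_module, memories):
    """Write successive data units to successive buffer memories.

    Rotating the starting memory with the receiving module keeps the
    memories evenly loaded when several modules receive frames at once.
    """
    locations = []
    for n, unit in enumerate(split_into_units(frame)):
        mem = (receiving_module + n) % NUM_MEMORIES
        memories[mem].append((receiving_module, n, unit))
        locations.append(mem)
    return locations

memories = [[] for _ in range(NUM_MEMORIES)]
print(store_frame(b"x" * 64, receiving_module=0, memories=memories))
# -> [0, 1, 2, 0]: four 16-byte units spread across the three memories
```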
Another aspect of the present invention provides a method of storing data frames received by respective network switch modules. The method comprises scheduling in each network switch module a transfer of a data unit of a corresponding data frame being received,
Advanced Micro Devices, Inc.