Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
2000-05-02
2004-07-06
Yao, Kwang Bin (Department: 2667)
C370S429000
active
06760338
FIELD OF THE INVENTION
The present invention relates to computer network interfacing and switching, and more particularly, to an apparatus and method for cascading multiple multiport network switches to increase the number of ports in a network switching arrangement.
BACKGROUND ART
A multiport network switch in a packet switching network is coupled to stations on the network through its multiple ports. Data sent by one station on the network to one or more other stations is sent through the network switch. The data is provided to the network switch over a shared access medium according to, for example, an Ethernet protocol. The network switch, which receives the data at one of its multiple ports, determines the destination of the data frame from the data frame header. The network switch then transmits the data from the appropriate port to which the destination network station is connected.
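The forwarding decision described above can be sketched as a lookup in a learned address table mapping destination addresses to output ports. The class and method names below are illustrative assumptions, not taken from the patent:

```python
class MultiportSwitch:
    """Minimal sketch of a learning multiport switch's forwarding table."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # destination MAC address -> output port number

    def learn(self, src_mac, port):
        """Record which port a source address was last seen on."""
        self.mac_table[src_mac] = port

    def forward_port(self, dest_mac):
        """Return the output port for a destination, or None if unknown
        (an unknown destination would be flooded to all ports)."""
        return self.mac_table.get(dest_mac)


sw = MultiportSwitch(num_ports=12)
sw.learn("aa:bb:cc:00:00:01", port=3)
print(sw.forward_port("aa:bb:cc:00:00:01"))  # 3
```

A real switch performs this lookup in hardware on the frame header; the sketch only illustrates the table-driven decision.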
A single Ethernet network switch may have a number of 10/100 Mb/s ports, for example, 12 ports. The number of end stations connected to a single network switch is limited by the number of ports (i.e., the port density) of the network switch. However, today's users of networking devices demand flexibility and scalability without such constraints. To address this need, manufacturers have developed modular architectures that enable cascading of identical networking devices or network switch modules. By cascading these components in a loop, port density can be readily increased without redesign or development of costly interfaces.
Unfortunately, as the number of cascaded switches increases, so does the system latency (i.e., the aggregate processing delay of the switches). This system latency is attributable in part to the manner in which the switches store and retrieve the data frames in memory. One traditional memory architecture employs individual, local memories for each cascaded switch, as shown in FIG. 1. In this example, three multiport switches 2a, 2b, and 2c are cascaded together to permit the exchange of data frames received by any one of the switches and subsequent forwarding of the data frames out of a different multiport switch. These switches 2a, 2b, and 2c have memory interfaces 4a, 4b, and 4c, respectively. These memory interfaces 4a, 4b, and 4c enable switches 2a, 2b, and 2c to access their respective memories 6a, 6b, and 6c to write and read the data frames.
For purposes of explanation, it is assumed that a data frame is received at a port (i.e., a receive port) on switch 2a and that the data frame is destined for a node attached to a port on a different switch 2c. Switch 2a first stores the received data frame in memory 6a, and then determines whether to forward the received data frame out of its own port or send it to the next switch in sequence. Because the data frame is not destined for any port of switch 2a, the data frame is retrieved from memory 6a and sent to the next switch 2b via switch 2a's cascade port (i.e., the port to which the neighboring switch is connected). Upon receiving the data frame, switch 2b stores the data frame in memory 6b. Next, switch 2b examines the data frame and determines that it should be forwarded to switch 2c. Switch 2b forwards the data frame to switch 2c by reading the stored data frame from memory 6b and sending it out its cascade port. When the data frame arrives at switch 2c, switch 2c writes the data frame into its memory 6c, in similar fashion to the other switches 2a and 2b. At this point, however, switch 2c determines that the data frame should be forwarded out one of its ports, which is connected to the destination node. Accordingly, switch 2c reads the stored data frame and forwards it out the appropriate port. As evident from this example, the data frame, as it is transferred from switch to switch, is stored into and read from the memories of the respective switches numerous times. This series of write and read operations imposes costly delay in the switching system and increases the cascade bandwidth requirement.
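The cost of the cascade walkthrough above can be tallied directly: every switch on the path performs one memory write and one memory read for the same frame. A minimal illustration, with the path 2a → 2b → 2c from the example:

```python
def cascade_memory_ops(num_switches_on_path):
    """In the individual-memory arrangement, each switch on the frame's
    path writes the frame into its local memory and later reads it back,
    so memory operations grow linearly with the number of cascaded hops."""
    writes = num_switches_on_path
    reads = num_switches_on_path
    return writes, reads


# Frame enters switch 2a and exits switch 2c: three switches on the path.
w, r = cascade_memory_ops(3)
print(w, r)  # 3 3
```

This linear growth in redundant memory traffic is the latency and cascade-bandwidth penalty the passage describes.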
Hence, the delay in the switching system may cause the switch to be unable to process data packets fast enough relative to the network traffic, creating congestion conditions. In other words, the switch is no longer a non-blocking switch.
To address this latency problem, one proposed solution is to employ a common memory among the various switches.
FIG. 2 illustrates such a system, in which switches 2a, 2b, and 2c share memory 7 via memory interfaces 4a, 4b, and 4c, respectively. Under this approach, each of the interfaces 4a, 4b, and 4c is required to have a wider data bus to maintain the speed of read and write accesses as compared to the individual memory arrangement of FIG. 1. For example, the bus width of the memory interfaces 4a, 4b, and 4c may need to be increased. The main drawback with this common memory implementation is that the increase in memory bandwidth also results in a proportionate increase in the number of pins of the switches. An increase in the number of pins disadvantageously requires more area on the circuit board, resulting in greater package cost.
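The pin-count drawback noted above follows from simple scaling: a shared memory serving N switches must sustain roughly N times the bandwidth of one local memory at the same clock rate, so the data bus width (and hence the pin count per memory interface) scales with N. The numbers below are assumptions chosen for illustration, not figures from the patent:

```python
def required_bus_width(num_switches, per_switch_width_bits):
    """Sketch of the shared-memory bandwidth argument: to keep access
    speed constant while serving all switches, the shared bus must be
    roughly num_switches times as wide as a single local-memory bus."""
    return num_switches * per_switch_width_bits


# Three cascaded switches, each of which would use a 32-bit local bus:
print(required_bus_width(3, 32))  # 96 data pins vs. 32 for a local memory
```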
SUMMARY OF THE INVENTION
There is a need to cascade a plurality of multiport switches to increase port density while minimizing system latency. There is also a need to increase the memory bandwidth of the cascaded switch arrangement without increasing the pin count.
There is also a need to provide a more efficient integration of a plurality of multiport switch modules by centralizing core switching functions without sacrificing latency in the multiport switch system.
These and other needs are met by the present invention, where a plurality of switch modules transfer frame data of a corresponding received frame, and switching logic monitors the frame data during the transfers for centralized switching decisions between the switch modules. The memory interface enables the transfer of data units between the multiport switch modules and a shared memory system, increasing the overall bandwidth between the memory system and the multiport switch modules by the simultaneous access of multiple memories for transfer of multiple data units for respective packets. Moreover, the monitoring by the switching logic enables switching decisions to be made as the frame data is distributed throughout the switching system.
One aspect of the present invention provides a switching system. The switching system includes a plurality of multiport switch modules, each configured for outputting frame data for a corresponding received data frame, a plurality of buffer memories, each coupled to a corresponding one of the multiport switch modules and configured for storing selected frame data of the data frames from the multiport switch modules, a shared data interface configured for receiving the frame data and the corresponding frame data from each of the multiport switch modules, and switching logic configured for monitoring at least a portion of the frame data received by the shared data interface and configured for selecting at least one of the buffer memories for storage of the frame data of each of the received data frames.
Since each of the multiport switch modules supplies the frame data of the corresponding received data frame to the plurality of buffer memories, each buffer memory may store frame data for different multiport switch modules. Moreover, the shared data interface enables frame data to be distributed for concurrent access of all the buffer memories, enabling a higher overall effective memory bandwidth between the multiport switch modules and the plurality of buffer memories. The switching logic monitors the frame data as it is transferred by the multiport switch modules, enabling frame forwarding decisions to be made for all the multiport switch modules during a single memory storage operation for that data frame.
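The claimed arrangement can be sketched as follows: switch modules emit frame data onto a shared data interface, and centralized switching logic snoops the data in flight, choosing a buffer memory so that the frame is stored exactly once. All class names, method names, and the buffer-selection policy below are illustrative assumptions, not the patent's implementation:

```python
class SwitchingLogic:
    """Centralized logic that monitors frame data on the shared interface."""

    def select_buffer(self, module_id, frame, num_buffers):
        # Illustrative policy: derive the buffer index from the destination
        # address so frames spread across the buffer memories.
        return frame["dest"] % num_buffers


class SharedDataInterface:
    """Receives frame data from every switch module and distributes it."""

    def __init__(self, buffer_memories, switching_logic):
        self.buffers = buffer_memories
        self.logic = switching_logic

    def transfer(self, module_id, frame):
        # The switching logic monitors the frame data as it is transferred,
        # making the forwarding decision during this single storage pass...
        target = self.logic.select_buffer(module_id, frame, len(self.buffers))
        # ...so the frame is written exactly once, into the chosen buffer.
        self.buffers[target].append((module_id, frame))
        return target


buffers = [[], [], []]                      # three shared buffer memories
sdi = SharedDataInterface(buffers, SwitchingLogic())
slot = sdi.transfer(module_id=0, frame={"dest": 7, "payload": b"data"})
print(slot, len(buffers[slot]))  # 1 1
```

The key contrast with the FIG. 1 cascade is the single write: the switching decision is made while the frame is in flight, rather than after each hop's store-and-retrieve cycle.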
Merchant Shashank
Sang Jinqlih
Advanced Micro Devices , Inc.
Yao Kwang Bin