Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
1998-10-05
2002-10-22
Kizou, Hassan (Department: 2662)
active
06470021
ABSTRACT:
TECHNICAL FIELD
This invention relates generally to network switching devices. More particularly, this invention relates to switching architecture for computer network switches such as LAN (local area network) switches.
BACKGROUND OF THE INVENTION
A local area network (LAN) is a system for directly connecting multiple computers so that they can directly exchange information with each other. LANs are considered local because they are designed to connect computers over a small area, such as an office, a building, or a small campus. LANs are considered systems because they are made up of several components, such as cables, repeaters, switches, routers, network interfaces, nodes (computers), and communication protocols. Ethernet is one such protocol. Information is communicated through a LAN in frames transported within data packets. (“Frame” and “data packet,” while technically different, are often used interchangeably to describe data carrying the information.)
A LAN switch (or, more generally, a packet switch) is generally defined as a multi-port device that transfers data between its different ports based on the destination addresses and/or other information found in the individual packets it receives. Switches can be used to segment LANs, connect different LANs, or extend the collision diameter of LANs. Switches are of particular importance to Ethernet-based LANs because of their ability to increase network diameter. Additional background information on packet switches can be found in a number of references such as Fast Ethernet (1997) by L. Quinn et al., Computer Networks (3rd Ed. 1996) by A. Tanenbaum, and High-Speed Networking with LAN Switches (1997) by G. Held, all of which are incorporated herein by reference.
There are three common switching architectures used in packet switches for forwarding frames from one port to another: crosspoint (also known as crossbar) matrix, shared bus, and shared memory. A crossbar matrix essentially creates a very transient “circuit” between ports for the duration of a frame (or subset of a frame) exchange. There is an electronic switch located at each crosspoint in the matrix between every matrix input and output. A switch controller establishes a direct connection within the switch between two ports, based on the destination address and/or other information within a data packet acquired by the packet's entry port. The packet is then forwarded directly from the entry port (also referred to as the sending port) to an exit port (also referred to as a destination port). Latency through the switch is minimal since the entire frame carrying the packet need not be stored within the switch in the process of forwarding the packet. A drawback of the crossbar matrix architecture, however, is the head-of-line blocking that occurs when more than one entry port attempts to send data to the same exit port. The entry port at the head of the line has sole access to the exit port, and data transmission is delayed at the second entry port. This transmission delay can cause the input buffers of the second port to fill and possibly overflow, requiring data packets to be retransmitted to the entry port a number of times before they can be accepted.
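The head-of-line blocking described above can be made concrete with a small simulation. The following Python sketch is illustrative only (it is not part of the patent); the function and variable names are hypothetical, and it models a single arbitration cycle in which each exit port can accept a frame from at most one entry port.

from collections import deque

def crossbar_cycle(input_queues):
    # One arbitration cycle of a crossbar switch.
    # input_queues: one deque per entry port, holding the exit-port number of
    # each queued frame (the head of the deque is the next frame to send).
    claimed_exits = set()
    forwarded, blocked = [], []
    for entry_port, queue in enumerate(input_queues):
        if not queue:
            continue
        exit_port = queue[0]            # only the head frame is eligible
        if exit_port in claimed_exits:
            blocked.append(entry_port)  # head-of-line blocking: stall this port
        else:
            claimed_exits.add(exit_port)
            queue.popleft()
            forwarded.append((entry_port, exit_port))
    return forwarded, blocked

# Entry ports 0 and 1 both have a frame for exit port 3 at the head of their
# queues; port 1 stalls even though its second frame targets idle exit port 2.
queues = [deque([3]), deque([3, 2])]
print(crossbar_cycle(queues))           # ([(0, 3)], [1])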
A shared-bus architecture uses a common bus as the exchange mechanism for data packets between ports. Each port (or small group of ports) has its own memory, both for input and output queues, depending on the design. Like the crossbar matrix, the shared bus architecture is subject to blocking at busy ports.
A shared memory architecture uses a single common memory as the exchange mechanism for frames between ports. All ports access the shared memory via a shared memory bus. An arbitration mechanism, such as time division multiplexing, controls port access to the memory, assuring each entry port a chance to store the data it receives within memory where the exit port can then access it. A problem with present shared memory architectures, however, is that they are not fast enough to transfer multiple gigabits of data per second from one port to another without blocking port access to memory. Such transfer rates are required for newer, full duplex gigabit packet switches for use in LANs and wide area networks (WANs).
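As a rough illustration of the time division multiplexing mentioned above, the Python sketch below (not from the patent; all names are hypothetical) rotates ownership of a single shared memory bus among the ports, so each port is guaranteed a slot but the memory's total bandwidth is divided among all ports.

def tdm_schedule(num_ports, num_cycles):
    # Yield (cycle, owning_port): port i owns the shared bus on every
    # cycle where cycle % num_ports == i.
    for cycle in range(num_cycles):
        yield cycle, cycle % num_ports

shared_memory = []                          # stands in for the single shared memory
pending = {0: ["frame-A"], 2: ["frame-B"]}  # frames waiting at ports 0 and 2

for cycle, port in tdm_schedule(num_ports=4, num_cycles=8):
    if pending.get(port):                   # a port may only write during its slot
        shared_memory.append((port, pending[port].pop(0)))

print(shared_memory)                        # [(0, 'frame-A'), (2, 'frame-B')]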
A multi-gigabit packet switch that uses a conventional shared memory architecture might theoretically be achieved by making the shared memory very wide (in terms of bit width) to reduce the number of memory accesses. Wide memory, however, is physically large and often spread across multiple printed circuit boards, and is therefore difficult to communicate with, since signals must travel long distances to reach all of its pins.
Furthermore, all ports of a switch with a shared memory architecture must have access to the shared memory. Traditionally this access is provided by a time division multiplexing (TDM) memory bus that allocates times when each port controls the bus. This shared bus creates connectivity and signal integrity problems. Wiring a wide bus across multiple printed circuit boards requires large and expensive connectors and printed circuit cards. And the multiple ports directly connected to the bus cause electrical noise that reduces the maximum frequency at which the bus can run, effectively limiting the number of ports the switch can support.
An objective of this invention, therefore, is to provide an improved shared memory architecture for a switching device that allows for all ports to gain access to the shared memory at a bandwidth of a gigabit or greater, without being blocked or suffering the other problems noted above.
SUMMARY OF THE INVENTION
A switching device in accordance with the invention includes ports, shared memory, and memory subsystems for routing data between the ports and the shared memory. In one aspect of the invention, each port has its own signal paths that may carry fragments of a data stream between the port and each memory subsystem. The aggregate memory system access is thus very wide (the number of ports times the size of the busses to/from memory for each port), but each port's access is relatively small.
Each port's access to individual memory subsystems is smaller still—only a fragment of the port's data stream is sent to each subsystem. This splitting of the data stream among multiple memory subsystems reduces the bandwidth required for each port through a given memory subsystem. For example, if there are four memory subsystems, a port's memory access bus is fragmented into four pieces, with only ¼ of the port's bandwidth sent through each memory subsystem.
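A minimal Python sketch of this fragmentation follows, under the assumption of byte-interleaved striping (the patent does not specify how a stream is divided, and the functions below are hypothetical): with four memory subsystems, each subsystem receives one quarter of every frame and therefore carries only one quarter of the port's bandwidth.

def split_stream(frame_bytes, num_subsystems):
    # Stripe a frame byte-by-byte across the memory subsystems.
    return [frame_bytes[i::num_subsystems] for i in range(num_subsystems)]

def merge_fragments(fragments):
    # Reassemble a frame from its per-subsystem fragments.
    out = bytearray(sum(len(f) for f in fragments))
    for i, fragment in enumerate(fragments):
        out[i::len(fragments)] = fragment
    return bytes(out)

frame = bytes(range(16))
fragments = split_stream(frame, 4)
print([len(f) for f in fragments])       # [4, 4, 4, 4]: 1/4 of the frame each
assert merge_fragments(fragments) == frame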
In another aspect of the invention, each memory subsystem may include storage for each port for storing data stream fragments received from the port and a time division multiplexer for selecting among the stored data stream fragments from the ports. The multiplexer is part of a shared TDM bus within the memory subsystem, which avoids the electrical noise problems of running the shared bus directly from memory to each port. The TDM bus of each memory subsystem need support only some fraction of a port's bandwidth times the number of ports. Unlike the prior approach of a single, wide TDM bus directly connecting the ports to shared memory, the invention scales well for high port-count gigabit switches. Each additional port adds only a fraction of a gigabit bandwidth burden on each memory subsystem.
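To illustrate how a memory subsystem might combine per-port storage with its internal TDM bus, the following Python sketch (again illustrative only; the class and its interface are assumptions rather than the patent's design) buffers fragments per port and lets a time division multiplexer select one port's fragment per slot.

from collections import deque

class MemorySubsystem:
    def __init__(self, num_ports):
        self.fragment_buffers = [deque() for _ in range(num_ports)]  # per-port storage
        self.memory = []        # this subsystem's slice of the shared memory
        self.slot = 0           # current TDM slot

    def receive(self, port, fragment):
        # A port delivers a fragment of its data stream to this subsystem.
        self.fragment_buffers[port].append(fragment)

    def tdm_tick(self):
        # Advance one TDM slot: the selected port's buffered fragment,
        # if any, is written into the subsystem's memory.
        port = self.slot % len(self.fragment_buffers)
        if self.fragment_buffers[port]:
            self.memory.append((port, self.fragment_buffers[port].popleft()))
        self.slot += 1

subsystem = MemorySubsystem(num_ports=4)
subsystem.receive(0, b"\x01")
subsystem.receive(2, b"\x02")
for _ in range(4):                       # one full TDM rotation over the ports
    subsystem.tdm_tick()
print(subsystem.memory)                  # [(0, b'\x01'), (2, b'\x02')]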
In yet another aspect of the invention, the memory may store data stream fragments selected by the memory subsystems. In storing data stream fragments in memory, the memory subsystems each select in parallel the stored data stream fragment from the same port. For example, all memory subsystems may be storing data stream fragments from ports 0 through 10. To store data from port 0, all memory subsystems select their data stream fragment from port 0. Similarly, in retrieving data from memory, the memory subsystems each select in parallel the same port to receive data stream fragments being read from memory.
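The lock-step selection described above can be sketched as follows (illustrative Python only; the addressing scheme and helper names are assumptions): on a store, every memory subsystem accepts its fragment from the same port at the same address, and on a retrieve, every subsystem reads the fragment at that common address so the port's stream can be merged back together.

def store_frame(subsystem_memories, port, fragments):
    # Every subsystem stores its fragment from the SAME port in parallel,
    # at the same address in its own memory.
    for memory, fragment in zip(subsystem_memories, fragments):
        memory.append((port, fragment))
    return len(subsystem_memories[0]) - 1    # common address of the stored frame

def retrieve_frame(subsystem_memories, address):
    # Every subsystem reads its fragment at the SAME address in parallel.
    port = subsystem_memories[0][address][0]
    fragments = [memory[address][1] for memory in subsystem_memories]
    return port, fragments

memories = [[] for _ in range(4)]            # one memory per subsystem
address = store_frame(memories, port=0, fragments=[b"ab", b"cd", b"ef", b"gh"])
print(retrieve_frame(memories, address))     # (0, [b'ab', b'cd', b'ef', b'gh'])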
These and other aspects, features, and advantages of the invention will become more apparent from the following detailed description and accompanying drawings.
Couch David K.
Daines Bernard N.
Davis Greg W.
Hammond Thomas J.
Schalick Christopher A.
Alcatel Internetworking (PE), Inc.
Kizou Hassan
Yin Lu