Memory organization in a switching device

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Details

Classification: C370S395100, C370S389000
Type: Reexamination Certificate
Status: active
Patent number: 06493347


BACKGROUND
The present invention relates generally to data routing systems, and more particularly to methods and apparatus for efficiently routing packets through a network.
In packet switched communication systems, a router is a switching device which receives packets containing data or control information on one port, and based on destination information contained within the packet, routes the packet out another port to the destination (or an intermediary destination).
Conventional routers perform this switching function by evaluating header information contained within a first data block in the packet in order to determine the proper output port for a particular packet.
Efficient switching of packets through the router is of paramount concern. Referring now to FIG. 1a, a conventional router includes a plurality of input ports 2, each including an input buffer (memory) 4, a switching device 6, and a plurality of output ports 8.
Data packets received at an input port 2 are stored, at least temporarily, in input buffer 4 while destination information associated with each packet is decoded to determine the appropriate switching through the switching device 6. The size of input buffer 4 is based in part on the speed with which the destination information can be decoded. If the decoding process takes too long relative to the rate at which packets arrive, large memory elements may be required or packets may be dropped.
In addition, the size of the input buffer may be influenced by a condition referred to as “blocking”. Packets may be forced to remain in the input buffer after the destination information is decoded if the switching device cannot make the connection. Blocking refers to a condition in which a connection cannot be made in the switch because the desired output port is unavailable (the port is busy, e.g., routing another packet from a different input port). In summary, the size of input buffer 4 depends on a number of factors, including the line input rate, the speed of the look-up process, and the blocking characteristics of the switching device.
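As a rough illustration of how these factors interact (this calculation is not taken from the patent; the line rate and latencies below are hypothetical), a first-order estimate multiplies the line input rate by the worst-case time a packet spends waiting on look-up and on a blocked output port:

```c
#include <stdio.h>

/* Rough, illustrative sizing of an input buffer (not from the patent).
 * The buffer must absorb traffic arriving at the line rate while a packet
 * waits for route look-up and for a blocked output port to free up.
 * All rates and latencies here are hypothetical. */
int main(void)
{
    double line_rate_bps   = 10e9;   /* 10 Gb/s input line rate          */
    double lookup_time_s   = 2e-6;   /* worst-case route look-up latency */
    double blocking_time_s = 8e-6;   /* worst-case wait on a busy port   */

    double bits  = line_rate_bps * (lookup_time_s + blocking_time_s);
    double bytes = bits / 8.0;

    printf("minimum input buffer: %.0f bytes\n", bytes);
    return 0;
}
```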
Unfortunately, conventional routers are inefficient in a number of respects. Each input port includes a dedicated input buffer and memory sharing between input ports is not provided for in the design. Each input buffer must be sized to meet the maximum throughput requirements for a given port. However, design trade-offs (cost) often necessitate smaller buffers for each port. With the smaller buffers, the possibility arises for packets to be dropped due to blocking conditions. While excess memory capacity typically exists in the router (due to the varied usage of the input ports), no means for taking advantage of the excess is afforded.
To minimize the occurrence of dropped packets, designers developed non-head-of-line-blocking routers. Referring now to FIG. 1b, a conventional non-head-of-line-blocking router includes a plurality of input ports 2, each including an input buffer (memory) 4, a switching device 6, and a plurality of output ports 8, each having an output buffer 9. In order to provide non-head-of-line blocking, each output port 8 is configured to include an output buffer 9. Each output port can simultaneously transmit packets while receiving new packets for output at a later time. As the size of the output buffer is increased, fewer packets are dropped due to head-of-line blocking at the input ports.
However, these designs are even more inefficient in terms of memory capacity and cost. Again, each output port includes a dedicated output buffer, and the design does not provide for memory sharing between output ports. Each output buffer must be sized to meet the maximum throughput requirements for a given port (in order to maintain its non-head-of-line-blocking characteristics). Even more excess memory capacity typically exists in the router (due to the varied usage of the input ports and output ports), yet no means of taking advantage of the excess is afforded. As a result, these devices use twice the memory capacity and bandwidth actually required to support the amount of data being moved through them.
SUMMARY OF THE INVENTION
In general, in one aspect, the invention provides a router for switching data packets from a source to a destination in a network. The router includes an input port for receiving a data packet and a physically distributed memory including two or more banks, each of which includes a global data area for storing portions of the data packet. The router further includes an input switch for streaming uniform portions of the data packet across the memory banks, a controller for determining packet routing through the router, an output switch for extracting in order the portions of packet data stored in the global data area of each memory bank and forwarding the packet data to an appropriate output port, and an output port for transferring the data packet to the destination.
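To make the streaming step concrete, the sketch below shows one way an input switch could stripe fixed-size cells of a packet across memory banks in round-robin order, recording where each cell lands. It is not the patent's implementation; the cell size, bank count, in-memory bank model, and helper names (bank_write, stripe_packet) are assumptions for illustration.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: stripe fixed-size cells of a packet across
 * several memory banks in round-robin order.  Sizes and the bank model
 * are hypothetical. */
#define CELL_SIZE   64
#define NUM_BANKS    8
#define BANK_SLOTS 1024

static unsigned char bank_mem[NUM_BANKS][BANK_SLOTS][CELL_SIZE]; /* global data areas   */
static unsigned      bank_next[NUM_BANKS];                       /* next free slot/bank */

/* Write one cell into a bank's global data area; return the slot it landed in. */
static unsigned bank_write(int bank, const unsigned char cell[CELL_SIZE])
{
    unsigned slot = bank_next[bank]++;
    memcpy(bank_mem[bank][slot], cell, CELL_SIZE);
    return slot;
}

/* Stripe a packet across the banks: consecutive cells go to consecutive
 * banks, starting from whichever bank the input switch happens to pick.
 * The recorded slots later feed the packet's linking information. */
static size_t stripe_packet(const unsigned char *pkt, size_t len, int first_bank,
                            unsigned slots[], size_t max_cells)
{
    size_t cell = 0;
    for (size_t off = 0; off < len && cell < max_cells; off += CELL_SIZE, ++cell) {
        unsigned char buf[CELL_SIZE] = {0};                 /* zero-pad the final cell */
        size_t n = (len - off < CELL_SIZE) ? (len - off) : CELL_SIZE;
        memcpy(buf, pkt + off, n);
        slots[cell] = bank_write((first_bank + (int)cell) % NUM_BANKS, buf);
    }
    return cell;                                            /* number of cells written */
}

int main(void)
{
    unsigned char pkt[200];
    memset(pkt, 0xAB, sizeof pkt);

    unsigned slots[8];
    size_t cells = stripe_packet(pkt, sizeof pkt, 3, slots, 8);
    printf("stored %zu cells of %d bytes across %d banks\n",
           cells, CELL_SIZE, NUM_BANKS);
    return 0;
}
```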
In another aspect, the invention provides a router for switching data packets from a source to a destination in a network in which the router includes a distributed memory. The distributed memory includes two or more memory banks. Each memory bank stores uniform portions of data packets received from a source, along with linking information for each data packet; the linking information allows the uniform portions of a packet to be extracted from distributed locations in memory, in proper order, after the router has made a routing determination.
Aspects of the invention include numerous features. The distributed memory includes an output queue for storing a notification indicative of the routing of the data packet through the router. The notification includes linking information for retrieving at least a first cell of the data packet from the distributed memory. The notification includes linking information for the first 5 cells of the data packet.
The notification includes an address for an indirect cell. The indirect cell is stored in the distributed memory and includes linking information for extracting cells in order from the distributed memory.
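A minimal sketch of how the notification and indirect-cell linking structures described above might be laid out. The five-cell limit and the indirect-cell address follow the text; the field names, widths, and the indirect cell's capacity are assumptions.

```c
#include <stdint.h>

/* Sketch of the linking structures described above.  The five-cell limit
 * and the indirect-cell pointer follow the text; field names, widths and
 * the indirect-cell capacity are assumptions for illustration. */

/* Identifies where one cell lives in the distributed memory:
 * which bank, and which slot within that bank's global data area. */
typedef struct {
    uint16_t bank;
    uint32_t slot;
} cell_ptr_t;

/* Notification placed on an output queue once the routing decision is made.
 * It carries linking information for the first five cells directly; longer
 * packets continue through an indirect cell. */
typedef struct {
    uint32_t   dest_port;       /* output port chosen by the controller */
    uint32_t   packet_len;      /* total packet length in bytes         */
    cell_ptr_t first_cells[5];  /* linking info for the first 5 cells   */
    cell_ptr_t indirect_cell;   /* address of an indirect cell, if any  */
} notification_t;

/* An indirect cell is itself stored in the distributed memory and holds
 * linking information for the remaining cells, chaining to a further
 * indirect cell when a packet is long enough to need one. */
typedef struct {
    cell_ptr_t cells[14];       /* capacity chosen arbitrarily here     */
    cell_ptr_t next_indirect;
} indirect_cell_t;
```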
Each memory bank includes a global data area for storing portions of data packets and a notification area for storing notifications. The notification area is sized to be 1/5 of the size of the global data area for a given memory bank.
The router includes a plurality of multi-function multiports. Each multi-function multiport includes one or more input ports and output ports for receiving and transmitting data packets through the router. A portion of the distributed memory is located within each multi-function multiport such that each multi-function multiport includes a memory bank having a global data area and a notification area. The notification area of a given multi-function multiport stores notifications for data packets to be routed through an output port of the given multi-function multiport. Memory reads and writes to and from the distributed memory are sized to be 64 bytes.
The router includes a mapping means for mapping from a virtual address space to a physical address space associated with the distributed memory. The mapping means is used for detecting aged packets in memory and allowing for easy overwriting thereof such that garbage collection of aged packets is not required.
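The text does not describe how the mapping detects aged packets, so the sketch below is only one plausible reading, with hypothetical names throughout: each virtual address carries a generation number, and a mapping entry whose stored generation no longer matches is treated as aged and can simply be overwritten.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of a virtual-to-physical mapping that exposes aged
 * packets.  This is only one plausible reading of the text: each virtual
 * cell address carries a generation number, and an entry whose stored
 * generation no longer matches is treated as aged -- it can simply be
 * overwritten, with no separate garbage-collection pass. */

#define MAP_SLOTS 1024

typedef struct {
    uint32_t physical;     /* physical address in the distributed memory   */
    uint32_t generation;   /* generation stamped when the cell was written */
    bool     valid;
} map_entry_t;

static map_entry_t map_table[MAP_SLOTS];

/* Record a mapping when a cell is written; overwriting an aged entry is harmless. */
void map_write(uint32_t vaddr, uint32_t generation, uint32_t paddr)
{
    map_entry_t *e = &map_table[vaddr % MAP_SLOTS];
    e->physical   = paddr;
    e->generation = generation;
    e->valid      = true;
}

/* Translate a virtual address; returns false if the entry has aged out
 * (generation mismatch), meaning its cell may already have been reused. */
bool map_lookup(uint32_t vaddr, uint32_t generation, uint32_t *paddr)
{
    const map_entry_t *e = &map_table[vaddr % MAP_SLOTS];
    if (!e->valid || e->generation != generation)
        return false;
    *paddr = e->physical;
    return true;
}
```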
In another aspect, the invention provides a method of routing a data packet through a router in a system transmitting data packets between a source and a destination over a network including the router. The method includes receiving the data packet, dividing the data packet into cells of a fixed size, and storing the cells in a distributed memory. The distributed memory includes two or more memory banks, and consecutive cells from the data packet are stored in consecutive banks of the distributed memory. Linking information is stored in one bank of the memory for linking the cells of the data packet that are stored throughout the distributed memory. The linking information is used for extracting the cells in order for transmission.
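As a companion to the striping sketch above, and under the same hypothetical assumptions (an in-memory bank model, 64-byte cells), the following shows how stored linking information could be walked to extract the cells in order and reassemble the packet for transmission.

```c
#include <stdio.h>
#include <string.h>

/* Companion sketch, same hypothetical assumptions as the striping example:
 * walk the stored linking information and pull cells back out of the banks
 * in their original order, reassembling the packet for transmission. */
#define CELL_SIZE   64
#define NUM_BANKS    8
#define BANK_SLOTS 1024

static unsigned char bank_mem[NUM_BANKS][BANK_SLOTS][CELL_SIZE];

/* Linking information for one cell: which bank, which slot. */
typedef struct { int bank; unsigned slot; } cell_link_t;

/* Reassemble a packet of packet_len bytes from its cells, in order. */
static void extract_packet(const cell_link_t *links, size_t num_cells,
                           unsigned char *out, size_t packet_len)
{
    size_t copied = 0;
    for (size_t i = 0; i < num_cells && copied < packet_len; ++i) {
        size_t n = packet_len - copied;
        if (n > CELL_SIZE)
            n = CELL_SIZE;                   /* last cell may be partial */
        memcpy(out + copied, bank_mem[links[i].bank][links[i].slot], n);
        copied += n;
    }
}

int main(void)
{
    /* Pretend a 150-byte packet was striped into banks 3, 4 and 5, slot 0. */
    cell_link_t links[3] = { {3, 0}, {4, 0}, {5, 0} };
    memset(bank_mem[3][0], 'A', CELL_SIZE);
    memset(bank_mem[4][0], 'B', CELL_SIZE);
    memset(bank_mem[5][0], 'C', CELL_SIZE);

    unsigned char pkt[150];
    extract_packet(links, 3, pkt, sizeof pkt);
    printf("first byte %c, last byte %c\n", pkt[0], pkt[sizeof pkt - 1]);
    return 0;
}
```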
