Compression of forwarding decisions in a network device
Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Reexamination Certificate
2000-05-12
2002-01-29
Phan, Trong (Department: 2819)
C370S401000
active
06343078
FIELD OF THE INVENTION
The present invention relates generally to data communications networks and more particularly relates to an apparatus for and a method of compressing forwarding decisions in a network device.
BACKGROUND OF THE INVENTION
Currently, the number of data networks and the volume of traffic these networks carry are increasing at an ever-increasing rate. The network devices that make up these networks generally consist of specialized hardware designed to move data at very high speeds. Typical networks, such as Ethernet based networks, mainly comprise end stations, Ethernet hubs, switches, routers, bridges and gateways. ATM networks are constructed with similar network devices adapted to carry ATM traffic, e.g., ATM capable end stations, edge devices and ATM switches.
With the ever increasing user demand for faster data communications, network devices have had to perform at higher and higher speeds. A primary function of many network devices is to receive frames (packets, cells, etc.) at one or more ingress ports and forward each frame to the appropriate egress port. Accomplishing this requires that the network device make a forwarding decision about the received frame. This, in turn, requires processing and memory resources to store the frame until the forwarding decision is complete and the frame is placed onto the appropriate output port(s).
In particular, when a frame arrives at a switch (the same applies to cells and packets as well), typically only its header is examined and a forwarding decision is made from it. Applying the decision to the frame requires that the entire frame be queued to an output port. In some implementations, the frame is not scheduled for transmission until the complete frame has been successfully received. With this scheme, new frames destined to the same port may be blocked until the previous frame is received in its entirety and transmitted.
Moreover, if the frame is determined to be illegal, i.e., a transmission error occurred while it was being received, it must be removed from the queue, which is a time consuming operation. Alternatively, considering many of the switching architectures used today, deferring the forwarding decision until after the entire frame has been received is problematic because the switch must store the frame header for a relatively long time. During this time, thousands of other frames may arrive before the entire frame is received and the forwarding decision is made. This requires a very large amount of memory space that is typically not available in ASICs.
In prior art store and forward schemes, the forwarding decision is made in an input server after receipt of the header portion, but the frame is not queued to an output port until the entire frame has been received. Thus, the frame header and the forwarding result must be stored until the complete frame is received.
For example, consider a network device with 64 output ports. For each frame received, a forwarding decision must be made to determine which of the output ports to forward the frame to, or whether to drop the frame altogether. The decision to forward to one or more ports is indicated in a 64 bit output port vector whereby a bit is set for each port the frame is to be output to. If the forwarding decision is made after receipt of the header, then the 64 bit output port vector must be stored in a memory queue until the entire frame has been received. Once received, the output port vector is retrieved from memory and the frame is directed to the output ports indicated in the corresponding output port vector.
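For illustration only (this sketch and its names are not part of the patent), such a 64-port output port vector can be modeled as a 64-bit mask in which bit i is set when the frame is to be sent out port i, and an all-zero mask means the frame is dropped:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: a 64-bit output port vector, one bit per egress port. */
typedef uint64_t port_vector_t;

static port_vector_t vector_with_ports(const int *ports, int count)
{
    port_vector_t v = 0;
    for (int i = 0; i < count; i++)
        v |= (port_vector_t)1 << ports[i];   /* mark each destination port */
    return v;
}

int main(void)
{
    int dests[] = { 3, 17, 63 };             /* forward to ports 3, 17 and 63 */
    port_vector_t v = vector_with_ports(dests, 3);
    printf("output port vector: 0x%016llx\n", (unsigned long long)v);
    return 0;
}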
A disadvantage of this approach is that, depending on the number of entries in the forwarding table (including the memory in the input servers and other related queues), the output port vector information can consume a very large amount of memory. For example, consider a forwarding table memory having 32K entries in a network device incorporating 16 ports. The memory required just to hold the output port vector information totals 32K×16 bits. The problem is compounded for a network device incorporating 64 ports: the memory space required for storing the output port vector information alone is then 32K×64 bits, an increase by a factor of four. Placing such a large quantity of memory in the network device is prohibitive in terms of increased cost, increased complexity, increased physical size, reduced reliability, increased difficulty in manufacturing and test, etc., and the problem grows as the number of ports in a device increases.
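The figures above work out as follows; this is a hypothetical back-of-the-envelope calculation, not text from the patent:

#include <stdio.h>

/* Sizing the uncompressed scheme: one full output port vector per entry. */
int main(void)
{
    const unsigned entries = 32 * 1024;          /* 32K forwarding entries   */
    const unsigned bits16  = entries * 16;       /* 16-port device           */
    const unsigned bits64  = entries * 64;       /* 64-port device           */

    printf("16 ports: %u Kbit (%u KB)\n", bits16 / 1024, bits16 / (8 * 1024));
    printf("64 ports: %u Kbit (%u KB)\n", bits64 / 1024, bits64 / (8 * 1024));
    return 0;
}

That is, roughly 64 KB of dedicated port vector storage for a 16 port device grows to roughly 256 KB for a 64 port device.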
Thus, it is desirable to be able to store, in a network device, output port vector information associated with the forwarding decision made for a received frame, without the need to incorporate large amounts of memory.
SUMMARY OF THE INVENTION
The present invention provides an apparatus for and a method of compressing the forwarding decision for a frame within a network device. The invention enables a huge reduction in the amount of memory required to store forwarding decisions. The memory savings made possible by the present invention increase as the number of ports on the network device represented by the forwarding decision increases.
For illustration purposes, the principles of the present invention are described in the context of an example network device, namely an ATM edge device having a plurality of Ethernet ports (e.g., 64 ports) and ATM ports (e.g., 2 ports). The example device is thus simultaneously connected to an Ethernet network and an ATM network.
The invention is operative to compress a forwarding decision into the form of a forwarding pointer that occupies far less memory space than the corresponding output port vector. The compressed forwarding pointers are stored in a forwarding table that is accessed using a hash function. A forwarding CAM is used to resolve conflicts in the hash table. The output port vectors are stored in an output port vector table that comprises a relatively small number of possible combinations of the port vector. For example, considering 64 output ports, the port vector table may comprise only 1024 entries. Software in the network device configures the port vector table with the output port vectors that are in use at the moment; thus, only the output port vectors corresponding to frames that may actually be received need to be stored in the port vector table.
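A minimal data structure sketch follows, assuming (consistent with the 1024-entry example above) that a 10-bit compressed pointer is wide enough to name any output port vector in use; the hash function, table sizes and field names here are illustrative assumptions rather than the patent's actual design:

#include <stdint.h>

#define FWD_ENTRIES  (32 * 1024)   /* hashed forwarding table entries        */
#define CAM_ENTRIES  64            /* small overflow CAM for hash conflicts  */
#define PV_ENTRIES   1024          /* output port vectors currently in use   */

struct fwd_entry {
    uint64_t key;        /* lookup key, e.g. destination MAC address         */
    uint16_t pv_index;   /* compressed forwarding pointer into the PV table  */
    uint8_t  valid;
};

static struct fwd_entry fwd_table[FWD_ENTRIES];
static struct fwd_entry fwd_cam[CAM_ENTRIES];
static uint64_t         port_vector_table[PV_ENTRIES]; /* filled by software */

/* Illustrative hash: fold the key down to an index into the forwarding table. */
static unsigned fwd_hash(uint64_t key)
{
    key ^= key >> 33;
    key *= 0xff51afd7ed558ccdULL;
    key ^= key >> 29;
    return (unsigned)(key & (FWD_ENTRIES - 1));
}

/* Look up the compressed forwarding pointer for a key.  The hashed table is
 * tried first; keys whose hash slot is held by a different key are kept in
 * the CAM and matched exactly.  Returns -1 if the key is unknown. */
static int lookup_pv_index(uint64_t key)
{
    const struct fwd_entry *e = &fwd_table[fwd_hash(key)];
    if (e->valid && e->key == key)
        return e->pv_index;

    for (unsigned i = 0; i < CAM_ENTRIES; i++)
        if (fwd_cam[i].valid && fwd_cam[i].key == key)
            return fwd_cam[i].pv_index;

    return -1;   /* unknown destination: flood or drop per device policy */
}

int main(void)
{
    uint64_t mac  = 0x0002a5112233ULL;   /* example destination MAC address  */
    unsigned slot = fwd_hash(mac);

    /* Software installs the entry and the corresponding port vector. */
    fwd_table[slot] = (struct fwd_entry){ .key = mac, .pv_index = 7, .valid = 1 };
    port_vector_table[7] = (1ULL << 3) | (1ULL << 17);   /* ports 3 and 17   */

    return lookup_pv_index(mac) == 7 ? 0 : 1;
}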
In operation, a forwarding decision is made for each received frame by a forwarding processor in the device. The forwarding decision is represented by a compressed forwarding pointer that is stored in a table and associated with the received frame. At some later point in time, the frame is output to one or more destination ports in accordance with a corresponding output port vector. At this time, the compressed forwarding decision is expanded to an output port vector using the relatively small port vector table. In this fashion, large memories to store the actual output port vector are not needed since only a relatively short pointer to the port vector is stored rather than the port vector itself.
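Expanding the stored pointer at transmit time is then a single table read followed by a walk over the set bits; again an illustrative sketch with hypothetical names:

#include <stdint.h>
#include <stdio.h>

#define PV_ENTRIES 1024

/* Port vector table configured by software; index 7 is used as an example. */
static uint64_t port_vector_table[PV_ENTRIES];

/* Hypothetical enqueue hook standing in for the device's transmit path. */
static void enqueue_to_port(int port) { printf("queue frame on port %d\n", port); }

/* Expand a compressed forwarding pointer into its output port vector and
 * hand the frame to every port whose bit is set. */
static void transmit_by_pointer(uint16_t pv_index)
{
    uint64_t vector = port_vector_table[pv_index];
    for (int port = 0; vector != 0; port++, vector >>= 1)
        if (vector & 1)
            enqueue_to_port(port);
}

int main(void)
{
    port_vector_table[7] = (1ULL << 3) | (1ULL << 17) | (1ULL << 63);
    transmit_by_pointer(7);   /* only the short pointer was stored per frame */
    return 0;
}

With roughly a 10-bit pointer per forwarding entry instead of a full 64-bit vector, the earlier 32K-entry example shrinks from 32K×64 bits to about 32K×10 bits plus a single 1024×64-bit port vector table.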
Optionally, a second intermediate stage lookup may be employed whereby the compressed forwarding pointer is first mapped to a forwarding extension pointer in accordance with the value of the compressed forwarding pointer. This can be used to perform additional mapping based on the frame type, e.g., Ethernet unicast, ATM unicast, multicast, MPOA, etc. The second level pointer thus generated is then used to look up the actual output port vector.
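One possible reading of this optional second stage, again with hypothetical names and sizes, is an extension table indexed by the compressed pointer and qualified by frame type, whose entry selects the final port vector:

#include <stdint.h>

#define PV_ENTRIES  1024
#define EXT_ENTRIES 1024

/* Hypothetical frame classes used to qualify the second-stage mapping. */
enum frame_type { ETH_UNICAST, ATM_UNICAST, MULTICAST, MPOA, FRAME_TYPES };

/* Each extension entry can map the same compressed pointer to a different
 * final port vector index depending on the frame type. */
static uint16_t fwd_extension[EXT_ENTRIES][FRAME_TYPES];
static uint64_t port_vector_table[PV_ENTRIES];

/* Two-stage expansion: compressed pointer -> extension pointer -> vector. */
static uint64_t expand_two_stage(uint16_t fwd_ptr, enum frame_type type)
{
    uint16_t ext_ptr = fwd_extension[fwd_ptr][type];
    return port_vector_table[ext_ptr];
}

int main(void)
{
    port_vector_table[5] = 1ULL << 12;                  /* unicast: port 12   */
    port_vector_table[6] = (1ULL << 12) | (1ULL << 40); /* multicast: 2 ports */
    fwd_extension[100][ETH_UNICAST] = 5;
    fwd_extension[100][MULTICAST]   = 6;

    return expand_two_stage(100, MULTICAST) == port_vector_table[6] ? 0 : 1;
}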
There is provided in accordance with the present invention a method of compressing a forwarding decision in a network device having a plurality of ports, the method comprising the steps of receiving a protocol data unit over a receive port, storing a plurality of output port vectors in a port vector table having a width corresponding to the number of output port dest
Bronstein Zvika
Dosovitsky Gennady
Schzukin Golan
Shimony Ilan
Yaron Opher
3Com Corporation
Phan Trong
Zaretsky Howard