Early availability of forwarding control information

Multiplex communications – Pathfinding or routing – Switching a message which includes an address header

Reexamination Certificate


Details

C370S466000, C370S473000


active

06320859

ABSTRACT:

COPYRIGHT NOTICE
Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates generally to the field of computer networking devices. More particularly, the invention relates to a pipelining mechanism which provides for the early availability of switching information for purposes of increasing switch fabric throughput.
2. Description of the Related Art
With Local Area Network (LAN) switches now operating at data transfer rates of up to 1 Gigabit per second (Gbps), switching capacity in terms of switch fabric throughput is of critical importance.
As used herein, the terms Ethernet, Fast Ethernet, and Gigabit Ethernet shall apply to Local Area Networks (LANs) employing Carrier Sense Multiple Access with Collision Detection (CSMA/CD) as the medium access method, generally operating at signaling rates of 10 Megabits per second (Mbps), 100 Mbps, and 1,000 Mbps, respectively, over various media types, and transmitting Ethernet-formatted or Institute of Electrical and Electronics Engineers (IEEE) standard 802.3 formatted data packets.
With reference to the simplified block diagram of FIG. 1, an approach for managing access to a switch memory will now briefly be described. Switch 100 includes a switch memory 110 coupled to a plurality of port interfaces 105-108 and a memory manager 115. The switch memory 110 may temporarily buffer packets received from the port interfaces 105-108 until the one or more ports to which the packets are destined are prepared to transmit the data. In this example, the memory manager 115 may coordinate the allocation of portions (buffers) of the switch memory 110 for packet storage and maintain a mapping of some sort to associate the buffers with a port (e.g., the source or destination port). That is, the buffers of the switch memory 110 may be physically or logically organized to facilitate data storage and retrieval. In any event, the memory manager 115 may additionally arbitrate the interface between the port interfaces 105-108 and the switch memory 110. For instance, the memory manager 115 may employ various mechanisms to determine which of the port interfaces 105-108 have data to transfer into the switch memory, which of the port interfaces 105-108 are prepared to receive data from the switch memory, and which port interface 105-108 may access the switch memory 110 during a particular clock cycle.
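The buffer bookkeeping and per-cycle arbitration just described can be sketched in software. The following Python model is purely illustrative: the class and method names are hypothetical, and a round-robin grant policy is assumed for concreteness (the description above does not specify any particular arbitration mechanism).

```python
class MemoryManager:
    """Illustrative model of a switch memory manager: allocates
    buffers, maps them to ports, and arbitrates which port may
    access the shared switch memory in a given clock cycle."""

    def __init__(self, num_buffers, ports):
        self.free_buffers = list(range(num_buffers))
        self.port_buffers = {p: [] for p in ports}  # port -> buffers held
        self.ports = ports
        self._rr = 0  # round-robin pointer (assumed policy)

    def allocate(self, port):
        """Reserve a buffer for a packet associated with `port`."""
        if not self.free_buffers:
            return None  # memory full; packet must wait or be dropped
        buf = self.free_buffers.pop()
        self.port_buffers[port].append(buf)
        return buf

    def release(self, port, buf):
        """Return a buffer to the free pool after transmission."""
        self.port_buffers[port].remove(buf)
        self.free_buffers.append(buf)

    def arbitrate(self, requests):
        """Grant the memory interface to one requesting port this
        clock cycle, rotating round-robin among `requests`."""
        for i in range(len(self.ports)):
            cand = self.ports[(self._rr + i) % len(self.ports)]
            if cand in requests:
                self._rr = (self.ports.index(cand) + 1) % len(self.ports)
                return cand
        return None
```

For example, a manager over ports 105-108 would grant 106 and then 108 on successive cycles if both were requesting, ensuring neither port monopolizes the memory.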
In order to maintain the integrity of the physical or logical organization of the switch memory 110, it is typically necessary for the memory manager 115 to examine a portion of the packet (e.g., the source or destination port, and/or other packet forwarding control information) prior to storing the packet data in the switch memory 110. For example, the memory manager may need to determine where to store the packet data based upon the packet's contents and whether space is available in the particular physical or logical bin to which the packet data maps. Therefore, the memory manager 115 is provided with access to a subset, M, of the N data lines comprising data bus 120. In this manner, the memory manager 115 may determine the appropriate processing required for storing the packet data on data bus 120.
Several difficulties arise when using the above switch memory access approach. For instance, the processing required to be performed by the memory manager 115 on the portion of the packet may require more than one memory clock cycle to complete. If this is the case, two options would appear to be available in this switch architecture. The first option is to slow down the memory clock relative to the clock domain of the memory manager 115 such that the memory manager 115 is able to complete its worst-case processing within a memory clock cycle. The second option is simply to read the packet data twice: once to provide the packet data to the memory manager 115, and a second time, after the memory manager 115 has completed its processing, to transfer the packet data from the port interface 105-108 to the switch memory 110. However, both of these options result in inefficient utilization of the data bus 120. Consequently, the packet forwarding rate through the switch memory 110 is negatively impacted.
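The cost of either option can be quantified with a back-of-the-envelope calculation (the numbers below are illustrative, not from the patent): every extra pass a packet makes over the shared data bus, or every factor by which the memory clock is slowed, divides the usable forwarding bandwidth accordingly.

```python
def effective_bandwidth(bus_gbps, passes_per_packet):
    """Bus cycles spent re-reading a packet (or idled by a slowed
    memory clock) are cycles unavailable to other packets, so the
    usable forwarding bandwidth divides by the number of passes."""
    return bus_gbps / passes_per_packet

# Single-pass transfer uses the full (hypothetical) 2 Gbps bus...
assert effective_bandwidth(2.0, 1) == 2.0
# ...while the read-twice option halves the forwarding rate.
assert effective_bandwidth(2.0, 2) == 1.0
```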
BRIEF SUMMARY OF THE INVENTION
A method and apparatus for increasing the throughput and forwarding rate of a switch fabric are described. According to one aspect of the present invention, a packet forwarding device includes a plurality of port interface devices (PIDs), memory access circuitry, and a management device. The PIDs are configured to fragment packets into cells. A portion of a cell serves as forwarding information. The memory access circuitry is coupled to the PIDs to receive cell data. The memory access circuitry includes a data interface that outputs the cell data and an independent interface that outputs the forwarding information. A memory is coupled to the data interface of the memory access circuitry to temporarily store the cell data received from the memory access circuitry. A management device is coupled to the independent interface of the memory access circuitry to receive the forwarding information. The management device employs the forwarding information to organize cells into one or more groups. Advantageously, this architecture allows highly pipelined memory access processing.
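The apparatus aspect above, in which the memory access circuitry fans a cell out on two independent interfaces, can be sketched as follows. The class and field names are hypothetical, and the cell format is an assumption for illustration; the patent does not specify one.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    forwarding_info: int   # e.g., a destination-queue identifier (assumed)
    payload: bytes         # a fragment of the original packet

class ManagementDevice:
    """Groups cells by their forwarding information, e.g., one
    group per output queue."""
    def __init__(self):
        self.groups = {}
    def observe(self, forwarding_info):
        self.groups[forwarding_info] = self.groups.get(forwarding_info, 0) + 1

class MemoryAccessCircuitry:
    """Outputs each cell on a data interface (to memory) and its
    forwarding information on an independent interface (to the
    management device), as in the summary above."""
    def __init__(self, memory, management_device):
        self.memory = memory              # data interface
        self.manager = management_device  # independent interface
    def accept(self, cell):
        self.memory.append(cell)                    # data path
        self.manager.observe(cell.forwarding_info)  # control path
```

The key design point this models is that the management device never needs to touch the memory's data path: it sees only the small forwarding-information field, which is what enables the pipelining described next.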
According to another aspect of the present invention, a method is provided for forwarding data through a network device. A first port interface of the network device receives a packet. The first port interface transfers a cell, including data from the packet and forwarding information associated with the packet, to an intermediate device. The intermediate device outputs the cell to a memory via a data path and outputs the forwarding information to a queue managing device via a separate and independent path. Based upon the forwarding information, the queue managing device determines an output queue with which to associate the cell. Subsequently, the cell is transferred to a second port interface device associated with the output queue. Advantageously, this approach allows the memory access processing to be partitioned into two independent processes with overlapping execution. For example, the first process may make a determination as to which output queue a cell is to be associated while a second process may concurrently perform the actual writing and/or reading of cells to/from the memory.
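The overlapping execution described above can be sketched as a two-stage software pipeline (the actual device would realize this in hardware; the function and its cycle accounting are purely illustrative): in each step, the queue decision for cell i+1 proceeds concurrently with the memory write of cell i.

```python
def pipelined_forward(cells, decide_queue, write_memory):
    """Two-stage pipeline sketch: stage 1 (output-queue decision)
    for cell i+1 overlaps stage 2 (memory write) for cell i, so
    N cells complete in N+1 cycles rather than 2N."""
    schedule = []  # (cycle, operation) trace, for illustration
    decided = []   # cells paired with their resolved output queue
    cycle = 0
    for i, cell in enumerate(cells):
        # Stage 1: decide the output queue from forwarding info.
        decided.append((cell, decide_queue(cell)))
        schedule.append((cycle, f"decide[{i}]"))
        if i > 0:
            # Stage 2: concurrently write the *previous* cell.
            write_memory(decided[i - 1])
            schedule.append((cycle, f"write[{i - 1}]"))
        cycle += 1
    if decided:
        write_memory(decided[-1])  # drain the final pipeline stage
        schedule.append((cycle, f"write[{len(cells) - 1}]"))
    return schedule
```

Running this on three cells shows the overlap: the trace ends at cycle 3, i.e., four cycles in total instead of the six a strictly sequential decide-then-write sequence would require.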
Other features of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.




