Reexamination Certificate
1998-07-17
2002-01-29
Mizrahi, Diane (Department: 2771)
Data processing: database and file management or data structures
Database design
Data structure types
C707S793000, C709S216000, C711S140000, C711S216000
Reexamination Certificate
active
06343289
ABSTRACT:
COPYRIGHT NOTICE
Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates generally to the field of computer networking devices and database search optimization. More particularly, the invention relates to a method and apparatus for efficiently searching a forwarding database or similar data structure.
2. Description of the Related Art
A network device's performance is based on how quickly it can forward a packet. Before a network device, such as a bridge, a router, or a Layer 2 or Layer 3 switch, can forward a packet, typically, it must locate an entry in its forwarding database corresponding to a search key specified in the packet. As a result, the organization and searching mechanism of the forwarding database are critical to making a high-performance network device. As used herein, the term “forwarding database” is meant to refer broadly to any type of data structure for organizing data that is used in connection with forwarding packets (e.g., routing/bridging) from an ingress port of a network device to one or more egress ports of the network device.
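To make the role of the forwarding database concrete, the following is a minimal sketch in C of the per-packet lookup the passage describes. The entry layout, field names, and the toy flat table are invented for illustration and are not taken from the patent.

/*
 * Hypothetical sketch: extract a search key from the packet, look it up in a
 * forwarding database, and use the matching entry to pick an egress port.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct fwd_entry {
    uint32_t key;          /* e.g., derived from the destination address */
    uint16_t egress_port;  /* where to send matching packets */
};

/* Toy "database": a flat array stands in for the real structure. */
static const struct fwd_entry fdb[] = {
    { 0x0a000001, 3 },
    { 0x0a000002, 7 },
};

static int lookup_egress_port(uint32_t search_key)
{
    for (size_t i = 0; i < sizeof(fdb) / sizeof(fdb[0]); i++)
        if (fdb[i].key == search_key)
            return fdb[i].egress_port;
    return -1;  /* no match: flood or drop, depending on device policy */
}

int main(void)
{
    printf("egress port: %d\n", lookup_egress_port(0x0a000002));  /* prints 7 */
    return 0;
}

How quickly a real device can perform the equivalent of lookup_egress_port() for every arriving packet is precisely the performance concern the background section raises.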
Referring to FIG. 1, a prior forwarding database organization and search approach are briefly described. Forwarding database entries 110 are stored as part of a hash table 100. In this example, each entry includes four double words (“d-words”), e.g., four 32-bit words, w1 through w4. The entries also include a pointer to the next entry in the bin or a null pointer if the entry happens to be the last one in the bin. One or more of the four d-words may be used to store a key associated with the forwarding database entry and the remainder of the d-words contain data to facilitate forwarding.
An entry is associated with a bin in the hash table 100 based upon the hashing scheme employed and the key that is associated with the particular entry. A physical hash collision occurs when more than one entry hashes to the same bin. Because the worst case for a forwarding database search is a search for an entry that turns out to be the last in a bin having many entries, much time has been spent worrying about how to avoid physical hash collisions. A typical scheme used to avoid such collisions involves the use of a hash table that includes more bins than the number of forwarding database entries. That is, the hash table has an index space that is greater than the number of entries that are expected to be stored in the table. In this manner, the probability that more than one forwarding database entry will reside in a given bin is decreased.
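As a hedged illustration of that oversized-table scheme, the sketch below hashes a key into a table whose index space is several times the expected entry count; the particular mixing function is a placeholder and is not the hashing scheme of the patent.

/*
 * Illustration only: with NUM_BINS well above EXPECTED_ENTRIES, the chance of
 * two entries landing in the same bin (a physical hash collision) is reduced.
 */
#include <stdint.h>

#define EXPECTED_ENTRIES  1024
#define NUM_BINS          4096          /* index space > expected entry count */

static inline uint32_t hash_to_bin(uint32_t key)
{
    /* Any reasonable mixing function serves for the illustration. */
    key ^= key >> 16;
    key *= 0x7feb352dU;
    key ^= key >> 15;
    return key % NUM_BINS;              /* bin index in [0, NUM_BINS) */
}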
Supposedly having solved the problem associated with searching large bins by reducing the occurrence of physical hash collisions, this prior search approach optimistically assumes that the current entry being examined is, in fact, the matching entry. Consistent with this assumption, after having issued a request for the key associated with the first entry in the bin, the next memory access request is for data associated with the first entry. In this example, the memory accesses are labeled 1 through 6. The first memory access 1 represents a request from memory for the first word, w1, of the first entry 111. The typical memory access time is two or three clocks, depending upon the type of memory employed. Thus, the first word of data, w1, will be available two or three clocks from the time the request is made. However, this delay does not mean the search algorithm is idle during the lag between the request for data and its subsequent availability. Rather, taking advantage of the pipelined nature of modern memories, search schemes typically issue memory access requests on each clock until a match is located or the end of the bin is reached. Again, due to the optimistic nature of the prior search approach, the next memory access 2 is for the next d-word, w2, of the first entry 111. The search continues to request data from the first entry 111, e.g., memory access requests 3 and 4, until the entry's key has been compared to the search key and it has been determined whether or not the first entry 111 is the desired forwarding database entry. If the first entry 111 is not the desired entry, then a memory pipeline delay is incurred while the pointer for the next entry 112 is requested by memory access 5.
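The clock cost of this optimistic approach can be illustrated with a rough model (the author's sketch, not code from the patent): each request takes one clock to issue, its data returns a few clocks later, and a miss forces the walk to wait for the next-entry pointer before any request for the following entry can be issued.

/*
 * Rough cost model of the prior optimistic access order. Accesses 1-4 fetch
 * w1..w4 of the current entry back to back; only after the key comparison
 * fails is the next pointer fetched (access 5), so each miss pays a pipeline
 * delay before the walk can continue.
 */
#include <stdint.h>

#define MEM_LATENCY 3   /* clocks from request to data ("two or three" in the text) */

struct fdb_entry { uint32_t w1, w2, w3, w4; struct fdb_entry *next; };

/* Returns the total clocks a search of one bin would take under this model. */
static unsigned optimistic_search_clocks(const struct fdb_entry *e, uint32_t key)
{
    unsigned clocks = 0;
    while (e) {
        clocks += 4;                      /* accesses 1-4: w1..w4, one per clock */
        if (e->w1 == key)                 /* assume the key lives in w1 */
            return clocks + MEM_LATENCY;  /* wait for the compared data to arrive */
        clocks += 1 + MEM_LATENCY;        /* access 5: next pointer, plus the stall */
        e = e->next;
    }
    return clocks;                        /* end of bin reached without a match */
}

Under this model the cost grows by roughly eight clocks per mismatching entry, which is the worst-case penalty the passage describes.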
While producing excellent best case performance, i.e., when the first entry is the desired entry, one disadvantage of this prior approach is that the worst case search is very expensive in terms of clock cycles. Each time this optimistic search approach encounters a forwarding database entry that is not the one desired, a memory pipeline delay is incurred as the search algorithm resets its pointers and loads the key from the next forwarding database entry. In general, the prior search approach illustrated seeks to optimize the best case. However, as a result of this short-sighted approach, the worst case search is actually made worse. Therefore, excellent best case performance is achieved by sacrificing the overall average search time.
In light of the foregoing, what is needed is a more thoughtful search approach that will reduce the overall average search time.
BRIEF SUMMARY OF THE INVENTION
A method and apparatus for efficiently searching a forwarding database or similar data structure are described. According to one aspect of the present invention, a request is made to load a first key from memory that is associated with a database entry. In a pipelined manner, a subsequent request is made to load a next key from memory that is associated with a next database entry. The load requests include outputting an address to the memory that corresponds to a location in memory storing the desired key. In any event, after receipt of the requested data from memory, it is determined whether the associated database entry is a matching entry by comparing the data to a search key. This process of loading subsequent keys is repeated until a predetermined condition is met. Advantageously, this search approach limits the search time required for the worst case scenario by assuming the comparison between the current forwarding database entry key and the search key will not result in a match and thereby reduces the overall average search time.
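The following is a minimal sketch, in C, of the key-first search the summary describes, assuming entries whose keys can be requested independently of their data. The memory-request helpers are collapsed stand-ins for a real pipelined memory, and none of the names come from the patent.

/*
 * Sketch of a pessimistic, key-first search: the key of the next entry is
 * requested before the comparison on the current entry completes, so a miss
 * does not drain the memory pipeline.
 */
#include <stdint.h>

struct fdb_entry { uint32_t key; uint32_t data[3]; struct fdb_entry *next; };

/* Stand-ins for a pipelined memory interface: in hardware the issue and the
 * data return are separated by several clocks; here they are collapsed so the
 * sketch stays runnable. */
static void     mem_issue_read(const volatile void *addr) { (void)addr; }
static uint32_t mem_collect(const uint32_t *addr)         { return *addr; }

static const struct fdb_entry *pessimistic_search(const struct fdb_entry *e,
                                                  uint32_t search_key)
{
    while (e) {
        mem_issue_read(&e->key);               /* request current entry's key */
        if (e->next)
            mem_issue_read(&e->next->key);     /* assume no match: request the
                                                  next key in the same pipeline */
        if (mem_collect(&e->key) == search_key)
            return e;                          /* match: entry data fetched next */
        e = e->next;                           /* miss: next key already in flight */
    }
    return NULL;                               /* predetermined condition met:
                                                  end of bin reached */
}

Because a request for the following key is always outstanding when a comparison fails, the per-miss cost stays close to one issue slot per entry rather than a full memory-latency stall, which is how the worst case (and therefore the average) is bounded.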
According to another aspect of the present invention, a network device may forward data more efficiently. Data is received at a first port of the network device and a search key is extracted from the data. A forwarding database is then searched for an entry that corresponds to the search key. Ultimately, the data is forwarded to a second port of the network device based upon a matching entry located by the search. The search includes retrieving keys from entries of the forwarding database and comparing the search key to the keys until a matching entry is located. The retrieval includes causing a pipelined memory in which the forwarding database is stored to access memory locations in an order that minimizes a worst case search of the forwarding database.
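A short sketch of that end-to-end flow, with invented names and a stubbed single-entry search standing in for the pipelined lookup sketched above; only the sequence receive, extract key, search, forward comes from the text.

/*
 * Hypothetical ingress handler: extract a search key from the received data,
 * search the forwarding database, and forward on the matched egress port.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct packet { const uint8_t *bytes; size_t len; };

/* Stubs standing in for the real header parser and database search. */
static uint32_t extract_search_key(const struct packet *p) { return p->bytes[0]; }
static int      search_egress_port(uint32_t key)           { return key == 0x2a ? 5 : -1; }

static void handle_ingress(uint16_t ingress_port, const struct packet *p)
{
    uint32_t key  = extract_search_key(p);
    int      port = search_egress_port(key);
    if (port >= 0)
        printf("port %u -> port %d\n", ingress_port, port);   /* forward to matched egress port */
    else
        printf("port %u -> flood\n", ingress_port);           /* no match: flood or drop per policy */
}

int main(void)
{
    uint8_t b = 0x2a;
    struct packet p = { &b, 1 };
    handle_ingress(1, &p);   /* prints "port 1 -> port 5" */
    return 0;
}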
Other features of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.
REFERENCES:
patent: 5521910 (1996-05-01), Mathews
patent: 5757924 (1998-05-01), Friedman et al.
patent: 5848257 (1998-12-01), Angle et al.
patent: 5996021 (1999-11-01), Civanlar et al.
patent: 6006264 (1999-12-01), Colby et al.
patent: 6157955 (2000-12-01), Narad et al.
patent: 6173384 (2001-01-01), Weaver
“Fast address lookups using controlled prefix expansion” by V. Srinivasan et al., Washington University, St. Louis, ACM Transactions on Computer Systems, vol. 17, No. 1, pp. 1-40 (Feb. 1999).
“Forwarding engine for fast routing lookups and updates” by Daxiao Yu et al., Dept. of Elect. Engin., San Jose State Univ.,
Hunter Van A.
Momirov Milan
Blakely, Sokoloff, Taylor & Zafman LLP
Mizrahi Diane
Nortel Networks Limited