Techniques for efficient memory management for longest...

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

C365S049130, C380S028000

Reexamination Certificate

active

06725326

ABSTRACT:

A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the patent and trademark office patent file or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to memory management schemes for use in data communications, specifically packet routing.
2. Description of the Related Art
In routing of a packet in a modern data communication system, one of the many tasks to accomplish is to determine the longest Internet Protocol (IP) prefix match for a destination address derived from the packet header. Many systems exist for determining such matches, both in the context of Internet Protocol routing as well as other forms of routing and switching (such as those employed with asynchronous transfer mode (ATM) communications). See, for example, U.S. Pat. No. 5,809,501 to Noven, entitled "Method and System of Database Management in an Asynchronous Transfer Mode (ATM) Environment," and U.S. Pat. No. 5,884,297, also to Noven, entitled "System and Method for Maintaining a Table in Content Addressable Memory Using Hole Algorithms." Both of these patents are incorporated herein by reference in their entireties.
Within routers and switches today, the process whereby the routing function is carried out is well known in the art. Part of that process is the step of determining whether or not a router is aware of the corresponding next hop destination for packet routing. This typically uses a lookup of certain elements of the packet or cell header in a table to determine packet routing. Ternary content addressable memory (TCAM) is a type of memory that supports such lookups, and in particular the longest binary prefix lookup commonly used in routers.
As is well-known in the art, a content addressable memory (CAM) lookup outputs an exact match: either the parameter to be looked up (i.e., the lookup key) is in the CAM table or it is not. If the key is found in the table, an exact match exists and the address in the CAM table where the entry was found is provided as the output of the lookup. TCAMs differ from CAMs in that a TCAM allows the user to mask off bits such that a lookup key is said to match a TCAM entry even if all bits in the key do not match the entry. In other words, by masking out certain bits in the TCAM as “don't care” bits, more than one lookup key can be said to match a TCAM entry. If several TCAM entries match the lookup key, the top-most one (i.e., the entry having the lowest address) is returned. This feature is useful in supporting the longest IP prefix lookup function because the object of the search is only to find the longest prefix match, not to find all exact matches.
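As a rough software illustration of this masking behavior (a minimal sketch only; the entry class, bit widths, and example values below are hypothetical and not drawn from the patent), a ternary entry can be modeled as a stored value plus a mask of "care" bits, so that a key matches whenever it agrees with the entry on every unmasked bit; two entries of different prefix lengths can then match the same key, and the lowest-addressed match is the one reported:

    # Minimal model of a ternary entry: a mask bit of 1 means "care",
    # a mask bit of 0 means "don't care". Widths here are illustrative.
    class TernaryEntry:
        def __init__(self, value: int, mask: int):
            self.value = value
            self.mask = mask

        def matches(self, key: int) -> bool:
            # Match if the key agrees with the stored value on every
            # unmasked ("care") bit; masked bits are ignored.
            return (key ^ self.value) & self.mask == 0

    # Two entries that both match the same 8-bit key because of don't-care bits:
    e_long  = TernaryEntry(0b10110000, 0b11111111)  # all 8 bits significant
    e_short = TernaryEntry(0b10000000, 0b11000000)  # only the top 2 bits significant
    key = 0b10110000
    assert e_long.matches(key) and e_short.matches(key)
    # In a TCAM, the matching entry at the lowest address would be returned.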
Longest prefix lookups are commonly implemented using a TCAM by placing the prefix entries in descending order sorted by the length of prefix.
FIG. 1 illustrates a prior art TCAM-implemented lookup table 100 containing these sorted prefix entries 110. The non-prefix bits of the IP address in the TCAM are masked off; the masked bits 150 are illustrated by dark cross-hatching. For Internet Protocol version 4 (IPv4), the IP addresses are 32 bits long: sorting these addresses into a TCAM thus creates a table having 32 different regions, where each region 1 consists of prefix entries of a length i, i being the number of unique bits in an entry.
FIG. 1 shows a representative prior art TCAM containing prefix entries 110 and holes 120. Regions 1 are further identified each from the others by the number of unique prefix bits contained therein: region 1-32 is 32 unique bits wide; region 1-31 is 31 bits (with the one low-order bit masked off by the TCAM); and so on through region 1-1, which has the 31 low-order bits masked off. For example, entry 110-29 is in prefix region 1-29 and has 29 unique prefix bits and 3 bits masked off as "don't care" 150.
The lowest address entry is 110-32, in the 32-bit prefix region 1-32. The highest entry (in the bottom-most position) in TCAM table 100 is hole 120 in region 1-1.
FIG. 1 is only a representative illustration of a portion of a TCAM configured for longest prefix match lookups. One of ordinary skill in the art will readily understand that TCAMs of widths greater than 32 bits are commonly used and that the length ("depth") of the TCAM can vary.
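As a purely illustrative sketch of this layout (assuming 32-bit IPv4 prefixes; the helper names and sample prefixes are invented for the example), the mask for a prefix of length i leaves the 32 - i low-order bits as don't-care bits, and sorting entries by descending prefix length yields the region ordering of FIG. 1:

    # Build (value, mask) pairs for IPv4 prefixes and order them by descending
    # prefix length, mirroring the 32 regions of FIG. 1 (longest prefixes at
    # the lowest addresses).
    def prefix_mask(length: int) -> int:
        # 'length' significant high-order bits; 32 - length low-order don't-care bits
        return ((0xFFFFFFFF << (32 - length)) & 0xFFFFFFFF) if length else 0

    def build_sorted_table(prefixes):
        # prefixes: iterable of (network_as_int, prefix_length) pairs
        entries = [(net & prefix_mask(ln), prefix_mask(ln), ln) for net, ln in prefixes]
        return sorted(entries, key=lambda e: e[2], reverse=True)

    table = build_sorted_table([
        (0x0A000000, 8),    # 10.0.0.0/8
        (0x0A010000, 16),   # 10.1.0.0/16
        (0x0A010100, 24),   # 10.1.1.0/24
    ])
    # table[0] is the /24 entry (lowest address), table[-1] is the /8 entry.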
In operation, lookup key 105 is compared to every entry 110 in TCAM 100. Note that hole 120 is not necessarily all zeros; in some embodiments of the present invention, the contents of a single bit (or a field having a plurality of bits) in the unmasked portion of the entry 110 signals that the entry is in fact a hole 120. The comparison of key 105 to all entries in TCAM 100 is done simultaneously by a bit-wise NOT(XOR) with every unmasked bit in every entry. If an AND function of the results of all of the NOT(XOR) operations for an entry 110 is TRUE, then all bits of that entry 110 match key 105. The address of the lowest-addressed (and therefore longest prefix) entry is provided to the TCAM output.
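A small software simulation of that comparison follows (illustrative only: real TCAM hardware evaluates every entry in parallel, and the explicit hole flag used here is an assumed stand-in for the hole-marking bit or field mentioned above):

    # Simulate the NOT(XOR)/AND comparison over a table of (value, mask, is_hole)
    # tuples ordered lowest address first. An entry matches when key XOR value
    # is zero at every unmasked bit, i.e. (key ^ value) & mask == 0; the
    # lowest-addressed match (longest prefix) is returned.
    def tcam_lookup(table, key):
        for address, (value, mask, is_hole) in enumerate(table):
            if is_hole:
                continue                      # holes never match a real lookup key
            if (key ^ value) & mask == 0:     # all unmasked bits agree
                return address
        return None                           # no matching prefix

    routes = [
        (0x0A010100, 0xFFFFFF00, False),      # 10.1.1.0/24
        (0x0A010000, 0xFFFF0000, False),      # 10.1.0.0/16
        (0x00000000, 0x00000000, True),       # a hole
    ]
    assert tcam_lookup(routes, 0x0A010107) == 0   # the /24 entry wins over the /16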
Note that in the upcoming standard Internet Protocol version 6 (IPv6), the IP address is 128 bits. IPv6 will thus necessitate a TCAM having 128 distinct IP prefix length regions.
Although the ability of the TCAM to rapidly produce longest prefix lookup matches is advantageous, it complicates the task of inserting and deleting prefixes within the TCAM table. This is so because the TCAM table must be maintained in prefix-length-sorted order. Though TCAM updates (inserts and deletes) are infrequent, a time-consuming update mechanism steals valuable processor resources and can potentially result in dropped packets. Such deleterious effects must be avoided if high packet switching/routing speeds are to be maintained. The problem is only exacerbated by the expected industry switch to the IPv6 architecture, with its longer addresses.
An insert of a prefix of length i can be accomplished in one TCAM write cycle if there is a vacant entry in the i-th region (e.g., hole 120 in region 1-32); otherwise, the TCAM write may need to move a vacant entry from a neighboring region (e.g., region 1-31) in order to provide space in the i-th region. In particular, if up to k regions on either side of the i-th region have no vacant entries, then one would need to perform k moves to provide a space. Since the cost of a move (involving both reading and writing the entry) is much higher than that of a lookup, it is desirable to minimize the number of moves involved in TCAM updates.
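The reason a hole can be pushed across a full region with a single write is that entries of equal prefix length may sit in any order within their region; the sketch below models this chain move (an illustrative model only, assuming the borrowed hole lies at a higher address than the target region; the list-of-tuples table and the function name are invented for the example):

    # Illustrative model: the TCAM is a Python list where the index is the
    # address, None marks a hole, and an occupied slot holds (prefix_len, value)
    # with prefix lengths non-increasing from address 0 downward.
    def borrow_hole(tcam, target_len, hole_addr):
        """Push the hole at hole_addr upward, one write per full region crossed,
        until a prefix of length target_len can be written into it."""
        writes = 0
        while (hole_addr > 0 and tcam[hole_addr - 1] is not None
               and tcam[hole_addr - 1][0] < target_len):
            crossed_len = tcam[hole_addr - 1][0]
            top = hole_addr - 1
            while top > 0 and tcam[top - 1] is not None and tcam[top - 1][0] == crossed_len:
                top -= 1                    # locate the crossed region's top-most entry
            tcam[hole_addr] = tcam[top]     # one TCAM write moves it to the region's bottom
            tcam[top] = None                # ...leaving the hole just above that region
            hole_addr = top
            writes += 1
        return hole_addr, writes

    # Example: the /16 region is full, so inserting a /24 costs one move plus the write.
    tcam = [(24, 0x0A010100), (16, 0x0A010000), (16, 0x0A020000), None]
    addr, moves = borrow_hole(tcam, 24, 3)
    tcam[addr] = (24, 0x0A020200)
    assert moves == 1 and [e[0] for e in tcam] == [24, 24, 16, 16]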
In one prior art method commonly used, a "naive" algorithm for TCAM updates is employed. The naive (or "lazy") algorithm moves TCAM entries only when it needs to. A delete in a given region 1 creates a hole 120 in that region. Conversely, an insert into a region 1 requires the presence of a hole 120 in that region in which to insert the entry. A region is referred to as a "full" region when it does not have any holes (e.g., region 1-30), and thus an insert into a full region requires moving a hole 120 into that region from the nearest non-full region 1. This may necessitate moving entries in adjoining regions as well, in order to make room. It is desirable that the number of moves required to supply a hole to a full region be minimized. Indeed, it is preferable to have some holes in each region to facilitate updating.
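In rough, illustrative form (a sketch of the lazy policy only, not of the patent's improved scheme; it reuses the borrow_hole sketch above and the same list-of-tuples table model, and assumes a borrowable hole exists below the target region):

    # Lazy ("naive") updates: entries move only when an insert finds its region full.
    def naive_delete(tcam, addr):
        tcam[addr] = None                   # a delete simply leaves a hole behind

    def hole_fits(tcam, addr, prefix_len):
        # A hole can hold a prefix of this length if no entry above it is shorter
        # and no entry below it is longer (the sorted order is preserved).
        above = [e[0] for e in tcam[:addr] if e is not None]
        below = [e[0] for e in tcam[addr + 1:] if e is not None]
        return (not above or min(above) >= prefix_len) and \
               (not below or max(below) <= prefix_len)

    def naive_insert(tcam, prefix_len, value):
        # 1. A hole already usable by this region costs a single write.
        for addr, slot in enumerate(tcam):
            if slot is None and hole_fits(tcam, addr, prefix_len):
                tcam[addr] = (prefix_len, value)
                return 1
        # 2. Otherwise borrow the nearest hole below the region and chain-move
        #    it up, paying one extra write per full region crossed.
        last_long = max((a for a, e in enumerate(tcam)
                         if e is not None and e[0] >= prefix_len), default=-1)
        hole = next(a for a, e in enumerate(tcam) if e is None and a > last_long)
        addr, moves = borrow_hole(tcam, prefix_len, hole)
        tcam[addr] = (prefix_len, value)
        return moves + 1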
Consider a TCAM with a certain maximum number of entries. One can readily see that some routing applications could use enough IP prefixes to result in a TCAM occupancy rate of 90% or more. In fact, such occupancy rates are commonly seen in the art today. A 90% occupancy rate implies that 10% of the TCAM entries are holes. A "good distribution" of these holes is one where they are almost equally distributed among the 32 regions (in an IPv4 system) so that one does not e
