Method and apparatus for high-speed longest prefix and...

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing

Reexamination Certificate


Details

US classes: C709S245000, C709S242000, C710S108000, C370S392000
Type: Reexamination Certificate
Status: Active
Patent number: 06631419

ABSTRACT:

TECHNICAL FIELD
The present invention relates generally to data communications, and more particularly to systems and methods of routing or monitoring network data according to routing rules.
BACKGROUND OF THE INVENTION
In connectionless networks like the Internet, each router along the packet path from a source must make an independent decision on how to forward a packet closer to its destination. Each such router and switch must also make an independent decision on how to allocate its resources (buffer memory, fabric capacity, link bandwidth, etc.) when faced with competition among packets for these resources. A system makes this decision by examining the initial portion (“header”) of the packet, and from fields contained within the header, determining the appropriate local action for the packet.
The speed (i.e., data throughput) of a router can be limited by one of many factors. A first factor is the data interface of the device. The data interface is that portion of the device that is connected to the data transmitting media. Presently, with fiber optic and other technologies, data transfer rates of 10 gigabits per second (Gbits/s) and greater can be achieved. A second factor is the “switch matrix” within a router. The switch matrix provides the physical data path between the interfaces of a device, and typically includes a number of integrated circuits (chips) connected by one or more buses. Advances in scheduling algorithms (to determine priority and timing of data paths), as well as improvements in interconnect technology (both chip-to-chip and on-chip), currently allow for transfer rates of hundreds of Gbits/s. However, this high data transfer rate for a switch matrix assumes that the router has already determined the correct output port for the data packet. This leads to a third, and arguably the most difficult, factor involved in scaling up router throughput: the lookup engine.
The lookup engine searches a table of “rules” which define sets of possible values for the various header fields of interest. (What we call a “rule” is also known variously as a route (when it specifies destination-prefix only), filter (in a firewall environment), traffic class, aggregation rule, etc.). These rules are derived from routing protocols, manual configuration, or other means. While a single rule may specify an exact packet header, more typically it specifies a range of values. The lookup engine searches the table, using the packet header information as a search key. The result of the search (“associated data”) tells the system where and how to forward the packet.
Were destination addresses all uniform, the lookup operation would require a simple “exact-match” search, for which there are many well-known approaches. Unfortunately, this is not the case. Some protocols, such as the Internet protocol (IP), require “longest prefix matching” or “masked prefix matching” lookup operations.
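For contrast, the following is a minimal sketch (in Python, not part of the patent) of what such an exact-match lookup might look like if every rule did name one complete address; the table contents and next-hop labels are purely illustrative.

```python
# Illustrative exact-match lookup: a plain hash table keyed on complete
# addresses. This only works when each rule names one exact address,
# which is precisely the assumption that prefix-based rules break.
exact_rules = {
    "192.9.4.1": "A",   # hypothetical full-address rules
    "10.0.0.7":  "B",
}

def exact_match(addr: str):
    # One hash probe; there is no notion of a "longer" or "shorter" match.
    return exact_rules.get(addr)

print(exact_match("192.9.4.1"))   # 'A'
print(exact_match("192.9.4.2"))   # None: a near-miss finds nothing
```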
The problem of longest prefix matching may be best understood by example. Accordingly, a two-rule example for a 32-bit address is given below. Addresses are written by separating “octets” (8-bit data sections) with periods. Thus, the address 192.9.0.0 is equal to (11000000 00001001 00000000 00000000). The two routing rules are:
192.9/16 → Next hop “A”
192.9.4.0/26 → Next hop “B”
The first rule requires that the first 16 bits of the 32-bit search key be examined; if they are equal to 192.9, the packet will be forwarded to a next hop (port or interface) identified as A. However, within the same address range is a subset of addresses covered by the second rule, which includes the same first 16 bits as the first rule but adds ten more bits for comparison. Thus, the second rule requires that the first 26 bits of the search key be examined. If the first 26 bits are equal to 192.9.4.0, the packet will be forwarded to the next hop identified as B. IP and IPX routing, as well as other applications, require that the longest prefix match (26 bits in the example) take precedence over a shorter prefix match (16 bits in the example). Thus, a longest prefix matching capability is a necessary function for these router applications.
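A minimal sketch (in Python, not from the patent) of the longest-prefix-match behavior just described; the rule table, helper names, and mask arithmetic are illustrative assumptions.

```python
# Illustrative longest-prefix match over the two example rules above.
def dotted_to_int(addr: str) -> int:
    """Convert a dotted-octet address such as '192.9.4.1' to a 32-bit integer."""
    value = 0
    for octet in addr.split("."):
        value = (value << 8) | int(octet)
    return value

# (prefix value, prefix length in bits, next hop)
RULES = [
    (dotted_to_int("192.9.0.0"), 16, "A"),
    (dotted_to_int("192.9.4.0"), 26, "B"),
]

def longest_prefix_match(key: int):
    """Return the next hop of the matching rule with the longest prefix."""
    best_len, best_hop = -1, None
    for prefix, length, hop in RULES:
        mask = ((1 << length) - 1) << (32 - length)   # top `length` bits set
        if (key & mask) == (prefix & mask) and length > best_len:
            best_len, best_hop = length, hop
    return best_hop

print(longest_prefix_match(dotted_to_int("192.9.4.1")))    # 'B': the 26-bit rule wins
print(longest_prefix_match(dotted_to_int("192.9.200.1")))  # 'A': only the 16-bit rule matches
```

A linear scan of the rule table is of course far too slow for a real lookup engine; the sketch is meant only to pin down the precedence requirement.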
The masked-prefix matching problem is a generalization of longest-prefix matching. In longest-prefix matching, each rule “examines” a contiguous set of bits in the search key, starting from the most-significant bit. The 192.9/16 rule above, for example, examines the high 16 bits of the search key and tests against the value 192.9. In masked prefix matching, each rule examines a subset of the bits in the search key, but the subset does not have to be contiguous, nor does it have to begin at the most-significant bit. For example, a masked-prefix rule may take the form
192.9.x.0/26 → Next hop “C”
which would examine the first 16 bits, ignore the next 8 bits, and examine the next 2 bits of the search key. A match is successful only if the first 16 bits are equal to 192.9 and the 25th and 26th bits are equal to 00. Any don't-care bits within the prefix we call “gaps” in the prefix. An ordinary longest-prefix match does not permit gaps. Note that although the above example shows a gap of exactly 8 bits aligned on an 8-bit boundary, in a masked-prefix rule gaps (don't-care bits) may appear in any bit position(s) within the prefix.
A number of approaches have arisen to address the longest prefix matching problem; some work, though much less, has been done on masked prefix matching. A common technique, because of the complexity of longest prefix and masked prefix searches, involves caching the results of recent complex lookup operations in an exact-match search table, such as a binary content addressable memory (CAM) or a hash table. If a packet header arrives that exactly matches one of the cached lookup operations, there is a cache hit, and the complex lookup operation for that particular destination address is avoided. Such an approach can be useful where the number of different addresses handled by the router is limited. However, in the event the router must handle a large number of addresses, the use of cached longest prefix match results is not practical. Further, this approach exhibits “traffic dependency”: its performance depends on the assumption that the router will need to look up a small number of unique packet header values over and over again, which is not true for many traffic patterns. Finally, although caching can reduce the load on the longest prefix matching or masked prefix matching system, it still relies on an underlying complex search for any header that is not found in the cache, so the problem of fast hardware searching remains.
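A rough sketch (Python, names hypothetical) of the caching arrangement described above: an exact-match cache in front of the slower prefix search.

```python
# Illustrative lookup cache: a dict stands in for the hash table or binary CAM,
# and slow_longest_prefix_match() stands in for the full complex search.
lookup_cache = {}

def slow_longest_prefix_match(header: int) -> str:
    """Placeholder for the full longest-prefix / masked-prefix search."""
    raise NotImplementedError

def cached_lookup(header: int) -> str:
    if header in lookup_cache:                    # cache hit: exact match on the header
        return lookup_cache[header]
    result = slow_longest_prefix_match(header)    # cache miss: fall back to the complex search
    lookup_cache[header] = result
    return result
```

The traffic dependency noted above is visible directly in the sketch: the cache only pays off when identical header values recur.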
A second technique relies on specialized memory circuits. Such memories are typically variations on standard CAMs, and include internal circuitry capable of performing a longest prefix match operation or other masked searches. Such specialized CAMs have drawbacks, however, in that they can be more expensive than commodity memory devices, consume more power, and are limited in their density. In addition, because of specialized semiconductor process requirements, such CAM variations can be difficult to integrate with logic circuits in the event the CAM variation is to be “embedded” to form a single integrated circuit.
Other prior art approaches in the literature include “software-oriented” and “hardware-oriented” search algorithms.
Software-oriented algorithms are designed to run on conventional computer system platforms. “Patricia - a practical algorithm to retrieve information coded in alphanumeric,” Journal of the ACM, vol. 15, no. 4, October 1968, pp. 515-534, by Morrison, discloses an algorithm (named “Patricia”) that is utilized by many routers, in one variation or another, for performing the lookup function. A drawback to the Patricia algorithm is the number of memory accesses required by the system running the algorithm. In a worst-case lookup, 32 memory accesses are required. For the average lookup, between five and ten memory accesses are required.
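To make the memory-access cost concrete, below is a plain one-bit-at-a-time binary trie sketch (Python, illustrative; the real Patricia algorithm additionally compresses one-child paths). Each key bit consumed costs roughly one node visit, i.e. one memory access, which is where worst-case figures on the order of 32 accesses for a 32-bit key come from.

```python
# Illustrative binary trie lookup (not Patricia itself): one node visit per key
# bit consumed, so a worst-case lookup over a 32-bit key visits up to 32 nodes.
class Node:
    def __init__(self):
        self.children = [None, None]   # subtree for next bit 0 / next bit 1
        self.next_hop = None           # set if some rule's prefix ends here

def insert(root, prefix, length, next_hop):
    node = root
    for i in range(length):
        bit = (prefix >> (31 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = Node()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, key):
    node, best, visits = root, None, 0
    for i in range(32):                # worst case: one visit per bit of the key
        bit = (key >> (31 - i)) & 1
        if node.children[bit] is None:
            break
        node = node.children[bit]
        visits += 1
        if node.next_hop is not None:  # remember the longest match seen so far
            best = node.next_hop
    return best, visits

root = Node()
insert(root, 0xC0090000, 16, "A")      # 192.9/16
insert(root, 0xC0090400, 26, "B")      # 192.9.4.0/26
print(lookup(root, 0xC0090401))        # ('B', 26): 26 node visits for 192.9.4.1
```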
