Centralized look up engine architecture and interface

Electrical computers and digital data processing systems: input/output – Intrasystem connection – Bus interface architecture

Reexamination Certificate


Details

Classification: 710/316; 711/147; 711/148

Type: Reexamination Certificate

Status: active

Patent number: 6,772,268

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an external memory processor and interface providing high-speed memory access for a switching router system. More particularly, the external memory processor and interface performs high-speed memory lookups for one or more switching router processors.
2. Description of Related Art and General Background
The unprecedented growth of data networks (e.g., corporate-wide Intranets, the Internet, etc.) as well as the development of network applications (e.g., multimedia, interactive applications, proprietary corporate applications, etc.) have created demand for higher network bandwidth and better network performance. Moreover, such demands are exacerbated by the advent of policy-based networking, which requires more data packet processing, thereby increasing the amount of work per packet and occupying processing resources. One approach to increasing network bandwidth and improving network performance is to provide higher forwarding and/or routing performance within the network.
Some improvements in routing performance are directed to enhancing processor throughput. Processor designers have been able to obtain throughput improvements by greater integration, by reducing the size of the circuits, and by the use of single-chip reduced instruction set computing (RISC) processors, which are characterized by a small, simplified set of frequently used instructions for rapid execution. It is commonly understood, however, that physical size reductions cannot continue indefinitely and that there are limits to continually increasing processor clock speeds.
Further enhancements in processor throughput include modifications to the processor hardware to increase the average number of operations executed per clock cycle. Such modifications may include, for example, instruction pipelining, the use of cache memories, and multi-thread processing. Pipelined instruction execution allows subsequent instructions to begin executing before previously issued instructions have finished. Cache memories store frequently used and other data nearer the processor and allow instruction execution to continue, in most cases, without waiting for the full access time of main memory. Multi-thread processing divides a processing task into independently executable sequences of instructions, called threads; when an instruction in one thread causes the processor to stall (for example, on a memory access), the processor switches from that thread to another thread that is independent of it. At some point, the threads that had caused the processor to stall become ready again, and the processor returns to them. By switching from one thread to the next, the processor minimizes the amount of time it is idle.
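To make the thread-switching idea concrete, the following is a minimal, hypothetical sketch (not taken from the patent): each toy thread alternates between one unit of compute and a fixed-latency memory access, and a simple scheduler runs whichever thread is ready instead of idling. The constants, structure, and names are assumptions chosen only for illustration.

    #include <stdio.h>

    /* Toy model of latency hiding: each thread alternates between executing one
     * instruction and waiting on a fixed-latency memory access.  The scheduler
     * switches to any ready thread, so the processor is idle only when every
     * thread is waiting on memory.  (Illustrative values, not from the patent.) */
    enum { NTHREADS = 2, MEM_LATENCY = 3, CYCLES = 12 };

    struct thread { int wait; int work_done; };

    int main(void)
    {
        struct thread t[NTHREADS] = {{0, 0}};
        int idle = 0;

        for (int cycle = 0; cycle < CYCLES; cycle++) {
            int ran = 0;
            for (int i = 0; i < NTHREADS; i++) {     /* pick the first ready thread  */
                if (t[i].wait == 0) {
                    t[i].work_done++;                /* execute one instruction       */
                    t[i].wait = MEM_LATENCY;         /* ...which then issues a load   */
                    ran = 1;
                    break;
                }
            }
            if (!ran)
                idle++;                              /* all threads stalled on memory */
            for (int i = 0; i < NTHREADS; i++)       /* one cycle of latency elapses  */
                if (t[i].wait > 0)
                    t[i].wait--;
        }
        printf("idle cycles with %d threads: %d of %d\n", NTHREADS, idle, CYCLES);
        return 0;
    }

With a single thread this toy processor would idle during every memory wait; with two threads the idle cycles drop sharply, which is the effect the paragraph describes.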
In addition to enhancing processor throughput, improvements in routing performance may be achieved by partitioning the routing process into two processing classes: fast path processing and slow path processing. Partitioning the routing process into these two classes allows network routing decisions to be based on the characteristics of each class. Routing protocols, such as Open Shortest Path First (OSPF) and Border Gateway Protocol (BGP), have different requirements than the fast-forwarding Internet Protocol (FFIP). For example, routing protocols such as OSPF and BGP typically operate in the background and do not operate on individual data packets, while FFIP requires IP destination address resolution, checksum verification and modification, etc. on an individual packet basis.
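As a rough illustration of this partitioning (my sketch, not the patent's mechanism), the dispatch below sends control traffic such as OSPF to a slow path handled by the routing protocol engines, while ordinary packets take the fast path for per-packet lookup and header updates. The structures, function names, and classification rule are simplified assumptions.

    #include <stdint.h>
    #include <stdbool.h>

    struct packet { uint32_t dst_addr; uint8_t ip_proto; };

    /* Crude classifier: OSPF runs directly over IP protocol 89.  A real router
     * would also divert BGP (TCP sessions addressed to the router itself) and
     * other control traffic to the slow path. */
    static bool is_control_traffic(const struct packet *p)
    {
        return p->ip_proto == 89;
    }

    static void slow_path(struct packet *p) { (void)p; /* hand to routing protocol code          */ }
    static void fast_path(struct packet *p) { (void)p; /* lookup, checksum/TTL update, forward    */ }

    static void dispatch(struct packet *p)
    {
        if (is_control_traffic(p))
            slow_path(p);   /* background work, not critical per packet             */
        else
            fast_path(p);   /* per-packet work that must keep up with line rate     */
    }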
The IP fast-forwarding problem is becoming harder as the amount of time allotted for processing on a per-packet basis steadily decreases in response to increasing media transmission speeds. In an effort to alleviate this problem, many router and Layer-3 switch mechanisms distribute the fast path processing to every port in the chassis, so that fast path processing power grows at the rate of a single port rather than at the aggregate rate of all ports in the box. It is clear that most current solutions will run out of steam as faster media become mainstream.
As processing speeds continue to increase, a burden is placed on memory space allocation, memory access speed, and software maintenance of the processor's memory accesses. One technique for increasing lookup performance is to pipeline the memory accesses of the lookup operation. Each lookup pipeline stage has a dedicated portion of the search memory allocated to it, usually selected on SSRAM component boundaries. The base of the next array is an address into the memory allocated to the next lookup pipeline stage. The first stage takes the first radix-4 nibble, performs its lookup, and produces either a result, a miss, or an address for the next stage. Each subsequent stage operates in this manner until a result is found or the key is exhausted, causing an exception. For a 24-bit key, a radix-4 search engine can be designed using a 36-bit memory that provides up to 200 million lookups per second. To achieve this performance, at least eight memory components, each having its own address and data bus, are needed.
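The following sketch shows one way such a staged lookup could be organized, expressed in software terms; it is an illustration under assumptions, not the patented hardware. Each stage owns its own memory bank, consumes one 4-bit nibble of the key, and yields a result, a miss, or the base address of the next stage's table. The stage count, bank size, and entry layout are invented for the example.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_STAGES   6            /* assumed: a 24-bit key consumed 4 bits per stage   */
    #define STAGE_WORDS  (1u << 16)   /* assumed size of each stage's dedicated bank       */

    enum entry_kind { ENTRY_MISS, ENTRY_RESULT, ENTRY_NEXT };

    struct entry {
        enum entry_kind kind;
        uint32_t value;               /* result payload, or base index of next-stage table */
    };

    /* One dedicated memory bank per pipeline stage, mirroring the idea that each
     * stage has its own portion of the search memory (and its own address/data bus). */
    static struct entry stage_mem[NUM_STAGES][STAGE_WORDS];

    /* Walk the stages: each stage indexes its own bank with (base + nibble). */
    static bool lookup_key(uint32_t key, unsigned key_bits, uint32_t *result)
    {
        uint32_t base = 0;
        for (unsigned s = 0; s < NUM_STAGES && 4 * s < key_bits; s++) {
            unsigned shift  = key_bits - 4 * (s + 1);
            uint32_t nibble = (key >> shift) & 0xFu;
            struct entry e  = stage_mem[s][(base + nibble) % STAGE_WORDS];

            if (e.kind == ENTRY_RESULT) { *result = e.value; return true; }
            if (e.kind == ENTRY_MISS)   return false;
            base = e.value;           /* ENTRY_NEXT: address within the next stage's bank  */
        }
        return false;                 /* key exhausted without a result (exception case)   */
    }

Because each stage has its own bank and bus, a new key can enter the first stage while earlier keys are still progressing through later stages, which is what allows the per-cycle lookup rate cited above.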
SUMMARY OF THE INVENTION
The present invention provides fast data exchange between data processing chips and memory chips in a route switch mechanism having a plurality of data processors. The data processing and exchange approximates a ten gigabit per second transfer rate. A memory access processor and memory access interface transfer data to and from a plurality of SSRAM locations. The processor has a lookup controller for identifying a data request and locating the requested data in the SSRAM locations. The bus allows data request and retrieval throughput from a routing processor to the memory access processor at a maximum rate of about 10 gigabits per second without substantial pipeline stalls or overflows.
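Read at the level of this summary, the centralized engine can be pictured as a request/response interface between the routing processors and the lookup controller. The sketch below is a hypothetical rendering of that interface, reusing the staged lookup_key() from the earlier example; all names, field layouts, and the 24-bit key width reference are assumptions, not the claimed design.

    #include <stdint.h>
    #include <stdbool.h>

    /* A request arriving over the shared bus from one of the routing processors,
     * and the response driven back to it.  (Hypothetical layout.) */
    struct lookup_req  { uint8_t requester; uint32_t key; };
    struct lookup_resp { uint8_t requester; bool hit; uint32_t value; };

    /* Provided by the staged search-memory engine sketched earlier (assumed). */
    bool lookup_key(uint32_t key, unsigned key_bits, uint32_t *result);

    /* The lookup controller services one request: identify it, resolve the key
     * against the SSRAM banks, and return the located data (or a miss). */
    struct lookup_resp cle_service(struct lookup_req req)
    {
        struct lookup_resp resp = { .requester = req.requester, .hit = false, .value = 0 };
        resp.hit = lookup_key(req.key, 24, &resp.value);   /* 24-bit key, as in the text above */
        return resp;
    }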


REFERENCES:
patent: 5546562 (1996-08-01), Patel
patent: 5577204 (1996-11-01), Brewer et al.
patent: 5761455 (1998-06-01), King et al.
patent: 5805917 (1998-09-01), Sakurada et al.
patent: 5878240 (1999-03-01), Tomko
patent: 6085276 (2000-07-01), VanDoren et al.
patent: 6088771 (2000-07-01), Steely et al.
patent: 6101420 (2000-08-01), VanDoren et al.
patent: 6125429 (2000-09-01), Goodwin et al.
patent: 6393530 (2002-05-01), Greim et al.
patent: 6598130 (2003-07-01), Harris et al.
patent: 2002/0087807 (2002-07-01), Gharachorloo et al.
