Enhancing performance by pre-fetching and caching data...
Classification: Multiplex communications – Pathfinding or routing – Switching a message which includes an address header
Classes: C370S395100, C370S412000
Type: Reexamination Certificate (active)
Filed: 2001-07-31
Issued: 2004-11-23
Patent number: 06822959
Examiner: Rao, Seema S. (Department: 2666)
ABSTRACT:
FIELD OF THE INVENTION
The present invention is related to the field of communications, and more particularly to integrated circuits that process communication packets.
BACKGROUND OF THE INVENTION
Many communication systems transfer information in streams of packets. In general, each packet contains a header and a payload. The header contains control information, such as addressing or channel information, that indicates how the packet should be handled. The payload contains the information that is being transferred. Examples of the types of packets used in communication systems include Asynchronous Transfer Mode (ATM) cells, Internet Protocol (IP) packets, frame relay packets, Ethernet packets, and other packet-like information blocks. As used herein, the term “packet” is intended to include packet segments.
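As a concrete illustration (a hypothetical sketch, not code from the patent; the class and field names are invented), a packet of any of these protocols can be modeled as control information in a header plus a payload carrying the transferred data:

```python
# Hypothetical sketch: a generic packet as described above -- a header
# holding control information (addressing/channel data) and a payload
# holding the information actually being transferred.
from dataclasses import dataclass

@dataclass
class Packet:
    header: dict    # control information, e.g. {"channel": 7} or {"dest": "10.0.0.2"}
    payload: bytes  # the information being transferred

def route_key(pkt: Packet) -> str:
    """Pick the header field that indicates how the packet should be handled."""
    return str(pkt.header.get("channel", pkt.header.get("dest")))

pkt = Packet(header={"channel": 7}, payload=b"hello")
print(route_key(pkt))  # -> "7"
```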
Integrated circuits termed “traffic stream processors” have been designed to apply robust functionality to high-speed packet streams. Robust functionality is critical with today's diverse but converging communication systems. Stream processors must handle multiple protocols and inter-work between streams of different protocols. Stream processors must also ensure that quality-of-service constraints, priority, and bandwidth requirements are met. This functionality must be applied differently to different streams, and there may be thousands of different streams.
Co-pending applications Ser. Nos. 09/639,966, 09/640,231, and 09/640,258, the contents of which are hereby incorporated herein by reference, describe an integrated circuit for processing communication packets. As described in those applications, the integrated circuit includes a core processor. The processor handles a series of tasks, termed “events”. Most events have an associated service address, “context” information, and “data”. When an external resource initiates an event, the external resource supplies the core processor with a memory pointer to the “context” information and also supplies the data to be associated with the event.
The context pointer is used to fetch the context from external memory and to store this “context” information in memory located on the chip. If the required context data has already been fetched onto the chip, the hardware recognizes this fact and sets the on-chip context pointer to point to the already pre-fetched context data. Only a small number of the system “contexts” are cached on the chip at any one time; the rest are stored in external memory. This context fetch mechanism is described in the above-referenced co-pending applications.
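The fetch-or-reuse behavior described above can be sketched as a small software model (a hypothetical illustration; names such as `ContextCache` and `EXTERNAL_MEMORY` are invented, not from the patent):

```python
# Hypothetical software model of the context fetch mechanism: only a few
# contexts are resident on-chip at any one time; a fetch either reuses an
# already pre-fetched copy or pulls the context in from external memory.

EXTERNAL_MEMORY = {0x100: {"stream": "A"}, 0x200: {"stream": "B"}}

class ContextCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.on_chip = {}            # context pointer -> cached context data

    def fetch(self, ctx_ptr):
        if ctx_ptr in self.on_chip:
            # Hardware recognizes the context is already resident and simply
            # points at the pre-fetched on-chip copy.
            return self.on_chip[ctx_ptr]
        if len(self.on_chip) >= self.capacity:
            self.on_chip.pop(next(iter(self.on_chip)))   # evict the oldest entry
        data = dict(EXTERNAL_MEMORY[ctx_ptr])            # fetch from external memory
        self.on_chip[ctx_ptr] = data
        return data

cache = ContextCache()
ctx = cache.fetch(0x100)   # miss: fetched from external memory
same = cache.fetch(0x100)  # hit: reuses the on-chip copy
print(ctx is same)         # -> True
```

The eviction policy here is arbitrary; the patent does not specify one, only that a small number of contexts are cached at a time.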
In order to process an event, the core processor needs the service address of the event as well as the “context” and “data” associated with the event. The service address is the starting address for the instructions used to service the event. The core processor branches to the service address in order to start servicing the event.
Typically, the core processor needs to access a portion of the “context” associated with the event, so the appropriate part of the “context” is read into the core processor's local registers. The core processor can then read, and if appropriate modify, the “context” values. However, when the core processor modifies a “context” value, the “context” values stored outside of the core processor's registers must be updated to reflect the change. This can happen under direct programmer control or by using the method described in the above-referenced patent (U.S. Pat. No. 5,748,630). The “data” associated with an event is handled in a manner similar to that described for the “context”.
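The read-modify-writeback discipline described above can be modeled as follows (an illustrative sketch under assumed names; the register and field names are hypothetical):

```python
# Hypothetical model: part of the context is copied into local registers,
# possibly modified there, and any modified value must be written back so
# the context stored outside the registers reflects the change.

class CoreRegisters:
    def __init__(self, context):
        self.context = context       # the context stored outside the core
        self.regs = {}               # local register copies
        self.dirty = set()           # fields modified since they were loaded

    def load(self, field):
        self.regs[field] = self.context[field]

    def modify(self, field, value):
        self.regs[field] = value
        self.dirty.add(field)        # external copy is now stale

    def write_back(self):
        for field in self.dirty:     # update the externally stored context
            self.context[field] = self.regs[field]
        self.dirty.clear()

context = {"cell_count": 10}
core = CoreRegisters(context)
core.load("cell_count")
core.modify("cell_count", 11)
core.write_back()
print(context["cell_count"])  # -> 11
```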
In the circuit described in the above-referenced co-pending applications, the processing core performed a register read that returned a pointer to the context, data, and service address associated with the next event. The processing core then had to explicitly read the context and data into its internal register set.
SUMMARY OF THE INVENTION
The present invention frees the core processor from performing the explicit read operation required to read data into the internal register set. The present invention expands the processor's register set and provides a “shadow register” set. While the core processor is processing one event, the “context”, “data”, and other associated information for the next event are loaded into the shadow register set. When the core processor finishes processing an event, it switches to the shadow register set and can begin processing the next event immediately. With short service routines, there may not be time to fully pre-fetch the “context” and “data” associated with the next event before the current event ends. In this case, the core processor still starts processing the next event, and the pre-fetch continues during event processing. If the core processor accesses a register associated with a part of the context for which the pre-fetch is still in progress, the core processor automatically stalls until the pre-fetch has read the appropriate data. Logic is provided to handle several special situations created by the use of the shadow registers, and to give the programmer control over the pre-fetching and service address selection process.
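The shadow-register mechanism can be sketched as a double-buffered prefetch (a simplified software model, not the patent's hardware; a background thread stands in for the prefetch engine, and a blocking wait stands in for the hardware stall):

```python
# Simplified model of the shadow register set: while the core services the
# current event, the next event's context/data are loaded into a shadow
# bank; switching banks is instantaneous, and reading a register whose
# prefetch has not yet finished blocks (the "stall") until the data arrives.
import threading
import time

class RegisterBank:
    def __init__(self):
        self.values = {}
        self.ready = threading.Event()   # set once the prefetch completes

    def read(self, name):
        self.ready.wait()                # stall until the prefetch is done
        return self.values[name]

def prefetch(bank, context, delay=0.01):
    """Load the next event's context into the shadow bank in the background."""
    def run():
        time.sleep(delay)                # emulate external-memory latency
        bank.values.update(context)
        bank.ready.set()
    threading.Thread(target=run).start()

active, shadow = RegisterBank(), RegisterBank()
active.values = {"ctx": "event0"}
active.ready.set()

prefetch(shadow, {"ctx": "event1"})      # runs while event0 is being serviced
active, shadow = shadow, active          # event0 done: switch register sets
print(active.read("ctx"))                # -> "event1" (stalls if still loading)
```

The swap is deliberately just a pointer exchange, mirroring the claim that the processor can begin the next event immediately even when the prefetch is still in flight.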
REFERENCES:
patent: 4727538 (1988-02-01), Furchtgott et al.
patent: 5566170 (1996-10-01), Bakke et al.
patent: 5726985 (1998-03-01), Daniel et al.
patent: 5805927 (1998-09-01), Bowes et al.
patent: 5920561 (1999-07-01), Daniel et al.
patent: 6078733 (2000-06-01), Osborne
patent: 6195739 (2001-02-01), Wright et al.
patent: 6373846 (2002-04-01), Daniel et al.
Lee, T. Andy, et al., “Low Power Data Management Architecture for Wireless Communications Signal Processing,” Stanford University, IEEE, 1998, pp. 625-629.
Inventors: Galbi, Duane E.; Lussier, Daniel J.; Snyder, Wilson P., II
Assignee: Mindspeed Technologies Inc.
Primary Examiner: Rao, Seema S.
Attorneys: Scheibel, Robert C.; Kind, Keith