Context based cache indexing

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate


Details

Classification: C711S100000, C711S118000
Type: Reexamination Certificate
Status: active
Patent number: 06820170

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to data processing, and more particularly to accessing network-related data from a cache.
It may be desirable to process data as they are transferred in networks. For example, a router may replace physical addresses (MAC-layer addresses) in a data packet before transmission. An ATM switch may replace cell headers. Further, a router, a bridge, or some other device that transfers data between networks may perform protocol-related transformations if the network on which the data are received and the network on which the data are transmitted use different protocols. See, for example, PCT publication WO 95/20282 (27 Jul. 1995), incorporated herein by reference.
Such data processing can place a heavy burden on a memory that holds the data. To provide fast access to data held in memory, certain data that is frequently accessed may be temporarily stored in and retrieved from a cache. To allow any data to be stored at any location in a cache, many caches use content addressable memories (CAMs). A CAM is a well-known device that permits contents of the memory to be searched and matched instead of having to specify a memory location address in order to retrieve data from the memory.
Such a CAM can be used to accelerate any application requiring fast searching of a database, list, or pattern, such as in database machines, image or voice recognition, or computer and communication networks. CAMs are often used to store a routing table for high speed switching systems. These systems need to rapidly search the routing table to look for a matching destination address so that a data packet may be routed to the appropriate destination address.
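The matching behavior described above can be modeled in software. The following sketch is purely illustrative (the class and field names are not from the patent): a CAM returns an entry by comparing stored contents against a search key, rather than by being given a storage address.

```python
# Minimal software model of a CAM used as a routing table: entries are
# found by matching on content (the destination address) instead of by
# supplying a memory location address.
class CamRoutingTable:
    def __init__(self):
        self.entries = []  # list of (destination_address, next_hop) pairs

    def add(self, dest, next_hop):
        self.entries.append((dest, next_hop))

    def lookup(self, dest):
        # A hardware CAM compares all entries in parallel; this loop
        # models only the matching behavior, not the speed.
        for stored_dest, next_hop in self.entries:
            if stored_dest == dest:
                return next_hop
        return None  # miss: no entry matched the search key
```

A high-speed switch would consult such a structure once per packet, e.g. `table.lookup("10.0.0.1")` to obtain the output port for a destination.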
A CAM is normally significantly slower than a random access memory (RAM). However, RAMs are not normally used to implement caches that hold network-related data, because such caches would be sparsely populated, thereby resulting in a significant cost.
There is therefore a need for a fast cache to hold network-related data, but without significant cost.
Incorporated by reference herein in their entirety are the following references:
U.S. Pat. No. 6,307,860 granted to Joffe, et al. on Oct. 23, 2001, and entitled “Systems and methods for data transformation and transfer in networks”;
U.S. Pat. No. 6,330,584 granted to Joffe, et al. on Dec. 11, 2001, and entitled “Systems and methods for multi-tasking, resource sharing and execution of computer instructions”;
U.S. Pat. No. 5,901,147 granted to Joffe on May 4, 1999, and entitled “Apparatus and methods to change thresholds to control congestion in ATM switches”;
U.S. Pat. No. 6,128,278 granted to Joffe, et al. on Oct. 3, 2000 and entitled “Cell queuing in ATM switches”;
U.S. Pat. No. 6,366,978 granted to Middleton, et al. on Apr. 2, 2002, and entitled “Cache memory”;
U.S. Pat. No. 6,378,042 granted to Henderson, et al. on Apr. 23, 2002, and entitled “Caching associative memory”;
Dongwook Kim et al., "A Virtual Cache Architecture for Retaining the Process Working Sets in a Multiprogramming Environment", IEICE Trans. Inf. & Syst., Vol. E79-D, No. 12, Dec. 1996.
SUMMARY
In some embodiments of the invention, a content addressable memory (CAM) is used in the normal manner to address a cache, but a table is consulted prior to use of the CAM. In several embodiments, the table is indexed using the identity of a task (in a multitasking system), rather than a portion of the address of the data being sought. Use of a task's identity ensures that the size of the table is related to (e.g. equal to) the maximum number of tasks, which eliminates the conventional problem of size in using direct-mapped caches.
A value supplied by the table (based on the task's identity) is compared to an address of the data being sought. If the address matches the value from the table, there is a hit, and data is immediately fetched from the cache. If there is a miss in the table, then the CAM is used, to check if the data is present in the cache. If there is a hit in the CAM, the data is fetched from the cache. In case of a miss in the CAM, the data is not present in the cache, and the data must be fetched from another device (such as a second level cache or memory), and placed in the cache.
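The two-stage lookup just described can be sketched as follows. This is a behavioral model under assumed names and sizes (`task_table`, `num_lines`, the replacement policy), not the patented hardware: a RAM table indexed by task identity is tried first, and only on a miss there is the CAM consulted by address.

```python
# Illustrative model of the two-stage lookup: a per-task RAM table is
# checked first; on a table miss, a CAM (modeled as a dict keyed by
# address) is searched; on a CAM miss, data is fetched from the next
# level and placed in the cache.
class ContextIndexedCache:
    def __init__(self, num_tasks, num_lines):
        self.task_table = [None] * num_tasks  # task id -> (address, line index)
        self.cam = {}                         # address -> cache line index
        self.lines = [None] * num_lines       # the cached data itself
        self.next_free = 0                    # simple round-robin fill (assumed)

    def read(self, task_id, address, fetch_from_memory):
        # First attempt: RAM-based table indexed by the task's identity.
        entry = self.task_table[task_id]
        if entry is not None and entry[0] == address:
            return self.lines[entry[1]]       # hit at RAM speed
        # Second attempt: traditional CAM lookup by address.
        line = self.cam.get(address)
        if line is None:
            # Miss in both structures: fetch from another device
            # (e.g. a second-level cache or memory) and fill a line.
            line = self.next_free % len(self.lines)
            self.next_free += 1
            for old_addr in [a for a, l in self.cam.items() if l == line]:
                del self.cam[old_addr]        # evict the line's old mapping
            self.cam[address] = line
            self.lines[line] = fetch_from_memory(address)
        # Record the mapping so this task's next access hits the table.
        self.task_table[task_id] = (address, line)
        return self.lines[line]
```

Note that a second task reading the same address misses in its own table slot but still hits in the CAM, so no redundant fetch occurs; this matches the point that a table miss does not imply the data is absent from the cache.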
Use of two addressing structures, namely the above-described table and the above-described CAM, together to access the same cache is advantageous in certain types of applications, such as tasks in a network processor that access network-related data in the cache. In such applications, data for a cache line needs to be initially loaded, but once loaded any data in the cache line is quickly accessed using the table, if the same task repeatedly accesses the same cache line (e.g. when processing a packet or cell of traffic in a network).
Although a specific table has been described above for some embodiments, other embodiments may employ a different RAM-based addressing structure that also uses a task identifier, such as a linked list. Furthermore, instead of using a task identifier, other embodiments may use any other indicator of the context in which access to a cache is being made, such as an external memory pointer. For example, each task may be allocated a number of external memory pointers (which may be fixed in number, or software programmable), and each external memory pointer (or a portion thereof) may be used along with (or instead of) the task identifier to check if the data is present in the cache.
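One way such a context indicator might be composed, under the assumption of a fixed number of pointers per task (the constant and function names below are hypothetical), is to combine the task identifier with the index of one of its external memory pointers, so that each (task, pointer) pair indexes its own slot in the RAM-based structure:

```python
# Hypothetical composition of a context key from a task identifier and
# one of that task's external memory pointers. The pointer count is
# assumed fixed here; the text notes it could be software-programmable.
NUM_POINTERS_PER_TASK = 4

def context_key(task_id, pointer_index):
    # Each (task, pointer) pair gets a distinct slot, so a task holding
    # several pointers can track several cache lines concurrently.
    assert 0 <= pointer_index < NUM_POINTERS_PER_TASK
    return task_id * NUM_POINTERS_PER_TASK + pointer_index
```

The resulting table then has one entry per (task, pointer) pair rather than one per task, trading a larger (but still bounded) table for finer-grained context tracking.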
Therefore, in some embodiments, two attempts are made to identify an entry in the same cache: a first attempt uses a RAM-based addressing structure (such as the above-described table), and a second attempt (on failure of the first) uses a CAM-based addressing structure. The RAM-based addressing structure is faster than the CAM-based addressing structure, and in cases of a hit on the first attempt, cache performance is determined by RAM cycle time rather than CAM cycle time. On the other hand, a miss on the first attempt does not mean that the data is absent from the cache. Instead, the second attempt, using the CAM in the traditional manner, finds the data if it is present in the cache.


REFERENCES:
patent: 5283882 (1994-02-01), Smith et al.
patent: 5428565 (1995-06-01), Shaw
patent: 5550995 (1996-08-01), Barrera et al.
patent: 5574875 (1996-11-01), Stansfield et al.
patent: 5581722 (1996-12-01), Welland
patent: 5860095 (1999-01-01), Iacobovici et al.
patent: 5901147 (1999-05-01), Joffe et al.
patent: 6014732 (2000-01-01), Naffziger
patent: 6128278 (2000-10-01), Joffe et al.
patent: 6215685 (2001-04-01), Fung et al.
patent: 6307860 (2001-10-01), Joffe et al.
patent: 6330584 (2001-12-01), Joffe et al.
patent: 6366978 (2002-04-01), Middleton et al.
patent: 6378042 (2002-04-01), Henderson et al.
patent: 6564301 (2003-05-01), Middleton
patent: 6625714 (2003-09-01), Lyon
patent: WO 95/20282 (1995-07-01), None
Dongwook Kim et al., "A Virtual Cache Architecture for Retaining the Process Working Sets in a Multiprogramming Environment", IEICE Trans. Inf. & Syst., Vol. E79-D, No. 12, Dec. 1996.
