Method and system for pre-fetch cache interrogation using...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S137000, C711S146000, C711S168000

active

06202128

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention is directed to an improved data processing system and, in particular, to an improved data cache array for utilization in a data processing system. Still more particularly, the present invention relates to an improved method and system for efficient pre-fetch performance and normal load performance of a microprocessor memory system.
2. Description of the Related Art
Many systems for processing information include both a system memory and a cache memory. A cache memory is a relatively small, high-speed memory that stores a copy of information from one or more portions of the system memory. Frequently, the cache memory is physically distinct from the system memory. Such a cache memory can be integral with the processor device of the system (referred to as an L1 cache) or non-integral with the processor (referred to as an L2 cache).
Information may be copied from a portion of the system memory into the cache memory. The information in the cache memory may then be modified, and modified information from the cache memory can then be copied back to a portion of the system memory. Accordingly, it is important to map information in the cache memory relative to its location within system memory. Assuming selection of an appropriately sized cache memory and efficient mapping of data therein, the limiting factor in cache performance is the speed of the cache memory and the ability of the system to rapidly write data into or read data from the cache memory.
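The mapping of a system-memory address onto a cache location can be illustrated with a short sketch. The line size here follows the eight-doubleword example used in this description, but the cache capacity and field widths are illustrative assumptions, not taken from the invention.

```python
# Illustrative sketch of splitting a system-memory byte address into
# the fields a cache uses to locate a line.  Sizes are assumptions.

LINE_BYTES = 64    # one line = 8 doublewords x 8 bytes each
NUM_LINES = 1024   # assumed cache capacity of 64 KB for illustration

def split_address(addr: int):
    """Split a byte address into (tag, index, offset) fields.

    offset - position of the byte within its line
    index  - which cache line slot the address maps to
    tag    - identifies which system-memory line occupies that slot
    """
    offset = addr % LINE_BYTES
    index = (addr // LINE_BYTES) % NUM_LINES
    tag = addr // (LINE_BYTES * NUM_LINES)
    return tag, index, offset
```

The tag is what the cache stores alongside each line so that a later access can verify which region of system memory the cached copy came from.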
A cache memory typically includes a plurality of memory sections, each of which stores a block or “line” of two or more words of data. In the system described herein, a line consists of eight “doublewords,” wherein each doubleword comprises eight 8-bit bytes. Each line has associated with it an address tag that uniquely identifies the line of system memory of which it is a copy. When a read request originates in the processor for a new word (or a new doubleword or a new byte), whether data or instruction, an address tag comparison is made to determine whether a copy of the requested word resides in a line of the cache memory. If present, the data is used directly from the cache; this event is referred to as a cache read “hit.” If not present, a line containing the requested word is retrieved from system memory and stored in the cache memory, and the requested word is simultaneously supplied to the processor; this event is referred to as a cache read “miss.”
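The read-hit/read-miss flow above can be sketched as follows; the class and its structure are assumptions made for illustration, not the invention's implementation.

```python
# Minimal sketch (structure assumed) of the cache read flow: a tag
# comparison decides whether the line is already cached; on a miss the
# line is fetched from system memory, installed, and the word supplied
# to the processor at the same time.

class SimpleCache:
    def __init__(self, memory):
        self.memory = memory   # system memory, modeled as {tag: line}
        self.lines = {}        # cached lines keyed by address tag

    def read(self, tag):
        if tag in self.lines:          # address tag comparison
            return self.lines[tag], "hit"
        line = self.memory[tag]        # retrieve line from system memory
        self.lines[tag] = line         # store it in the cache
        return line, "miss"            # word still supplied immediately
```

A second read of the same tag returns a "hit", since the line was installed by the first (miss) access.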
In addition to using a cache memory to retrieve data, the processor may also write data directly to the cache memory instead of to the system memory. When the processor desires to write data to memory, an address tag comparison is made to determine whether the line into which data is to be written resides in the cache memory. If the line is present in the cache memory, the data is written directly into the line; this event is referred to as a cache write “hit.” If the line into which data is to be written does not exist in the cache memory, the line is either fetched into the cache memory from system memory to allow the data to be written into the cache, or the data is written directly into the system memory; this event is referred to as a cache write “miss.”
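The two write-miss alternatives described above (fetch the line into the cache, or write straight to system memory) can be sketched as a single policy switch; the function and parameter names are illustrative assumptions.

```python
# Hedged sketch of the write-hit / write-miss handling described in the
# text.  On a miss, `allocate=True` models fetching the line into the
# cache before writing; `allocate=False` models writing directly to
# system memory instead.

def cache_write(cache, memory, tag, data, allocate=True):
    if tag in cache:          # write "hit": update the cached line
        cache[tag] = data
        return "hit"
    if allocate:              # fetch the line into the cache, then write
        cache[tag] = data
    else:                     # write directly into system memory
        memory[tag] = data
    return "miss"
```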
In cache implementations of the present invention, it is useful to perform the address tag comparisons by accessing directories with an effective address (EA) in parallel with translation of the real address (RA), in order to verify that there is a cache line “hit.” For example, the cache of the present invention may have two or more data ports, each with an EA content addressable memory (ECAM) directory containing partial EA addresses and an RA content addressable memory (RCAM) directory containing RA addresses. If the ECAM access results in a hit, the RCAM confirms that the hit is true by comparing its stored address with the RA. When performing data prefetching for the cache, a data cache controller uses a real address (RCAM) lookup cycle to check whether the line being prefetched is a true hit or not. If it is an RCAM hit, the cache controller “kills” the operation; if it is an RCAM miss, the cache controller continues prefetching a line of data into the cache memory from system memory.
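The two-directory lookup and the prefetch test can be modeled as below. The dictionary-based ECAM/RCAM directories and the function names are assumptions for illustration only; real CAM hardware matches all entries in parallel.

```python
# Illustrative model of the ECAM/RCAM scheme: a partial effective
# address probes the ECAM, and the RCAM entry confirms the hit against
# the full real address.  The prefetch test "kills" the fetch on an
# RCAM hit (the line is already resident).

def lookup(ecam, rcam, partial_ea, real_addr):
    """ECAM probe confirmed by the RCAM directory."""
    line = ecam.get(partial_ea)        # ECAM probe with partial EA
    if line is not None and rcam.get(real_addr) == line:
        return line, "true hit"        # RCAM confirms the EA hit
    return None, "miss"

def rcam_prefetch_check(rcam, real_addr):
    """Return 'kill' if the line is already cached, else 'prefetch'."""
    return "kill" if real_addr in rcam else "prefetch"
```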
However, using the RCAM lookup cycle for the prefetch test occupies the RCAM path of one of the two data ports, even though the prefetching operation does not actually need the data delivered by that port. Because the prefetch access uses one data port, the cache controller must block one of the two aforementioned EA ports, so that only one remains available for critical load accesses to the cache during prefetching. As a result of this type of prior art prefetching, cache performance is affected significantly; suspending prefetching until an EA port is free degrades prefetch performance instead.
In view of the above, it should be apparent that a method and system for improving memory pre-fetch performance to a data cache would be highly desirable.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide an improved data processing system.
It is another object of the present invention to provide a method to improve the memory pre-fetch performance as well as normal load performance of a microprocessor memory system.
It is yet another object of the present invention to provide an improved method and system for testing pre-fetch line residency in the cache without interfering with the cache data port accesses.
The foregoing objects are achieved as is now described. An interleaved data cache array, divided into two subarrays, is provided for utilization within a data processing system. Each subarray includes a plurality of cache lines, wherein each cache line includes a selected block of data, a parity field, a content addressable field containing a portion of an effective address (ECAM) for the selected block of data, a second content addressable field containing a real address (RCAM) for the selected block of data, and a data status field. Separate effective address (EA) ports and a real address (RA) port permit parallel access to the cache without conflict in separate subarrays, and a subarray arbitration logic circuit is provided to resolve attempted simultaneous access of a single subarray by both an effective address (EA) port and the real address (RA) port. A normal word line is provided and activated by either the effective address port or the real address port through the subarray arbitration. An existing real address (RA) cache snoop port is used to check whether a pre-fetching stream's line access is a true cache hit or not. The snoop read access uses a (33-bit) real address to access the data cache without occupying a data port during testing of the pre-fetching stream hits. Therefore, the two effective address (EA) accesses and an RCAM snoop access can access the data cache simultaneously, thereby increasing pre-fetching performance as well as normal load performance.
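The parallelism claimed above can be sketched as a per-cycle arbitration model: two EA load accesses and one RA snoop read proceed in the same cycle unless two requests target the same subarray. The scheduling function and its names are assumptions for illustration, not the invention's arbitration circuit.

```python
# Rough model (structure assumed) of subarray arbitration: at most one
# access is granted per subarray per cycle; a conflicting request is
# deferred.  The RA snoop uses its own port, so it only conflicts with
# an EA access when both target the same subarray.

def schedule_cycle(requests):
    """requests: list of (port_name, subarray).  Returns the ports
    granted this cycle, in request order; losers wait for a later cycle."""
    granted, busy = [], set()
    for port, subarray in requests:
        if subarray not in busy:   # subarray is free this cycle
            busy.add(subarray)
            granted.append(port)
    return granted
```

With the two EA accesses and the snoop read spread across the two subarrays, all three can be granted in one cycle, which is the source of the claimed pre-fetch and load performance gain.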
The above as well as additional objects, features, and advantages of the present invention will become apparent in the following detailed written description.


