Pipelined non-blocking level two cache system with inherent...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification: C123S1690EB, C123S16900V
Type: Reexamination Certificate
Status: active
Patent number: 06519682

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a cache subsystem in a data processing system that resolves conflicts which may arise due to the interaction of central processing unit (CPU) read transactions, CPU write transactions, and line-fill transactions in a two (2) level non-blocking cache system. Due to the non-blocking nature of such caches more than one such transaction can be in progress at the same time. This can give rise to read-miss/write-miss and read-miss/read-miss conflicts.
More particularly, when a CPU write transaction is initiated shortly before or after a CPU read transaction, both can be active in the cache subsystem at the same time. If both transactions miss the level one (L1) cache and hit the level two (L2) cache, and each of them uses the same line in memory, it is possible for the following sequence of events to occur. First, the CPU read transaction is initially processed by the L1 cache and, since the read misses the L1, an L1 line-fill operation is initiated to transfer a line of data from the L2 cache to the L1 cache. Second, the CPU write transaction is then processed by the L1 cache, and since the line of data (the same as that for the previous read) in which it resides is still not in the L1 cache, it misses the L1 cache and is passed on to the L2 cache. Third, the L1 line-fill transaction (generated by the L1 read-miss) places the requested line from the L2 cache into the L1 cache. Fourth, the write transaction then updates the addressed line in the L2 cache, after the old version of that same line has been placed in the L1 cache by the previous L1 line-fill.
It can be seen that this scenario would result in the line in the L1 cache not being updated with data from the most recent write transaction, such that if data from that line was later requested from the L1 cache via a CPU read transaction it would result in stale data being returned to the CPU.
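The four-step interleaving above can be illustrated with a minimal sketch. This is a hypothetical model (two dictionaries standing in for the L1 and L2 data RAMs, and a deliberately chosen event order), not the patented hardware:

```python
# Hypothetical model of the read-miss/write-miss race: two cache levels
# as dictionaries, with the four-step interleaving described in the text.
l1, l2 = {}, {}
line = 0x40                      # both transactions target this line
l2[line] = "old"                 # the line initially resides only in L2

# Step 1: a CPU read misses L1, so an L1 line-fill from L2 is initiated;
#         it captures the current (old) L2 copy but has not completed yet.
pending_fill = (line, l2[line])

# Step 2: a CPU write also misses L1 (the line is not filled yet) and is
#         passed on to the L2 cache.
pending_write = (line, "new")

# Step 3: the line-fill completes, placing the OLD copy into L1.
addr, data = pending_fill
l1[addr] = data

# Step 4: the write updates L2 only; the stale copy already sits in L1.
addr, data = pending_write
l2[addr] = data

print(l1[line])   # "old" -> a later CPU read hitting L1 returns stale data
print(l2[line])   # "new"
```

The sketch makes the hazard concrete: nothing in the event order is illegal for a non-blocking cache, yet L1 ends up holding data the write never touched.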
Further, when a CPU read transaction is initiated shortly before or after another CPU read transaction, both can be active in the cache system at the same time. If both transactions miss the L1 cache and hit the L2 cache, and each of them uses the same memory address, then they will both generate an L1 line-fill transaction that attempts to load the same line from the L2 cache into the L1 cache. This can cause harmful effects if the L1 cache is a multi-way cache, which would allow the same line to be loaded into two different locations in the L1 cache, giving rise to coherency problems if the CPU later attempts to modify that line via a CPU write cycle. Since the L1 cache is normally incapable of modifying two copies of the same line simultaneously, one copy would end up containing stale data that could be returned to the CPU on a subsequent CPU read.
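The duplicate-fill hazard in a multi-way L1 can be sketched the same way. The 2-way set below and the "first matching way wins" write policy are illustrative assumptions, not details from the patent:

```python
# Hypothetical 2-way set: each way maps line address -> data.
ways = [{}, {}]
line = 0x80

# Two overlapping L1 line-fills for the same line are in flight; each
# allocates a way before the other's fill is visible, so the same line
# lands in two different locations.
ways[0][line] = "v1"          # fill from the first read-miss
ways[1][line] = "v1"          # fill from the second read-miss (duplicate)

# A later CPU write updates only the first matching way:
for way in ways:
    if line in way:
        way[line] = "v2"
        break

copies = [way[line] for way in ways if line in way]
print(copies)   # ['v2', 'v1'] -> one copy of the line is now stale
```

A subsequent read that happens to hit the second way would return "v1", the stale value, which is exactly the coherency problem the passage describes.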
2. Description of Related Art
Conventional systems do not address the problems described herein in the manner of the present invention. Instead, such systems typically use explicit conflict-detection techniques, such as the addition of comparators that determine when read/write (R/W) transactions are attempting to use the same address in the memory system (caches and/or main memory).
For CPU write transactions in a typical cache system, the L1 cache is only accessed before the L2 cache, not after. If the L1 cache is hit then there is no problem, but if the transaction misses the L1 cache then there is the possibility of an L1 read-miss/write-miss conflict, as described above. To avoid this, the address of each new CPU write transaction is explicitly checked (compared) against the address of any L1 line-fill transaction (caused by the L1 read-miss) in progress to ensure that there is no conflict before allowing the write-transaction to proceed. If there is a conflict, then the CPU write transaction is stalled until the L1 line-fill is completed and the conflict thus resolved.
Similarly, to avoid L1 read-miss/read-miss conflicts in conventional systems, the address of each new CPU read transaction is explicitly checked (compared) against the address of any L1 line-fill transaction (caused by a previous L1 read-miss) in progress to ensure that it does not conflict before allowing the read transaction to proceed. If it does conflict, then the CPU read transaction is either stalled until the line-fill is completed and the conflict thus resolved (the second CPU read will then get an L1 hit when allowed to proceed), or is marked as invalid so that it will not generate another L1 line-fill.
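The conventional comparator-based check described in the two preceding paragraphs can be sketched as follows. The 64-byte line size, the function names, and the single in-flight fill are assumptions for illustration:

```python
# Hypothetical sketch of conventional explicit conflict detection: each
# new CPU transaction's line address is compared against any in-flight
# L1 line-fill before the transaction may proceed.
LINE_MASK = ~0x3F             # assumed 64-byte cache lines

def conflicts(addr, fill_addr):
    """True when the transaction touches the line currently being filled."""
    return fill_addr is not None and \
        (addr & LINE_MASK) == (fill_addr & LINE_MASK)

inflight_fill = 0x1040        # an L1 line-fill for this line is in progress

def dispatch(kind, addr):
    if conflicts(addr, inflight_fill):
        if kind == "write":
            return "stall until line-fill completes"
        # A conflicting read is either stalled (it then hits L1 once
        # allowed to proceed) or marked invalid so that it cannot
        # generate a second L1 line-fill.
        return "stall or mark invalid"
    return "proceed"

print(dispatch("write", 0x1040))  # same line      -> stalled
print(dispatch("read", 0x2000))   # different line -> proceeds
```

This is the extra hardware (an address comparator per in-flight fill, plus stall logic) that the invention seeks to eliminate.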
Therefore, it can be seen that a need exists for a data processing system having a cache subsystem (CSS) that is structured to avoid collisions between read/write transactions issued by the central processing unit, independent of the order or sequence in which they are issued. Further, a system that provides the previously described functions without additional complexity, such as comparator circuits or other conflict-detection logic, would be highly desirable.
SUMMARY OF THE INVENTION
In contrast to the prior art, the present invention provides a five (5) stage memory access transaction pipeline which sequentially places the L1 cache after the L2 cache.
Broadly, the present invention is a cache subsystem in a data processing system structured to place the L1 cache RAMs after the L2 cache RAMs in the pipeline for processing both CPU write transactions and L1 line-fill transactions. In this manner it is naturally guaranteed that lines loaded into the L1 cache are updated by all CPU write transactions without having to perform any explicit checks. The present invention also places the L1 tag RAM before the L1 data RAM for both CPU write transactions and L1 line-fill transactions, such that CPU write transactions may check that a line is in the L1 cache before updating it, and L1 line-fill transactions may check that the line to be transferred from the L2 cache to the L1 cache is not already in the L1 cache.
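The ordering the summary describes can be sketched as a single sequential pipeline. The stage breakdown below is illustrative only; the patent's five stages are not enumerated in this excerpt, and the model assumes one transaction completes the pipeline at a time:

```python
# Hypothetical sketch of the claimed ordering: both CPU writes and L1
# line-fills flow through one pipeline in which the L2 RAMs come before
# the L1 tag RAM, which in turn comes before the L1 data RAM.
l1_tag, l1_data, l2 = set(), {}, {}

def pipeline(op, addr, data=None):
    if op == "fill":                  # L1 line-fill transaction
        data = l2[addr]               # stage: read line from L2
        if addr not in l1_tag:        # stage: check L1 tag first, so a
            l1_tag.add(addr)          #        duplicate fill is dropped
            l1_data[addr] = data      # stage: write L1 data
    elif op == "write":               # CPU write transaction
        l2[addr] = data               # stage: update L2 first
        if addr in l1_tag:            # stage: L1 tag check (hit?)
            l1_data[addr] = data      # stage: update L1 data if present

l2[0x40] = "old"
pipeline("fill", 0x40)                # the line-fill completes first...
pipeline("write", 0x40, "new")        # ...so the write also updates L1
print(l1_data[0x40])   # "new" -- no stale L1 copy, and no comparators
```

Because every transaction traverses the same pipeline in order, a write that enters after a line-fill necessarily reaches the L1 data RAM after the fill does, which is why the conflicts are resolved "naturally" rather than by explicit address comparison.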


REFERENCES:
patent: 5214765 (1993-05-01), Jensen
patent: 5285323 (1994-02-01), Hetherington et al.
patent: 5317718 (1994-05-01), Jouppi
patent: 5377341 (1994-12-01), Kaneko et al.
patent: 5642494 (1997-06-01), Wang et al.
patent: 5644752 (1997-07-01), Cohen et al.
patent: 5692151 (1997-11-01), Cheong et al.
patent: 5692152 (1997-11-01), Cohen et al.
patent: 5721855 (1998-02-01), Hinton et al.
patent: 5768610 (1998-06-01), Pflum
patent: 5930819 (1999-07-01), Hetherington et al.
patent: 5958047 (1999-09-01), Panwar et al.
patent: 5987594 (1999-11-01), Panwar et al.
patent: 6000013 (1999-12-01), Lau et al.
patent: 6052775 (2000-04-01), Panwar et al.
patent: 6081873 (2000-06-01), Hetherington et al.
patent: 6145054 (2000-11-01), Mehrotra et al.
patent: 6216234 (2001-04-01), Sager et al.
patent: 6226713 (2001-05-01), Mehrotra
patent: 6247097 (2001-06-01), Sinharoy
patent: 6269426 (2001-07-01), Hetherington et al.
patent: 0 258 559 (1988-09-01), None
D.W. Clark, B.W. Lampson, and K.A. Pier, “The Memory System of a High-Performance Personal Computer,” IEEE Transactions on Computers, vol. C-30, No. 10, Oct. 1981, pp. 715-733.
J.H. Edmondson, P. Rubinfeld, R. Preston and V. Rajagopalan, “Superscalar Instruction Execution in the 21164 Alpha Microprocessor,” IEEE Micro, vol. 15, No. 2, Apr. 1995, pp. 33-43.
