High performance cache intervention mechanism for symmetric...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

711/145; 711/144; 711/122

Reexamination Certificate

active

06763433

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention generally relates to an improved data processing system and in particular to improved memory management in a data processing system. Still more particularly, the present invention relates to an improved intervention protocol for cache memory management in a data processing system.
2. Description of the Related Art
Multiprocessor systems having multilevel storage hierarchies often support an “intervention”, a bus transaction in which a snooper responds to a request for data and sources the data rather than allowing the data to be sourced from the storage device to which the request was addressed. For example, if one level two (L2) cache snoops a read operation initiated by another L2 cache on the system bus directed at system memory, the first L2 cache may intervene in the read operation through a snoop response. The data is then sourced from the snooping cache to the requesting cache.
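As an illustration of the snooping behavior described above, the following C sketch shows the decision an L2 snooper might make when it observes a read on the system bus. It is a minimal model, not the patent's implementation; the type and function names (mesi_t, snoop_should_intervene, and the shared_intervention_enabled flag) are assumptions introduced here for clarity.

    /* Hypothetical snooper-side intervention decision (illustrative only). */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;  /* MESI states */

    typedef struct {
        unsigned long tag;    /* cache-line address tag       */
        mesi_t        state;  /* current MESI coherency state */
    } cache_line_t;

    /* Should this cache intervene and source the data instead of memory? */
    static bool snoop_should_intervene(const cache_line_t *line,
                                       unsigned long snooped_addr,
                                       bool shared_intervention_enabled)
    {
        if (line->tag != snooped_addr || line->state == INVALID)
            return false;                    /* no valid copy: let memory respond */
        if (line->state == MODIFIED)
            return true;                     /* modified intervention             */
        return shared_intervention_enabled;  /* shared/exclusive copy: optional   */
    }

    int main(void)
    {
        cache_line_t line = { 0x1000, MODIFIED };
        printf("intervene: %d\n", snoop_should_intervene(&line, 0x1000, false));
        return 0;
    }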
In a typical intervention scenario, a cache issues a read request on the system bus. Normally, the requested data would be sourced from main memory. With intervention, another cache containing the data may respond and source the data instead of the system memory. Upon seeing this response, the memory controller knows not to source the data, which is instead sourced directly by the intervening cache to the requesting cache via the system bus.
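The complementary memory-controller behavior can be sketched the same way; the response encoding (snoop_resp_t) and the function name memory_should_source below are assumptions, not terms from the patent. The point illustrated is simply that a combined response containing an intervention suppresses the memory controller's data tenure.

    /* Hypothetical memory-controller back-off on an intervention response. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { RESP_NULL, RESP_SHARED, RESP_RETRY, RESP_INTERVENTION } snoop_resp_t;

    /* Memory sources the line only if no snooper has claimed the transfer. */
    static bool memory_should_source(const snoop_resp_t *resps, int n)
    {
        for (int i = 0; i < n; i++)
            if (resps[i] == RESP_INTERVENTION)
                return false;   /* an intervening cache will drive the data bus */
        return true;            /* no intervention: memory sources the line     */
    }

    int main(void)
    {
        snoop_resp_t resps[] = { RESP_NULL, RESP_INTERVENTION, RESP_SHARED };
        printf("memory sources data: %d\n", memory_should_source(resps, 3));
        return 0;
    }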
The most commonly supported intervention type is a modified intervention, where “modified” refers to a coherency state within the modified/exclusive/shared/invalid (MESI) coherency protocol. If the first L2 cache described above snoops the read operation and determines that it contains the target cache line in a modified coherency state, the cache will intervene in the snooped operation to satisfy the request and to update the image of the data in system memory, maintaining memory coherency.
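To make the coherency bookkeeping concrete, here is a hedged sketch of a modified intervention: the snooper sources the line to the requester, the memory image is refreshed from the same transfer, and the snooper's copy drops from MODIFIED to SHARED. The 64-byte line size is the example value used later in this description; the function name modified_intervention and the final SHARED state are illustrative assumptions rather than requirements of the patent.

    /* Hypothetical modified-intervention data movement and state change. */
    #include <string.h>
    #include <stdio.h>

    #define LINE_BYTES 64   /* example cache-line size used in this description */

    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_t;

    typedef struct {
        mesi_t        state;
        unsigned char data[LINE_BYTES];
    } cache_line_t;

    /* Source a modified line to the requester and refresh the memory image. */
    static void modified_intervention(cache_line_t *snooper,
                                      unsigned char *requester_buf,
                                      unsigned char *memory_image)
    {
        memcpy(requester_buf, snooper->data, LINE_BYTES);  /* cache-to-cache transfer  */
        memcpy(memory_image,  snooper->data, LINE_BYTES);  /* memory image updated     */
        snooper->state = SHARED;                           /* copy is now clean/shared */
    }

    int main(void)
    {
        cache_line_t snooper = { MODIFIED, { 0xAB } };
        unsigned char req[LINE_BYTES], mem[LINE_BYTES];
        modified_intervention(&snooper, req, mem);
        printf("snooper state after intervention: %d\n", snooper.state);
        return 0;
    }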
Some systems also support a shared intervention, in which the snooping L2 cache has the requested data in a shared coherency state but intervenes and satisfies the request. Typically shared intervention is supported where access latency to system memory is much longer (in processor or bus cycles) than the time required for request/response transactions on the system bus.
An intervention usually returns a full cache line (which may be, for example, 64 bytes) of data. Assuming the system data bus is eight bytes wide, eight bus cycles (or eight “beats”) are required to transfer the cache line. However, the requesting cache may only require a portion of the cache line, not the entire cache line, and may indicate this through an intra-cache line address portion of the address driven for the request. Thus, the bus cycles consumed in transferring the portions of the cache line which are not required by the requesting cache are effectively wasted if the remaining portion of the cache line data is unlikely to be required in the near future (before invalidation of the cache line within the requesting cache).
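The beat arithmetic above can be checked directly. The helper below uses the example figures from the text (a 64-byte line and an 8-byte data bus) and shows the saving when, say, only a 16-byte portion of the line is actually transferred; the 16-byte figure is an illustrative assumption.

    /* Worked example of the beat count for full versus partial transfers. */
    #include <stdio.h>

    static unsigned beats(unsigned bytes, unsigned bus_width_bytes)
    {
        return (bytes + bus_width_bytes - 1) / bus_width_bytes;  /* round up to whole beats */
    }

    int main(void)
    {
        printf("full 64-byte line : %u beats\n", beats(64, 8));  /* 8 beats */
        printf("16-byte portion   : %u beats\n", beats(16, 8));  /* 2 beats */
        return 0;
    }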
In some situations, an intervening cache may desire to have the requesting cache skip caching of the target data. For example, the intervening cache may predict that it will be modifying the data again shortly, and wish to avoid having to transmit a request to invalidate copies of the data within other caches (i.e., maintaining the cache line in an exclusive state after the intervention).
It would be desirable, therefore, to provide a system that improves the “intelligence” of cache management and, in particular, reduces the bus bandwidth consumed by interventions and subsequent related operations.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide an improved data processing system.
It is another object of the present invention to provide improved memory management in a data processing system.
It is yet another object of the present invention to provide an improved intervention protocol for cache memory management in a data processing system.
The foregoing objects are achieved as is now described. Upon snooping an operation in which an intervention is permitted or required, an intervening cache may elect to source only that portion of a requested cache line which is actually required, rather than the entire cache line. For example, if the intervening cache determines that the requesting cache would likely be required to invalidate the cache line soon after receipt, less than the full cache line may be sourced to the requesting cache. The requesting cache will not cache less than a full cache line, but may forward the received data to the processor supported by the requesting cache. Data bus bandwidth utilization may therefore be reduced. Additionally, the need to subsequently invalidate the cache line within the requesting cache is avoided, together with the possibility that the requesting cache will retry an operation requiring invalidation of the cache line.
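A compact sketch of the policy summarized above, under assumed names (partial_req_t, bytes_to_source, expect_remodify_soon): when the intervening cache predicts it will soon modify the line again, it sources only the portion identified by the intra-cache-line address, and the requester forwards those bytes to its processor without caching them. This is one possible reading of the summary, not the definitive implementation.

    /* Hypothetical decision on how much of the line to source in an intervention. */
    #include <stdbool.h>
    #include <stdio.h>

    #define LINE_BYTES 64

    typedef struct {
        unsigned offset;   /* intra-cache-line address of the needed portion */
        unsigned length;   /* bytes actually required by the processor       */
    } partial_req_t;

    /* Decide how many bytes to source: the whole line, or just the needed part. */
    static unsigned bytes_to_source(bool expect_remodify_soon, const partial_req_t *req)
    {
        if (expect_remodify_soon)
            return req->length;   /* partial intervention: requester forwards, does not cache */
        return LINE_BYTES;        /* normal intervention: full line, requester caches it      */
    }

    int main(void)
    {
        partial_req_t req = { .offset = 16, .length = 8 };
        printf("bytes sourced: %u\n", bytes_to_source(true, &req));
        return 0;
    }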
The above as well as additional objectives, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 5335335 (1994-08-01), Jackson et al.
patent: 5355467 (1994-10-01), MacWilliams et al.
patent: 5369753 (1994-11-01), Tipley
patent: 5737759 (1998-04-01), Merchant
patent: 5784590 (1998-07-01), Cohen et al.
patent: 5809533 (1998-09-01), Tran et al.
patent: 5890200 (1999-03-01), Merchant
patent: 5909699 (1999-06-01), Sarangdhar et al.
patent: 5987571 (1999-11-01), Shibata et al.
patent: 5995967 (1999-11-01), Iacobovici et al.
patent: 6052760 (2000-04-01), Bauman et al.
patent: 6138217 (2000-10-01), Hamaguchi
patent: 6230260 (2001-05-01), Luick
patent: 6282615 (2001-08-01), Arimilli et al.
patent: 6470437 (2002-10-01), Lyon
patent: 6499085 (2002-12-01), Bogin et al.
patent: 6681293 (2004-01-01), Solomon et al.
“Cache Interrogation with Partial Address Directory”, IBM Technical Disclosure Bulletin, vol. 7, Issue 7, pp. 343-344, Jul. 1993.*
“Micro Channel Data Streaming and Input/Output Snooping Facility for Personal Computer Systems”, IBM Technical Disclosure Bulletin, vol. 36, Issue 10, pp. 187-192, Oct. 1993.*
“Selective Invalidation Scheme for Software MP Cache Coherence Control”, IBM Technical Disclosure Bulletin, vol. 35, Issue 3, pp. 244-246, Aug. 1992.*
“Processor Performance Monitoring With A Depiction Of The Efficiency Of The Cache Coherency Protocol Of Superscalar Microprocessor In An Symmetric Multiple Processor Environment”; IBM TDB, vol. 40, No. 1, Jan. 1997, pp. 79-81 XP000686109.
