Multiprocessor system bus with combined snoop responses...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

U.S. Classification: C711S122000, C711S156000
Type: Reexamination Certificate
Status: active
Patent number: 06502171

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to alternatives to cancelled cast out operations in data processing systems and in particular to directing different allocation of cast out or target data. Still more particularly, the present invention relates to instructing, within a combined response to an operation involving a cast out, a horizontal storage device to allocate and store either the cast out data or the target data.
2. Description of the Related Art
High performance data processing systems typically include a number of levels of caching between the processor(s) and system memory to improve performance by reducing latency in data access operations. When multiple cache levels are employed, they are typically organized in progressively larger sizes, with a trade-off of progressively longer access latencies. Smaller, faster caches are employed at levels within the storage hierarchy closer to the processor or processors, while larger, slower caches are employed at levels closer to system memory. Smaller amounts of data are maintained in upper cache levels, but may be accessed faster.
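As a rough illustration of such a hierarchy (the level count, capacities, and latencies below are hypothetical and not taken from the patent), the size/latency trade-off might be modeled as:

```c
#include <stdio.h>

/* Hypothetical parameters for a three-level cache hierarchy: levels
 * closer to the processor are smaller and faster, levels closer to
 * system memory are larger and slower.                               */
struct cache_level {
    const char *name;
    unsigned    size_kb;        /* total capacity                      */
    unsigned    latency_cycles; /* illustrative access latency         */
};

static const struct cache_level hierarchy[] = {
    { "L1", 32,   3  },
    { "L2", 512,  12 },
    { "L3", 8192, 40 },
    /* system memory would sit below, larger and slower still */
};

int main(void)
{
    for (size_t i = 0; i < sizeof hierarchy / sizeof hierarchy[0]; i++)
        printf("%s: %u KB, ~%u cycles\n", hierarchy[i].name,
               hierarchy[i].size_kb, hierarchy[i].latency_cycles);
    return 0;
}
```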
Within such systems, data access operations frequently give rise to a need to make space for the subject data. For example, when retrieving data from lower storage levels such as system memory or lower level caches, a cache may need to overwrite other data already within the cache because no further unused space is available for the retrieved data. A replacement policy—typically a least-recently-used (LRU) replacement policy—is employed to decide which cache location(s) should be utilized to store the new data.
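A minimal sketch of least-recently-used victim selection within a single congruence class follows; the eight-way set size and the field names are illustrative assumptions, not details specified by the patent.

```c
/* Hypothetical set-associative cache set: the way whose last_use
 * timestamp is oldest is chosen as the victim when no way is free.   */
#define WAYS 8

struct cache_line {
    int      valid;
    unsigned tag;
    unsigned last_use;   /* updated on every access */
};

/* Return the index of the way to replace within one congruence class. */
static int select_victim(const struct cache_line set[WAYS])
{
    int victim = 0;
    for (int way = 0; way < WAYS; way++) {
        if (!set[way].valid)
            return way;                        /* unused space: take it */
        if (set[way].last_use < set[victim].last_use)
            victim = way;                      /* least recently used   */
    }
    return victim;
}
```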
Often the cache location (commonly referred to as a “victim”) to be overwritten contains only data which is invalid or otherwise unusable from the perspective of the memory coherency model being employed, or for which valid copies are concurrently stored in other devices within the system storage hierarchy. In such cases, the new data may simply be written to the cache location without regard to preserving the existing data: the cache location is deallocated and then reallocated for the new data.
At other times, however, the cache location selected to receive the new data contains modified data, or data which is otherwise unique or special within the storage hierarchy. In such instances, the replacement of data within a selected cache location (a process often referred to as “updating” the cache) requires that any modified data associated with the cache location selected by the replacement policy be written back to lower levels of the storage hierarchy for preservation. The process of writing modified data from a victim to system memory or a lower cache level is generally called a cast out or eviction.
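In coherency-protocol terms, the update decision reduces to whether the victim obliges a write-back. The sketch below assumes MESI-style state names and a hypothetical write-back helper; the patent itself speaks more generally of modified or otherwise unique data.

```c
#include <string.h>

/* Illustrative MESI-style states (an assumption for this sketch). */
enum coherency_state { INVALID, SHARED, EXCLUSIVE, MODIFIED };

struct cached_block {
    enum coherency_state state;
    unsigned             addr;
    unsigned char        data[64];
};

/* Stub for the write-back path to the next lower storage level
 * (lower cache or system memory); hypothetical helper.               */
static void cast_out_to_lower_level(const struct cached_block *victim)
{
    (void)victim;  /* a real implementation would issue the bus write */
}

/* Replace the victim, casting out first only when it holds the sole
 * up-to-date copy of its data.                                        */
static void update_cache_line(struct cached_block *victim,
                              unsigned new_addr,
                              const unsigned char new_data[64])
{
    if (victim->state == MODIFIED)
        cast_out_to_lower_level(victim);       /* eviction / cast out  */
    /* INVALID, SHARED, or EXCLUSIVE victims are simply overwritten:
     * the data is either unusable or safely held elsewhere.           */
    victim->addr = new_addr;
    memcpy(victim->data, new_data, sizeof victim->data);
    victim->state = EXCLUSIVE;                 /* illustrative fill state */
}
```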
When a cache must service a data access—for instance, in response to a cache miss for a READ operation originating with a processor—the cache will typically initiate a data access operation (READ or WRITE) on a bus coupling the cache to lower storage levels. If the replacement policy requires that a modified cache line be overwritten, compelling a cast out for coherency purposes, the cache will also initiate the cast out bus operation.
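A sketch of that sequencing, with hypothetical bus-operation names (the patent does not fix a particular bus command encoding):

```c
/* Hypothetical bus operation types for the miss-handling path. */
enum bus_op { BUS_READ, BUS_WRITE, BUS_CAST_OUT };

struct bus_request {
    enum bus_op op;
    unsigned    addr;
};

/* Stub for placing a request on the system bus (assumed helper). */
static void issue_bus_operation(const struct bus_request *req)
{
    (void)req;  /* a real controller would arbitrate for and drive the bus */
}

/* On a miss, issue the data access; if the selected victim is modified,
 * also issue the cast out bus operation so coherency is preserved.      */
static void handle_miss(unsigned target_addr,
                        int victim_is_modified, unsigned victim_addr)
{
    struct bus_request access = { BUS_READ, target_addr };
    issue_bus_operation(&access);

    if (victim_is_modified) {
        struct bus_request cast_out = { BUS_CAST_OUT, victim_addr };
        issue_bus_operation(&cast_out);
    }
}
```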
There are a number of circumstances in which an eviction or cast out may, from the perspective of global data storage management, be less preferable than other alternatives. For example, if the target of the data access is only going to be accessed once by the processor core requesting that cache line (e.g., the cache line contains instructions not affected by branching), there would be no benefit to casting out the existing cache line in order to make space for the requested cache line. Alternatively, where the cache from which the victim is being evicted is one of multiple caches in a given level of a storage hierarchy, each supporting modified or shared intervention, and a horizontal cache (one at the same level as the evicting cache) has an invalid or shared entry within the congruence class for the victim, available data storage may be more effectively employed by allowing the data access target or the cast out victim to replace that invalid or shared entry.
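A hedged sketch of the check a snooping horizontal cache might perform on the victim's congruence class, again assuming MESI-style states and an eight-way class purely for illustration:

```c
enum coherency_state { INVALID, SHARED, EXCLUSIVE, MODIFIED };
#define WAYS 8

/* One congruence class in a snooping horizontal cache. */
struct congruence_class {
    enum coherency_state state[WAYS];
};

/* Return the way index of an entry this cache could give up without a
 * cast out of its own (invalid or shared), or -1 if none exists.      */
static int find_replaceable_way(const struct congruence_class *cc)
{
    for (int way = 0; way < WAYS; way++)
        if (cc->state[way] == INVALID)
            return way;              /* best case: unused space          */
    for (int way = 0; way < WAYS; way++)
        if (cc->state[way] == SHARED)
            return way;              /* a valid copy exists elsewhere    */
    return -1;                       /* no cheap space in this class     */
}
```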
It would be desirable, therefore, to be able to cancel a cast out operation or portion of an operation in order to improve global data storage management. It would further be advantageous if cancelling the eviction did not significantly increase latency of data access operations.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide alternatives to cancelled cast out operations in data processing systems.
It is another object of the present invention to provide a mechanism for directing different allocation of cast out or target data as an alternative to cancelled cast out operations.
It is yet another object of the present invention to provide a mechanism for instructing, within a combined response to an operation involving a cast out, a horizontal storage device to allocate and store either the cast out data or the target data.
The foregoing objects are achieved as is now described. In cancelling the cast out portion of a combined operation including a data access related to the cast out, the combined response logic explicitly directs a horizontal storage device at the same level as the storage device initiating the combined operation to allocate and store either the cast out or target data. A horizontal storage device having available space—i.e., an invalid or shared data element in a congruence class for the victim—stores either the target or the cast out data for subsequent access by an intervention. Cancellation of the cast out thus defers any latency associated with writing the cast out victim to system memory while maximizing utilization of available storage with acceptable trade-offs in data access latency.
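A minimal sketch of that combined-response decision; the response fields and encodings are assumptions, since the summary does not spell out a specific format:

```c
#include <stddef.h>

/* Hypothetical snoop response from each horizontal cache: whether it
 * has an invalid or shared entry in the victim's congruence class.   */
struct snoop_response {
    int cache_id;
    int has_free_or_shared_entry;
};

/* Hypothetical combined response returned to all participants. */
struct combined_response {
    int cast_out_cancelled;   /* write to system memory deferred        */
    int allocating_cache_id;  /* horizontal cache told to allocate      */
    int store_cast_out_data;  /* 1 = store victim data, 0 = target data */
};

/* Combined response logic: if some horizontal cache can absorb the data,
 * cancel the cast out and direct that cache to allocate and store either
 * the cast out victim or the access target.                             */
static struct combined_response
combine_responses(const struct snoop_response *resp, size_t n,
                  int prefer_cast_out_data)
{
    struct combined_response cr = { 0, -1, prefer_cast_out_data };
    for (size_t i = 0; i < n; i++) {
        if (resp[i].has_free_or_shared_entry) {
            cr.cast_out_cancelled  = 1;
            cr.allocating_cache_id = resp[i].cache_id;
            break;
        }
    }
    return cr;
}
```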
The above as well as additional objects, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 4797814 (1989-01-01), Brenza
patent: 5251310 (1993-10-01), Smelser et al.
patent: 5369753 (1994-11-01), Tipley
patent: 5493668 (1996-02-01), Elko et al.
patent: 5526510 (1996-06-01), Akkary et al.
patent: 5537575 (1996-07-01), Foley et al.
patent: 5564035 (1996-10-01), Lai
patent: 5636355 (1997-06-01), Ramakrishnan et al.
patent: 5687350 (1997-11-01), Bucher et al.
patent: 5829038 (1998-10-01), Merrell et al.
patent: 5829040 (1998-10-01), Son
patent: 5895495 (1999-04-01), Arimilli et al.
patent: 5900011 (1999-05-01), Saulsbury et al.
patent: 5946709 (1999-08-01), Arimilli et al.
patent: 5966729 (1999-10-01), Phelps
patent: 6018791 (2000-01-01), Arimilli et al.
patent: 6021468 (2000-02-01), Arimilli et al.
patent: 6023747 (2000-02-01), Dodson
patent: 6029204 (2000-02-01), Arimilli et al.
patent: 6038645 (2000-03-01), Nanda et al.
patent: 6058456 (2000-05-01), Arimilli et al.
patent: 6128702 (2000-10-01), Saulsbury et al.
patent: 6195729 (2001-02-01), Arimilli et al.
patent: 6275909 (2001-08-01), Arimilli et al.
Texas Instruments Incorporated, TMS32010 User's Guide, 1983, 3 pages.
Lebeck, A. R., Sohi, G. S.; Request Combining in Multiprocessors with Arbitrary Interconnection Networks; IEEE Digital Library, vol. 5, Issue 11, Nov. 1994.
Handy, Jim; The Cache Memory Book; Academic Press, Inc.; 1993; pp. 77-82.
