Multiprocessor system bus with cache state and LRU snoop...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories



Details

C711S144000, C711S145000, C711S146000, C711S121000, C711S122000, C711S136000

Reexamination Certificate

active

06343347

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to system responses to bus operations in data processing systems and in particular to system responses to related data access and cast out or deallocate operations. Still more particularly, the present invention relates to merged snoop responses to related data access and cast out or deallocate operations which include cache state, LRU position, and storage availability information for the snooper.
2. Description of the Related Art
High performance data processing systems typically include a number of levels of caching between the processor(s) and system memory to improve performance by reducing latency in data access operations. When multiple cache levels are employed, they are typically arranged in progressively larger sizes with a trade-off of progressively longer access latencies. Smaller, faster caches are employed at levels within the storage hierarchy closer to the processor or processors, while larger, slower caches are employed at levels closer to system memory. Smaller amounts of data are maintained in upper cache levels, but may be accessed faster.
Within such systems, data access operations frequently give rise to a need to make space for the subject data. For example, when retrieving data from lower storage levels such as system memory or lower level caches, a cache may need to overwrite other data already within the cache because no further unused space is available for the retrieved data. A replacement policy—typically a least-recently-used (LRU) replacement policy—is employed to decide which cache location(s) should be utilized to store the new data.
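As a rough illustration of the victim selection described above, the following C sketch picks a way within one congruence class by LRU position; the eight-way geometry and the names (CacheWay, lru_pos, select_victim) are assumptions for illustration, not the patent's implementation.

```c
#include <stddef.h>

#define WAYS 8                 /* assumed 8-way set associativity */

typedef struct {
    int      valid;            /* nonzero if the way holds a usable line          */
    unsigned lru_pos;          /* 0 = most recently used .. WAYS-1 = least recent */
} CacheWay;

/* Return the way chosen for replacement within one congruence class:
 * an invalid (unused) way if one exists, otherwise the least-recently-used
 * valid way. */
static size_t select_victim(const CacheWay set[WAYS])
{
    size_t victim = 0;
    for (size_t w = 0; w < WAYS; w++) {
        if (!set[w].valid)
            return w;                              /* free slot: no cast out needed */
        if (set[w].lru_pos > set[victim].lru_pos)
            victim = w;                            /* track the oldest valid way    */
    }
    return victim;
}
```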
Often the cache location (commonly referred to as a “victim”) to be overwritten contains only data which is invalid or otherwise unusable from the perspective of a memory coherency model being employed, or for which valid copies are concurrently stored in other devices within the system storage hierarchy. In such cases, the new data may be simply written to the cache location without regard to preserving the existing data at that location.
At other times, however, the cache location selected to receive the new data contains modified data, or data which is otherwise unique or special within the storage hierarchy. In such instances, the replacement of data within a selected cache location (a process often referred to as “updating” the cache) requires that any modified data associated with the cache location selected by the replacement policy be written back to lower levels of the storage hierarchy for preservation. The process of writing modified data from a victim to system memory or a lower cache level is generally called a cast out or eviction.
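A minimal sketch of the decision just described, assuming a MESI-style state set (the enum and function names are illustrative, not the patent's): only a modified victim must be written back before replacement.

```c
/* Assumed MESI-style coherency states; the patent does not mandate these. */
typedef enum {
    STATE_INVALID,
    STATE_SHARED,
    STATE_EXCLUSIVE,
    STATE_MODIFIED
} CoherencyState;

/* A modified victim holds the only current copy of its data, so it must be
 * cast out (written back) to a lower storage level before being replaced.
 * Shared or exclusive data is recoverable elsewhere and may be discarded,
 * and an invalid way requires no preservation at all. */
static int victim_requires_cast_out(CoherencyState victim_state)
{
    return victim_state == STATE_MODIFIED;
}
```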
When a data access operation misses in a cache—for instance, a READ operation originating with a processor—the cache will typically initiate a corresponding data access operation (READ or WRITE) on a bus coupling the cache to lower storage levels. If the replacement policy requires that a modified cache line be overwritten, compelling a cast out for coherency purposes, the cache will also initiate a cast out bus operation.
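The conventional flow just described might look roughly like the sketch below, where bus_read() and bus_cast_out() are hypothetical stand-ins for the bus operations (not a real bus API): a miss issues a READ, and a modified victim triggers a separate cast out operation.

```c
#include <stdio.h>

typedef unsigned long addr_t;

/* Hypothetical bus primitives, stubbed for illustration only. */
static void bus_read(addr_t line_addr)
{
    printf("bus READ     0x%lx\n", line_addr);
}

static void bus_cast_out(addr_t victim_addr)
{
    printf("bus CAST OUT 0x%lx\n", victim_addr);
}

/* Conventional handling of a READ miss: the data access and any required
 * cast out are driven as two distinct bus operations. */
static void handle_read_miss(addr_t miss_addr, addr_t victim_addr, int victim_modified)
{
    bus_read(miss_addr);                   /* data access operation       */
    if (victim_modified)
        bus_cast_out(victim_addr);         /* separate cast out operation */
}

int main(void)
{
    handle_read_miss(0x1000UL, 0x2000UL, 1);
    return 0;
}
```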
Even when the selected victim contains data which is neither unique nor special within the storage hierarchy (i.e. invalid data), an operation to lower levels of the storage hierarchy may still be required. For instance, the cache organization may be “inclusive,” meaning that logically vertical in-line caches contain a common data set. “Precise” inclusivity requires that lower level caches include at least all cache lines contained within a vertically in-line, higher level cache, although the lower level cache may include additional cache lines as well. Imprecise or “pseudo-precise” inclusivity relaxes this requirement, but still seeks to have as much of the data within the higher level cache copied within the lower level cache as possible within constraints imposed by bandwidth utilization tradeoffs. Within an inclusive, hierarchical cache system, even if the cache line to be replaced is in a coherency state (e.g., “shared”) indicating that the data may be simply discarded without writing it to lower level storage, an operation to the lower level storage may be required to update inclusivity information. The storage device within which the cache line is to be overwritten (or “deallocated” and replaced) initiates an operation notifying lower level, in-line storage devices of the deallocation, so that the lower level devices may update internal inclusivity information associated with the cache line. This requires an operation in addition to the data access operation necessitating replacement of the cache line.
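The inclusivity bookkeeping described above can be sketched as follows; the directory-entry layout and names are assumptions, not the patent's. When an upper-level cache deallocates a clean line, the lower-level, in-line cache needs no data transfer, only an update to the inclusivity marker it keeps for that line.

```c
/* Illustrative lower-level directory entry with an inclusivity bit that
 * records whether the line is also held by the vertically in-line,
 * upper-level cache.  Field and function names are assumptions. */
typedef struct {
    unsigned long tag;
    int           valid;
    int           included_above;   /* set if the upper-level cache holds the line */
} LowerDirEntry;

/* Handle a deallocate notification snooped from the upper-level cache: the
 * data itself needs no write-back, but the inclusivity information kept at
 * this level must be updated. */
static void on_deallocate_notify(LowerDirEntry *entry)
{
    if (entry->valid)
        entry->included_above = 0;
}
```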
In addition to the data access and cast out/deallocate bus operations, snoop responses from horizontal storage devices (those at the same level within the storage hierarchy as the storage device initiating the data access and cast out/deallocate operations) are driven separately on the system bus. Furthermore, the snoop response is typically limited to only a coded response to the initiated data access or cast out/deallocate bus operation—that is, a null response indicating that the operation may proceed, a retry indicating that the operation should be deferred until the snooper completes an operation, an intervention indicating that the snooper will source requested data, or the like. Snoop responses conventionally allow an operation to proceed, stop the operation, or redirect the operation, but do not provide any additional information allowing the combined response logic or the storage device initiating the bus operation to intelligently react to the snoop response.
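The limited response vocabulary described above can be pictured as a small code set, roughly as below; the enumerators are illustrative, since actual encodings are bus-specific.

```c
/* Conventional snoop response codes: each snooper answers a single bus
 * operation with one of a few coded outcomes and nothing more. */
typedef enum {
    SNOOP_NULL,           /* no objection: the operation may proceed       */
    SNOOP_RETRY,          /* defer: snooper is busy, reissue the operation */
    SNOOP_INTERVENTION    /* snooper will itself source the requested data */
} SnoopResponse;
```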
It would be desirable, therefore, to reduce the number of system bus responses required for related data access and cast out or deallocate bus operations. It would further be advantageous to improve the information driven within the snoop responses for improved storage management.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide improved system responses to bus operations in data processing systems.
It is another object of the present invention to provide improved system responses to related data access and cast out or deallocate operations in data processing systems.
It is yet another object of the present invention to provide merged snoop responses to related data access and cast out or deallocate operations which include cache state, LRU position, and storage availability information for the snooper.
The foregoing objects are achieved as is now described. Upon snooping a combined data access and cast out/deallocate operation initiated by a horizontal storage device, snoop logic determines appropriate responses to both the data access and the cast out/deallocate based upon the presence and coherency state of the target of the data access within a corresponding storage device, the presence and coherency state of the victim of the cast out/deallocate within the corresponding storage device, and the presence of an invalid entry within the corresponding storage device in a congruence class including both the target and the victim. The appropriate responses are “merged”, transmitted together in response to the combined operation as either a single response code or discrete response codes within a single response. The coherency state and LRU position of the selected victim for the cast out/deallocate portion of the combined operation may also be appended to the response to facilitate data storage management.
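One way to picture the merged response described above is as a single reply carrying discrete fields for both halves of the combined operation plus the additional storage-management hints; the layout below is an illustrative assumption (reusing the SnoopResponse and CoherencyState enums sketched earlier), not the patent's claimed encoding.

```c
/* Illustrative merged snoop response to a combined data access and cast
 * out/deallocate bus operation: the snooper answers both parts at once and
 * appends cache-state, LRU-position, and storage-availability information. */
typedef struct {
    SnoopResponse  data_access_resp;   /* response to the data access portion         */
    SnoopResponse  cast_out_resp;      /* response to the cast out/deallocate portion */
    CoherencyState victim_state;       /* snooper's coherency state for the victim    */
    unsigned       victim_lru_pos;     /* LRU position of the victim at the snooper   */
    int            has_invalid_entry;  /* invalid (free) entry in the congruence
                                          class containing the target and victim      */
} MergedSnoopResponse;
```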
The above as well as additional objects, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 4797814 (1989-01-01), Brenza
patent: 5369753 (1994-11-01), Tipley
patent: 5493668 (1996-02-01), Elko et al.
patent: 5564035 (1996-10-01), Lai
patent: 5636355 (1997-06-01), Ramakrishnan et al.
patent: 5829038 (1998-10-01), Merrell et al.
patent: 5829040 (1998-10-01), Son
patent: 5895495 (1999-04-01), Arimilli et al.
patent: 5946709 (1999-08-01), Arimilli et al.
