High performance data processing system via cache...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification codes: C711S144000, C711S145000

Type: Reexamination Certificate

Status: active

Patent number: 06721853

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to data processing systems, and particularly to processors operating in nodes of multiway multiprocessor links. More specifically, the present invention relates to improving the performance of such data processing systems during flushes, from cache memory in remote nodes, of data obtained from memory in a home node on the link.
2. Description of the Related Art
A widely used high performance data processing system is a multiway multiprocessor link with several nodes. During operation of such a prior art data processing system, system memory for the overall data processing system was typically partitioned among the memory components of the several nodes. It was thus common for cache memory in one node, called a remote node, to access and cache information resident in the memory of another node, termed a home node, for processing.
A memory directory in the home node kept a record of the transfer of that information to the cache memory in the remote node. During data processing in the remote node, the transferred information in the remote node's cache memory would periodically be designated as a victim and flushed from that cache, based on lack of recent usage or other reasons. The system memory in the home node of prior art data processing systems would, at some subsequent time, also perform a home memory address flush directed towards the transferred information in the remote node cache. This required transfers of requests and flush commands over the system links, in effect what is known as a mainstream operation. Moreover, the remote node cache had often already been flushed some time before, making the home memory address flush a redundant operation.
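As a concrete illustration of this prior-art behavior, the following is a minimal sketch in C of a home node directory entry with one presence bit per remote node and a home memory address flush that always crosses the system links. The type and function names (dir_entry_t, home_memory_address_flush, MAX_NODES) are assumptions made for illustration and are not taken from the patent.

```c
/* Hypothetical sketch (not from the patent text): a home node's memory
 * directory entry with one presence bit per remote node, and the prior-art
 * flush that always crosses the system links. */
#include <stdint.h>
#include <stdio.h>

#define MAX_NODES 8

typedef struct {
    uint64_t line_addr;          /* memory line tracked by the home node    */
    uint8_t  checked_out;        /* bit i set => remote node i holds a copy */
} dir_entry_t;

/* Prior art: the home node issues an address flush to every remote node
 * whose bit is set, even if that node already victimized the line locally. */
static void home_memory_address_flush(dir_entry_t *e)
{
    for (int node = 0; node < MAX_NODES; node++) {
        if (e->checked_out & (1u << node)) {
            /* Flush command travels over the system links ("mainstream"
             * traffic) regardless of whether the copy still exists. */
            printf("flush 0x%llx -> remote node %d\n",
                   (unsigned long long)e->line_addr, node);
        }
    }
    e->checked_out = 0;
}
```

Because the flush loop consults only the home node's possibly stale presence bits, a flush command still travels to a remote node whose cache victimized the line long ago, which is the redundant link traffic described above.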
This form of cache memory flush reduced system speed and performance and increased system latency, which is undesirable in high performance data processing systems. It would, therefore, be desirable to reduce system latency in multiway multiprocessor links. It would also be desirable to have cache maintenance purges in multiway multiprocessor links performed in a manner that requires less usage of the system links.
SUMMARY OF THE INVENTION
It is therefore an object of the invention to provide a method and system for high performance data processing in which cache maintenance purges in multiway multiprocessor links are performed with reduced usage of the system links.
It is another object of the invention to provide a method and system for high performance data processing with reduced home memory address flushes to remote nodes in multiprocessor links.
It is still another object of the invention to provide a method and system for high performance data processing with reduced system latency by removing unnecessary memory purges from transmission over system links.
The above and other objects are achieved as is now described. A high performance data processing system and method are provided which improve operation of a multinode processor system by providing protocols for organized purges of cache memory in remote nodes when the cache memory is selected as a victim for purging. When an associated cache in a remote node of the system (e.g. an L2 cache) identified as a victim is purged, its cache controller sends a cache deallocate address transaction over the system bus of that remote node. An inclusivity indicator for the associated cache is also provided in the L3 cache directory on the system bus for that remote node. The inclusivity indicator for the associated cache contains bits representing the valid/invalid status of each cache line in the associated cache on the system bus in the remote node, and it changes state when the associated cache has its memory purged. The L3 cache directory in the node snoops the system bus for cache deallocate address transactions from other cache controllers on the node. The remote node notifies the home node of a cache deallocate address transaction when all cache memories of that remote node are indicated invalid, and an inclusivity indicator in the L3 cache directory of the remote node changes state in response to such a notification. In addition, the home node maintains a system memory directory consisting of inclusivity bits that track which remote nodes have lines checked out from this home node's system memory. The home node updates the inclusivity bits in its system memory directory when it receives a cache deallocate address transaction from a remote node. Performance of cache line maintenance functions over the system links of the multinode system is thus substantially reduced.
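To make the flow of the described protocol easier to follow, here is a minimal sketch in C of the remote node's L3 directory inclusivity bits and the home node's system memory directory update on receipt of a cache deallocate address transaction. The structure and function names (l3_dir_entry_t, home_dir_entry_t, l3_snoop_deallocate, home_on_deallocate) are assumptions made for illustration and do not come from the patent claims.

```c
/* Hypothetical sketch of the deallocation flow summarized above. */
#include <stdint.h>

#define REMOTE_NODES 8

/* L3 directory entry in a remote node: one valid bit per associated cache
 * (e.g. L2) on that node's system bus -- the "inclusivity indicator". */
typedef struct {
    uint64_t line_addr;
    uint8_t  l2_valid_bits;      /* bit i set => cache i still holds the line */
} l3_dir_entry_t;

/* Home node system memory directory entry: one inclusivity bit per remote
 * node that has the line checked out. */
typedef struct {
    uint64_t line_addr;
    uint8_t  remote_node_bits;
} home_dir_entry_t;

/* Home node handler: a cache deallocate address transaction from a remote
 * node clears that node's inclusivity bit, so later cache line maintenance
 * need not cross the system links to that node. */
static void home_on_deallocate(home_dir_entry_t *h, int remote_node)
{
    h->remote_node_bits &= (uint8_t)~(1u << remote_node);
}

/* Remote node L3 directory snooping its system bus: when an associated cache
 * purges a victim line it drives a cache deallocate address transaction; the
 * L3 directory clears that cache's bit and, once no cache on the node holds
 * the line, notifies the home node. */
static void l3_snoop_deallocate(l3_dir_entry_t *e, int l2_cache_id,
                                home_dir_entry_t *home, int this_node)
{
    e->l2_valid_bits &= (uint8_t)~(1u << l2_cache_id);
    if (e->l2_valid_bits == 0)
        home_on_deallocate(home, this_node);  /* single notification per node */
}
```

Under this sketch, the only traffic that crosses the system links is the single notification sent when the last valid copy on the remote node is deallocated; all earlier bookkeeping stays on the remote node's own system bus.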
The foregoing and other objects and advantages of the present invention will be apparent to those skilled in the art, in view of the following detailed description of the preferred embodiment of the present invention, taken in conjunction with the appended claims and the accompanying drawings.
The above as well as additional objectives, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 5325504 (1994-06-01), Tipley et al.
patent: 5727150 (1998-03-01), Laudon et al.
patent: 5737565 (1998-04-01), Mayfield
patent: 5893149 (1999-04-01), Hagersten et al.
patent: 6195728 (2001-02-01), Bordaz et al.
patent: 6349366 (2002-02-01), Razdan et al.
patent: 6374329 (2002-04-01), McKinney et al.
patent: 6397302 (2002-05-01), Razdan et al.
patent: 6493801 (2002-12-01), Steely, Jr. et al.
patent: 6633959 (2003-10-01), Arimilli et al.
