Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
2002-01-07
2003-10-21
Sparks, Donald (Department: 2187)
Electrical computers and digital processing systems: memory
Storage accessing and control
Hierarchical memories
C707S793000, C709S241000, C709S213000, C711S151000, C711S152000
Reexamination Certificate
active
06636949
ABSTRACT:
FIELD OF INVENTION
The present invention relates generally to the design of cache memories in computer central processor units (CPU's), and particularly to the detection and resolution of coherence protocol races within a chip multiprocessor node (i.e., a chip with multiple CPU's).
BACKGROUND OF THE INVENTION
In chip multiprocessor systems, it is advantageous to split the coherence protocol into two cooperating protocols implemented by different hardware modules. One protocol is responsible for cache coherence management within the chip, and is typically implemented by the second-level cache controller (“cache controller”). The other protocol is responsible for cache coherence management across chip multiprocessor nodes (“nodes”), and is typically implemented by separate cache coherence protocol engines (“protocol engines”). The cache controller and the protocol engine need to communicate and synchronize memory transactions involving multiple nodes. In particular, there must be a single serialization point within each node that resolves races within the node. Specifically, the serialization point must address situations in which the protocol engine and the cache controller overlap in their respective processing of memory transactions concerning the same memory line of information.
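For illustration only, the following is a minimal Python sketch of one way the division of responsibility described above could be modeled; the class and attribute names (Node, CacheController, ProtocolEngine, in_flight) are assumptions made for this sketch and do not appear in the patent.

# Illustrative sketch only: names and structure are assumptions, not the patent's terms.
class CacheController:
    """Implements the intra-chip protocol for the shared second-level cache."""
    def handle_cpu_request(self, line, request):
        ...  # keep the chip's CPUs and their first-level caches coherent

class ProtocolEngine:
    """Implements the inter-node protocol for lines shared across chips."""
    def handle_node_message(self, line, message):
        ...  # invalidate, forward, or reply on behalf of this node

class Node:
    """One chip-multiprocessor node; per-line transaction state serves as
    the single serialization point that orders overlapping transactions."""
    def __init__(self):
        self.cache_controller = CacheController()
        self.protocol_engine = ProtocolEngine()
        self.in_flight = {}  # memory line -> state of the transaction touching it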
SUMMARY OF THE INVENTION
This invention relates to the design of a cache coherence protocol for a scalable shared memory system composed of chip multiprocessor nodes; that is, each processor chip contains multiple CPUs, each with its own private instruction and data caches (first-level caches), and all CPUs share a single second-level cache. Cache coherence is maintained among all caches within a chip, as well as among all caches across the nodes, by a protocol engine and a cache controller that are included in each node of the system. The protocol engine and the cache controller often interact to complete each of these tasks. If messages exchanged between the protocol engine and the cache controller concerning a particular cache line overlap, the protocol engine requests additional processing instructions from the cache controller and stalls action on the message received from the cache controller until after it receives those additional processing instructions. The protocol engine is further configured to stall action on messages concerning the same cache line received from other nodes until after receiving the processing instructions from the cache controller.
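As a rough illustration of the stall behavior summarized above, the sketch below models a protocol engine that queues messages for a cache line while it waits for additional processing instructions from the cache controller; all names and the simplified message model are assumptions of this sketch, not the patent's interfaces.

from collections import defaultdict, deque

class CacheController:
    def request_instructions(self, line):
        # Placeholder: in hardware this would prepare and later send
        # additional processing instructions for the given memory line.
        print(f"cache controller asked for instructions on line {hex(line)}")

class ProtocolEngine:
    def __init__(self, cache_controller):
        self.cache_controller = cache_controller
        self.outstanding = set()            # lines with a transaction in flight
        self.awaiting_instructions = set()  # lines stalled pending instructions
        self.stalled = defaultdict(deque)   # line -> queued messages from other nodes

    def on_controller_message(self, line, message):
        # A message from the on-chip cache controller arrives.
        if line in self.outstanding:
            # Overlap detected: request additional processing instructions
            # and stall action on this line until they arrive.
            self.awaiting_instructions.add(line)
            self.cache_controller.request_instructions(line)
        else:
            self.outstanding.add(line)
            self.process(line, message)

    def on_remote_message(self, line, message):
        # A message from another node arrives.
        if line in self.awaiting_instructions:
            self.stalled[line].append(message)  # stall messages for the same line
        else:
            self.process(line, message)

    def on_instructions(self, line, instructions):
        # The cache controller's additional instructions arrive; resume,
        # then drain any messages that were stalled for this line.
        self.awaiting_instructions.discard(line)
        self.process(line, instructions)
        while self.stalled[line]:
            self.process(line, self.stalled[line].popleft())

    def process(self, line, message):
        print(f"processing {message!r} for line {hex(line)}")

A brief usage example under the same assumptions:

engine = ProtocolEngine(CacheController())
engine.on_controller_message(0x40, "local read miss")     # starts a transaction
engine.on_controller_message(0x40, "writeback")           # overlaps, so it stalls
engine.on_remote_message(0x40, "remote invalidate")       # stalled for the same line
engine.on_instructions(0x40, "resume with forwarded data")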
REFERENCES:
patent: 5892970 (1999-04-01), Hagersten
patent: 6055605 (2000-04-01), Sharma et al.
patent: 6457100 (2002-09-01), Ignatowski et al.
patent: 2001/0052054 (2001-12-01), Franke et al.
patent: 2002/0178210 (2002-11-01), Khare et al.
patent: 2003/0079085 (2003-04-01), Ang
Barroso Luiz A.
Gharachorloo Kourosh
Nowatzyk Andreas
Ravishankar Mosur K.
Stets Robert J.
Chace C. P.
Hewlett-Packard Development Company, L.P.
Sparks Donald