Cache system capable of keeping cache-coherency among...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

U.S. classification: 711/140, 711/141, 711/150

Reexamination Certificate

active

06173370

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates to a cache system and, in particular, to a cache system formed on a plurality of buses connected via a bus bridge and hierarchically arranged.
In order to expand a shared-memory multiprocessor system, it has been proposed to connect a plurality of system buses via a bus bridge. A number of such multiprocessor systems with improved expandability are known in the art.
For example, Japanese Unexamined Patent Publication (JP-A) No. 297642/1996 discloses a shared-memory multiprocessor system in which two system buses are connected via a kind of bus bridge called a directory. The publication also discloses a technique to guarantee cache coherency over store-in-caches.
In the above-mentioned system, the two system buses connected via the directory have bus cycles synchronized with each other, offset by half a cycle. Upon detection of competing requests from these buses to the same request address, the later-issued request is canceled while the preceding request is preferentially transferred to the system bus. Thus, cache coherency is maintained when competing write requests occur between the system buses.
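The cancel-the-later arbitration rule just described might be pictured as follows. This is only a minimal sketch, not the publication's implementation; the names BusRequest and Directory, and the use of fractional issue times to model the half-cycle offset between the two synchronized buses, are assumptions made for illustration.

```python
# A minimal sketch (assumed names: BusRequest, Directory) of the cancel-the-later
# arbitration rule described above.  The half-cycle offset between the two
# synchronized system buses is modeled simply as fractional issue times.
from dataclasses import dataclass
from typing import Optional


@dataclass
class BusRequest:
    bus_id: int        # which system bus issued the request
    address: int       # target block address
    issue_time: float  # half-cycle offset means one request always precedes the other


class Directory:
    """Arbitrates competing write requests arriving from the two system buses."""

    def arbitrate(self, a: BusRequest, b: BusRequest) -> Optional[BusRequest]:
        """Return the request that is forwarded; the other is canceled.

        Requests to different addresses do not compete, so there is nothing
        to arbitrate and None is returned.
        """
        if a.address != b.address:
            return None
        # The earlier request wins; the later one is canceled and must retry.
        return a if a.issue_time <= b.issue_time else b


if __name__ == "__main__":
    directory = Directory()
    winner = directory.arbitrate(
        BusRequest(bus_id=0, address=0x1000, issue_time=0.0),
        BusRequest(bus_id=1, address=0x1000, issue_time=0.5),
    )
    print(f"forwarded request from bus {winner.bus_id}")  # bus 0
```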
For read requests, data are essentially acquired from the main memory, except when “dirty” data that do not coincide with the data in the main memory are acquired from another cache. Therefore, even if “clean” data coincident with the data in the main memory are present on the same bus, the data must be acquired via the bus bridge from the main memory connected to the other system bus.
On the other hand, Japanese Unexamined Patent Publication (JP-A) No. 110844/1994 discloses a distributed shared memory multiprocessor system in which an internal bus connected to a CPU cache memory and a main memory is connected to a shared bus via a bus bridge called a sharing control section.
The sharing control section comprises a cache state tag memory that records the state of the cache memory connected thereto. Upon executing a write operation, the sharing control section refers to the content of the tag memory. If a data block is in a shared state, an invalidate instruction is delivered through the shared bus to the other sharing control sections. Thus, cache coherency is maintained.
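The write-invalidate step just described can be sketched as follows. This is a minimal illustration under assumed names (SharingControlSection, SharedBus, BlockState); it is not the publication's implementation and omits the data transfer itself, showing only the tag lookup and the invalidate broadcast.

```python
# A minimal sketch (assumed names: SharingControlSection, SharedBus, BlockState)
# of the write-invalidate step described above: before completing a write, the
# sharing control section consults its cache state tag memory and, if the block
# is Shared, broadcasts an invalidate over the shared bus.
from enum import Enum, auto


class BlockState(Enum):
    INVALID = auto()
    SHARED = auto()
    MODIFIED = auto()


class SharedBus:
    """Broadcast channel connecting the sharing control sections."""

    def __init__(self):
        self.sections = []

    def broadcast_invalidate(self, address: int, source) -> None:
        for section in self.sections:
            if section is not source:
                section.receive_invalidate(address)


class SharingControlSection:
    def __init__(self, shared_bus: SharedBus):
        self.shared_bus = shared_bus
        self.tag_memory = {}                # address -> BlockState
        shared_bus.sections.append(self)    # join the shared bus

    def write(self, address: int) -> None:
        state = self.tag_memory.get(address, BlockState.INVALID)
        if state is BlockState.SHARED:
            # Other caches may hold the block: invalidate their copies first.
            self.shared_bus.broadcast_invalidate(address, source=self)
        self.tag_memory[address] = BlockState.MODIFIED

    def receive_invalidate(self, address: int) -> None:
        self.tag_memory[address] = BlockState.INVALID


if __name__ == "__main__":
    bus = SharedBus()
    a, b = SharingControlSection(bus), SharingControlSection(bus)
    a.tag_memory[0x2000] = b.tag_memory[0x2000] = BlockState.SHARED
    a.write(0x2000)                 # a's write invalidates b's copy
    print(b.tag_memory[0x2000])     # BlockState.INVALID
```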
The above-mentioned conventional cache systems are disadvantageous in the following respects.
As a first disadvantage, write conflicts cannot be avoided if the multiprocessor system includes three or more system buses connected to one another.
Specifically, in the shared-memory multiprocessor system disclosed in Japanese Unexamined Patent Publication (JP-A) No. 297642/1996, cache coherency cannot be guaranteed when store operations are performed simultaneously in caches connected to the system buses at opposite ends of the three interconnected system buses.
In the distributed shared memory multiprocessor system disclosed in Japanese Unexamined Patent Publication (JP-A) No. 110844/1994, the cache state tag memory in the sharing control section is consulted upon executing the write operation, and the invalidate instruction is sent to the shared bus depending on the content of that tag memory. However, nothing is disclosed about how to maintain cache coherency when competing invalidate instructions occur on the shared bus.
As a second disadvantage, even clean data coincident with the data in the main memory must be acquired from the main memory. This adversely affects performance.
If a cache holding the clean data is present at a location nearer to the data requesting source than the main memory, the above-mentioned disadvantage can be removed by acquiring the data from that cache. In this event, however, another problem arises: either the bus bridge must hold an exact copy of the cache tags, or a read request delivered on one system bus must not be forwarded to the other system bus until the result of the cache lookup is available.
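The first of the two alternatives mentioned above, keeping a copy of the cache tags in the bus bridge so that an incoming read can be routed without waiting for the caches' own lookup, might be sketched as follows. The names (BridgeTagCopy, route_read) and the single-set representation of the tag copy are assumptions for illustration only, not the patent's mechanism.

```python
# A minimal sketch (assumed names: BridgeTagCopy, route_read) of the first
# alternative mentioned above: the bus bridge keeps a copy of the cache tags on
# its local bus, so an incoming read can be routed at once without waiting for
# the caches' own lookup to complete.
class BridgeTagCopy:
    def __init__(self):
        # Addresses currently cached on the local bus; a real tag copy would
        # also record the block state, which is omitted here.
        self.present = set()

    def update(self, address: int, cached: bool) -> None:
        # Kept in lock-step with the caches' own tag updates.
        if cached:
            self.present.add(address)
        else:
            self.present.discard(address)

    def can_serve_locally(self, address: int) -> bool:
        return address in self.present


def route_read(tags: BridgeTagCopy, address: int) -> str:
    if tags.can_serve_locally(address):
        return "reply from a cache on this bus"   # avoids crossing the bridge
    return "forward to the other system bus"      # data comes from main memory
```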
SUMMARY OF THE INVENTION
It is an object of this invention to provide a cache system comprising three or more system buses connected via bus bridges and store-in-caches connected to the system buses, which is capable of avoiding store confliction between buses hierarchically remote from each other so as to maintain cache coherency in the total system, and which is capable of quickly loading the caches with data coincident with the data in a main memory.
A cache system according to this invention is as follows:
(1) A cache system comprising a single global bus, a plurality of central processing units connected to the global bus, and a main memory unit connected to the global bus, each of the central processing units comprising a local bus, a plurality of store-in-caches, and a bus bridge connected to the local bus and the global bus for controlling, by monitoring cache tags representative of states of the store-in-caches of each central processing unit, a request delivered from one of the store-in-caches of each central processing unit to the local bus of each central processing unit to avoid store-confliction due to the request and a different request delivered to the global bus from one of the store-in-caches of a different central processing unit of the central processing units through the bus bridge of the different central processing unit and to thereby keep cache-coherency among the store-in-caches of the central processing units.
(2) A cache system as mentioned in Paragraph (1), wherein: the bus bridge of each of the central processing units comprises a cache tag memory for storing a copy of the cache tags representative of the states of the store-in-caches; the bus bridge of each of the central processing units controlling, with reference to the copy of the cache tags of the cache tag memory thereof, the request so as to avoid the store-confliction due to the request and the different request.
(3) A cache system as mentioned in Paragraph (2), wherein the bus bridge of each of the central processing units further comprises: a request copy buffer for temporarily holding, as a held request, the request received from the local bus; a global bus command buffer for holding a command which is delivered to the global bus; a local bus command buffer for holding a different command which is delivered to the local bus; and a bus bridge control circuit responsive to the held request and the different request for controlling the global bus command buffer and the local bus command buffer with reference to the cache tag memory to make the global bus command buffer and the local bus command buffer deliver the command and the different command to the global bus and the local bus as optimum commands so as to avoid the store-confliction due to the request and the different request.
(4) A cache system as mentioned in Paragraph (2), wherein: each store-in-cache of each of the central processing units notifies a result of lookup of the cache tag representative of the state of each store-in-cache to other store-in-caches of each of the central processing units and to the cache tag memory of the bus bridge of each of the central processing units at a particular timing of a succeeding bus cycle which succeeds a bus cycle at which a block read request is delivered to the local bus as the request.
(5) A cache system as mentioned in Paragraph (4), wherein: the bus bridge of each of the central processing units determines, with reference to the cache tag memory thereof, one of the store-in-caches having reply-data for the block read request as a replier which carries out a reply due to the reply-data for the block read request.
(6) A cache system as mentioned in Paragraph (5), wherein: the bus bridge of each of the central processing units controls the block read request to deliver the block read request through the global bus to the main memory unit when no store-in-cache having the reply-data for the block read request is present.
(7) A cache system as mentioned in Paragraph (6), wherein: the bus bridge of each of the central
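Taken together, Paragraphs (2) through (6) describe a bus bridge that snoops the local bus, buffers the captured request, and consults its copy of the cache tags to decide whether a local store-in-cache should reply to a block read or whether the read must go out over the global bus to the main memory unit. The following is only a rough sketch of that decision under assumed names (BusBridge, resolve_block_read) and a deliberately simplified tag representation; it is not the claimed circuit.

```python
# A rough sketch (assumed names: BusBridge, resolve_block_read) of the bridge
# structure recited in Paragraphs (2)-(6): a cache tag memory copying the states
# of the local store-in caches, a request copy buffer holding the request taken
# from the local bus, and command buffers toward the global and local buses.
from collections import deque


class BusBridge:
    def __init__(self):
        self.cache_tag_memory = {}            # address -> set of cache ids holding the block
        self.request_copy_buffer = deque()    # requests captured from the local bus
        self.global_bus_command_buffer = deque()
        self.local_bus_command_buffer = deque()

    def on_local_request(self, request: dict) -> None:
        # Hold a copy of the request until the control logic has resolved it.
        self.request_copy_buffer.append(request)

    def resolve_block_read(self) -> str:
        request = self.request_copy_buffer.popleft()
        holders = self.cache_tag_memory.get(request["address"], set())
        if holders:
            # A local store-in cache has the data: name it as the replier.
            replier = min(holders)
            self.local_bus_command_buffer.append(
                {"op": "reply", "address": request["address"], "replier": replier})
            return f"cache {replier} replies on the local bus"
        # No cache holds the block: send the read over the global bus to main memory.
        self.global_bus_command_buffer.append(
            {"op": "block_read", "address": request["address"]})
        return "block read forwarded to the main memory unit via the global bus"


if __name__ == "__main__":
    bridge = BusBridge()
    bridge.cache_tag_memory[0x3000] = {2}                             # cache 2 holds the block
    bridge.on_local_request({"op": "block_read", "address": 0x3000})
    print(bridge.resolve_block_read())                                # cache 2 replies
```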
