Contingent response apparatus and method for maintaining...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

U.S. Classes: C711S144000, C711S145000, C711S156000, C711S169000
Status: active
Patent number: 06272604

ABSTRACT:

TECHNICAL FIELD OF THE INVENTION
This invention relates to data processing systems which include two or more processor devices sharing an address bus. More particularly, the invention includes an apparatus and method for coordinating the use of cache memory blocks by the different processors.
BACKGROUND OF THE INVENTION
Data processing systems and particularly microprocessor devices may include multiple processors which share system address and data buses. Each processor in such a multiple processor system commonly includes its own cache memory. Although each processor may include separate cache memory, each processor in the system may be allowed to address any particular line in cache memory, even a line of data currently stored at a cache location in another processor. Multiple processor systems which allow the various processors to address any cache location in the system must also include some arrangement for coordinating the use of cache memory to maintain “cache coherency” in the system. As used in this disclosure, “cache coherency” means generally the control of various cache memory locations necessary to facilitate proper system operation.
Processor systems which require high address bus throughput typically “pipeline” address bus operations. In these pipelined address buses, operations from the various processors are processed or held in a series of pipeline stages. Each pipeline stage requires one address bus clock cycle, and a different address operation is processed at each different pipeline stage during each given period. The number of address bus clock cycles it takes for an address operation to be processed through the pipelined address bus may be referred to as the address tenure on the bus.
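The staged behavior described above can be sketched as a small simulation. This is an illustrative model only, not the patent's implementation; the class name, stage count, and method names are assumptions. Each call to `clock` represents one address bus clock cycle: every operation advances one stage, at most one new operation enters, and the operation leaving the last stage has completed its address tenure.

```python
from collections import deque

class AddressPipeline:
    """Illustrative model of a pipelined address bus (names are assumptions)."""

    def __init__(self, num_stages=4):
        # One slot per pipeline stage; None means the stage is empty.
        self.stages = deque([None] * num_stages, maxlen=num_stages)

    def clock(self, new_op=None):
        """Advance one address bus clock cycle.

        Every operation moves to the next stage, at most one new
        operation enters the first stage, and the operation leaving
        the final stage has finished its address tenure.
        """
        finished = self.stages[-1]
        self.stages.rotate(1)
        self.stages[0] = new_op
        return finished
```

With three stages, an operation entering on one cycle completes its address tenure after passing through each stage on successive cycles, which matches the one-operation-per-stage-per-cycle behavior described above.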
In multiple processor systems which utilize a shared address bus, only a single address operation from one of the processors may enter the address bus pipeline in any given clock cycle. An address bus arbitration arrangement selects which particular processor may drive an address operation into the first stage of the pipelined address bus in a given clock cycle. Since the address bus is shared, that is, connected to each processor, each processor which is not selected by the address bus arbitration arrangement receives or “sees” the address operation which enters the pipeline address bus from a different processor. These receiving processors are said to “snoop” the operation entering the address bus pipeline from another processor. Both the address specified in an operation entering the address bus pipeline and other information such as an operation type may be snooped by the other processors sharing the address bus. The operation snooped on a shared address bus is commonly referred to as a snoop operation or query. The address and operation type specified in a snoop operation may be referred to as a snoop address and a snoop type, respectively.
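The arbitration-and-snoop behavior can be illustrated with a short sketch. The `Processor` class and `drive_bus` function are hypothetical names introduced for illustration; the point is only that the single arbitration winner drives the bus while every other processor records the snoop address and snoop type.

```python
class Processor:
    """Illustrative processor that records snooped operations."""

    def __init__(self, pid):
        self.pid = pid
        self.snooped = []  # (snoop address, snoop type) pairs seen on the bus

    def snoop(self, address, op_type):
        # A processor not selected by arbitration "sees" the operation.
        self.snooped.append((address, op_type))

def drive_bus(processors, winner_id, address, op_type):
    """The arbitration winner drives the operation; all others snoop it."""
    for p in processors:
        if p.pid != winner_id:
            p.snoop(address, op_type)
```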
Cache coherency in a multiple processor system is maintained by the processor which “owns” the data at a particular address. Ownership is defined according to a suitable protocol under which a system is designed to operate. The protocol determines how a first processor responds to a conflicting operation from another processor. A “conflicting operation” in this sense refers to an operation specifying the same address owned by another processor. According to one protocol, when a first processor “owns” data at a particular address and snoops a conflicting operation from a second processor, the first processor transmits a retry snoop response to the second processor. This retry snoop response lets the second processor know that it may not have the data at the specified location at that time. Multiple processor systems are designed such that each processor placing an address operation on the pipelined address bus in a given clock cycle will receive a snoop response to the action within a given number of address bus clock cycles. The number of clock cycles in which a snoop response will be received is referred to as the “snoop response window.”
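A minimal sketch of this retry protocol follows, assuming a simple ownership table; the table, constants, and function names are illustrative assumptions, not the patent's mechanism. An owning processor answers a conflicting snoop with a retry response, and the issuing processor obtains ownership only if no retry arrives within its snoop response window.

```python
RETRY, CLEAN = "retry", "clean"

def snoop_response(owner_table, my_id, snoop_address):
    """A processor's response to a snooped address operation.

    If this processor owns the data at the snooped address, it
    transmits a retry snoop response so the issuing processor knows
    it may not have the data at that time.
    """
    if owner_table.get(snoop_address) == my_id:
        return RETRY
    return CLEAN

def completed_without_retry(responses):
    """True if no retry arrived within the snoop response window."""
    return RETRY not in responses
```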
For some operations, ownership of a particular cache block is declared after the operation completes the snoop response window without receiving a retry snoop response from another processor. However, ownership is not claimed during the address tenure itself. That is, ownership of the specified cache block is not claimed between the address bus clock cycle in which the address operation enters the address pipeline and the clock cycle in which the address operation finishes the pipeline.
Since a first processor does not have ownership of a cache block during the address tenure of certain types of operations that the processor may issue, the first processor does not recognize immediately if it should issue a retry snoop response to a conflicting address operation from another processor. It is only after the first processor passes its own snoop response window without receiving a retry response that the first processor knows with certainty that it has obtained ownership of the cache block and thus that it should transmit a retry snoop response to the processor issuing the younger conflicting address bus operation.
This uncertainty during the address tenure of an operation presents a problem as to the appropriate response to younger conflicting address bus operations which are snooped from the shared address bus. Simply retrying each younger conflicting address bus operation would result in unnecessarily retried operations since the processor prompting the retry response might not actually obtain ownership of the address. On the other hand, dynamically calculating the appropriate snoop response after address bus tenure could slow system throughput and would require substantial resources in terms of registers and logic elements.
SUMMARY OF THE INVENTION
It is an object of the invention to provide an apparatus and method for maintaining cache coherency in a data processing system having multiple processors which share a pipelined address bus. More particularly, it is an object of the invention to provide an apparatus and method by which a processor sharing an address bus may identify each younger conflicting address bus operation and provide an appropriate response depending upon the result of the processor's own address bus operation.
The apparatus according to the invention comprises a contingent response unit included in each processor of a multiple processor system. Each contingent response unit identifies each pending operation from the respective processor which specifies an address also specified in a younger or later operation from another processor. These matched pending operations from the respective processor are then tracked so that the response to the younger conflicting operation is contingent upon the result of the pending operation. Specifically, the contingent response unit makes the appropriate response to a younger conflicting operation only if the matched pending operation finishes the address bus pipeline successfully, that is, without receiving a retry snoop response.
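The contingent-response idea described above can be sketched as follows. This is a behavioral sketch under stated assumptions, not the claimed hardware: the class, its bookkeeping structures, and its method names are all hypothetical. The response to a younger conflicting operation is deferred until the matched pending operation finishes the address bus pipeline, and the retry is actually issued only if that pending operation finished without itself receiving a retry.

```python
class ContingentResponseUnit:
    """Illustrative sketch of deferring responses to younger conflicting ops."""

    def __init__(self):
        self.pending = {}     # our pending operations: op_id -> address
        self.contingent = []  # (pending op_id, younger conflicting op_id)

    def issue(self, op_id, address):
        # This processor places an address operation on the pipelined bus.
        self.pending[op_id] = address

    def snoop(self, younger_op_id, address):
        # Match a younger conflicting operation against our pending operations.
        for op_id, addr in self.pending.items():
            if addr == address:
                self.contingent.append((op_id, younger_op_id))

    def finish(self, op_id, retried):
        """Called when our pending operation leaves the address pipeline.

        If it finished without a retry, ownership was obtained, so the
        deferred retry response to each matched younger operation is
        issued; otherwise the contingent responses are dropped.
        """
        to_retry = [y for p, y in self.contingent if p == op_id and not retried]
        self.contingent = [(p, y) for p, y in self.contingent if p != op_id]
        self.pending.pop(op_id, None)
        return to_retry
```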
Each contingent response unit includes a pending operation unit, a snoop pipeline including a plurality of pipeline stages, and a contingent response flag control arrangement associated with the snoop pipeline. When the pending operation unit for a first processor detects or snoops a conflicting address operation from a second processor, the contingent response flag control arrangement sets a contingent response flag in a first snoop pipeline stage for the matched operation. The matched operation comprises the pending operation from the first processor which specifies an address matched by the address specified in the younger conflicting operation. In addition to setting the contingent response flag, the contingent response flag control arrangement also causes the first snoop pipeline stage to store an identifier which identifies the matched pending operation. In the event that the matched operation receives a retry response from another processor or system resource, the contingent response flag control arrangement
