Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
1999-12-30
2003-05-06
Hudspeth, David (Department: 2651)
Electrical computers and digital processing systems: memory
Storage accessing and control
Hierarchical memories
C711S163000, C711S207000
Reexamination Certificate
active
06560675
ABSTRACT:
FIELD OF THE INVENTION
The present invention generally relates to cache memory devices. More particularly, the present invention relates to routing cache information from a cache memory.
BACKGROUND OF THE INVENTION
Traditionally, increases in microprocessor speeds have outpaced the speeds of other component modules that communicate with the microprocessor over a memory bus. For example, main memory storage modules, like RAM, are often significantly slower than the microprocessor. As a result, the microprocessor often must wait for the slower main memory in order to execute instructions. This fails to take advantage of the developments in microprocessor technology and reduces overall efficiency by causing bottlenecks on the memory bus.
Cache memory devices have been created to reduce the inefficiencies associated with main memory modules. Cache memory offers faster response times than main memory. In addition, cache often is located on the same chip as the microprocessor, and thus instructions and data requested by the microprocessor do not have to travel over the slower memory bus. When the microprocessor wishes to execute an instruction or retrieve data, it first checks the cache to determine whether the required instruction or data is available in cache. Cache is designed to store instructions and data that statistically are more likely to be needed by the microprocessor. When the microprocessor requests an instruction or data that resides in the cache, a cache “hit” occurs and the cache quickly provides the information to the microprocessor. When the microprocessor requests information that is not in the cache, a cache “miss” occurs and the microprocessor must retrieve the information from the slower main memory via the main memory bus. Following a cache “miss,” the non-matching data located in the cache is replaced with the most recently requested information from the microprocessor. Often, the non-matching data must be removed from the cache and sent back to main memory via the main memory bus. This process is commonly referred to as a “Replace.”
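The hit/miss/Replace flow described above can be sketched as a small simulation. This is an illustrative model only, not the patented circuit; the class and names (e.g., `DirectMappedCache`) are assumptions introduced here for clarity.

```python
# Hypothetical sketch of the cache hit/miss/Replace flow.  A real cache is
# hardware; this model only mirrors the behavior described in the text.

class DirectMappedCache:
    def __init__(self, main_memory, num_lines=4):
        self.main_memory = main_memory   # backing store, modeled as {addr: value}
        self.lines = {}                  # index -> (tag, value)
        self.num_lines = num_lines

    def read(self, addr):
        index, tag = addr % self.num_lines, addr // self.num_lines
        line = self.lines.get(index)
        if line and line[0] == tag:
            return line[1], "hit"        # cache hit: served quickly
        # Cache miss: the non-matching line is evicted.  Its old contents
        # travel back to main memory over the bus -- the "Replace".
        if line:
            old_tag, old_value = line
            self.main_memory[old_tag * self.num_lines + index] = old_value
        value = self.main_memory[addr]   # fetch from the slower main memory
        self.lines[index] = (tag, value)
        return value, "miss"
```

Note that addresses mapping to the same line (here, `addr % num_lines`) evict one another, which is what forces the Replace traffic onto the memory bus.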
In the past, cache was designed to be synchronous, or at least in lock step, with the main memory bus. Recently, however, cache devices have been designed to operate on a core clock domain, asynchronous to the bus clock domain on which the main memory bus operates. As a result, during the Replace, the non-matching data must cross an asynchronous clock boundary as it leaves the cache and enters the main memory bus.
The asynchronous boundary causes a problem when another agent on the main memory bus (e.g., another processor) requests the non-matching data as it is in transit from the cache to the main memory bus. This request from another agent is commonly called a “Return.” A Return is asynchronous to a Replace because the Replace is delivered to the main memory bus domain from the core clock domain and the Return is requested by an agent on the memory bus domain. Because a Replace and a Return are conducted independent of one another, it is possible for a Return to request information that is in transit back to main memory as the result of a Replace operation. Currently, cache memory does not consider information in transit to the memory bus to be within its domain. As such, for information in transit, the cache will respond to a Return request in the negative, thus requiring the agent to query main memory. However, because of various buffers and protocols, the agent may retrieve the requested information from main memory before the Replace information has reached its main memory destination. As a result, the agent may receive corrupted (i.e., not yet updated) data from main memory.
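The race described above can be shown as a timeline sketch: the cache has already disowned the line, the Replace data is still buffered in transit across the clock boundary, and a Return falls through to main memory before the Replace lands. All names here are illustrative assumptions, not the patent's implementation.

```python
# Illustrative timeline of the Replace/Return race (hypothetical names).

main_memory = {0x40: "stale"}
cache_contents = {}                      # the line was already evicted
replace_in_transit = (0x40, "updated")   # Replace data crossing the boundary

def handle_return(addr):
    # The cache no longer considers the in-transit data within its domain,
    # so it answers the Return in the negative ...
    if addr in cache_contents:
        return cache_contents[addr]
    # ... and the agent queries main memory, which the in-transit Replace
    # has not yet reached.
    return main_memory[addr]

value = handle_return(0x40)      # the agent receives the not-yet-updated data
addr, data = replace_in_transit
main_memory[addr] = data         # the Replace completes too late
```

The agent ends up holding `"stale"` even though main memory is subsequently updated, which is exactly the corruption the invention aims to prevent.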
When a Replace and a Return are conducted on synchronous clocks (i.e., when cache operations are synchronous with memory bus operations), this conflict may be resolved through timing techniques. For example, a Return request may be required to wait a certain number of clock cycles before retrieving information from main memory in order to ensure that the data will be updated by a Replace. However, when a Replace and a Return are conducted on asynchronous clocks (i.e., when cache operations are asynchronous with memory bus operations), it is impossible to resolve to a particular clock when the Replace information is written to the memory bus relative to an incoming Return requested by another agent.
Therefore, it would be advantageous to be able to compare a Return request with Replace information as the Replace information is in transit across the asynchronous boundary from the cache to the main memory bus.
SUMMARY OF THE INVENTION
The present invention provides a method and computer system that compares a portion of a signal and information transferred from a cache memory, while the information is in transit from the cache memory. The information may be routed differently depending on the outcome of the compare. Specifically, the information may be delivered to a memory bus when it matches the portion of the signal and when the signal is a read command. If the information does not match the portion of the signal, it may be transferred to a main memory via the memory bus. The information may be compared to the portion of the signal for a first time interval, and the portion of the signal may be compared to the information for a second time interval. The information is transferred from the cache memory on a first clock signal, while the signal is provided by the memory bus on a second clock signal asynchronous with the first clock signal.
The signal may be a request provided by an agent coupled to the memory bus. The information may be stored in a first buffer element on a first clock signal, while the request signal may be stored in a second buffer element on a second clock signal that is asynchronous to the first clock signal. In this instance, a comparator, coupled to the buffer elements, compares the information in the first buffer element to a portion of the request signal in the second buffer element for a first time interval. The first time interval may include multiple clock pulses up to a clock pulse in which the second buffer element receives the request signal. The comparator then compares the portion of the request signal to the information for a second time interval. The second time interval may include one clock pulse after the first buffer element receives the information. Both compares may be conducted on the first clock signal.
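The routing decision described in the summary can be sketched as a single compare-and-route function. This is a minimal software model under assumed names; the patent describes it in hardware terms, with the information and the request held in two buffer elements and compared by a comparator clocked on the first (core) clock signal over the two time intervals.

```python
# Minimal sketch of comparing in-transit Replace information against a
# buffered Return request and routing the data accordingly.  Names and
# signature are assumptions introduced for illustration.

def route_replace(replace_addr, replace_data, return_request):
    """Decide where in-transit Replace data should go.

    return_request is a (command, addr) tuple from an agent on the memory
    bus, or None if no request is pending.  On an address match with a
    read command, the data is delivered to the memory bus to satisfy the
    Return directly; otherwise it continues on to main memory.
    """
    if return_request is not None:
        command, addr = return_request
        if command == "read" and addr == replace_addr:
            return ("to_bus", replace_data)      # matching read: serve the agent
    return ("to_main_memory", replace_data)      # normal Replace completion
```

For example, `route_replace(0x40, data, ("read", 0x40))` delivers the data to the bus, while a non-matching or absent request lets the Replace proceed to main memory unchanged.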
Other features of the present invention are disclosed below.
REFERENCES:
patent: 4805098 (1989-02-01), Mills, Jr. et al.
patent: 5224214 (1993-06-01), Rosich
patent: 5295253 (1994-03-01), Ducousso et al.
patent: 5339399 (1994-08-01), Lee et al.
patent: 5469558 (1995-11-01), Lieberman et al.
patent: 5485592 (1996-01-01), Lau
patent: 5689680 (1997-11-01), Whittaker et al.
patent: 5857082 (1999-01-01), Murdoch et al.
patent: 5860112 (1999-01-01), Langendorf et al.
patent: 5900012 (1999-05-01), Tran
patent: 6032229 (2000-02-01), Hotta et al.
patent: 6205514 (2001-03-01), Pawlowski
patent: 6223258 (2001-04-01), Palanca et al.
Aho Eric D.
Bolyn Philip C.
Rode Lise A.
Starr Mark T.
Tzeng Fred F.
Unisys Corporation
Woodcock & Washburn