Verification of cache prefetch mechanism

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

U.S. Classes: C711S204000, C712S207000, C714S032000, C714S036000
Type: Reexamination Certificate
Status: active
Patent number: 06412046

ABSTRACT:

TECHNICAL FIELD
The technical field is computer memory systems using cache prefetch mechanisms.
BACKGROUND OF THE INVENTION
An agent, such as a processor or a host I/O bridge, in a computer system may need to access or read one or more cache lines from the computer system's main memory. These lines are normally written to a cache, and the corresponding cache lines are accessed by the agent. Fetching lines from the main memory, writing them to the cache, and then accessing the cache lines imposes a delay known as memory latency. Modern computer systems use prefetch mechanisms to hide this memory latency. A cache controller may prefetch one or more cache lines either speculatively or explicitly based on a hint or other alerting mechanism. When a cache line is requested without the requesting agent taking ownership of the cache line (i.e., a snapshot), prefetching may only occur based on explicit hints. The cache line may then be discarded after use to ensure that a stale copy of the cache line is not retained in the cache.
As with any mechanism in a computer system, a designer wants to ensure the prefetch mechanism is functioning as designed. Verifying prefetch involves making sure that the cache does not overprefetch or underprefetch. Overprefetching occurs when the cache controller requests one or more cache lines beyond what is desired. Such overprefetching may occur as a result of a design defect. For example, a cache line size may comprise 64 bytes. In this example, the prefetch mechanism is intended to prefetch four cache lines ahead (i.e., the prefetch depth is four), but the prefetch mechanism is designed not to prefetch memory lines beyond a four kilobyte block boundary of the main memory. If an agent starts consuming cache lines at address 0, the cache controller fetches address 0 and prefetches the next four cache lines at addresses 0x40, 0x80, 0xC0, and 0x100. When the agent then starts consuming from cache line 0x40, the cache controller will prefetch the cache line 0x140, assuming the cache lines 0x80, 0xC0, and 0x100 are present in the cache from the previous prefetch. If the cache controller prefetches the next cache line 0x180, or any other cache line beyond 0x140, while the agent is consuming the cache line 0x40, the cache controller is said to have overprefetched. In this example, the cache controller has violated the prefetch depth by prefetching beyond the desired prefetch depth (i.e., prefetched beyond the desired depth of four cache lines). Overprefetching can also occur if a block boundary is exceeded. Continuing with the example, if the agent started reading from the cache line 0xF00 and the cache controller prefetched the line 0x1000, which is four lines ahead of 0xF00, the boundary restriction would be violated, since no prefetching is desired if such prefetching crosses the current four kilobyte block boundary.
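To make the arithmetic of this example concrete, the short C sketch below (illustrative only and not part of the patent; the constants mirror the example's 64-byte lines, prefetch depth of four, and four kilobyte block) computes the highest line address the controller may legally prefetch while the agent consumes a given line. Any request beyond the value it returns would count as an overprefetch under the example's rules.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SIZE      0x40u     /* 64-byte cache lines, as in the example */
    #define PREFETCH_DEPTH 4u        /* prefetch up to four lines ahead */
    #define BLOCK_SIZE     0x1000u   /* no prefetch across a 4 KB block boundary */

    /* Highest line address the controller may prefetch while the agent is
       consuming the line at consume_addr. */
    static uint64_t max_legal_prefetch(uint64_t consume_addr)
    {
        uint64_t depth_limit = consume_addr + PREFETCH_DEPTH * LINE_SIZE;
        uint64_t block_limit = (consume_addr / BLOCK_SIZE + 1) * BLOCK_SIZE - LINE_SIZE;
        return depth_limit < block_limit ? depth_limit : block_limit;
    }

    int main(void)
    {
        /* Consuming 0x40: prefetch may reach 0x140 but not 0x180. */
        printf("0x%llx\n", (unsigned long long)max_legal_prefetch(0x40));
        /* Consuming 0xF00: the 4 KB boundary caps prefetch at 0xFC0,
           so requesting 0x1000 would be an overprefetch. */
        printf("0x%llx\n", (unsigned long long)max_legal_prefetch(0xF00));
        return 0;
    }
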
Overprefetching has two disadvantages: performance and, if data is requested as a snapshot (or will be converted to a snapshot due to a snoop), correctness. Performance is adversely affected because cache lines may be fetched that are not immediately needed and may never be used in the present processing stream, causing a loss in bandwidth and a reduction of available space in the cache. In the case of snapshots, the prefetches are not speculative. Thus, if the explicit hints are violated and prefetching occurs beyond the prefetch depth, stale data may be retained in the cache. This stale data may be given to some other requesting agent and may cause data corruption.
Underprefetching occurs when the cache controller prefetches fewer lines than it could. For example, if an agent is reading from cache line 0x0 and the cache controller prefetches only lines 0x40, 0x80, and 0xC0, without prefetching line 0x100 and without any internal or external cause that prevented prefetching line 0x100, then an underprefetch condition exists. A cache controller may not prefetch a line due to a number of circumstances, including the cache being full or almost full, or some other flow-control condition. The cache controller, however, may resume prefetching when these conditions clear. Prefetching may also be validly terminated if another request stream is provided along the same channel. The potential consequence of underprefetching is reduced performance: a cache line may not be present in the cache when a requesting agent needs it because the cache controller has underprefetched.
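Underprefetch can be expressed the same way. The sketch below (again hypothetical; it assumes it is compiled in the same file as the sketch above so it can reuse LINE_SIZE and max_legal_prefetch, and line_requested is an assumed caller-supplied predicate) flags the condition only when a line inside the legal window was never requested and no flow-control condition excuses the omission.

    #include <stdbool.h>
    #include <stdint.h>

    /* Returns true when a line within the legal prefetch window was never
       requested even though no flow-control condition was active. */
    static bool underprefetch(uint64_t consume_addr,
                              bool flow_control_active,
                              bool (*line_requested)(uint64_t line_addr))
    {
        if (flow_control_active)
            return false;                  /* missing lines are excused */
        for (uint64_t a = consume_addr + LINE_SIZE;
             a <= max_legal_prefetch(consume_addr);
             a += LINE_SIZE)
            if (!line_requested(a))
                return true;               /* a line within the window is missing */
        return false;
    }
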
Existing schemes to ensure that prefetch is working as desired rely on human observation. A prefetch verification mechanism may involve viewing a simulation trace and determining whether the cache is prefetching adequately. Other approaches involve looking at cache/system performance from a simulation; if the prefetch performance is not optimum, then prefetch may be checked manually as a potential cause. The disadvantage of this approach is that it is not automated and does not ensure proper prefetching under all operating (traffic) conditions. In particular, if a design is changed after the prefetch mechanism is originally verified to be operating properly, a subsequent prefetch verification may be impossible. In cases where an agent obtains a cache line without ownership (i.e., a snapshot), the designer may rely on normal data consistency checks to detect any prefetch problems. However, in many cases, stale data (cache lines) may not remain in the cache long enough for another request stream (i.e., a request from another agent) to obtain the stale data. In this case, the improper prefetch would go undetected. Other approaches for verifying proper prefetch include simulating state machines operating in parallel using a programming language such as C and then comparing the results on a cycle-by-cycle basis. The disadvantage is that such a metachecker is difficult to write and has to be constantly modified along with the overall design. In this case, the metachecker may fail to detect improper prefetch under some operating conditions.
SUMMARY OF THE INVENTION
A method and apparatus are provided that automatically and easily verify a cache line prefetch mechanism. The verification method is exact in the sense that the method includes a strict definition of which cache lines should be prefetched and which cache lines should not be prefetched. The method also emphasizes corner-case operating conditions. For example, by exercising boundary conditions, the method stresses situations in which a microprocessor or host bridge chip is likely to produce errors. The method can verify prefetch without having to access or view any internal signals or buses inside the chip. Thus, the method can be adopted in any system-level verification methodology.
The method can be used in a system-level test setup as well as a chip-level test setup without requiring knowledge of the internal state of the chip. In this case, checking is done at the chip boundary. The method is automated and performs strict checks on overprefetch, underprefetch, and the relative order in which fetches and prefetches must occur. The method may be executed in a simulation, in an emulator, or in actual hardware.
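One way such a chip-boundary check might look in practice is sketched below. This is a hypothetical illustration rather than the claimed implementation: an observer that sees only the read requests leaving the chip and the line currently being consumed by the agent, and that flags any request outside the legal window computed by the max_legal_prefetch helper from the Background example. An underprefetch check like the one sketched earlier, plus a check on the relative order of the observed requests, would complete such a boundary-level checker.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Observer state kept entirely outside the chip under test. */
    struct prefetch_observer {
        uint64_t consume_addr;   /* most recent line handed to the agent */
    };

    /* Called whenever the agent consumes a cache line. */
    static void on_consume(struct prefetch_observer *obs, uint64_t addr)
    {
        obs->consume_addr = addr;
    }

    /* Called for every read request seen at the chip boundary; returns false
       (and logs) on an overprefetch. */
    static bool on_fetch(const struct prefetch_observer *obs, uint64_t fetch_addr)
    {
        if (fetch_addr > max_legal_prefetch(obs->consume_addr)) {
            fprintf(stderr, "overprefetch: 0x%llx while consuming 0x%llx\n",
                    (unsigned long long)fetch_addr,
                    (unsigned long long)obs->consume_addr);
            return false;
        }
        return true;
    }
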


