Simulation environment cache model apparatus and method...

Data processing: structural design – modeling – simulation – and emulation – Simulating electronic device or electrical system – Computer or peripheral device

Reexamination Certificate


Details

C703S022000, C711S113000, C711S118000

Reexamination Certificate

active

06542861

ABSTRACT:

TECHNICAL FIELD
The present invention relates in general to data processing systems, and in particular, to cache event triggering in simulations of processor systems.
BACKGROUND INFORMATION
It is commonplace, in modern data processing systems, to include high speed memories, called caches, to improve the performance of memory transactions. Typically, the central processing unit (CPU) includes an amount of cache memory which can be accessed by the processor core very quickly. This cache is commonly referred to as the level one (L1) cache. Additional levels of cache, which may be either internal or external to the CPU, may be included between the L1 cache and main memory. The next level of cache is typically referred to as the level two (L2) cache, and additional levels may be labeled in similar fashion.
CPU memory transactions cause traffic between the L1 and L2 caches, or more generally between lower level (LL) and higher level (LH) caches in a data processing system that includes multiple levels of cache memory. For example, a “castout” occurs when there is a cache miss in the L1 cache and the cache line to be replaced, to make room for the line fetched from the L2 cache or main memory, has been modified. The modified cache line is then written, or cast out, to the L2 cache. Similarly, a “push” operation occurs when a snoop hit is detected in which the snooped location corresponds to a modified line in the L1 cache. The line is then “pushed” to main memory or to the requesting bus device, which may be a second CPU in a multiprocessor system.
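Purely as an illustrative sketch, and not taken from the patent, the castout and push events described above might be modeled in a simulation environment along the following lines; the names L1Model, castout_to_l2, and push_to_bus, and the direct-mapped organization, are assumptions made for brevity.

#include <cstdint>
#include <vector>

// Hypothetical direct-mapped L1 model illustrating castout and push events.
struct L1Line {
    std::uint64_t tag = 0;
    bool valid = false;
    bool modified = false;   // line holds data not yet written back
};

class L1Model {
public:
    explicit L1Model(std::size_t num_lines) : lines_(num_lines) {}

    // Cache miss: if the victim line is modified, it is cast out (written)
    // to the L2 cache before the new line is installed.
    void handle_miss(std::uint64_t addr) {
        L1Line& victim = lines_[index(addr)];
        if (victim.valid && victim.modified) {
            castout_to_l2(victim);
        }
        victim = L1Line{tag_of(addr), true, false};
    }

    // Snoop hit on a modified line: the line is pushed to main memory or
    // to the requesting bus device (e.g., a second CPU).
    void handle_snoop(std::uint64_t addr) {
        L1Line& line = lines_[index(addr)];
        if (line.valid && line.tag == tag_of(addr) && line.modified) {
            push_to_bus(line);
            line.modified = false;
        }
    }

private:
    static constexpr std::uint64_t kLineBytes = 64;

    std::size_t index(std::uint64_t addr) const {
        return (addr / kLineBytes) % lines_.size();
    }
    std::uint64_t tag_of(std::uint64_t addr) const {
        return (addr / kLineBytes) / lines_.size();
    }
    void castout_to_l2(const L1Line&) { /* drive an L2 write transaction */ }
    void push_to_bus(const L1Line&)   { /* drive a bus push transaction  */ }

    std::vector<L1Line> lines_;
};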
In a simulation environment, maximum traffic from the L1 cache to the L2 cache should be generated in order to fully exercise the L2 cache control logic. Previously, either no cache model (an event generator that emulates any legal function of the cache) for providing maximum traffic was built into the L1 event generator (event generators are typically referred to as “irritators”), or a cache model for providing maximum traffic was implemented in the L1 irritator but had difficulty providing maximum traffic because most of the data was cached in the L1 model, and cache block movement did not occur until a modified cache line was selected for replacement. Thus, in the latter case, L1 to L2 traffic was not generated until induced by the instruction stream in the test case under simulation. Consequently, in both circumstances, there were problems in generating sufficient traffic to ensure that “corner” cases were covered. Corner cases refer to L2 control logic states that occur infrequently. Such simulations may therefore fail to uncover cache flaws, or “bugs.”
Therefore, there is a need in the art for a mechanism to mitigate the risk of untested corner cases. In particular, there is a need in the art for a cache irritator mechanism to generate traffic rates between cache levels, for example between the L1 and L2 caches, sufficient to stress the cache control logic of the LH cache and to generate critical block movements from the LL cache to the LH cache.
SUMMARY OF THE INVENTION
The aforementioned needs are addressed by the present invention. Accordingly, there is provided, in a first form, a method for cache model simulation. The method includes providing a predetermined set of cache block movement event protocols. An event protocol is selected from the predetermined set, and a castout of lines in a first cache is performed in response to the protocol.
There is also provided, in a second form, a data processing system for cache model simulation. The system contains circuitry operable for providing a predetermined set of cache block movement event protocols. Also included is circuitry operable for selecting an event protocol from the predetermined set, and circuitry operable for performing a castout of lines in a first cache in response to the protocol.
Additionally, there is provided, in a third form, a computer program product operable for storage on a machine readable storage medium, wherein the program product is operable for cache model simulation. The program product has programming for providing a predetermined set of cache block movement event protocols, and programming for selecting an event protocol from the predetermined set. Programming for performing a castout of lines in a first cache in response to the protocol is also included.
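As another illustrative sketch, reusing the hypothetical L1Model above, an irritator built along the lines summarized in this section might hold a predetermined set of cache block movement event protocols, select one at random, and force the corresponding castout or push. The names EventProtocol and CacheIrritator, and the random selection policy, are assumptions for illustration rather than the patent's terminology.

#include <array>
#include <cstdint>
#include <random>

// Predetermined set of cache block movement event protocols (illustrative).
enum class EventProtocol { ForceCastout, ForcePush };

class CacheIrritator {
public:
    explicit CacheIrritator(L1Model& l1, unsigned seed) : l1_(l1), rng_(seed) {}

    // Select an event protocol from the predetermined set and perform the
    // corresponding block movement, generating L1-to-L2 traffic without
    // waiting for the instruction stream to induce it.
    void irritate(std::uint64_t addr) {
        static constexpr std::array<EventProtocol, 2> kProtocols = {
            EventProtocol::ForceCastout, EventProtocol::ForcePush};
        std::uniform_int_distribution<std::size_t> pick(0, kProtocols.size() - 1);

        switch (kProtocols[pick(rng_)]) {
        case EventProtocol::ForceCastout:
            l1_.handle_miss(addr);   // casts out the victim line if modified
            break;
        case EventProtocol::ForcePush:
            l1_.handle_snoop(addr);  // pushes the line if it is modified
            break;
        }
    }

private:
    L1Model& l1_;
    std::mt19937 rng_;
};

In such a sketch, the test bench would call irritate() with randomized addresses throughout the simulation to keep the LL-to-LH interface busy, independent of the instruction stream of the test case.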
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention.


REFERENCES:
patent: 4317168 (1982-02-01), Messina et al.
patent: 5088058 (1992-02-01), Salsburg
patent: 5247653 (1993-09-01), Hung
patent: 5452440 (1995-09-01), Salsburg
patent: 5737751 (1998-04-01), Patel et al.
patent: 5740353 (1998-04-01), Kreulen et al.
patent: 5802571 (1998-09-01), Konigsburg et al.
patent: 5845106 (1998-12-01), Stapleton
patent: 5940618 (1999-08-01), Blandy et al.
patent: 6059835 (2000-05-01), Bose
patent: 6173243 (2001-01-01), Lowe et al.
patent: 6240490 (2001-05-01), Lyles, Jr. et al.
Prete et al., “The ChARM Tool for Tuning Embedded Systems”, IEEE Micro, vol. 17, no. 4, pp. 67-76, Jul.-Aug. 1997.
Hong et al., “Design and Performance Evaluation of an Adaptive Cache Coherence Protocol”, Proceedings of the 1998 International Conference on Parallel and Distributed Systems, IEEE, pp. 33-40, Dec. 1998.
Dahlgren, “Boosting the Performance of Hybrid Snooping Cache Protocols”, Proceedings of the 22nd Annual International Symposium on Computer Architecture, IEEE, pp. 60-69, Jun. 1995.
Grahn, “Evaluation of Design Alternatives for a Directory-Based Cache Coherence Protocol in Shared-Memory Multiprocessors”, Doctoral Thesis, 1995 (text downloaded from PDF link at: http://citeseer.nj.nec.com/grahn95evaluation.html).
“Method for Predicting the Performance of Set-Associative Cache Memories”, IBM Technical Disclosure Bulletin, vol. 31, no. 8, Jan. 1989, pp. 275-276.
