Automated method for testing cache

Computer-aided design and analysis of circuits and semiconductor – Nanotechnology related integrated circuit design

Reexamination Certificate


Details

U.S. classes: C711S141000, C714S718000
Status: active
Patent number: 06446241

ABSTRACT:

TECHNICAL FIELD OF THE INVENTION
The present invention is related to methods for testing cache memory systems.
BACKGROUND OF THE INVENTION
A typical data processing system includes at least a central processing unit (CPU), a memory unit (main memory), and an input/output (I/O) unit. The main memory stores, in addressable storage locations, information that is either data or instructions for operating on the data. The information is transferred between the main memory and the CPU by a bus.
If the CPU operates with a fast clock and the response time of the main memory is slow compared to the CPU clock, the CPU must enter a wait state until the request is completed, which reduces the CPU processing rate. This is especially problematic for highly pipelined CPUs, such as reduced instruction set computers (RISCs).
Main memory is typically not fast enough to execute memory accesses as needed by a high speed CPU. To achieve a high instruction execution rate, a small block of fast memory (called a cache) may be placed between the CPU and the main memory. Such cache memories compensate for the time differential between the main memory access time and the CPU clock frequency. The access time of a cache is closer to the operational speed of the CPU and thus increases the speed of data processing by providing information to the CPU at a rapid rate. In addition, multiple levels or layers of cache may be placed between the CPU and main memory.
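The benefit of a cache hierarchy can be illustrated with a simple average-memory-access-time calculation. The following sketch is not part of the patent; the two-level hierarchy and all timing and miss-rate figures are assumptions chosen only for illustration.

    # Minimal sketch: average memory access time (AMAT) for an assumed
    # two-level cache hierarchy. All numbers are hypothetical.
    def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, main_mem):
        """Average access time in CPU cycles for a two-level cache."""
        return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * main_mem)

    # 1 + 0.05 * (10 + 0.2 * 100) = 2.5 cycles on average, versus 100
    # cycles if every access went to main memory.
    print(amat(l1_hit=1, l1_miss_rate=0.05, l2_hit=10, l2_miss_rate=0.2, main_mem=100))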
A cache memory is typically organized into blocks, each of which is capable of storing a predetermined amount of information. Specifically, each block may contain the data and/or instructions needed by the CPU, as well as control flags indicating the status of the data and/or instructions. When the CPU requires information, the cache is examined first. If the information is not found in the cache (known as a cache miss), then either the next higher level cache or the main memory is accessed. On a cache miss, the cache controller issues a read request to transfer a block of information containing the required information from the higher level cache or main memory to the CPU and cache. The information is retrieved from the higher level cache or main memory and loaded into a cache block, and the cache block is assigned a cache address. The cache address typically includes an index field and a tag field, which correspond to address fields of the location in main memory from which the information was retrieved.
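As a hedged illustration of the index and tag fields just described, the sketch below splits an address for a hypothetical direct-mapped cache; the block size and number of sets are assumptions, not values taken from the patent.

    # Minimal sketch: decomposing an address into tag, index, and block
    # offset for an assumed direct-mapped cache (64-byte blocks, 256 sets).
    BLOCK_SIZE = 64
    NUM_SETS = 256
    OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # 6 bits
    INDEX_BITS = NUM_SETS.bit_length() - 1      # 8 bits

    def split_address(addr):
        offset = addr & (BLOCK_SIZE - 1)
        index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
        tag = addr >> (OFFSET_BITS + INDEX_BITS)
        return tag, index, offset

    print(split_address(0x1234ABCD))   # -> (18642, 175, 13)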
Because data may be used by any program, copies of the same information may reside in more than one place in the memory system. For example, copies of the same information may reside in multiple caches and in the main memory unit. If the CPU modifies a copy of that information in one cache without updating the remaining copies, those remaining copies of information become stale, thus creating a cache-coherency problem.
The prior art includes many solutions to the cache-coherency problem. These solutions are typically based upon the states of the control flags associated with the information contained in a cache. For example, the control flags may include a hit flag that indicates whether a cache hit occurred during a particular cache access, a valid flag that indicates whether the information contained in a cache block is valid, a modified or dirty flag that indicates whether a valid cache block has been modified, while in the cache, relative to the information stored in the corresponding location in main memory, and a shared flag that indicates whether the valid information contained in the block is also stored in another cache.
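One possible representation of the per-block control flags described above is sketched here; the field names and structure are assumptions for illustration, not data structures defined in the patent.

    # Minimal sketch of per-block control flags. The hit flag is omitted
    # because it describes a particular access rather than stored block state.
    from dataclasses import dataclass, field

    @dataclass
    class BlockFlags:
        valid: bool = False    # block contains valid information
        dirty: bool = False    # block modified relative to main memory
        shared: bool = False   # a copy also resides in another cache

    @dataclass
    class CacheBlock:
        tag: int = 0
        data: bytes = b""
        flags: BlockFlags = field(default_factory=BlockFlags)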
Designers of these types of data processing systems increasingly employ high-level software languages to specify their designs. For example, a given hardware model may have a high-level representation in a hardware description language such as VHDL, or at the register transfer level (RTL). The designer may then employ software tools to develop an appropriate circuit design from the given high-level description. An important operation in this process is the verification or validation of the high-level description. If the high-level description contains inaccuracies or produces incongruous results, then the circuit designed from that description will contain errors or faults. To validate the description, the designer must ensure that the set of all possible input configurations produces the proper responses. One method of performing the validation step is to apply every possible input configuration to the high-level description and observe the results. However, for complex designs, such as a multilevel cache memory system with a very large set of distinct input configurations, such a method is impractical.
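To make the impracticality concrete: if the description has n binary inputs and a test runs for k clock cycles, there are 2**(n*k) distinct input sequences. The figures in the sketch below are assumptions chosen only to show the scale.

    # Minimal sketch: the number of distinct input sequences grows as
    # 2**(inputs * cycles); even modest assumed figures are intractable.
    n_inputs, n_cycles = 32, 10
    print(2 ** (n_inputs * n_cycles))   # about 2.1e96 sequences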
Cache control logic is typically used to examine the states of these control flags and perform appropriate operations in accordance with a particular cache-coherency protocol. During the design verification stage of product development, this control logic must be tested to ensure that it is operating properly. The CPU is typically programmed to test the control logic by directly examining the control flags and data in the cache under certain conditions and comparing their states with the results obtained by the control logic under the same conditions.
SUMMARY OF THE INVENTION
Therefore, it is among the objects of the invention to provide a method to verify the operation of cache control logic by examining the states of its cache.
Another object of the invention is to provide a method to efficiently detect all legal cache states in a multilevel cache memory system. The method starts with a list of all possible input transactions having an effect on the state of the cache, an initial cache state, and an initial sequence of input transactions to reach the initial cache state.
The method generates a list of allowed states by associating the list of all possible input transactions with each legal cache state, starting with the initial cache state. In the preferred embodiment, the list is generated by applying each input transaction sequentially to all found legal cache states. This begins by initializing both the list of legal cache states and the list of search cache states to the initial cache state. If application of an input transaction to a current search cache state results in a new cache state, then this new cache state is added to the list of legal cache states and to the list of search cache states. This is repeated for all input transactions and all such found legal cache states. At the same time, a sequence of input transactions reaching each new cache state is formed. This new sequence is the sequence of input transactions for the prior cache state followed by the current input transaction.
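In the spirit of the search just described, the sketch below enumerates reachable cache states breadth-first and records a reaching sequence for each one. The function apply_transaction, which models the cache's response to one input transaction, is a hypothetical stand-in, and cache states are assumed to be hashable values.

    # Minimal sketch: breadth-first enumeration of legal cache states,
    # recording a sequence of input transactions that reaches each state.
    from collections import deque

    def find_legal_states(initial_state, initial_sequence, transactions, apply_transaction):
        sequences = {initial_state: list(initial_sequence)}  # legal state -> reaching sequence
        search = deque([initial_state])                      # states still to expand
        while search:
            state = search.popleft()
            for txn in transactions:
                new_state = apply_transaction(state, txn)
                if new_state not in sequences:               # a newly found legal state
                    sequences[new_state] = sequences[state] + [txn]
                    search.append(new_state)
        return sequences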
The method then generates a series of test sequences from the list of allowed states and their corresponding sequences of input transactions. These may be generated in a directed random manner. The series of test sequences is applied to the cache control logic design and to a reference memory, and the results are compared. If the response of the cache control logic design fails to match the response of the reference memory, then a design fault is detected.
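A hedged sketch of the test-generation and comparison step follows. The helpers run_design and run_reference stand in for simulating the cache design and the reference memory; they, and the directed-random scheme shown, are assumptions rather than the patent's specific implementation.

    # Minimal sketch: build directed-random tests (a directed prefix that
    # reaches a known legal state, plus a random suffix) and compare the
    # design under test against a reference memory.
    import random

    def make_tests(sequences, transactions, num_random, seed=0):
        rng = random.Random(seed)
        tests = []
        for reaching_sequence in sequences.values():
            suffix = [rng.choice(transactions) for _ in range(num_random)]
            tests.append(reaching_sequence + suffix)
        return tests

    def check(tests, run_design, run_reference):
        for test in tests:
            if run_design(test) != run_reference(test):
                return False   # mismatch: a design fault is detected
        return True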


REFERENCES:
patent: 5038307 (1991-08-01), Krishnakumar et al.
patent: 5045996 (1991-09-01), Barth et al.
patent: 5163016 (1992-11-01), Har'El et al.
patent: 5394347 (1995-02-01), Kita et al.
patent: 5406504 (1995-04-01), Denisco et al.
patent: 5513122 (1996-04-01), Cheng et al.
patent: 5813028 (1998-09-01), Agarwala et al.
patent: 5875462 (1999-02-01), Bauman et al.
patent: 5960457 (1999-09-01), Skrovan et al.
patent: 5996050 (1999-11-01), Carter et al.
patent: 6145059 (2000-11-01), Arimilli et al.
patent: 6170070 (2001-01-01), Ju et al.
patent: 6247098 (2001-06-01), Arimilli et al.
patent: 6330643 (2001-12-01), Arimilli et al.
patent: 6334172 (2001-12-01), Arimilli et al.
