Reexamination Certificate
1998-09-25
2001-01-09
Teska, Kevin J. (Department: 2763)
Data processing: structural design, modeling, simulation, and em
Simulating electronic device or electrical system
Circuit simulation
C710S120000, C713S401000, C716S030000
Reexamination Certificate
active
06173243
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to devices and methods for testing the functionality of components of a computer system. This disclosure further relates to memory incoherent verification methodologies for testing the functionality of an HDL (Hardware Description Language) design of a computer system component.
2. Description of the Related Art
Reliability and efficiency of any computer system depend in part upon the system complexity as well as upon the measures taken to minimize or prevent occurrences of faulty operation. Various modern-day mechanisms, including parallel processing, high-speed microprocessors, RISC (Reduced Instruction Set Computer) architectures and on-board hardware redundancies, allow faster system performance and throughput while increasing system reliability. Particularly in relatively complex, high-performance systems, it is important to provide means for testing the proper functionality of the system and to provide fault corrections both upon the design and manufacture of the various system components and during system operation.
Monitoring and verifying the functionality of a particular design of a computer system component through software means has undoubtedly proven to be a reasonable alternative to manual verification methodologies and to purely hardware test systems. The use of software for system verification is not, however, entirely without its attendant costs. The time required to perform various tests may be lengthy. Furthermore, and perhaps of greater significance, the overall accuracy or coverage of the tests may be limited, leading to indications of false failures or to undetected defects.
Corrupted bus cycles in a system can be a major source of loss of reliability or efficiency. It is not uncommon for bus cycles initiated by various bus masters to not reach their desired destinations, or to otherwise be corrupted. This can result from various factors, such as improper hardware design, faults in bus lines, physical or functional defects in chip or board fabrications, and faulty execution in system software routines. A key component in many computer systems, which manages bus cycles, is a bus bridge. Verification systems have thus been developed to verify the design of a bus bridge, either upon its initial design in a hardware description language such as Verilog or following actual hardware implementation.
Historically, verification systems for bus bridge designs have proven inadequate to preserve the right data and right address for every bus cycle in a system involving multiple transactions. One prior art method for testing the functionality of a bus bridge employs reference counts associated with each address being written to memory. The CPU decrements the count upon completion of the write operation. Although every address transaction is given its individual count, this method may fail to preserve address counts in bus bridge implementations that employ memory remapping.
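The reference-count approach described above can be sketched as simple bookkeeping. The following Python sketch is illustrative only; the class and method names are assumptions made for exposition and do not reflect the patent's terminology or any actual verification environment.

```python
class WriteCountChecker:
    """Prior-art style checker: one reference count per address written to memory."""

    def __init__(self):
        self.pending = {}  # address -> number of outstanding writes

    def record_write_issued(self, address):
        # A bus master issues a write toward memory through the bridge.
        self.pending[address] = self.pending.get(address, 0) + 1

    def record_write_completed(self, address):
        # The count is decremented when the write is observed to complete.
        if self.pending.get(address, 0) == 0:
            raise AssertionError(f"unexpected completion at {address:#010x}")
        self.pending[address] -= 1

    def report_outstanding(self):
        # Any nonzero count at the end of a test is flagged as a lost cycle.
        return {addr: count for addr, count in self.pending.items() if count}
```

Under memory remapping, a completion may be observed at a translated address that no longer matches the key recorded when the write was issued, so the count for the original address is never decremented and a false failure is reported.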
False failures may also be caused by byte merging, i.e., the merging of transactions to adjacent or contiguous memory addresses. In this scheme, for example, four cycles into the bus bridge may produce only one cycle out. This may result in the testing methodology incrementing multiple counts without decrementing them in a manner consistent with the merged transactions, thus leading to false indications of error.
Byte collapsing is another source of error in bus transaction verification schemes. In this approach, more than one write cycle to the same memory address may result in only the most recent data being preserved. In other words, the earlier data to the particular address may be overwritten by a later cycle. Here, as in byte merging, the bus bridge testing methodology may fail to preserve the correct address count.
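Continuing the illustrative sketch above, with hypothetical addresses, byte merging and byte collapsing both leave such per-address counts out of balance even when the hardware behaves correctly:

```python
checker = WriteCountChecker()

# Byte merging: four write cycles to contiguous addresses enter the bridge ...
for addr in (0x1000, 0x1001, 0x1002, 0x1003):
    checker.record_write_issued(addr)
# ... but the bridge emits a single merged cycle, so only one completion is observed.
checker.record_write_completed(0x1000)

# Byte collapsing: two writes to the same address are issued ...
checker.record_write_issued(0x2000)
checker.record_write_issued(0x2000)
# ... but only the most recent data survives, producing a single completion.
checker.record_write_completed(0x2000)

# Four counts remain outstanding even though the hardware behaved correctly,
# so the checker reports false failures.
print(checker.report_outstanding())
# -> {4097: 1, 4098: 1, 4099: 1, 8192: 1}  (0x1001, 0x1002, 0x1003, 0x2000)
```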
Yet another source of error in previous bus transaction verification schemes is an occurrence of an aborted bus transaction. When cycles to addresses are aborted without being completed, failures occur if the system does not properly track and account for the aborted cycles.
Most transaction testing methodologies for bus bridges treat computer systems as address-based rather than cycle-based. Typical approaches to address and data verification do not take into account the internal states of the machine, nor other internal states associated with memory or other system cycles. Critical cycle-oriented information may not be considered, resulting in inadequate verification. When critical bus cycle information is discarded, resolution of later-arising cycle conflicts may not be completely sound.
Additionally, the overall performance advantage gained by implementing a software verification tool to test the functionality of an HDL design of a computer system component depends primarily upon the relative ease with which the software can execute a complex test suite. This, in turn, depends upon the speed at which a test suite can implement its testing operations.
Historically, operation of a test suite has been dependent on the system configuration under which a system component is being tested: the amount of memory populating the system, the number of memory banks, the type of memory, the addresses of PCI bus masters/slaves, the modes of external devices such as AGP cards, and so on. In the area of verifying the functionality of chipset devices, which are characterized by the primary function of processing incoming bus cycles and generating subsequent bus cycles on different busses as a result, there is a very large number of possible system configurations. Permuting a test suite across all of these options would result in an extremely large number of tests.
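The configuration explosion can be made concrete with a rough count. The option lists below are hypothetical examples, not configurations taken from the patent:

```python
from itertools import product

# Hypothetical configuration axes for a chipset under test.
memory_sizes = ["64MB", "128MB", "256MB", "512MB"]
memory_banks = [1, 2, 4]
memory_types = ["SDRAM", "EDO"]
pci_masters  = [1, 2, 3, 4]
agp_modes    = ["1x", "2x", "disabled"]

configurations = list(product(memory_sizes, memory_banks, memory_types,
                              pci_masters, agp_modes))
print(len(configurations))  # 4 * 3 * 2 * 4 * 3 = 288
# Repeating even a few hundred test stimuli under every configuration quickly
# yields tens of thousands of simulation runs.
```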
Similarly, testing in which a test stimulus is run and compared against a previous run may also limit the number of tests that can be run and compared. In such testing, human evaluation is required whenever minor differences in test configuration occur, and if a major architectural change occurs, the whole test run must be re-evaluated in full. When the expected data is programmed into the test, the tests themselves have traditionally had to be updated whenever the device under test is updated. The coupled nature of the test stimulus and the checking mechanisms may critically restrict exploitation of the full potential of the testing software.
In many systems, the verification of a device has utilized external memory areas that are required to be coherent in order to verify that data read is equivalent to data written. Typically, data is written to the memory and later read back, expecting that the data read will be identical to data written. For every read operation, a previous write operation must have been performed or a master initialization must have been carried out. This memory coherency requirement may, therefore, waste simulation time in setting up the external memory. It may also restrict the effective memory range from which the device under test can read.
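A sketch of the coherent read-back requirement the preceding paragraph describes, again in illustrative Python rather than an actual simulation environment; `expected_memory` and the helper functions are assumed names:

```python
# Prior-art coherent verification: every read must be preceded by a write
# (or by a master initialization) so that the expected data is known.
expected_memory = {}

def coherent_write(address, data):
    # Set-up phase: establish the expected value before the device may read it.
    expected_memory[address] = data

def check_coherent_read(address, observed_data):
    # Fail if the device reads an address that was never written or initialized.
    if address not in expected_memory:
        raise AssertionError(
            f"read from {address:#010x} without a prior write or initialization")
    expected = expected_memory[address]
    assert observed_data == expected, (
        f"data mismatch at {address:#010x}: "
        f"got {observed_data:#04x}, expected {expected:#04x}")
```

The set-up writes are the simulation time the paragraph identifies as wasted, and they confine reads by the device under test to whatever address range has been initialized.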
Yet another source of potential limitation on the flexibility of test software is the evaluation of timing-related functional problems in a device during pre-manufacturing simulation. In the past, timing-related functional problems were evaluated after a device was manufactured, but with advances in technology, more pre-manufacturing simulation can be performed on a given device. Tests are written to hit specific boundaries or end cases defined by the designer of the device. However, verification of the device's operation across many variations in signal relationships is a key aspect of verifying proper operation of a complex logic product.
Therefore, a bus bridge testing methodology and mechanism are needed to monitor the states of a bus bridge in a computer system to thereby determine proper functionality. Knowledge of the bus bridge states will allow better determination of failures. It is also desirable to have a verification system that monitors and records bus bridge performance, and verifies correct behavior of a bus bridge cache master.
Furthermore, a verification methodology and mechanism are needed t
Askar Tahsin
Berndt Paul
Carter Hamilton
Ilic Jelena
LaVine Mark
Advanced Micro Devices , Inc.
Conley Rose & Tayon PC
Frejd Russell W.
Kivlin B. Noël
Teska Kevin J.