Fast error diagnosis for combinational verification
Reexamination Certificate
1999-10-25
2003-12-09
Decady, Albert (Department: 2133)
Error detection/correction and fault detection/recovery
Pulse or data error handling
Digital logic testing
C714S005110, C324S528000, C324S759030
Reexamination Certificate
active
06662323
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to the process of determining faults in a circuit. More specifically, it relates to a diagnosis method used to determine error sites in a combinational circuit that has been found to be inequivalent to its specification.
2. Description of the Related Art
The need for design validation and early detection of errors is well recognized. Formal methods for combinational verification have gained wide acceptance in the digital hardware design community in the recent past. In fact, it appears that tools based on these techniques have captured significant market share from gate-level simulation tools. Arising out of this phenomenon is the opportunity to promote the use of automatic error diagnosis tools. Automatic error diagnosis is even more important in the context of automatic verification since, unlike in the case of the simulation of manually generated vectors, the designer usually has little up front knowledge of the functionality exercised by the error vectors generated as counter-examples by the formal verification tool. Because of this, there is a need for new techniques for error diagnosis in combinational verification.
In combinational verification, the equivalence between the Boolean expressions for the implementation and specification is checked. The use of Binary Decision Diagrams (BDDs) for combinational verification is common. (See, R. Bryant, “Graph-based algorithms for Boolean function manipulation”, IEEE Transactions on Computers, C-35(8):677-691, August 1986.) PODEM-based or Boolean satisfiability (SAT)-based ATPG-like techniques can also be effective in many cases where BDDs cannot be used. (See, D. Brand, “Verification of large synthesized circuits”, Proceedings of ICCAD, pp. 534-537, 1993; S. Reddy, W. Kunz and D. Pradhan, “Novel verification framework combining structural and OBDD methods in a synthesis environment”, Proceedings of DAC, pp. 414-419, 1995; and J. Silva and K. Sakallah, “GRASP: A new search algorithm for satisfiability”, Proceedings of ICCAD, pp. 220-227, 1996). The use of combinations of BDDs and ATPG-like techniques has also been proposed. (See, J. Burch and V. Singhal, “Tight integration of combinational verification methods”, Proceedings of ICCAD, pp. 570-576, 1998; A. Gupta and P. Ashar, “Integrating a Boolean satisfiability checker and BDDs for combinational verification”, Proceedings of VLSI Design 98, pp. 222-225, 1998; J. Jain, R. Mukherjee and M. Fujita, “Advanced verification techniques based on learning”, Proceedings of DAC, June 1995; and S. Reddy, W. Kunz and D. Pradhan, ibid.).
All of these techniques essentially try to prove that the XOR of the corresponding outputs in the two representations (the output of the “miter circuit”) is tautologically zero. The BDD-based method does so by building the BDD for the output of the XOR gate (the “error BDD”). SAT-based methods typically represent the functionality of the miter circuit in Conjunctive Normal Form (CNF) and apply a branch-and-bound algorithm to exhaustively check whether the output of the miter circuit can be set to ‘1’ (true) for any input combination.
If an error is found in the implementation, all the verification techniques are equipped to determine the vectors exercising the error (the “error vectors”). In the case of the BDD-based method, all the error vectors are encapsulated in the error BDD. In the SAT method, they are produced in the form of cubes. Diagnosis information can be derived by a detailed analysis of the internal behavior of the implementation circuit for these error vectors. Various techniques have been proposed to perform this task. Several of these techniques are discussed below.
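As a toy illustration of the miter check and of how error vectors fall out of it, the following Python sketch exhaustively enumerates input assignments, standing in for the exhaustive search that a SAT-based tool performs on the CNF of the miter circuit. The three-input functions spec_z and impl_z are hypothetical examples rather than circuits from the patent; any assignment on which they disagree sets the miter (XOR) output to ‘1’ and is therefore an error vector.

from itertools import product

def spec_z(a, b, c):
    # Hypothetical specification output for a three-input example.
    return (a & b) | c

def impl_z(a, b, c):
    # Hypothetical (erroneous) implementation of the same output.
    return (a | b) & c

def miter_error_vectors(spec, impl, num_inputs):
    # Evaluate the miter (XOR of the two outputs) on every input
    # combination and collect the assignments that set it to 1.
    errors = []
    for vec in product((0, 1), repeat=num_inputs):
        if spec(*vec) != impl(*vec):   # XOR of the two outputs is 1
            errors.append(vec)
    return errors

vectors = miter_error_vectors(spec_z, impl_z, 3)
print("equivalent" if not vectors else "error vectors: %s" % (vectors,))

A BDD-based tool would instead encapsulate the same set of falsifying assignments implicitly in the error BDD rather than enumerating them one by one.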
Complementation Method
The complementation method uses the following technique: Given an error vector, it is simulated once on the implementation circuit and the value produced at each wire in the circuit is recorded. In the next step, for each wire in the circuit, the value on the wire is complemented and the effect of the complementation is propagated by simulation to the primary outputs. If the value on some erroneous output gets corrected by the complementation and the values on all the correct outputs remain unchanged, the wire that was complemented is considered a potential error site. Its count is correspondingly incremented by 1. After a large number of error vectors have been simulated in this manner, the wires with the largest counts are considered the most likely to be error sites. A heuristic could be to pick the 10% of the wires with the highest counts. In the case of a single error in the circuit, the actual error site is guaranteed to be one of the sites with the highest count. In the presence of multiple errors affecting the same primary output, the actual error sites are likely to have high counts, but are not guaranteed to have the largest count. (See, S. Huang, K-C Chen, and K-T Cheng, “Error Correction Based on Verification Techniques”, Proceedings of DAC, pp. 258-261, 1996).
The complementation method is simulation-intensive. In general, each node in the transitive fanin cone of the erroneous output is a potential error site. Each error vector is simulated once for the entire circuit and then repeatedly for the fanout cone of each site being evaluated. Given a fixed amount of time, the quality of this method (and of the other methods described in this section) depends on the number of error vectors simulated. While it leads to the desired pruning out of non-error sites, its quality will suffer rapidly in a naive application as the size of the circuit, and thereby the number of potential error sites, increases. To speed up this method, one needs to make the core simulation routines very fast and prune the number of candidate error sites before applying the method.
An example specification and its incorrect implementation are shown in FIGS. 1 and 2.
FIG. 3 shows net h being complemented, and its fanout being simulated again for the error vector 001. It can be seen that the erroneous output z gets corrected as a result, while the correct output y remains unchanged.
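The following is a minimal Python sketch of the counting step just described. The netlist, specification, and error vector are hypothetical stand-ins (not the circuits of FIGS. 1-3), and for brevity the whole circuit is resimulated with one wire complemented rather than only its fanout cone.

# A hypothetical gate-level netlist in topological order: wire <- op(fanins).
# Primary inputs are a, b, c; primary outputs are y and z.
NETLIST = [
    ("g", "AND", ("a", "b")),
    ("h", "OR",  ("b", "c")),
    ("y", "AND", ("a", "h")),
    ("z", "OR",  ("g", "h")),
]
GATE_FN = {"AND": lambda u, v: u & v, "OR": lambda u, v: u | v}
OUTPUTS = ("y", "z")

def spec_outputs(vec):
    # Hypothetical specification: y = a & (b | c), z = (a & b) | c.
    a, b, c = vec["a"], vec["b"], vec["c"]
    return {"y": a & (b | c), "z": (a & b) | c}

def simulate(vec, flip=None):
    # Simulate the implementation; optionally complement the wire named by
    # 'flip' and let the change propagate to the gates downstream of it.
    values = dict(vec)
    for wire, op, ins in NETLIST:
        values[wire] = GATE_FN[op](*(values[i] for i in ins))
        if wire == flip:
            values[wire] ^= 1
    return values

def complementation_counts(error_vectors):
    counts = {}
    for vec in error_vectors:
        base, golden = simulate(vec), spec_outputs(vec)
        erroneous = [o for o in OUTPUTS if base[o] != golden[o]]
        correct = [o for o in OUTPUTS if base[o] == golden[o]]
        for wire, _, _ in NETLIST:
            resim = simulate(vec, flip=wire)
            fixes_some = any(resim[o] == golden[o] for o in erroneous)
            breaks_none = all(resim[o] == base[o] for o in correct)
            if fixes_some and breaks_none:
                counts[wire] = counts.get(wire, 0) + 1
    return counts  # wires with the largest counts are the likely error sites

print(complementation_counts([{"a": 0, "b": 1, "c": 0}]))
# -> the counts of h and of the erroneous output z itself are incremented

Resimulating only the fanout cone of the complemented wire, as the method describes, would avoid the full resimulation done here for simplicity.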
Path Backtrace Based Method
Another method tries to identify error sites by tracing sensitized paths back from erroneous outputs for each error vector. (See, A. Kuehlmann, D. Cheng, A. Srinivasan and D. LaPotin, “Error diagnosis for transistor-level verification”, Proceedings of DAC, pp. 218-223, 1994). As in the simulation-based method, a count is maintained of the number of error vectors for which a site lies on such a sensitized path, and the sites with the largest counts are considered the most likely to be the actual error sites. As in the simulation-based method, each error vector must be simulated once and the values noted for each wire. The difference is that instead of simulating repeatedly after complementing each site, sites on sensitized paths to the erroneous output are identified in a single pass through the implementation circuit. On the other hand, the backtrace method is likely to tag many more sites as potential error sites than the complementation method; as a result, it is faster but provides less localization.
FIG. 4 shows the sensitized paths to the erroneous output being traced backward by Kuehlmann's method. The wires shown in gray have their counts incremented for the error vector 001. Note that in this case all of these wires would also have been tagged by the complementation method; the backtrace method, however, identifies them in a single pass through the circuit.
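The single-pass backtrace can be sketched as follows, again over a hypothetical netlist. The sensitization rule used here (follow the fanins at a controlling value when any exist, otherwise follow all fanins) is a common convention assumed for illustration rather than taken from the cited reference.

# Hypothetical netlist (not the circuit of FIG. 4): wire -> (op, fanin wires).
NETLIST = {
    "g": ("AND", ("a", "b")),
    "h": ("OR",  ("b", "c")),
    "y": ("AND", ("a", "h")),
    "z": ("OR",  ("g", "h")),
}
GATE_FN = {"AND": lambda u, v: u & v, "OR": lambda u, v: u | v}
CTRL_VAL = {"AND": 0, "OR": 1}   # controlling input value for each gate type

def simulate(vec):
    values = dict(vec)
    for wire in ("g", "h", "y", "z"):           # topological order
        op, ins = NETLIST[wire]
        values[wire] = GATE_FN[op](*(values[i] for i in ins))
    return values

def backtrace(erroneous_output, values, counts):
    # Single backward pass from an erroneous output, tagging wires on
    # sensitized paths and incrementing their counts.
    seen, stack = set(), [erroneous_output]
    while stack:
        wire = stack.pop()
        if wire in seen:
            continue
        seen.add(wire)
        counts[wire] = counts.get(wire, 0) + 1
        if wire not in NETLIST:                 # reached a primary input
            continue
        op, ins = NETLIST[wire]
        controlling = [i for i in ins if values[i] == CTRL_VAL[op]]
        for i in (controlling or ins):
            stack.append(i)
    return counts

values = simulate({"a": 0, "b": 1, "c": 0})     # an error vector for this netlist
print(backtrace("z", values, {}))

Because the pass only classifies wires as being on or off sensitized paths, it visits each gate at most once per erroneous output, which is why it is faster but less localizing than repeated resimulation.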
X-Analysis Method
Unlike the backtrace method, the X-analysis method analyzes the circuit from the inputs for each error vector, using a technique similar to the one designers use when diagnosing errors manually. (See, M. Tomita, H. Jiang, T. Yamamoto and Y. Hayashi, “An algorithm for locating logic design errors”, Proceedings of ICCAD, pp. 468-471, November 1990). Given an error vector V, this method first tries to find a second vector V′ which is not an error vector and which differs from V in a single input
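As a minimal sketch of this first step, the following hypothetical snippet flips one input of a given error vector at a time and returns the first neighboring vector on which a made-up single-output specification and implementation agree, i.e., a vector V′ that is not an error vector.

def find_adjacent_correct_vector(error_vec, spec, impl):
    # Look for a vector V' differing from the error vector in a single input
    # position on which the implementation agrees with the specification.
    for i in range(len(error_vec)):
        candidate = list(error_vec)
        candidate[i] ^= 1                       # flip one input bit
        if impl(*candidate) == spec(*candidate):
            return tuple(candidate), i          # V' and the flipped position
    return None, None

# Hypothetical single-output specification and erroneous implementation.
spec = lambda a, b, c: (a & b) | c
impl = lambda a, b, c: b | c
print(find_adjacent_correct_vector((0, 1, 0), spec, impl))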
Ashar Pranav
Gupta Aarti
Decady Albert
NEC Corporation
Sughrue & Mion, PLLC
Whittington Anthony T.