Method and apparatus for selectively compacting test responses

Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital logic testing

Reexamination Certificate


Details

US classification: C714S732000

Status: active

Patent number: 06557129

ABSTRACT:

TECHNICAL FIELD
This invention relates generally to testing of integrated circuits and more particularly relates to compaction of test responses used in testing for faults in integrated circuits.
BACKGROUND
As integrated circuits are produced with greater and greater levels of circuit density, efficient testing schemes that guarantee very high fault coverage while minimizing test costs and chip area overhead have become essential. However, as the complexity of circuits continues to increase, high fault coverage of several types of fault models becomes more difficult to achieve with traditional testing paradigms. This difficulty arises for several reasons. First, larger integrated circuits have a very high and still increasing logic-to-pin ratio that creates a test data transfer bottleneck at the chip pins. Second, larger circuits require a prohibitively large volume of test data that must be then stored in external testing equipment. Third, applying the test data to a large circuit requires an increasingly long test application time. And fourth, present external testing equipment is unable to test such larger circuits at their speed of operation.
Integrated circuits are presently tested using a number of structured design for testability (DFT) techniques. These techniques rest on the general concept of making all or some state variables (memory elements like flip-flops and latches) directly controllable and observable. If this can be arranged, a circuit can be treated, as far as testing of combinational faults is concerned, as a combinational or a nearly combinational network. The most-often used DFT methodology is based on scan chains. It assumes that during testing all (or almost all) memory elements are connected into one or more shift registers, as shown in U.S. Pat. No. 4,503,537. A circuit that has been designed for test has two modes of operation: a normal mode and a test, or scan, mode. In the normal mode, the memory elements perform their regular functions. In the scan mode, the memory elements become scan cells that are connected to form a number of shift registers called scan chains. These scan chains are used to shift a set of test patterns into the circuit and to shift out circuit, or test, responses to the test patterns. The test responses are then compared to fault-free responses to determine if the circuit under test (CUT) works properly.
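For illustration only, the following Python sketch models the scan concept described above as a simple shift register; the ScanChain class and the XOR-based combinational-logic stub are assumptions made for this example and do not correspond to any structure named in the patent.

# Minimal sketch of a single scan chain, assuming a full-scan design.
# ScanChain and example_logic are illustrative names only.
class ScanChain:
    def __init__(self, length):
        self.cells = [0] * length          # scan cells (memory elements)

    def shift_in(self, pattern):
        """Scan mode: shift a test pattern into the chain bit by bit."""
        for bit in pattern:
            self.cells = [bit] + self.cells[:-1]

    def capture(self, logic):
        """Normal mode: capture the combinational response into the scan cells."""
        self.cells = logic(self.cells)

    def shift_out(self):
        """Scan mode: shift the captured test response back out."""
        response, self.cells = self.cells[:], [0] * len(self.cells)
        return response

# Hypothetical combinational logic: each cell observes the XOR of two state bits.
def example_logic(state):
    n = len(state)
    return [state[i] ^ state[(i + 1) % n] for i in range(n)]

chain = ScanChain(8)
chain.shift_in([1, 0, 1, 1, 0, 0, 1, 0])   # load one test pattern
chain.capture(example_logic)               # one functional clock cycle
print(chain.shift_out())                   # response, to be compared with the fault-free value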
Scan design methodology has gained widespread adoption by virtue of its simple automatic test pattern generation (ATPG) and silicon debugging capabilities. Today, ATPG software tools are so efficient that it is possible to generate test sets (a collection of test patterns) that guarantee almost complete fault coverage of several types of fault models including stuck-at, transition, path delay faults, and bridging faults. Typically, when a particular potential fault in a circuit is targeted by an ATPG tool, only a small number of scan cells, e.g., 2-5%, must be specified to detect the particular fault (deterministically specified cells). The remaining scan cells in the scan chains are filled with random binary values (randomly specified cells). This way the pattern is fully specified, more likely to detect some additional faults, and can be stored on a tester.
FIG. 1 is a block diagram of a conventional system 10 for testing digital circuits with scan chains. External automatic testing equipment (ATE), or tester, 12 applies a set of fully specified test patterns 14 one by one to a CUT 16 in scan mode via scan chains 18 within the circuit. The circuit is then run in normal mode using the test pattern as input, and the test response to the test pattern is stored in the scan chains. With the circuit again in scan mode, the response is then routed to the tester 12, which compares the response with a fault-free reference response 20, also one by one. For large circuits, this approach becomes infeasible because of large test set sizes and long test application times. It has been reported that the volume of test data can exceed one kilobit per single logic gate in a large design. The significant limitation of this approach is that it requires an expensive, memory-intensive tester and a long test time to test a complex circuit.
These limitations of time and storage can be overcome to some extent by adopting a built-in self-test (BIST) framework as shown in FIG.
2
. In BIST, additional on-chip circuitry is included to generate test patterns, evaluate test responses, and control the test. For example, a pseudo-random pattern generator
21
is used to generate the test patterns, instead of having deterministic test patterns. Additionally, a multiple input signature register (MISR)
22
is used to generate and store a resulting signature from test responses. In conventional logic BIST, where pseudo-random patterns are used as test patterns, 95-96% coverage of stuck-at faults can be achieved provided that test points are employed to address random-pattern resistant faults. On average, one to two test points may be required for every 1000 gates. In BIST, all responses propagating to observable outputs and the signature register have to be known. Unknown values corrupt the signature and therefore must be bounded by additional test logic. Even though pseudo-random test patterns appear to cover a significant percentage of stuck-at faults, these patterns must be supplemented by deterministic patterns that target the remaining, random pattern resistant faults. Very often the tester memory required to store the supplemental patterns in BIST exceeds 50% of the memory required in the deterministic approach described above. Another limitation of BIST is that other types of faults, such as transition or path delay faults, are not handled efficiently by pseudo-random patterns. Because of the complexity of the circuits and the limitations inherent in BIST, it is extremely difficult, if not impossible, to provide a set of test patterns that fully covers hard-to-test faults.
Some of the DFT techniques include compactors to compress the test responses from the scan chains. There are generally two types of compactors: time compactors and spatial compactors. Time compactors typically have a feedback structure with memory elements for storing a signature, which represents the results of the test. After the signature is completed, it is read and compared to a fault-free signature to determine if an error exists in the integrated circuit. Spatial compactors generally compress a collection of bits (called a vector) from the scan chains. The compacted output is analyzed in real time as the test responses are shifted out of the scan chains. Spatial compactors can be customized for a given circuit under test to reduce the aliasing phenomenon, as shown in U.S. Pat. No. 5,790,562 and in a few other works based on multiplexed parity trees or nonlinear trees comprising elementary gates such as AND, OR, NAND, and NOR gates.
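A time compactor of the kind described above can be sketched as a multiple input signature register that folds successive response vectors into one signature which is read only once, at the end of the test; the register width, feedback taps, and function name below are assumptions made for this example.

# Sketch of a time compactor: a MISR folds successive response vectors
# into a single signature.  Width and feedback taps are illustrative.
def misr_signature(response_vectors, width=8, taps=(7, 5, 4, 3)):
    """Compact a stream of response vectors into one signature."""
    state = [0] * width
    for vector in response_vectors:
        feedback = 0
        for t in taps:
            feedback ^= state[t]
        # shift the register, then XOR the incoming response bits in parallel
        state = [feedback] + state[:-1]
        state = [s ^ v for s, v in zip(state, vector)]
    return state

responses = [[1, 0, 1, 1, 0, 0, 1, 0],
             [0, 1, 1, 0, 1, 0, 0, 1]]
print(misr_signature(responses))   # compared once, at the end, to the fault-free signature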
Linear spatial compactors are built of Exclusive-OR (XOR) or Exclusive-NOR (XNOR) gates to generate n test outputs from the m primary outputs of the circuit under test, where n&lt;m. Linear compactors differ from nonlinear compactors in that the output value of a linear compactor changes with a change in just one input to the compactor. With nonlinear compactors, a change in an input value may go undetected at the output of the compactor. However, even linear compactors may mask errors in an integrated circuit. For example, the basic characteristic of an XOR (parity) tree is that any combination of an odd number of errors on its inputs propagates to its output, while any combination of an even number of errors remains undetected.
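The parity property just described is easy to demonstrate; in the sketch below, a hypothetical single-output XOR compactor reduces one slice of scan-chain outputs to a parity bit, so a single error changes the compacted output while two simultaneous errors cancel and go undetected.

from functools import reduce

# Sketch of a single-output XOR (parity) spatial compactor.
def xor_compact(scan_outputs):
    """Compress one m-bit slice of scan-chain outputs into a single parity bit."""
    return reduce(lambda a, b: a ^ b, scan_outputs)

fault_free = [1, 0, 1, 1, 0, 0, 1, 0]

one_error  = fault_free[:]; one_error[2] ^= 1                           # odd number of errors
two_errors = fault_free[:]; two_errors[2] ^= 1; two_errors[5] ^= 1      # even number of errors

print(xor_compact(fault_free))   # reference parity
print(xor_compact(one_error))    # differs from the reference: the single error is detected
print(xor_compact(two_errors))   # equals the reference: the two errors mask each other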
An ideal compaction algorithm has the following features: (1) it is easy to implement as a part of the on-chip test circuitry, (2) it is not a limiting factor with respect to test time, (3) it provides a logarithmic compression of the test data, and (4) it does not lose information concerning faults. In general, however, there is no known compaction algorithm that satisfies all the above criteria. In particular, it is
