Self-adjusting burn-in test

Electricity: measuring and testing – Fault detecting in electric circuits and of electric components – Of individual circuit component or element


Details

C324S763010, C702S118000, C714S733000

Reexamination Certificate

active

06326800

ABSTRACT:

FIELD OF THE INVENTION
This invention relates generally to testing semiconductor wafers, chips, and multi-chip modules and, more particularly, to a dynamic, adjustable burn-in test that avoids over- and under-burn-in effects, thereby improving the reliability and performance margins of a technology.
BACKGROUND OF THE INVENTION
CMOS technology has evolved to the point where the computer market has rapidly opened up to a wide range of consumers. Today's multimedia computer uses a microprocessor running at over 300 MHz with 64 MB of main memory. In the near future, a 1 GHz microprocessor with 1 GB of main memory will become commonplace, which suggests strong research and development activity for gigahertz and gigabyte products, particularly those using deep sub-micron technology. Beyond the performance, density, and lithographic difficulties, it is even more important in the integrated circuit (IC) marketplace to offer customers a chip which is highly reliable at a competitive cost. To achieve reliability, burn-in testing at the module level is performed to identify weak chips before they are shipped to a customer. Defective chips found at module burn-in are routinely discarded, since they are not readily repairable by on-chip redundancy replacement configurations. This results in higher module costs and lost chip yield.
In view of this problem, and as a possible solution to it, an improved wafer-level burn-in test is described herein. Once a weak chip has been detected at the wafer level, it may be repaired if the number of defective elements is less than the number of available redundant repair circuits.
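By way of illustration only (this sketch is not part of the patent, and the element counts in the example are hypothetical), the repair criterion reduces to a simple comparison:

```python
# Illustrative sketch: a wafer-level chip is worth repairing only if its
# defective elements can all be replaced by the chip's redundant repair
# circuits; otherwise it is discarded.

def is_repairable(defective_elements: int, redundant_circuits: int) -> bool:
    """True if the chip has fewer defective elements than spare circuits."""
    return defective_elements < redundant_circuits

# Hypothetical example: a chip with 3 failing elements and 4 spares can be
# repaired; one with 6 failing elements cannot, and would be discarded.
print(is_repairable(3, 4))  # True
print(is_repairable(6, 4))  # False
```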
Presently, wafer burn-in methodology applies a static burn-in condition. This is accomplished by accelerating the rate at which marginally functioning cells fail over time. Practitioners in the art will fully realize that reliability failures tend to occur over a period of time: a number of failures occur the first time a chip is powered up, but additional failures continue to appear as time passes. The goal is to ship only chips for which, ideally, there is little likelihood of additional failures showing up. Since it may easily take days of powered operation before the number of failures stabilizes, various methods have been implemented to accelerate this process. One of the most commonly used approaches is to apply a voltage 1.5× higher than the normal power supply voltage. However, because of process parameter variation in the fabrication of the chip, applying a 1.5× voltage can cause either over burn-in or under burn-in. With over burn-in, a reliable chip is destroyed; with under burn-in, a chip which is not reliable may be shipped.
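The following sketch illustrates this point under stated assumptions: the acceptable stress window (E_MIN, E_MAX) is hypothetical, and the voltage/oxide corners are those of cases (a) through (c) in the FIG. 1 discussion below. It is not the patent's method; it merely shows how a fixed 1.5× boost yields a process-dependent oxide stress:

```python
# A minimal sketch: with a fixed 1.5x burn-in voltage, the oxide stress
# E = 1.5 * Vg / Tox drifts with process variation, so some chips land
# outside any intended stress window.

def burn_in_stress(vg_volts: float, tox_nm: float, accel: float = 1.5) -> float:
    """Oxide field in MV/cm under an `accel`-times-boosted gate voltage."""
    tox_cm = tox_nm * 1e-7            # 1 nm = 1e-7 cm
    return accel * vg_volts / tox_cm / 1e6

# Hypothetical stress window, for illustration only.
E_MIN, E_MAX = 7.0, 8.5  # MV/cm

for vg, tox in [(3.3, 6.2), (3.15, 6.5), (3.45, 5.8)]:  # cases (a), (b), (c)
    e = burn_in_stress(vg, tox)
    verdict = "under burn-in" if e < E_MIN else "over burn-in" if e > E_MAX else "ok"
    print(f"Vg={vg} V, Tox={tox} nm -> E={e:.2f} MV/cm ({verdict})")
```

Run on these corners, the nominal chip (a) sits inside the window, while the thin-oxide chip (c) is over-stressed, matching the over/under burn-in dilemma described above.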
The goal of wafer burn-in is to guarantee chip reliability without destroying chips and, in the process, reducing manufacturing yield. It is difficult to realize this goal when a static burn-in condition (1.5× voltage) is applied to all chips, unless the chip is over-designed, in which case chip performance is sacrificed. What is needed is a way of applying the best level of burn-in voltage at all times during testing. In the design of a chip, there exists a range of variables which must be taken into account and balanced against each other. These variables include wafer yield and key chip performance parameters such as speed, access time, and power dissipation. This collection of variables defines what is referred to as the design space. Clearly, trade-offs exist between the best performance, an optimum wafer yield, and reliability. Traditionally, the very best possible performance is sacrificed for wafer yield and reliability so that the cost per chip is kept low. A large part of this trade-off involves choosing a lower internal voltage supply, which favors burn-in and reliability, over a higher internal voltage supply, which favors performance.
FIG. 1 shows an example that illustrates some drawbacks found in conventional static burn-in methods applicable to memory chips. Pass gate 100 couples an input to an output terminal, the output terminal being coupled to a load capacitance C 110. The resistance of the pass gate is R, so the time constant t from input to output is R×C. It is assumed that the nominal gate voltage (Vg) of the transfer gate is 3.3 V and the nominal oxide thickness (Tox) of pass gate 100 is 6.2 nm, applicable to case (a). The reliability parameter is defined as the electric field applied to the oxide of pass gate 100, i.e., Vg/Tox = 5.3 MV/cm. However, because of process deviations, Tox may vary between 6.5 nm and 5.8 nm, and Vg may vary, depending on the chip, between 3.15 V and 3.45 V, as shown in cases (b) and (c). In case (b), the resistance R of pass gate 100 is increased because Vg is lower and Tox is thicker than in the nominal case (a). This, in turn, slows the transfer, increasing t by Δt = 2 ns. In case (c), t is improved by 2 ns because of a lower R due to a higher Vg and a thinner Tox. However, the oxide electric field stress increases to 6.1 MV/cm, which is likely to destroy pass gate 100 during burn-in, when Vg is boosted to 1.5×Vg to accelerate the stress at high temperature. It would be desirable to shift the nominal value of Vg toward greater performance for chips such as those of case (b), if the test methodology could confirm their high reliability; similarly, it would be preferable to shift toward greater reliability for chips such as those of case (c), which the test methodology would capture as having lower reliability.
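To make the delay arithmetic concrete, the sketch below reproduces the ±2 ns shift. The R and C values (10 kΩ, 1 pF) are hypothetical choices that merely satisfy t = R×C and Δt = ΔR×C; they are not values given in the patent:

```python
# Worked numbers for the FIG. 1 discussion (a sketch; R and C are assumed).

R_NOMINAL = 10e3   # ohms, assumed on-resistance of pass gate 100, case (a)
C_LOAD = 1e-12     # farads, assumed load capacitance C 110

t_nominal = R_NOMINAL * C_LOAD           # t = R x C = 10 ns
t_case_b = (R_NOMINAL + 2e3) * C_LOAD    # higher R (lower Vg, thicker Tox): +2 ns
t_case_c = (R_NOMINAL - 2e3) * C_LOAD    # lower R (higher Vg, thinner Tox): -2 ns

for name, t in [("(a)", t_nominal), ("(b)", t_case_b), ("(c)", t_case_c)]:
    print(f"case {name}: t = {t * 1e9:.1f} ns")
```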
Ideally, it would be highly advantageous to dynamically adjust operating conditions during burn-in. Present burn-in setups are not capable of providing feedback from the prevailing burn-in conditions and from chip operation. Such a capability must be supported by appropriate on-chip circuitry that analyzes the chip's response in real time during the burn-in test.
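A toy model of such a feedback loop is sketched below. The target field, step size, and the field measurement (a stand-in for on-chip monitor feedback) are all assumptions made for illustration; this is not the patent's circuitry:

```python
# A minimal sketch of the dynamic idea: trim the applied gate voltage so
# that the measured oxide stress tracks a target, instead of applying a
# fixed 1.5x boost to every chip.

TARGET_E = 8.0   # MV/cm, assumed target stress during burn-in
STEP = 0.05      # volts per adjustment cycle, assumed

def self_adjusting_burn_in(vg: float, tox_nm: float, cycles: int = 20) -> float:
    """Iteratively adjust Vg so that Vg/Tox converges toward TARGET_E."""
    for _ in range(cycles):
        # Stand-in for real-time feedback from an on-chip monitor.
        e = vg / (tox_nm * 1e-7) / 1e6   # measured field, MV/cm
        if e > TARGET_E:
            vg -= STEP   # back off: avoid over burn-in
        elif e < TARGET_E:
            vg += STEP   # push harder: avoid under burn-in
    return vg

# Starting from the static 1.5x boost, the thin-oxide chip of case (c)
# settles at a lower burn-in voltage than the thick-oxide chip of case (b).
print(f"case (b): Vg -> {self_adjusting_burn_in(1.5 * 3.15, 6.5):.2f} V")
print(f"case (c): Vg -> {self_adjusting_burn_in(1.5 * 3.45, 5.8):.2f} V")
```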
OBJECTS OF THE INVENTION
Accordingly, it is an object of the invention to provide a method for applying a self-adjusting burn-in test to a device-under-test (DUT) and, more particularly, to memory chips.
It is another object to self-adjust critical burn-in test parameters, such as Vg, dynamically modifying the test conditions to avoid over and under burn-in.
It is yet another object to provide an apparatus which can monitor and self-adjust critical device parameters during the course of burn-in.
It is still another object to improve the reliability of chips being shipped by eliminating weak chips likely to fail early in their lifetime.
It is a further object to improve reliability by using lower internal voltages for better burn-in.


