Testing method and apparatus assuring semiconductor device...

Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital logic testing

Reexamination Certificate

Details

Classification codes: C714S721000, C714S734000, C714S738000, C324S765010, C716S030000
Status: active
Patent number: 06574760

ABSTRACT:

FIELD OF THE INVENTION
The present invention is related in general to the field of semiconductor devices and testing and more specifically to a testing methodology assuring device quality and reliability without conventional burn-in while using a low-cost tester apparatus.
DESCRIPTION OF THE RELATED ART
W. Shockley, the inventor of the transistor and Nobel Prize winner, demonstrated in the late '50s and early '60s the effect of fabrication process variations on semiconductor device performance; he specifically explored the dependence of the p-n junction breakdown voltage on local statistical variations of the space charge density; see W. Shockley, “Problems Related to p-n Junctions in Silicon”, Solid-State Electronics, vol. 2, pp. 35-67, 1961.
Since that time, numerous researchers have investigated semiconductor integrated circuit (IC) process steps and shown that each process step has its design window, which in most cases follows a Gaussian bell-shaped distribution curve with unavoidable statistical tails. These researchers have illuminated how this statistical variation affects the performance characteristics of semiconductor devices, and how to keep the processes within a narrow window. The basis for determining the process windows was in most cases careful modeling of the process steps (such as ion implantation, diffusion, oxidation, metallization, junction behavior, effect of lattice defects and impurities, ionization, etc.); see for example reviews in F. van de Wiele et al., “Process and Device Modeling for Integrated Circuit Design”, NATO Advanced Study Institutes Series, Noordhoff, Leyden, 1977. Other modeling studies addressed the simulation of circuits directly; see, for example, U.S. Pat. No. 4,744,084, issued May 10, 1988 (Beck et al., “Hardware Modeling System and Method for Simulating Portions of Electrical Circuits”).
Today, these relationships are well known to circuit and device designers; they determine how process windows must be designed in order to achieve certain performance characteristics and device specifications. Based on these process parameters, computer simulations are available not only at the specification limits but across the full process capability, so that IC designs and layouts can be created. These “good” designs can be expected to result in “good” circuits whenever “good” processes are used in fabrication; device quality and reliability are high. Based on testing functional performance, computer-based methods have been proposed to verify semiconductor device conformance to design requirements. See, for example, U.S. Pat. No. 5,668,745, issued Sep. 16, 1997 (Day, “Method and Apparatus for Testing Semiconductor Devices”).
However, when a process step executed during circuit manufacturing deviates significantly from the center of its window, or is marginal, the resulting semiconductor device may initially still be within its range of electrical specifications but have questionable long-term reliability. How can this be determined? The traditional answer has been the so-called “burn-in” process, which subjects the semiconductor device to accelerating environmental conditions so that the device parameters show within a few hundred hours what would happen in actual operation after about two years.
In typical dynamic burn-in, circuit states are exercised using stuck-fault vectors. The accelerating conditions include elevated temperature (about 140° C.) and elevated voltage (Vdd about 1.5× nominal); the initial burn-in runs for 6 hr, and the extended burn-in consists of two sets of 72 hr, with tests after each set. Since 6 hr of burn-in is equivalent to about 200,000 power-on hours, device wearout appears early, the reliability bathtub curve is compressed, and the effect of defects such as particles becomes apparent.
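The acceleration factor implied by these figures can be estimated from standard reliability models. The following is a minimal sketch, assuming an Arrhenius thermal model and an exponential voltage model; the use conditions (55° C., 3.3 V) and the parameters Ea = 0.7 eV and γ = 3.0/V are illustrative handbook-style values, not taken from this text:

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def thermal_acceleration(t_use_c: float, t_stress_c: float, ea_ev: float) -> float:
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

def voltage_acceleration(v_use: float, v_stress: float, gamma: float) -> float:
    """Exponential (E-model) voltage acceleration factor."""
    return math.exp(gamma * (v_stress - v_use))

# Stress conditions from the text: 140 deg C and Vdd = 1.5 x nominal.
# Use conditions and model parameters below are assumed, not from the text.
af = (thermal_acceleration(55.0, 140.0, ea_ev=0.7)
      * voltage_acceleration(3.3, 1.5 * 3.3, gamma=3.0))
print(f"combined acceleration factor: ~{af:,.0f}")
print(f"6 hr of burn-in: ~{6 * af:,.0f} equivalent power-on hours")
```

With these assumed parameters the combined factor comes out near 2×10⁴, which puts 6 hours of stress in the same range as the roughly 200,000 power-on hours cited above.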
There are several types of defects in ICs, most of which are introduced during the manufacturing process flow. In the last three decades, these defects have been studied extensively; progress is, for example, reported periodically in the Annual Proceedings of the IEEE International Reliability Physics Symposium and in the reprints of the Tutorials of that Symposium.
In the so-called bathtub curve display, the number of failures is plotted versus time. The initial high number of failures is due to extrinsic failures, such as particulate contamination and poor process margins. The failure rate drops sharply to the minimum of intrinsic failures and remains at this level for most of the device lifetime. After this period of constant, inherent failure rate, the number of failures increases sharply due to wearout (irreversible degradation such as metal electromigration, dielectric degradation, etc.).
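This shape can be reproduced by summing three hazard-rate components: a decreasing infant-mortality term, a constant intrinsic term, and an increasing wearout term. A minimal sketch follows, using Weibull hazards; all shape and scale parameters are chosen purely for illustration and do not come from this text:

```python
import numpy as np

def weibull_hazard(t: np.ndarray, beta: float, eta: float) -> np.ndarray:
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

t = np.linspace(1.0, 100_000.0, 1_000)  # device age in power-on hours

# Illustrative parameters only (not from the text):
infant = weibull_hazard(t, beta=0.5, eta=5_000.0)      # beta < 1: decreasing
intrinsic = np.full_like(t, 1e-6)                      # beta = 1: constant
wearout = weibull_hazard(t, beta=4.0, eta=120_000.0)   # beta > 1: increasing

bathtub = infant + intrinsic + wearout  # total failure rate versus time
```

In this picture, burn-in simply fast-forwards a device past the steep left wall of the curve before it ships.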
Based on functional tests and non-random yields, automated methods have been proposed to analyze defects in IC manufacturing and distinguish between random defects and systematic defects. See, for example, U.S. Pat. No. 5,497,381, issued Mar. 5, 1996 (O'Donoghue et al., “Bitstream Defect Analysis Method for Integrated Circuits”).
For burn-in, the devices need facilities equipped with test sockets, electrical biasing, elevated-temperature provision, and test equipment. Considering the large population of devices to be burned in, the expense for burn-in is high (floor space, utilities, expensive high-speed testers for final device test, sockets, etc.). As an example of a proposal to avoid burn-in, see J. A. van der Pol et al., “Impact of Screening of Latent Defects at Electrical Test on the Yield-Reliability Relation and Application to Burn-in Elimination”, 36th Ann. Proc. IEEE IRPS, pp. 370-377, 1998. That paper proposes voltage stresses, distribution tests, and Iddq screens as alternatives to burn-in, but these tests cover only the device specifications and are thus too limited, and still expensive.
An additional concern is the effect burn-in has on the devices subjected to this procedure. After the process, many survivors are “walking wounded,” meaning that their probable life span may have been shortened to an unknown degree.
In addition to the greatly increased cost for burn-in, the last decade has seen an enormous cost increase for automatic test equipment. Modern high-speed testers for ICs cost in excess of $1 million, approaching $2 million. They also consume valuable floor space and require considerable installation (cooling) effort. These testers not only have to perform the traditional DC parametric device tests, but also the ever more demanding functional and AC parametric tests. DC parametric tests measure leakage currents and compare input and output voltages, both of which require only modest financial investment. Functional tests are based on the test patterns of the device under test, a tremendous task given the rapidly growing complexity of modern ICs. AC parametric tests measure speed, propagation delay, and signal rise and fall times. These tests are combined into “at-speed” functional tests. The required timing control, calibration, and delivery of many patterns at high speed demand the lion's share of the financial investment (between 80 and 95%). Included here are the pattern memory and timing for stimulus and response, formatting by combining timing and pattern memory, serial shift registers (scan), and the pattern sequence controller.
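Of these three classes, the DC parametric tests are the least demanding, which is consistent with their modest cost. A minimal sketch of their pass/fail logic follows; the limit values are illustrative, and the measurement dictionary stands in for tester driver calls, which are hypothetical here:

```python
# Illustrative DC parametric screening. The limits below are placeholder
# values typical of a 3.3 V logic family, not taken from this text.
LEAKAGE_LIMIT_A = 1e-6  # maximum allowed input leakage current (A)
VOH_MIN_V = 2.4         # minimum output-high voltage (V)
VOL_MAX_V = 0.4         # maximum output-low voltage (V)

def dc_parametric_pass(meas: dict) -> bool:
    """Return True if measured values meet all DC parametric limits."""
    return (meas["i_leak"] <= LEAKAGE_LIMIT_A
            and meas["voh"] >= VOH_MIN_V
            and meas["vol"] <= VOL_MAX_V)

# Example: a device with 0.2 uA leakage, VOH = 2.9 V, VOL = 0.2 V passes.
print(dc_parametric_pass({"i_leak": 2e-7, "voh": 2.9, "vol": 0.2}))  # True
```

Functional and at-speed AC tests, by contrast, need the pattern delivery machinery described next, which is where the cost concentrates.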
Traditional automatic test equipment (ATE) incorporates expensive, high-performance pattern memory subsystems to deliver complex test patterns during production test of digital ICs. These subsystems are designed to deliver wide patterns (typically 128 to 1024 bits) at high speeds (typically 20 MHz to several hundred MHz, and more than 400 MHz for new devices). The depth of the pattern storage is typically 1 to 64 million vectors. The width, speed, and depth requirements of the pattern memory, along with the sequencing capability (loops, branches, etc.), combine to significantly affect the cost of the pattern subsystem, to the extent that it represents a major component of the overall ATE cost.
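Simple arithmetic on these figures shows why. A back-of-the-envelope sketch using the widest, deepest, and fastest numbers just cited:

```python
# Pattern memory sizing from the figures above (upper ends of the ranges).
width_bits = 1024            # widest cited pattern
depth_vectors = 64_000_000   # deepest cited pattern storage
rate_hz = 400e6              # fastest cited vector rate (new devices)

total_bytes = width_bits * depth_vectors / 8        # raw pattern storage
bandwidth_bytes_per_s = width_bits * rate_hz / 8    # sustained delivery rate

print(f"pattern memory: {total_bytes / 2**30:.1f} GiB")               # ~7.6 GiB
print(f"sustained bandwidth: {bandwidth_bytes_per_s / 1e9:.1f} GB/s")  # ~51 GB/s
```

Sustaining tens of gigabytes per second out of several gibibytes of dedicated, precisely timed memory is what drives the pattern subsystem toward the 80 to 95% cost share noted above.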
The traditional pattern memory subsystem limitations are often the source…
