Calibrating a test system using unknown standards

Classification: Data processing: measuring, calibrating, or testing – Calibration or correction system – Sensor or transducer

Details

Type: Reexamination Certificate
Class: C702S085000
Status: active
Patent number: 06643597

ABSTRACT:

TECHNICAL FIELD
The invention relates to electronic test equipment. In particular, the present invention relates to calibration of electronic test equipment systems such as vector network analyzers.
BACKGROUND ART
Test systems are critical to the manufacture and maintenance of modern electronic devices and systems. A variety of test systems are routinely employed, such as scalar and vector network analyzers, spectrum analyzers, and power meters. Most of these systems provide for calibrating the test system. The calibration process attempts to mitigate or remove the effects of test system imperfections from the measurements of a device under test (DUT). Typically, calibration involves using the test system to measure the performance of so-called calibration standards having known performance characteristics. The results of these measurements are then used to extract and remove measurement errors associated with the imperfections of the test system. To better understand the concept of calibration with respect to test systems, consider a network analyzer, its error sources, and the calibration process used to remove the effects of the error sources.
A network analyzer characterizes the performance of RF and microwave devices under test (DUT) in terms of network scattering parameters. Scattering parameters, more commonly called ‘S-parameters’, are reflection and transmission coefficients computed from measurements of voltage waves incident upon and reflected from a port or ports of a DUT. In general, S-parameters are given either in terms of a magnitude and phase or in an equivalent form as a complex number having a real part and an imaginary part. A network analyzer capable of measuring both the phase and magnitude of the S-parameters of the DUT is called a vector network analyzer.
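As a small illustration of the two equivalent S-parameter representations mentioned above, the following Python sketch converts a hypothetical complex S11 value into its magnitude (in dB) and phase (in degrees). The numeric value is invented for the example and is not taken from the patent.

```python
import numpy as np

# Hypothetical complex S11 value (real and imaginary parts); illustrative only.
s11 = complex(-0.35, 0.12)

# Equivalent magnitude/phase representation of the same reflection coefficient.
magnitude_db = 20 * np.log10(abs(s11))   # reflection magnitude in dB
phase_deg = np.degrees(np.angle(s11))    # phase in degrees

print(f"|S11| = {magnitude_db:.2f} dB, phase = {phase_deg:.1f} deg")
```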
A vector network analyzer exhibits random errors and systematic errors during the measurement of a DUT. Random errors are primarily due to system noise sources including phase and amplitude noise of the stimulation source, receiver noise, and sampler noise. Random errors vary randomly as a function of time and, in most cases, cannot be removed by calibration, but may be minimized by averaging repeated measurements. Systematic errors are repeatable, non-random errors associated with imperfections in or non-ideal performance of the network analyzer and test setup being used. Moreover, systematic errors either do not vary with time or vary only slowly with time. Therefore, the effect of systematic errors on measured S-parameter data for a DUT can be minimized or eliminated through the use of network analyzer calibration. Essentially, network analyzer calibration involves determining correction factors or coefficients associated with an error model for the measurement system. Once determined, the correction factors are used to mathematically remove the effect of the systematic errors from the measured S-parameters for the DUT.
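To make the idea of an error model and its correction factors concrete, a widely used one-port error model and its inversion are shown below. This is offered only as an illustration of the kind of correction the paragraph describes; the symbols are conventional and are not quoted from the patent.

```latex
% One-port error model (conventional notation, not taken from the patent):
% \Gamma_m is the raw measured reflection, \Gamma_a the actual reflection of
% the DUT, e_{00} directivity, e_{11} source match, e_{10}e_{01} reflection tracking.
\[
  \Gamma_m = e_{00} + \frac{e_{10}e_{01}\,\Gamma_a}{1 - e_{11}\Gamma_a}
\]
% Once the error terms are known from calibration, the systematic error is
% mathematically removed by inverting the model:
\[
  \Gamma_a = \frac{\Gamma_m - e_{00}}{e_{11}\left(\Gamma_m - e_{00}\right) + e_{10}e_{01}}
\]
```

The twelve-term model discussed next extends this idea to both ports and both measurement directions of a two-port DUT.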
Six types of errors account for the major systematic error terms associated with a vector network analyzer measurement of the S-parameters. The six systematic errors are directivity and crosstalk related to signal leakage, source and load impedance mismatches related to reflections, and frequency response errors caused by reflection and transmission tracking within test receivers of the network analyzer. For a general two-port DUT, there are six forward-error terms and six reverse-error terms (six terms for each of the two ports of the DUT), for a total of twelve error terms. Therefore, a full measurement calibration for a general two-port DUT is often referred to as a twelve-term error correction or calibration.
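For concreteness, a minimal sketch of how the twelve error terms are commonly organized is shown below. The identifiers (EDF, ESF, and so on) follow common industry naming conventions rather than anything prescribed by the patent.

```python
from dataclasses import dataclass

@dataclass
class TwelveTermModel:
    """Conventional twelve-term VNA error model: six forward and six reverse terms."""
    # Forward terms (port 1 driven)
    EDF: complex  # directivity
    ESF: complex  # source match
    ERF: complex  # reflection tracking
    ELF: complex  # load match
    ETF: complex  # transmission tracking
    EXF: complex  # isolation (crosstalk)
    # Reverse terms (port 2 driven)
    EDR: complex
    ESR: complex
    ERR: complex
    ELR: complex
    ETR: complex
    EXR: complex
```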
Calibration standards are typically used in a measurement calibration to measure and quantify the error terms. Once determined, the error terms are used to compute correction factors or correction terms for use in the network analyzer calibration. A calibration standard is a precision device for which the S-parameters are known with sufficiently high accuracy to accomplish the calibration. That is, the accuracy of the calibration is directly related to the accuracy of the knowledge of the S-parameters of the calibration standard.
The known S-parameters of the calibration standard are used to compute a set of calibration coefficients that are incorporated into the network analyzer error model. Then, by making measurements of several different known calibration standards with the network analyzer, it is possible to develop and solve a set of linear equations for a set of correction factors. The correction factors, in conjunction with the calibration coefficients, allow corrected S-parameter data to be reported by the ‘calibrated’ network analyzer. In general, as long as there are more equations (i.e., measurements of known calibration standards) than there are unknown error terms in the error model, the correction factors associated with the error terms can be determined uniquely. For example, in the case of a twelve-term correction for a two-port DUT, four calibration standards consisting of a short circuit (‘short’), an open circuit (‘open’), a load, and a through (‘thru’) can be used to completely and uniquely determine the correction factors associated with each of the terms of the twelve-term error model. The use of short, open, load and thru standards is referred to as a SOLT calibration set. Another example of a popular calibration model used to develop a twelve-term correction is a thru-reflect-line (TRL) calibration.
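The Python sketch below illustrates the linear-equation step described above for the simplest case: a one-port short-open-load calibration at a single frequency, using the one-port error model shown earlier. The standard definitions and the raw measurement values are assumed, synthetic numbers chosen only to make the sketch runnable; they are not data from the patent.

```python
import numpy as np

# Assumed known (ideal) reflection coefficients of the one-port standards.
gamma_actual = np.array([1.0 + 0j,    # open
                         -1.0 + 0j,   # short
                         0.0 + 0j])   # matched load

# Hypothetical raw (uncorrected) reflection measurements of those standards.
gamma_measured = np.array([0.96 + 0.05j,
                           -0.93 - 0.04j,
                           0.02 - 0.01j])

# Rearranged one-port model: gm = e00 + ga*gm*e11 - ga*De, with De = e00*e11 - e10*e01.
# Each measured standard supplies one linear equation in the unknowns [e00, e11, De].
A = np.column_stack([np.ones(3, dtype=complex),
                     gamma_actual * gamma_measured,
                     -gamma_actual])
e00, e11, De = np.linalg.solve(A, gamma_measured)

def correct(gm):
    """Remove the systematic one-port errors from a raw reflection measurement.
    Equivalent to (gm - e00) / (e11*(gm - e00) + e10*e01)."""
    return (gm - e00) / (gm * e11 - De)

# Example: correct a raw DUT reflection measurement.
print(correct(0.30 + 0.20j))
```

If more standards than unknowns are measured, np.linalg.lstsq can replace np.linalg.solve, mirroring the over-determined case described above.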
Unfortunately, it is not always convenient or even possible to construct a set of calibration standards whose S-parameters are known with sufficient accuracy over a frequency range of interest for calibration purposes. An example of such a situation, where constructing known calibration standards is difficult, is testing a DUT that must be mounted in a test fixture as opposed to being connected directly to a coaxial cable or cables attached to the network analyzer. In addition to the problem of constructing and measuring calibration standards for these so-called ‘in-fixture’ measurements, repeatability of the calibration can also be a concern, since it may not be possible to insert calibration standards into the fixture in a sufficiently repeatable manner, leading to unaccounted-for and thus uncalibrated errors in the measurements.
Accordingly, it would be advantageous to calibrate a test system without relying on known calibration standards. Furthermore, it would be desirable for such a calibration to enable the testing of a DUT in a test fixture without concern for the repeatability of calibration standard insertion. Such a calibration would address a long-standing need in the area of calibrated test systems that use calibration standards.
SUMMARY OF THE INVENTION
The present invention is a method and system for calibrating test equipment using a standards-based calibration that facilitates accurate measurements, and a vector network analyzer employing a standards-based calibration. The present invention works well even when a test fixture is used with the test equipment, where the test fixture facilitates ‘in-fixture’ measurements on a device under test (DUT). The method of calibrating is a ‘standards-based’ calibration method that uses a set of calibration standards in which at least one calibration standard has an unknown performance. The system and vector network analyzer of the present invention are also based on this standards-based calibration. The present invention utilizes measurements of calibration standards to correct imperfections in measurements of the DUT performance due to the test system. The present invention further can correct for the effects of the test fixture in DUT measurements. According to the present invention, simulation models of the unknown calibration standards are selected and actual measurements of the unknown calibration standards are used to extract parameter values for constituent elements of the models. Once the element parameter values are extracted, the parameterized models provide an
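To illustrate the kind of parameter extraction the summary describes, the sketch below fits the element values of an assumed simulation model (a simple series R-L one-port, chosen purely for the example) to measured reflection data for an unknown standard using a least-squares fit. The model topology, frequency range, and 'measured' data are all assumptions made for this sketch, not details taken from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

Z0 = 50.0                                   # reference impedance, ohms
freq = np.linspace(1e9, 6e9, 51)            # assumed measurement frequencies, Hz
omega = 2 * np.pi * freq

def model_gamma(params, omega):
    """Reflection coefficient of an assumed series R-L model for the unknown standard.
    Parameters are kept near unity (R in ohms, L in nH) for a well-scaled fit."""
    R, L_nH = params
    z = R + 1j * omega * (L_nH * 1e-9)
    return (z - Z0) / (z + Z0)

def residuals(params, omega, measured):
    # least_squares needs real residuals, so stack real and imaginary parts.
    err = model_gamma(params, omega) - measured
    return np.concatenate([err.real, err.imag])

# 'measured' stands in for the actual measurement of the unknown standard;
# here it is synthesized from nominal values purely to make the sketch runnable.
measured = model_gamma([3.0, 1.2], omega)

fit = least_squares(residuals, x0=[1.0, 1.0], args=(omega, measured))
R_fit, L_nH_fit = fit.x
print(f"extracted R = {R_fit:.2f} ohm, L = {L_nH_fit:.2f} nH")
```

Once element values like these are extracted, the parameterized model can stand in for a known calibration standard when computing the correction factors.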
