Methods to significantly reduce the calibration cost of...

Classification: Data processing: measuring, calibrating, or testing – Measurement system in a specific environment – Chemical analysis


Details

US Classes: C702S023000, C702S070000, C702S185000, C702S189000, C702S190000, C702S191000, C250S252100
Type: Reexamination Certificate
Status: active
Patent Number: 06629041

ABSTRACT:

FEDERALLY SPONSORED RESEARCH—not applicable
SEQUENCE LISTING OR PROGRAM—not applicable
DRAWINGS—not applicable
REFERENCES CITED
1. R. Marbach and H. M. Heise, "Calibration Modeling by Partial Least-Squares and Principal Component Regression and its Optimization Using an Improved Leverage Correction for Prediction Testing," Chemometrics and Intelligent Laboratory Systems 9, 45-63 (1990).
2. R. Marbach, "On Wiener Filtering and the Physics Behind Statistical Modeling," Journal of Biomedical Optics 7, 130-147 (January 2002).
3. T. W. Anderson, "Asymptotic Theory for Principal Component Analysis," Ann. Math. Statist. 34, 122-148 (1963).
FIELD OF THE INVENTION
The invention relates to methods for calibrating multichannel measurement instruments.
BACKGROUND OF THE INVENTION
Multichannel instruments are instruments that first measure a set of multiple input signals and then use an algorithm to generate a single desired output number from the measured values. The input signals can come from one multidimensional measurement, e.g., light absorbance values at different optical wavelengths; from several one-dimensional measurements, e.g., the input set of [temperature, pressure, humidity]; or from any combination of the two. Multichannel instruments are used in many different industries and applications, under varying names; they may also be called, e.g., "multivariate," "multidimensional," "multiple parameter," "multi-pixel," "broad spectrum," or "large-area" instruments. The common characteristic is that they must be calibrated before use, i.e., an algorithm must be programmed into the instrument that translates the multiple measured numbers into the desired single output number. Calibration is the process of determining that algorithm.
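As an illustration of what such a programmed algorithm can look like, the following is a minimal sketch in Python; the linear (weighted-sum) form is assumed here because both calibration approaches discussed below rest on linear regression, and the function names and numerical values are illustrative only.

```python
# Minimal sketch of a calibrated multichannel "algorithm": a fixed rule that
# maps the k measured channel readings to the single desired output number.
# The linear form shown here is an assumption for illustration.
def multichannel_output(channel_readings, weights, offset):
    """channel_readings and weights are equal-length sequences of k numbers;
    offset is a scalar.  Returns the single output number."""
    return offset + sum(w * x for w, x in zip(weights, channel_readings))

# Example: a 3-channel instrument reading [temperature, pressure, humidity],
# with weights and offset determined earlier by calibration (values made up).
y = multichannel_output([21.5, 101.3, 40.0], weights=[0.8, 0.05, -0.02], offset=1.0)
```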
Today's procedures for calibrating multichannel instruments are ineffective and inefficient. The prior art knows two different approaches, viz., the so-called physical and statistical calibration methods, which so far have been thought of as separate and not interrelated (a summary is given, e.g., in Reference [1]). The mathematical tools used in the two approaches are similar, because both are based on linear regression techniques, but the methods differ substantially in the type and amount of data that the user has to measure in order to perform the calibration. The physical method is relatively cheap and intuitively easy to understand; however, it can only be applied in very simple measurement situations, and the quality of its result is almost always inferior to the statistical result. The statistical method is generally preferred because, in principle, it works in all cases and converges toward the desired optimal result; however, it requires large amounts of calibration data, and the cost of collecting those data is usually very high.
At first glance it appears that, since calibration methods have been in widespread use for several decades, the process of calibration should be fully understood and difficult to improve any further. However, much to the surprise of this author, this is far from true; the sad reality is that large amounts of money are currently being wasted on ineffective and inefficient procedures for finding good multichannel algorithms.
The majority of the procedures in use today are based on the statistical approach, which works as follows. During a dedicated calibration time period the instrument is used as intended in the later measurement application, and a large number of data points are measured and saved into memory. If the instrument measures, say, k channels, then each data point consists of (k+1) numbers, namely the k channel readings plus the "true" value of the desired output. The "true" value is measured using an independent reference instrument that serves as the standard for the calibration. Eventually, after a sufficient number of data points have been collected, a hyper-plane is fitted through the data points using standard linear regression techniques, and the parameters of the fit are used to program the algorithm. With the advent of the personal computer, activity on the subject has grown to vast proportions and has even spawned several new branches of science, e.g., "biometrics," "econometrics," and "chemometrics," along with about a dozen new scientific journals, a dozen new research institutes, scores of university chairs, and thousands of professionals graduated in the field.
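As a concrete, hypothetical sketch of this procedure, the following Python code fits the hyper-plane by ordinary least squares; the names X, y_ref, etc. are illustrative, and practical statistical calibrations often use related regression techniques such as partial least-squares or principal component regression (see Reference [1]).

```python
# Hypothetical sketch of the statistical calibration step described above:
# n calibration points are collected, each consisting of the k channel
# readings plus the "true" reference value, and a hyper-plane is fitted
# through them by ordinary least squares.
import numpy as np

def calibrate_statistical(X, y_ref):
    """X: (n, k) array of channel readings; y_ref: (n,) reference values.
    Returns the intercept b0 and the k channel weights b."""
    n = X.shape[0]
    A = np.column_stack([np.ones(n), X])        # column of ones for the intercept
    coef, *_ = np.linalg.lstsq(A, y_ref, rcond=None)
    return coef[0], coef[1:]

# The fitted parameters are then programmed into the instrument, e.g. as the
# offset and weights of the multichannel_output() sketch shown earlier.
```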
The statistical approach works but has significant disadvantages, including:
1. Calibration time periods are often excessively long, because they must capture all the noise that the instrument is likely to see in future use; this is especially true in high-precision applications, where low-frequency noises with time constants on the order of days, weeks, or even months must be modeled;
2. The calibration data set is often affected by a dangerous effect called "spurious correlation" (discussed below, and illustrated by the sketch following this list), which can render results useless and can be difficult to detect;
3. There is no way to effectively use a-priori knowledge about the physics of the measurement process to ease the task of calibration; instead, the calibration is purely "statistical" and always starts from scratch;
4. There is no way to effectively and quantitatively assess the effect of hardware or measurement-process changes on the calibration; consequently, there is no quantitative feedback mechanism that would tell in advance, e.g., what the effect of a planned hardware change on system performance would be, and there is also no way to easily "maintain" or update an existing calibration after slight changes in the instrument hardware or measurement process;
5. There is no way to easily "re-use" an existing calibration for a new but similar application; and
6. There are severe marketing problems, because the results of statistical calibration are hard to interpret, which in turn makes end users reluctant to buy, and depend on, a machine whose inner workings they do not fully understand.
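The following small simulation, with made-up data and names, illustrates the spurious-correlation hazard of item 2: a nuisance channel happens, by accident, to track the reference value during the calibration period, the regression latches onto it, and predictions on later data, where the accidental correlation is gone, degrade badly.

```python
# Hypothetical illustration of "spurious correlation" (item 2 above).
import numpy as np

rng = np.random.default_rng(0)
n = 50
y_cal = rng.normal(size=n)                      # reference values during calibration
signal = y_cal + 0.3 * rng.normal(size=n)       # channel that genuinely carries the analyte
nuisance = y_cal + 0.1 * rng.normal(size=n)     # accidentally correlated during calibration only

A = np.column_stack([np.ones(n), signal, nuisance])
coef, *_ = np.linalg.lstsq(A, y_cal, rcond=None)

# Later use: the nuisance channel no longer follows the analyte at all.
y_new = rng.normal(size=n)
X_new = np.column_stack([np.ones(n), y_new + 0.3 * rng.normal(size=n), rng.normal(size=n)])
rmse_cal = np.sqrt(np.mean((A @ coef - y_cal) ** 2))
rmse_new = np.sqrt(np.mean((X_new @ coef - y_new) ** 2))
print(f"RMSE during calibration: {rmse_cal:.2f}   RMSE in later use: {rmse_new:.2f}")
```

The fit looks excellent on the calibration data yet fails in later use, and nothing in the calibration data alone reveals the problem, which is exactly the difficulty item 2 refers to.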
The reason behind all of these problems is that there is currently no understanding of the relationship between the statistical calibration process and the underlying physics of the measurement problem. As a result, even in the best case, when enough effort has been spent and the statistical method has actually converged to the desired optimum result, users are still left with a feeling of distrust of the solution, because it is not physically understood ("Will it really continue to work in the future?"), and a feeling that one should have collected even more data ("Could I get better than this?").
The reason for the widespread use of the statistical approach, in spite of all the problems listed above, is simply that for many measurement problems there is no alternative. There is also the generally accepted fact that, if enough calibration data can be collected, the statistical method somehow converges toward an optimal result that cannot be outperformed by other solutions. In some simple measurement situations, users shy away from the statistical approach and instead apply the so-called physical approach to calibration. In this method, the user tries to identify the multidimensional fingerprint of each and every physical effect in the multidimensional data space, one effect at a time, and then to reconstruct each measured set of input signals as a weighted sum of the individual effects. Unfortunately, the physical approach only works in very simple situations, and even then the results are inferior to those one could have obtained from the statistical approach. Worse, if the measurement problem is a complicated mixture of many physical effects (which is the principal reason why most users decide to do a multichannel measurement in the first place), the physical approach breaks down and does not work at all (see, e.g., Reference [1]).
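The following is a minimal sketch, in Python, of what such a physical reconstruction can look like: each measured input vector is decomposed by least squares into a weighted sum of known per-effect fingerprints, and the weight of the analyte's fingerprint is reported as the output. The array and function names are illustrative assumptions, not the patent's notation.

```python
# Hypothetical sketch of the "physical" calibration approach described above:
# the k-channel measurement is reconstructed as a weighted sum of the known
# multidimensional fingerprints of the individual physical effects, and the
# weight of the analyte's fingerprint is taken as the output.
import numpy as np

def physical_predict(fingerprints, x_measured, analyte_index=0):
    """fingerprints: (k, m) array whose m columns are the known fingerprints of
    the individual effects across the k channels; x_measured: (k,) readings.
    Returns the estimated weight of the analyte's fingerprint."""
    weights, *_ = np.linalg.lstsq(fingerprints, x_measured, rcond=None)
    return weights[analyte_index]
```

This only works when every relevant effect's fingerprint is known in advance and the effects are few and well separated, which is exactly the limitation described above.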
SUMMARY
The new methods work, in short, by translating the difficult inverse problem ...
