Method of classifying statistical dependency of a measurable...

Data processing: measuring – calibrating – or testing – Measurement system – Measured signal processing


Details

U.S. Class: C706S031000
Type: Reexamination Certificate
Status: active
Patent number: 06363333

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention is directed to a method for classifying, with a computer, the statistical dependency of a measurable first time series, particularly of an electrical signal, that comprises a prescribable plurality of samples.
2. Description of the Invention
The analysis of dynamic systems with a view to classifying the statistical dependency, for the prediction of the curve of an arbitrary measured signal, is the motivation for a wide range of applications.
A given measured signal x can be sampled with the step width w (see FIG. 1). This seemingly arbitrary signal contains linear and non-linear statistical dependencies that are analyzed dependent on a specific plurality of values v in the past, and the acquired information is utilized to predict a plurality of values z in the future.
SUMMARY OF THE INVENTION
An object of the present invention is to provide a method for classifying, with a computer, the information flow of an arbitrary measured signal according to its statistical dependency, taking real disturbing factors such as noise into consideration.
The invention presents a suitable method for the classification of the information flow of a dynamic system. For this purpose, the probability density of a measured signal is modelled by a neural network. With this trained neural network, predictions about the curve of the signal can be made for z future values on the basis of v past values. The extent to which these predictions are accurate within a range to be defined can be improved by raising the order, i.e. the plurality of steps v into the past. The precision of the prediction fluctuates within a variance (Gaussian distribution or sum of Gaussian distributions) lying around a mean value.
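A minimal sketch of such a density-modelling network is given below. It is an illustration, not the patent's implementation: the class name GaussianPredictor, the hidden-layer size, and the use of PyTorch are assumptions. The network maps the v most recent samples to the mean and log-variance of a Gaussian over the next value, matching the mean-plus-variance description above.

```python
# Illustrative sketch only: a network that models the probability density of
# the next sample as a Gaussian whose mean and variance depend on the v past
# samples. Names and sizes are assumptions, not taken from the patent.
import torch
import torch.nn as nn

class GaussianPredictor(nn.Module):
    def __init__(self, v: int, hidden: int = 32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(v, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, 1)     # mean of the predicted Gaussian
        self.logvar_head = nn.Linear(hidden, 1)   # log-variance keeps the variance positive

    def forward(self, past: torch.Tensor):
        # past: (batch, v) -- the v most recent samples of the measured signal
        h = self.body(past)
        return self.mean_head(h), self.logvar_head(h)
```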
The prediction probability is modelled by probability densities of a dynamic system. The dynamic system can be established by an arbitrary measured signal. A non-linear Markov process of order m proves suitable for describing the conditional probability densities. The non-linear Markov process is thus modelled by a neural network so that, dependent on the order m of the non-linear Markov process, a prediction for r steps into the future can be made with the assistance of the neural network. The order m of the non-linear Markov process thereby corresponds to the plurality of values from the past that are taken into consideration in the conditional probability densities. The predicted value for a respective step r lies in the region around a statistical mean established by a variance.
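Restated in symbols (a paraphrase of the passage above, not a formula from the patent; the notation for the network outputs is assumed, and a single Gaussian is written although the text also allows a sum of Gaussians), the order-m non-linear Markov model factorizes the density of the series into conditional densities supplied by the neural network:

```latex
p(x_{m+1},\dots,x_N \mid x_1,\dots,x_m)
  = \prod_{t=m+1}^{N} p\left(x_t \mid x_{t-1},\dots,x_{t-m}\right),
\qquad
p\left(x_t \mid x_{t-1},\dots,x_{t-m}\right)
  = \mathcal{N}\left(x_t;\ \mu_\theta(x_{t-1},\dots,x_{t-m}),\ \sigma_\theta^2(x_{t-1},\dots,x_{t-m})\right)
```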
The neural network can be trained by maximizing the product of the prediction probabilities according to the maximum likelihood principle. m+1 values of the dynamic system that is to be modelled are required as input for training the neural network, where m represents the order of the non-linear Markov process. The prediction probabilities corresponding to the non-linear Markov process are thereby learned.
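A sketch of such a maximum-likelihood training step follows. It is illustrative only: the window construction, the optimizer handling, and the assumption that the model is the Gaussian-output network sketched earlier (with v = m inputs) are ours.

```python
# Illustrative sketch only: one maximum-likelihood training step. Each training
# example is a window of m+1 consecutive samples: the first m values are the
# network input, the last value is the prediction target.
import torch

def train_step(model, series: torch.Tensor, m: int, optimizer) -> float:
    windows = series.unfold(0, m + 1, 1)           # all windows of length m+1: (N - m, m + 1)
    past, target = windows[:, :m], windows[:, m:]  # inputs and next values
    mean, logvar = model(past)
    # Maximizing the product of the Gaussian prediction probabilities is
    # equivalent to minimizing this negative log-likelihood (up to a constant).
    nll = 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).sum()
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    return nll.item()
```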
An arbitrary number of surrogates that represent a second time series can be determined with the trained neural network, as described by Theiler et al., Physica D, vol. 58 (1992), pp. 77-94. A criterion d(r) for statistical dependency is calculated for the classification of the second time series, where r defines the plurality of steps into the future. The criterion d(r) is calculated both for the first time series and for the second time series. The difference between the criterion d(r) corresponding to the first time series and the criterion d(r) corresponding to the second time series indicates the extent to which the second time series produced by the neural network agrees with the first time series. A number r of future steps is thereby considered in order to make a more exact statement about the quality of the prediction or the degree of agreement.
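The passage above does not define d(r) concretely, so the sketch below should be read with caution: it generates a surrogate series by iterating the trained network (sampling from its predicted Gaussian) and uses the mean squared r-step prediction error purely as a stand-in criterion. Both choices are assumptions, not the patent's definitions.

```python
# Illustrative sketch only. make_surrogate draws a second time series from the
# trained model; d_criterion is a STAND-IN for the criterion d(r) -- its exact
# form is not specified in the text above.
import torch

def make_surrogate(model, seed: torch.Tensor, length: int) -> torch.Tensor:
    # seed: the first m samples; each further value is sampled from the
    # Gaussian the trained network predicts for it.
    values = list(seed)
    m = seed.numel()
    with torch.no_grad():
        for _ in range(length - m):
            past = torch.stack(values[-m:]).reshape(1, m)
            mean, logvar = model(past)
            sample = mean + (0.5 * logvar).exp() * torch.randn_like(mean)
            values.append(sample.reshape(()))
    return torch.stack(values)

def d_criterion(model, series: torch.Tensor, m: int, r: int) -> float:
    # Stand-in d(r): mean squared error of the r-step-ahead prediction obtained
    # by repeatedly feeding the model its own mean prediction.
    windows = series.unfold(0, m + r, 1)                    # (K, m + r)
    past, target = windows[:, :m], windows[:, m + r - 1]
    with torch.no_grad():
        for _ in range(r):
            mean, _ = model(past)
            past = torch.cat([past[:, 1:], mean], dim=1)    # shift window, append prediction
    return ((past[:, -1] - target) ** 2).mean().item()
```

Comparing d_criterion on the measured series with its value on one or more surrogates, for several values of r, then yields the difference described above.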
An advantageous configuration of the inventive method lies in the employment of a time series having a Gaussian distribution instead of the first time series that describes the dynamic system. The Gaussian time series is obtained by drawing, corresponding to the plurality N of values of the first time series, N random numbers from a Gaussian distribution, i.e. around a mean with a variance to be determined. These N random numbers are sorted according to the rank of the first time series, yielding a time series having a Gaussian distribution. Compared to the first time series, the modified time series has the advantage that its samples are normalized: non-linearities that may have been introduced by the measuring apparatus during registration of the samples are correspondingly attenuated within the normed range of the Gaussian probability density distribution.
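The rank-ordered Gaussianization described in this paragraph can be written down directly; a minimal sketch follows (the function name and the NumPy dependency are assumptions).

```python
# Illustrative sketch only: replace the first time series by a Gaussian-
# distributed series with the same rank order, as described above.
import numpy as np

def gaussianize_by_rank(series: np.ndarray, mean: float = 0.0, std: float = 1.0) -> np.ndarray:
    n = len(series)
    gauss = np.sort(np.random.normal(mean, std, size=n))  # N sorted Gaussian random numbers
    ranks = np.argsort(np.argsort(series))                # rank of each original sample
    return gauss[ranks]                                   # Gaussian values in the original rank order
```

The k-th smallest Gaussian draw ends up at the position of the k-th smallest original sample, so the result is Gaussian-distributed but preserves the rank structure of the first time series.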
If the first time series has not been classified precisely enough, then it can be advantageous to implement an iteration with a non-linear Markov process of a higher order in order to obtain a more precise predictability of the second time series. Iterative tests with various hypotheses thus become possible.


REFERENCES:
patent: 5417211 (1995-05-01), Abraham-Fuchs et al.
patent: 5822742 (1998-10-01), Alkon et al.
patent: 5938594 (1999-08-01), Poon et al.
“Nonparametric Data Selection for Improvement of Parametric Neural Learning: A Cumulant-Surrogate Method,” Deco et al., ICANN 96, Jul. 16, 1996, pp. 121-126.
“Learning Time Series Evolution by Unsupervised Extraction of Correlations,” Deco et al., Physical Review E, vol. 51, no. 3, Mar. 1995, pp. 1780-1790.
“Unsupervised Learning for Boltzmann Machines,” Deco et al., Network: Computation in Neural Systems, vol. 6, no. 3, Aug. 1, 1995, pp. 437-448.
“Testing for Nonlinearity in Time Series: The Method of Surrogate Data,” Theiler et al., Physica D, vol. 58 (1992), pp. 77-94.
