Measurement scale for non-uniform data sampling in N dimensions

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed

Reexamination Certificate


Details

US classification codes: C708S403000, C708S405000, C341S123000
Status: active
Patent number: 06477553

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to methods for sampling objects. In particular, it provides a system implementing a rigorous mathematical framework, with provably correct and practical algorithmic implementations, for digitally recording a signal having any number of degrees of freedom.
BACKGROUND OF THE INVENTION
A physical or virtual object is recorded for application-specific analysis by sampling or measuring it at various points. An object might be a one-dimensional electrical signal, price history or other time series, a two-dimensional image, a three-dimensional vector field, or an N-dimensional surface in, say, configuration space, momentum space, or phase space, all useful in physics. One important example is the current use of uniformly spaced samples of signals or images to analyze these objects for subsequent processing such as filtering or convolution. This analysis involves calculating approximations to the Fourier transform, wavelet transform, or other related transforms. The aggregate of sample points generated by the measuring process creates a mosaic of the original physical object. The degree of congruence of the mosaic to the original depends on the number of sample points used as well as on the judicious location of those points.
A classical example is the recording of an audio signal for replay by sampling the signal amplitude at uniform time intervals of less than half the period of the highest frequency present in the signal. The so-called Nyquist theorem assures that such a recording enables reconstruction of the original signal. If the uniform sampling occurred at a lesser rate, an attempt to replay the sampled signal would in many cases result in a distortion termed "artifacting", in which harmonics of the signal would not be correctly reproduced.
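As a concrete illustration of the sampling-rate requirement just described, the following sketch (an illustrative example, not part of the patent) samples a 5 Hz sine wave above and below its Nyquist rate of 10 Hz; the undersampled version aliases to a lower apparent frequency.

```python
import numpy as np

F_SIGNAL = 5.0                      # signal frequency in Hz; the Nyquist rate is 10 Hz

def dominant_frequency(sample_rate, duration=4.0):
    """Sample a 5 Hz sine uniformly and return the strongest frequency in its spectrum."""
    n = int(sample_rate * duration)
    t = np.arange(n) / sample_rate
    x = np.sin(2 * np.pi * F_SIGNAL * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

print(dominant_frequency(sample_rate=50.0))   # well above the Nyquist rate: reports ~5 Hz
print(dominant_frequency(sample_rate=8.0))    # below the Nyquist rate: aliased to ~3 Hz
```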
Limited resources place constraints upon the generation of sample points, and sampling is often a fairly expensive operation in terms of the system resources consumed. Depending on the application, the number of sample points generated might be constrained by the resolution of the measuring equipment used (a cost constraint), by the amount of storage available to save the samples (a space constraint), and by the observation window of opportunity within which to obtain the samples (a time constraint). Thus, a set of samples can be regarded as a somewhat scarce resource, and each sample point should have impact and provide a maximal contribution of information content to the whole mosaic.
Since there is a cost associated with the generation of each sample point, some optimization of sampling is generally required. Particularly in the case of objects having periodically recurring patterns, there is much redundancy in samples taken at uniform intervals. More often, an object might have mostly unchanging features with little useful information content, while a small fraction of it, say 10%, contains volatile features of great interest. This has led to the use of non-uniformly spaced, or irregularly patterned, data samples.
For instance, a signal might exhibit rapid change (or volatility) in one concentrated region but otherwise be fairly smooth. Or, an image might exhibit sharp edges as well as regions of unchanging, or predictable, patterns. In both cases, many sample points are required in certain concentrated intervals or areas, where much information can be gleaned from the complex changes occurring there, while few sample points are needed elsewhere, where little useful information can be extracted.
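The following sketch illustrates this idea of concentrating samples where the object changes rapidly. The density rule used here (proportional to the local slope) is a hypothetical heuristic chosen purely for illustration, not the method of the invention.

```python
import numpy as np

def nonuniform_sample_points(f, a, b, n_samples, n_probe=1000):
    """Place more sample points where |f'| is large and fewer where f is smooth."""
    t = np.linspace(a, b, n_probe)
    slope = np.abs(np.gradient(f(t), t))
    density = slope + 0.1 * slope.mean()          # keep a small baseline density everywhere
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    # Invert the cumulative density: equal steps in probability give unequal steps in t.
    return np.interp(np.linspace(0.0, 1.0, n_samples), cdf, t)

# A signal that is smooth except for a sharp transition near t = 0.5.
f = lambda t: np.tanh(50.0 * (t - 0.5))
points = nonuniform_sample_points(f, 0.0, 1.0, 20)
print(np.round(points, 3))    # the points cluster around the transition at t = 0.5
```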
The number of sample points required using standard uniformly spaced samples rises dramatically as the number of dimensions increases. Thus, if a one-dimensional signal requires O(N) samples, then a two-dimensional image would need O(N^2) samples, a three-dimensional object O(N^3) samples, and so on. (A quantity Q(x) is said to be of order x^k, written O(x^k), if Q(x)/x^k is bounded.)
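To make this growth concrete, the following snippet (illustrative arithmetic only) assumes N = 1000 uniformly spaced points per axis and prints the total sample count for each dimension.

```python
# Total number of uniformly spaced samples at N = 1000 points per axis.
N = 1000
for k in (1, 2, 3, 4):
    print(f"{k}-dimensional object: N^{k} = {N**k:,} samples")
# 1,000 samples in 1-D grow to 1,000,000,000,000 samples in 4-D.
```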
What is required is a method to specify the manner of sampling an object wherein the totality of samples provides comprehensive and non-redundant information about the object, while minimizing the number of samples required, especially as the dimension of the sampled object increases.
The following methods are generally known in the prior art:
Accept the limitations of uniform sampling and utilize Fourier Transform approximation techniques such as the Discrete Fourier Transform (DFT) and its FFT implementation;
Approximate, generally via interpolation, non-uniformly spaced samples by uniformly spaced samples of sufficiently high resolution in critical regions tailored to the particular object;
Generate an irregular measurement scale using pseudo-random numbers, typically implemented via a Monte Carlo method;
Use multi-resolution wavelets; and
Adaptively sample data (a minimal sketch of this last approach follows the list).
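The adaptive-sampling approach is not elaborated further below, so here is a minimal sketch of one common variant (a recursive refinement rule assumed purely for illustration, not a method from the patent): an interval is subdivided only where linear interpolation between existing samples predicts the midpoint poorly.

```python
import numpy as np

def adaptive_sample(f, a, b, tol=1e-3, max_depth=12):
    """Recursively add sample points where linear interpolation is poor (illustrative)."""
    def recurse(x0, x1, depth):
        xm = 0.5 * (x0 + x1)
        # If the midpoint is already well predicted by its neighbours, stop refining here.
        if depth >= max_depth or abs(f(xm) - 0.5 * (f(x0) + f(x1))) < tol:
            return [x0]
        return recurse(x0, xm, depth + 1) + recurse(xm, x1, depth + 1)
    return np.array(recurse(a, b, 0) + [b])

# Many points are spent near the sharp bend at x = 0.7 and few on the flat regions.
points = adaptive_sample(lambda x: np.arctan(200.0 * (x - 0.7)), 0.0, 1.0)
print(len(points))
```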
The Fourier Transform plays a dual role in N-dimensional data recording. First, consider a one-dimensional signal representing time variations of some variable of interest, say voltage or price. The Fourier expansion is capable of representing an analog signal by a sequence of complex-valued Fourier coefficients, the amplitudes and phases of the Fourier components, whereby the content of a signal with a bounded frequency spectrum is accurately represented by a discrete data series. This in itself results in effective data compression, especially for signals having significant periodic components. Second, the Fourier coefficients are themselves determined by sampling the signal, which brings one back to the need for the present invention.
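The compression aspect can be sketched as follows (an illustrative example, not the patent's method): for a signal dominated by a few periodic components, retaining only the largest Fourier coefficients reproduces the signal almost exactly.

```python
import numpy as np

# A uniformly sampled signal containing two periodic components.
n = 512
t = np.arange(n) / n
x = np.sin(2 * np.pi * 7 * t) + 0.5 * np.cos(2 * np.pi * 19 * t)

coeffs = np.fft.rfft(x)                       # complex-valued Fourier coefficients
keep = np.argsort(np.abs(coeffs))[-4:]        # indices of the few dominant components
compressed = np.zeros_like(coeffs)
compressed[keep] = coeffs[keep]
x_reconstructed = np.fft.irfft(compressed, n)

print("coefficients kept:", len(keep), "of", len(coeffs))
print("max reconstruction error:", np.max(np.abs(x - x_reconstructed)))
```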
The standard technique is to employ the Discrete Fourier Transform (DFT) as a substitute for the Fourier Transform; the Fast Fourier Transform (FFT) is an efficient implementation of the DFT. The DFT is an approximation that inherently distorts the spectrum generated by the Fourier transform. Nevertheless, it is tolerated as the most commonly used digital signal and image processing technique because it is analytically tractable and readily implemented on diverse hardware/software platforms.
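One familiar form of that distortion is spectral leakage, illustrated by the following sketch (a standard NumPy demonstration assumed here for illustration): a sinusoid whose frequency falls between DFT bins smears its energy across many bins, whereas an on-bin sinusoid does not.

```python
import numpy as np

n = 256
t = np.arange(n)
on_bin  = np.sin(2 * np.pi * 8.0 / n * t)   # frequency lands exactly on a DFT bin
off_bin = np.sin(2 * np.pi * 8.5 / n * t)   # frequency falls between two DFT bins

def significant_bins(x, tol=1e-6):
    """Count spectral bins holding a non-negligible share of the peak magnitude."""
    mag = np.abs(np.fft.rfft(x))
    return np.count_nonzero(mag > tol * mag.max())

print(significant_bins(on_bin))    # 1 bin: the discrete spectrum is essentially exact
print(significant_bins(off_bin))   # many bins: energy leaks across the spectrum
```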
The approximation, generally via interpolation, of non-uniformly spaced samples by uniformly spaced samples of sufficiently high resolution also distorts the calculated spectrum. It is also potentially wasteful of sample points, since it requires uniformly spaced samples as anchors around which the irregularly spaced samples are interpolated, and its mathematics is difficult to implement. Indeed, the overwhelming general practice is still to use the first approach.
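A minimal sketch of this second approach, assuming simple linear interpolation (practical systems may use higher-order schemes): irregularly spaced samples are resampled onto a uniform grid so that a standard FFT can then be applied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Irregularly spaced sample times of a 5 Hz band-limited signal over one second.
t_irregular = np.sort(rng.uniform(0.0, 1.0, 200))
x_irregular = np.sin(2 * np.pi * 5 * t_irregular)

# Resample onto a uniform grid by linear interpolation, then apply the FFT.
n_uniform = 256
t_uniform = np.linspace(0.0, 1.0, n_uniform, endpoint=False)
x_uniform = np.interp(t_uniform, t_irregular, x_irregular)

spectrum = np.abs(np.fft.rfft(x_uniform))
freqs = np.fft.rfftfreq(n_uniform, d=1.0 / n_uniform)
print(freqs[np.argmax(spectrum[1:]) + 1])   # ~5 Hz, up to interpolation error
```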
The use of pseudo-random numbers is analytically or mathematically difficult to describe, if not pragmatically intractable. That is, it is not readily describable mathematically, say as a set of equations. A mathematical description is generally needed in practice to develop algorithms that automatically map sample points to the irregularly spaced random points. The approach is also computationally complex, because the deltas, the differences between each sample point and its neighboring random points, generally must be recalculated, since the random points used might not be reproducible.
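A sketch of the pseudo-random (Monte Carlo) placement of sample points follows. Note that the points are reproducible here only because an explicit seed is fixed; without a recorded seed, the deltas between neighbouring points would have to be stored or recomputed, as the passage above observes.

```python
import numpy as np

def monte_carlo_sample_points(n_points, bounds, seed):
    """Draw pseudo-random sample locations inside an N-dimensional box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T    # bounds: one (low, high) pair per dimension
    return lo + (hi - lo) * rng.random((n_points, len(bounds)))

# Sample points in a 3-D box; the same seed reproduces the same irregular scale.
pts_a = monte_carlo_sample_points(5, [(0, 1), (0, 1), (0, 10)], seed=42)
pts_b = monte_carlo_sample_points(5, [(0, 1), (0, 1), (0, 10)], seed=42)
print(np.allclose(pts_a, pts_b))                  # True only because the seed is recorded
```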
Another, relatively new, general technique is multi-resolution analysis. This involves splitting the stream of possibly non-uniformly spaced data into streams of different resolution; in some circumstances this technique is also referred to as "decimation". Wavelets also use multi-resolution analysis. Multi-resolution wavelets attempt to decompose non-stationary or highly irregular signals and images into simpler components on multi-tiered scales, but the approach is generally difficult to implement in practice. Although there have been claims of success for very specific applications, its effectiveness vis-à-vis standard DFT-based techniques for general applications is not apparent. Indeed, the 1965 Cooley-Tukey FFT algorithm immediately paved the way for the revolution in medical imaging that accelerated in the 1970s. A similar impact is not apparent, even more than 13 years after the framew
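To make the multi-resolution idea concrete, here is a minimal sketch of one level of the Haar wavelet transform (chosen for illustration; production code would more likely use a library such as PyWavelets): the data stream is split into a coarse, half-resolution approximation and a detail stream that is non-zero only where the signal actually changes.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar transform: a coarse (low-pass) and a detail (high-pass) stream."""
    x = np.asarray(x, dtype=float)
    pairs = x.reshape(-1, 2)                       # assumes an even number of samples
    coarse = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return coarse, detail

# A stream that is flat except for one volatile region in the middle.
x = np.concatenate([np.ones(8), [1.0, 3.0, -2.0, 4.0], np.ones(4)])
coarse, detail = haar_level(x)
print(np.round(detail, 3))   # non-zero entries mark only the volatile region
```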
