Method for monitoring information density and compressing...

Coded data generation or conversion – Digital code to digital code converters – To or from bit count codes

Reexamination Certificate

Details

C341S051000

active

06531971

ABSTRACT:

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not Applicable.
REFERENCE TO A MICROFICHE APPENDIX
Not Applicable.
BACKGROUND OF THE INVENTION
This invention relates to digital data processing systems which compress and decompress digitized analogue signals, such as signals from microphones or other analogue measurement devices. The invention also relates to data processing systems which analyze and monitor digitized analogue signals for diagnostic and display purposes.
Historic Background
Of fundamental importance for the digital processing of analogue signals is the so-called Shannon sampling theorem. It was introduced into information theory by C. E. Shannon in the 1940s. The theorem had already been known to Borel in 1898, according to R. J. Marks II, Introduction to Shannon Sampling and Interpolation Theory, Springer, New York, 1991.
The sampling theorem states that in order to record an analogue signal (such as a signal from a microphone) it is in fact not necessary to record the signal's amplitude continuously. Namely, if the amplitude of the signal is recorded only at sufficiently tightly spaced discrete points in time then from these data the signal's amplitude can be reconstructed at all points in time. To this end the spacing of the sampling points is sufficiently tight if it is smaller than half the period of the highest frequency component which possesses a substantial presence in the signal. It is important to note that for Shannon sampling the spacing of the sampling times must be equidistant.
To be precise (see e.g. the text by Marks mentioned above), the reconstruction of the signal from its discrete samples works as follows: Let us denote the maximal frequency in the signal by ω_max. Let us further denote the amplitude of the signal at time t by ƒ(t). Assume that a machine measured and recorded the amplitudes ƒ(t_i) of the signal at equidistantly spaced points in time, t_i, whose spacing Δt = t_{i+1} − t_i is sufficiently small, i.e.

Δt < 1/(2 ω_max).

Then, the amplitude of the analogue signal ƒ(t) at any arbitrary time t can be calculated from the measured values ƒ(t_n) in this way:

ƒ(t) = Σ_n G(t, t_n) ƒ(t_n)   (1)
f

(
t
)
=

n

G

(
t
,
t
n
)

f

(
t
n
)
(
1
)
Here, G(t, t_n) is the so-called “cardinal series reconstruction kernel”, or “sampling kernel”:

G(t, t_n) = sin[2π (t − t_n) ω_max] / [2π (t − t_n) ω_max]   (2)
This method of reconstructing an analogue signal's amplitude at arbitrary times from only its discretely taken samples can easily be implemented on computers—and it is of course in ubiquitous use.
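As a concrete illustration of such an implementation (this code is not from the patent; the function names and test signal are chosen for illustration only), equations (1) and (2) can be realized in a few lines of NumPy:

```python
import numpy as np

def cardinal_kernel(t, t_n, w_max):
    # Eq. (2): sin[2*pi*(t - t_n)*w_max] / (2*pi*(t - t_n)*w_max).
    # np.sinc(x) computes sin(pi*x)/(pi*x), so we pass x = 2*w_max*(t - t_n).
    return np.sinc(2.0 * w_max * (t - t_n))

def reconstruct(t, t_samples, f_samples, w_max):
    # Eq. (1): f(t) = sum_n G(t, t_n) * f(t_n).
    return sum(cardinal_kernel(t, t_n, w_max) * f_n
               for t_n, f_n in zip(t_samples, f_samples))

# Sample a 1 Hz sine with spacing 1/(2 * w_max) for w_max = 4 Hz...
w_max = 4.0
dt = 1.0 / (2.0 * w_max)
t_samples = np.arange(-50.0, 50.0, dt)
f_samples = np.sin(2.0 * np.pi * 1.0 * t_samples)

# ...and recover the amplitude at an off-grid time from the samples alone.
estimate = reconstruct(0.3, t_samples, f_samples, w_max)
```

The truncation of the infinite sum in equation (1) to a finite sample window introduces a small error, which shrinks as the window grows.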
Shannon Sampling Is Not Optimally Efficient
While this method, “Shannon sampling”, has been of enormous practical importance, it is clearly not efficient:
When using the Shannon sampling method the highest frequency that is present in the signal determines the rate at which all samples must be taken. Namely, the larger the highest frequency in the signal the more samples must be taken per unit time. This means, in particular, that even if high frequencies occur in a signal only for short durations one must nevertheless sample the entire signal at a high sampling rate.
In practice, it is clear that the “frequency content”, or “bandwidth”, or “information density” of common analogue signals is not constant in time, and that high frequencies are often present only for short durations. Therefore, it should normally be possible to suitably lower the sampling rate whenever a signal's information density is low and to take samples at a high rate only whenever the signal's information density is high. The Shannon sampling method, however, does not allow one to adjust the sampling rate: Shannon sampling is wasteful in that it requires one first to determine the highest overall frequency component in the signal and then, second, to maintain a correspondingly high constant sampling rate throughout the recording of the signal.
This shortcoming of Shannon sampling is important because the sampling rate of digitized analogue signals is usually the major limiting factor for the availability of network transmission speed and for computer memory capacity. Therefore, in order to use data memory and data transmission resources most efficiently, it is highly desirable to find ways to continuously adapt the sampling rate to the varying information density of the signal.
For this purpose, one needs, 1) methods and systems for measuring how a signal's information density varies in time so that one can adjust the sampling rate accordingly and, 2) methods and systems for reconstructing the signal from its so-taken samples.
Any method that allows one to sample and reconstruct analogue signals at continuously adjusted rates that are lower than the constant Shannon sampling rate amounts to a data compression method.
It would be desirable to be able to implement such a compression method purely digitally: An analogue signal that has been sampled conventionally, i.e. equidistantly (and therefore wastefully), is digitally analyzed for its time-varying information density, then digitally resampled at a correspondingly time-varying sampling rate (using the cardinal series sampling formula above), and is later decompressed by resampling it at a constant high sampling rate using a new sampling kernel that replaces the cardinal series sampling kernel and is appropriate for the case of a time-varying sampling rate.
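One way the digital analysis step could look is sketched below. This is only an illustration under simple assumptions, not the method disclosed in this patent; the function name, window parameters, and the energy-fraction criterion are all invented for the example. It estimates, per sliding window, the frequency below which most of the spectral energy lies, from which a local sampling rate could then be derived:

```python
import numpy as np

def local_bandwidth(signal, fs, win=256, hop=128, energy_frac=0.99):
    """Per window, estimate the frequency below which `energy_frac`
    of the spectral energy lies (illustrative sketch only)."""
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    estimates = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * np.hanning(win)
        power = np.abs(np.fft.rfft(seg)) ** 2
        cum = np.cumsum(power)
        cum /= cum[-1]
        estimates.append(freqs[np.searchsorted(cum, energy_frac)])
    return np.array(estimates)

# A test signal: a 5 Hz tone for the first two seconds, 50 Hz afterwards.
fs = 1000.0
t = np.arange(4096) / fs
sig = np.where(t < 2.048,
               np.sin(2.0 * np.pi * 5.0 * t),
               np.sin(2.0 * np.pi * 50.0 * t))

bw = local_bandwidth(sig, fs)
# A time-varying sampling rate would then be roughly 2 * bw per window.
```

The estimates rise when the high-frequency portion of the signal enters the analysis window, so the derived sampling rate would adapt accordingly.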
It is clear that for such a data compression method to be most useful, the quality of the subsequent reconstruction of the signal should be controllable.
It is also clear that means or methods for reliably measuring the time-varying information density of analogue signals can also be used for monitoring and displaying that information density. The ability to measure a time-varying characteristic of an analogue signal, such as the signal's time-varying information density, can be of great practical value, e.g. for monitoring or diagnostic purposes, as will be explained further below.
The present invention provides corresponding methods and means.
Prior Art Techniques for Adaptive Sampling Rates
Much prior art has strived to achieve methods of sampling and reconstruction which use adaptively lower sampling rates:
Kitamura et al.
The system described by Kitamura, in U.S. Pat. No. 4,370,643, samples analogue signals at a variable rate. The sampling rate is adjusted according to the momentary amount of change in the signal's amplitude. The reconstruction quality is not controlled. The system described by Kitamura et al., in U.S. Pat. No. 4,568,912, improves on this by reconstructing the signal as joined line segments. The inventors' aims are data compression by adaptive-rate sampling and also the elimination of quantization noise. However, neither aim is satisfactorily achieved: large amplitude changes in low-bandwidth signals lead to inefficient oversampling rather than to the desired compression. Also, the quantization noise is not effectively eliminated, since it tends to reappear in the form of jitter.
Kitamura et al., in U.S. Pat. No. 4,626,827, recognize deficiencies in their prior system. In their new system they determine the variable sampling rate by optionally either zero-crossing counting or by Fourier transforming the signal in blocks. The sampling rates are submultiples of the basic rate.
However, zero-crossing counting is a very unreliable indicator of a signal's minimum sampling rate: a signal can be very wiggly (and thus information rich) over long intervals without crossing zero at all.
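A simple numerical example (illustrative only, not taken from the patent) makes this concrete: a rapid oscillation riding on a DC offset produces no zero crossings at all, yet it still demands a high Shannon sampling rate.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
f = 2.0 + np.sin(2.0 * np.pi * 50.0 * t)   # 50 Hz oscillation, but always positive

# Count sign changes between consecutive samples.
zero_crossings = int(np.sum(np.diff(np.sign(f)) != 0))
# zero_crossings is 0, yet faithfully capturing this signal still
# requires a sampling rate above 100 samples per second.
```

A zero-crossing counter would therefore assign this signal a minimal sampling rate and discard its high-frequency content entirely.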
The alternatively described method of establishing the minimum sampling rate by Fourier analysis of a block (or “interval”, or “slice”, or “period”) of the signal is also unreliable. There are two main reasons:
First, there is the well-known time-frequency uncertainty relation. Second, it is known that even low bandwidth signals ca
