Encoding, decoding, and probability estimation method

Coded data generation or conversion – Digital code to digital code converters – To or from code based on probability

Reexamination Certificate


Details

U.S. Classification: C341S052000
Type: Reexamination Certificate
Status: active
Patent number: 06411231

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to an encoder and a decoder for data signals and, more particularly, to entropy encoding and decoding.
BACKGROUND
Adaptive encoding is a known method of efficiently encoding data signals. An adaptive encoding device encodes a data signal while learning the occurrence probability of the values being encoded or decoded, so coding efficiency does not deteriorate when the statistics of the source are unknown or change over time.
An adaptive encoding and decoding device is described in five articles concerning the “Q-coder adaptive binary arithmetic coder”, appearing in IBM Journal of Research and Development, Vol. 32, No. 6, November 1988, pp. 717-774. In addition, the principle of an arithmetic coder and decoder having an entropy encoding and decoding means is described in U.S. Pat. No. 5,059,976.
FIG. 1 of that patent, reproduced here as FIG. 12, illustrates an example in which the binary symbol sequence 001 (sequence length 3) is encoded by an arithmetic coder. That encoding is described in the following paragraphs.
In coding a Markov information source, a number line representation coding system is used. In that system, a sequence of symbols is mapped onto the number line from 0.0 to 1.0, and the coordinates on that line are coded as code words which are, for example, represented in binary.
FIG. 12 is a conceptual diagram of the number line representation system. For simplicity, a bi-level memoryless information source is shown. The occurrence probability of “1” is set at r and the occurrence probability of “0” is set at 1−r. When the output sequence length is set at 3, the coordinate of each of the rightmost points C(000) to C(111), represented as a binary expression, is truncated at the digit that allows distinction from the others and is defined as the respective code word. Decoding is possible at the receiving side by performing the same procedure as at the transmission side.
In such a sequence, the mapping interval A_i and the lower-end coordinate C_i of the symbol sequence at time i are given as follows.

When the output symbol a_i is 0 (More Probable Symbol: hereinafter called MPS),
A_i = (1 − r)·A_{i−1} and C_i = C_{i−1}.

When the output symbol a_i is 1 (Less Probable Symbol: hereinafter called LPS),
A_i = r·A_{i−1} and C_i = C_{i−1} + (1 − r)·A_{i−1}.
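The following short Python sketch (not taken from the patent; the function name and the example values are illustrative only) shows the interval update defined by these formulas for a bi-level memoryless source with LPS probability r.

def encode_interval(symbols, r):
    # Number-line (arithmetic) interval update: the MPS "0" keeps the lower
    # part of the current interval, the LPS "1" takes the upper part.
    A = 1.0  # mapping interval width A_i, initially the whole line [0, 1)
    C = 0.0  # lower-end coordinate C_i
    for s in symbols:
        if s == 0:                  # MPS: A_i = (1 - r)*A_{i-1}, C_i = C_{i-1}
            A = (1.0 - r) * A
        else:                       # LPS: C_i = C_{i-1} + (1 - r)*A_{i-1}
            C = C + (1.0 - r) * A
            A = r * A               #      A_i = r*A_{i-1}
    return C, A

# Example in the spirit of FIG. 12: the sequence 0, 0, 1 with r = 0.25.
# Any binary fraction inside [C, C + A) identifies the sequence.
C, A = encode_interval([0, 0, 1], 0.25)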
As described in “An overview of the basic principles of the Q-Coder adaptive binary arithmetic coder”, IBM Journal of Research and Development, Vol. 32, No. 6, November 1988, pp. 717-736, in order to reduce the number of calculations, such as multiplications, a set of fixed values is prepared and a certain value is selected from among them, rather than actually calculating r·A_{i−1}.
That is, if r·A_{i−1} in the foregoing expressions is set at a fixed value S, then:

when a_i = 0 (MPS), A_i = A_{i−1} − S and C_i = C_{i−1};

when a_i = 1 (LPS), A_i = S and C_i = C_{i−1} + (A_{i−1} − S).
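As a small illustration (again not from the patent), the multiplication-free update with a fixed value S can be sketched as follows; S stands in for r·A_{i−1} and is assumed to be chosen from a prepared table.

def update_fixed_S(A, C, symbol, S):
    # One coding step in which r*A_{i-1} is replaced by the fixed value S,
    # so no multiplication is needed.
    if symbol == 0:          # MPS: A_i = A_{i-1} - S, C_i = C_{i-1}
        A = A - S
    else:                    # LPS: C_i = C_{i-1} + (A_{i-1} - S), A_i = S
        C = C + (A - S)
        A = S
    return A, C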
However, as A_{i−1} becomes successively smaller, S also needs to be made smaller. To maintain calculation accuracy, it is necessary to multiply A_{i−1} by a power of 2 whenever it becomes too small (hereinafter called normalization). In an actual coder, the fixed value is assumed to be the same at all times and is, in effect, multiplied by powers of ½ at the time of calculation (namely, shifted by one bit).
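A minimal sketch of this normalization step, under the assumption that A is kept at 0.5 or above, might look like the following; in a fixed-point implementation each doubling is a one-bit left shift, and bits shifted out of C would be emitted as code bits.

def renormalize(A, C):
    # Double A (and C) until A is no longer "too small"; keeping S fixed while
    # doubling A is equivalent to halving S relative to A.
    shifts = 0
    while A < 0.5:
        A *= 2.0
        C *= 2.0
        shifts += 1
    return A, C, shifts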
If a constant value is used for S, as described above, a problem arises when, in particular, S is large and the normalized A_{i−1} is relatively small. An example follows.
If A_{i−1} is slightly over 0.5, A_i is very small when a_i is an MPS, and is even smaller than the area given when a_i is an LPS. That is, in spite of the fact that the occurrence probability of the MPS is high, the area allocated to the MPS is smaller than that allocated to the LPS, leading to a decrease in coding efficiency. If the area allocated to the MPS is always to be larger than that allocated to the LPS, then, since A_{i−1} > 0.5, S must be 0.25 or smaller. Therefore, when A_{i−1} is 1.0, r = 0.25, and when A_{i−1} is close to 0.5, r = 0.5, with the result that the occurrence probability assigned to the LPS varies between ¼ and ½ during coding. If this variation can be made smaller, an area proportional to the occurrence probability can be allocated and an improvement in coding efficiency can be expected.
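A small numerical illustration of this problem (the values are hypothetical):

A_prev = 0.55                 # normalized A_{i-1}, slightly over 0.5
S = 0.3                       # a relatively large fixed value
mps_area = A_prev - S         # 0.25, the area given to the MPS
lps_area = S                  # 0.30, the area given to the LPS
assert mps_area < lps_area    # the more probable symbol gets the smaller area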
U.S. Pat. No. 5,025,258 describes a method of calculating an occurrence probability (Qe) based on the number of occurrences. In order to estimate the Qe of symbol 1, U.S. Pat. No. 5,059,976 performs learning in the probability estimation means in synchronization with renormalization in the entropy coding means, which is fundamentally independent of the probability estimation means. That is, the adaptability to a change of the information source depends on chance, as indicated in FIG. 13.
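The following rough sketch (not the method claimed in either patent; the table values are invented for illustration) shows the general idea of a state-table estimator whose Qe value is updated only when the entropy coder renormalizes.

QE_TABLE = [0.46, 0.38, 0.27, 0.16, 0.08, 0.04]   # illustrative Qe values only

def next_state(state, symbol_was_lps, renormalized):
    # Learning happens only on renormalization, so adaptation speed depends on
    # how often renormalization happens to occur.
    if not renormalized:
        return state
    if symbol_was_lps:
        return max(state - 1, 0)                   # move toward a larger Qe
    return min(state + 1, len(QE_TABLE) - 1)       # move toward a smaller Qe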
Arithmetic coding and decoding are described in the following references:
(1) Langdon et al., “Compression of Black-White Images with Arithmetic Coding”, IEEE Transactions on Communications, Vol. COM-29, No. 6, June 1981, pp. 858-867;
(2) U.S. Pat. No. 4,633,490;
(3) Witten et al., “Arithmetic Coding for Data Compression”, Communications of the ACM, Vol. 30, No. 6, June 1987, pp. 520-540.
FIG. 11 is a block diagram of an adaptive encoding device and an adaptive decoding device. In FIG. 11, a probability estimation means 30 estimates an occurrence probability of a data value for encoding, and produces a predicted value as a data value with a high occurrence probability. When a multi-value data (not binary data) signal is input, a modeling means 33 analyzes the input signal and classifies it as to context. In the coding device, the modeling means 33 converts the input multi-value data signal into a binary data signal.
In the decoding device, a modeling means 34 analyzes an output signal and classifies it as to context. When a multi-value data (not binary data) signal is output, the modeling means 34 converts the input binary data signal into a multi-value data signal.
In the coding device, a symbol judgment means 38 converts the input data signal into a binary symbol to show agreement or disagreement with the data value for encoding, based on the binary data and a predicted value received from a part 36 of a memory as described below. An entropy encoding means 31 encodes the data value output by the symbol judgment means, based on the probability established separately and supplied from the Qe memory 37 described below.
In the decoding device, a data judgment means 39 converts a binary symbol received from an entropy decoding means 32 into binary data, based on the binary symbol and a predicted value received from a part 36 of a memory in the decoding device. The entropy decoding is based on the probability separately established and stored in a Qe memory in the decoding device.
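To make the encoder-side data flow of FIG. 11 concrete, the following Python sketch strings the blocks together; the block numbers come from the figure, but the function and variable names are hypothetical stand-ins, not the patent's own interfaces.

def encode_one(data_bit, context, state_mem, pred_mem, qe_mem, arith_coder):
    # part 35: state number selected by the context
    state = state_mem[context]
    # part 36: predicted (more probable) data value for this context
    predicted = pred_mem[context]
    # part 37: estimated probability (Qe) selected by the state number
    qe = qe_mem[state]
    # part 38: binary symbol = agreement (0) or disagreement (1) with the prediction
    symbol = int(data_bit != predicted)
    # part 31: entropy (arithmetic) encoding of the symbol with probability qe
    arith_coder.encode(symbol, qe)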
The structure of FIG. 11 has separate modeling means 33 and 34 in the encoding and decoding devices. These modeling means may include a generally known probability estimation means 30, including data and symbol conversion and inversion functions. In the described structure, no conversion and inversion functions are needed in the modeling means 33 and 34 if the modeling means receive binary signals.
A state number is stored into a part 35 of a memory as an index for selecting an estimated probability value (MPS or LPS) from the Qe memory 37. An arithmetic coder and an arithmetic decoder are included in the entropy encoder means 31 and the entropy decoder means 32, respectively. In the encoding device of FIG. 11, the state number memory part 35 receives the context from the modeling means 33. The memory part 36 stores a predicted value based on the context and state number. The Qe memory 37 detects a probability representation value (MPS or LPS). The symbol judgment means 38 produces a binary symbol to establish agreement or disagreement of the data value for encoding, based on the binary data and the predicted value. The probability representation value
