Coded data generation or conversion – Digital code to digital code converters – To or from run length limited codes
Reexamination Certificate
2000-04-11
2002-04-16
Phan, Trong (Department: 2818)
active
06373408
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to arithmetic encoding and arithmetic decoding of images or other data, and in particular to an encoding apparatus, a decoding apparatus, an encoding/decoding apparatus, an encoding method and a decoding method.
2. Description of the Related Art
FIG. 39 is an explanatory drawing of the concept of arithmetic encoding. In FIG. 39, an effective region on a number line is divided into two partial regions, an MPS (more probable symbol) partial region and an LPS (less probable symbol) partial region. Encoding is implemented by updating the effective region to the partial region corresponding to each new symbol. However, the more the region range decreases, the more the number of effective bits of the lower limit value and the upper limit value increases, as shown by the binary fractions (fixed-point values) in FIG. 39, so that a register of a certain precision is required for the operation.
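As a minimal sketch of this concept (not the QM-Coder procedure itself), the following fragment repeatedly replaces the effective region by one of its two partial regions; the LPS probability, the symbol sequence, and the choice of which partial region lies above the other are illustrative assumptions.

```c
#include <stdio.h>

/* Conceptual interval subdivision: the effective region [low, low+range)
 * is replaced by the MPS or the LPS partial region for each symbol.
 * p_lps and the symbol sequence are illustrative values only. */
int main(void)
{
    double low = 0.0, range = 1.0;       /* whole number line segment [0,1)   */
    const double p_lps = 0.25;           /* assumed LPS probability           */
    const int symbols[] = {0, 0, 1, 0};  /* 0 = MPS, 1 = LPS (example)        */

    for (int i = 0; i < 4; i++) {
        double lps_range = range * p_lps;
        if (symbols[i] == 0) {
            range -= lps_range;          /* keep the (lower) MPS partial region   */
        } else {
            low  += range - lps_range;   /* move to the (upper) LPS partial region */
            range = lps_range;
        }
        printf("symbol %d -> region [%.6f, %.6f)\n", symbols[i], low, low + range);
    }
    return 0;
}
```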
In the description of the present application, the term “effective region” means a range (region) on a number line that may contain the arithmetic codes (coordinates) generated by repeatedly dividing and updating the region as encoding proceeds. Once the effective region is updated, a generated arithmetic code (coordinate) is never mapped to any range (region) on the number line other than the updated effective region.
In this case, even though the number of effective digits increases, the high-order digits seldom change. Considering this, as shown in FIG. 40, a renormalization is implemented to keep the fractional part at the precision used for dividing the region (3 digits in the case of FIG. 40) and to flush the high-order digits to the integer part. In FIG. 40, the effective region range A is kept at least 4, corresponding to ½ (half of the whole region), and at most 8 (only the initial value is 8), corresponding to 1 (the whole region), by expanding it by powers of 2 in the renormalization process. When the lower limit value (code) is denoted C, the effective region can be expressed as at least C and less than C+A.
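A toy sketch of this renormalization, assuming the 3-digit fixed-point example of FIG. 40 (whole region = 8): whenever A falls below 4 (½), A and C are doubled and any digit of C that crosses into the integer part is flushed as output. Carry-over into already flushed digits is ignored here, and the names are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy renormalization with 3 fractional digits as in FIG. 40:
 * the whole region is 8, and A is kept in [4, 8] by doubling.
 * C is doubled in step; a digit crossing the 3-digit boundary is
 * flushed to the integer part (code output).  Carry-over into
 * already flushed digits is not handled in this sketch. */
static void renorm(uint32_t *a, uint32_t *c)
{
    while (*a < 4) {                 /* below 1/2 of the whole region             */
        *a <<= 1;
        *c <<= 1;
        if (*c >= 8) {               /* high-order digit reached the integer part */
            printf("flushed code bit: 1\n");
            *c -= 8;
        } else {
            printf("flushed code bit: 0\n");
        }
    }
}

int main(void)
{
    uint32_t a = 2, c = 3;           /* example values that need renormalization */
    renorm(&a, &c);
    printf("after renormalization: A=%u C=%u\n", a, c);
    return 0;
}
```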
First Related Art
An encoder and a decoder using arithmetic encoding and decoding can be embodied using the tables and the processing flows described in International Standard Recommendation T.82 of the ITU-T (International Telecommunication Union). In the following explanation, this arithmetic coder is referred to as the QM-Coder, the encoder as encoding section 1A and the decoder as decoding section 16A. FIGS. 41 and 42 show general configurations of the encoder and the decoder.
In FIG. 41, the encoding section 1A receives two inputs, one of which is a CX (context) 2 and the other is an encoding PIX (pixel) 3. The encoding section 1A outputs a code 4A. In FIG. 42, the decoding section 16A receives two inputs, one of which is the CX (context) 2 and the other is the code 4A. The decoding section 16A outputs the decoded PIX 3.
Data memory 5 accumulates images 6. The data memory 5 generates the context CX 2 (10 bits, a total of 1024 patterns), which is a common reference pattern of 10 adjacent, already encoded/decoded pixels indicated by the model template for the encoding/decoding pixel. The data memory 5 also outputs the encoding pixel at the time of encoding, and accumulates decoded pixels at the time of decoding.

The QM-Coder selects either the 2-line or the 3-line standard model template for generating the context CX 2, as shown in FIG. 43, and the selected standard model template is signalled from the encoder to the decoder using predetermined header information.
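A small sketch of how a 10-bit context might be assembled from ten already encoded/decoded neighbouring pixels. The function name and the template offsets are placeholders; the actual 2-line/3-line templates are those of FIG. 43 (ITU-T T.82), and image border handling is omitted.

```c
#include <stdint.h>

/* Build a 10-bit context CX (1024 patterns) from 10 neighbouring,
 * already encoded/decoded pixels.  The (dx, dy) offsets are placeholders,
 * not the real model templates of FIG. 43; border handling is omitted. */
uint16_t make_context(const uint8_t *image, int stride, int x, int y)
{
    static const int dx[10] = { -1, -2, -3, -4,  0, -1, -2,  1,  2, -1 };
    static const int dy[10] = {  0,  0,  0,  0, -1, -1, -1, -1, -1, -2 };

    uint16_t cx = 0;
    for (int i = 0; i < 10; i++)
        cx = (uint16_t)((cx << 1) | (image[(y + dy[i]) * stride + (x + dx[i])] & 1));
    return cx;                        /* value in the range 0..1023 */
}
```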
In the QM-Coder, a prediction matching probability of the pixel value is estimated for each context of each encoding/decoding pixel, and encoding/decoding is performed while learning the changes of this prediction matching probability. Learning is implemented by rewriting a learning memory consisting of two variable tables indexed by the context. One of the two variable tables is an MPS table 7 storing, as a one-bit prediction value, the pixel value MPS (More Probable Symbol), which has the higher probability of occurrence. (The pixel value having the lower probability of occurrence is called the LPS (Less Probable Symbol).) The other variable table is an ST table 8 storing a 7-bit state number (0-112). The state number classifies the state of the matching probability of the prediction value into a total of 113 states. The initial values of the MPS table 7 and the ST table 8 are all “0”.
In addition to the above two variable tables of the learning memory, the QM-Coder includes a probability estimation table consisting of four constant tables that are referred to during encoding/decoding using the state number (state) as an index. The four constant tables are an LSZ table 9 storing a 16-bit LSZ value that gives the LPS region range, an NMPS table 10 giving the 7-bit next state after an MPS transition, an NLPS table 11 giving the 7-bit next state after an LPS transition, and an SWTCH table 12 indicating inversion of the prediction value by one bit. FIG. 44 shows the values set in the constant tables. (These names in capital letters for the variable and constant tables are used as array names in the processing flows explained below.)
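In outline, the learning memory and the probability estimation table can be declared as below. The array names follow the tables of the text; the 113 constant values per table (FIG. 44) are defined in ITU-T T.82 and are not reproduced here.

```c
#include <stdint.h>

#define NUM_CONTEXTS 1024   /* 10-bit context CX    */
#define NUM_STATES   113    /* state numbers 0..112 */

/* Learning memory: variable tables indexed by context, all initialized to 0 */
uint8_t MPS[NUM_CONTEXTS];              /* 1-bit prediction value per context */
uint8_t ST[NUM_CONTEXTS];               /* 7-bit state number per context     */

/* Probability estimation table: constant tables indexed by state number.
 * The values are those of FIG. 44 (ITU-T T.82) and are omitted here.      */
extern const uint16_t LSZ[NUM_STATES];  /* 16-bit LPS region range            */
extern const uint8_t  NMPS[NUM_STATES]; /* next state after an MPS            */
extern const uint8_t  NLPS[NUM_STATES]; /* next state after an LPS            */
extern const uint8_t  SWTCH[NUM_STATES];/* 1 = invert the prediction value    */
```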
The LSZ table 9 is referred to by the operating unit of the arithmetic encoder 13A/arithmetic decoder 17A and is not directly related to the learning of the adaptive prediction. In the arithmetic encoder 13A/arithmetic decoder 17A, the calculation is performed using the LSZ value of the LSZ table 9, and when the operation precision decreases, the operation is renormalized. When the renormalization occurs, learning is instructed at the same time.
On encoding, the pixel-to-symbol converter 15 detects a match/mismatch between the MPS value read from the MPS table 7 using the context CX 2 as an index and the pixel value PIX 3. The pixel-to-symbol converter 15 outputs a binary symbol 14, which is the result of the match/mismatch detection, to the arithmetic encoder 13A and to the learning memory (the MPS table 7 and the ST table 8), and learning is performed according to the learning instruction.
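The match/mismatch detection performed by the pixel-to-symbol converter 15 amounts to a single comparison. The function name is illustrative, and the MPS table declared in the earlier sketch is assumed.

```c
#include <stdint.h>

extern uint8_t MPS[1024];          /* MPS table 7, as declared above */

/* Pixel-to-symbol conversion (encoding side): the binary symbol 14 is the
 * MPS symbol (0) when the encoding pixel equals the prediction MPS[CX],
 * and the LPS symbol (1) otherwise. */
int pixel_to_symbol(uint16_t cx, int pix)
{
    return pix != MPS[cx];         /* 0 = MPS, 1 = LPS */
}
```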
On decoding, the symbol-to-pixel converter 18 converts the symbol (MPS or LPS) into a pixel and outputs the obtained pixel to the data memory 5, based on the binary symbol 14 decoded by the arithmetic decoder 17A, which indicates a match/mismatch between the MPS value read from the MPS table using the CX 2 as the index and the pixel PIX 3 to be decoded. When the MPS value and the PIX 3 match, the symbol-to-pixel converter 18 outputs the MPS value to the data memory 5 as the PIX 3. On the contrary, when the MPS value and the PIX 3 do not match, the symbol-to-pixel converter 18 outputs the LPS value (=1−MPS). The binary symbol 14 is also supplied to the learning memory (the MPS table 7 and the ST table 8), and learning is performed according to the learning instruction.
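The inverse conversion on the decoding side is equally small; again, the function name is illustrative and the MPS table from the earlier sketch is assumed.

```c
#include <stdint.h>

extern uint8_t MPS[1024];          /* MPS table 7, as declared above */

/* Symbol-to-pixel conversion (decoding side): the decoded binary symbol 14
 * selects either the prediction MPS[CX] or its inverse, the LPS value
 * (= 1 - MPS), as the output pixel PIX. */
int symbol_to_pixel(uint16_t cx, int symbol)
{
    return symbol ? 1 - MPS[cx] : MPS[cx];
}
```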
When learning is instructed, if the encoding/decoding binary symbol 14 is the MPS, the value of the NMPS table 10 is written into the ST table 8; if the encoding/decoding binary symbol 14 is the LPS, the value of the NLPS table 11 is written into the ST table 8. The state transition is thus performed. When learning is performed on an LPS, if the prediction matching probability is ½, the MPS value is inverted (operation “1−MPS”) and the inverted value is written into the MPS table 7. Whether the prediction matching probability is ½ is detected using the SWTCH value as a flag.
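The learning step just described maps directly onto the tables declared earlier; a sketch is given below. It is invoked only when the arithmetic coder signals a renormalization, and the function name is illustrative.

```c
#include <stdint.h>

extern uint8_t MPS[1024], ST[1024];                     /* learning memory */
extern const uint8_t NMPS[113], NLPS[113], SWTCH[113];  /* constant tables */

/* Learning (invoked when renormalization instructs it): the state of the
 * context follows NMPS on an MPS symbol and NLPS on an LPS symbol; on an
 * LPS, the prediction value is inverted when SWTCH flags a prediction
 * matching probability of 1/2. */
void learn(uint16_t cx, int symbol)
{
    uint8_t st = ST[cx];
    if (symbol == 0) {                      /* MPS */
        ST[cx] = NMPS[st];
    } else {                                /* LPS */
        if (SWTCH[st])
            MPS[cx] = (uint8_t)(1 - MPS[cx]);
        ST[cx] = NLPS[st];
    }
}
```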
In the arithmetic encoder 13A, the encoding operation is implemented with a C register 30A holding the code (the lower limit value of the region) and an A register 31 holding the region range, using the LSZ value from the LSZ table 9 and the binary symbol 14 as inputs. The CT counter 50 controls the output timing of the code in byte units. Undetermined code with a possibility of carry-over waits in a BUFFER 51 and an SC counter until
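A condensed sketch of the register operation in the spirit of the QM-Coder: the A register holds the region range, the C register the lower limit, and the LSZ value of the current state gives the LPS sub-range. Conditional exchange, register initialization, and the byte output and carry handling performed with the CT counter 50, BUFFER 51 and SC counter are simplified or omitted; the 16-bit range convention and all function names are assumptions, not the procedures of the standard.

```c
#include <stdint.h>

extern uint8_t        ST[1024];   /* ST table 8                          */
extern const uint16_t LSZ[113];   /* LSZ table 9                         */

static uint32_t C;                /* C register 30A: code (lower limit)  */
static uint32_t A;                /* A register 31:  region range        */

/* Placeholder for code output: in the real coder the CT counter 50,
 * BUFFER 51 and SC counter manage byte output and carry-over here.     */
static void renorm_output(void) { }

/* Encode one binary symbol for context cx (simplified, QM-Coder style).
 * Conditional exchange and initialization of A and C are omitted.      */
void encode_symbol(uint16_t cx, int symbol)
{
    uint32_t lsz = LSZ[ST[cx]];

    if (symbol == 0) {            /* MPS: keep the lower sub-region       */
        A -= lsz;
    } else {                      /* LPS: move to the upper sub-region    */
        C += A - lsz;
        A  = lsz;
    }
    while (A < 0x8000) {          /* renormalize until A is at least 1/2  */
        A <<= 1;
        C <<= 1;
        renorm_output();          /* flush high-order bits of C           */
    }
    /* when a renormalization occurred, learning (the state transition
       described above) would be instructed here */
}
```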
Kimura Tomohiro
Ono Fumitaka
Yoshida Masayuki