Encoding system and encoding method

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

Classification: C375S240070
Type: Reexamination Certificate
Status: active
Patent number: 06792046

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to an encoding system and an encoding method for a mobile telephone or a TV telephone system, for example, to encode video signals in real time.
2. Description of the Related Art
FIG. 1 is a block diagram of an encoding system of the background art, as disclosed on pp. 39 to 40 of “All of MPEG-4” (Association of Industrial Search), for example; FIG. 2 is an explanatory view showing an input signal of this encoding system of the background art; FIGS. 3a to 3d are explanatory diagrams showing constructions of bitstreams; and FIG. 4 is an explanatory diagram showing positions (arrangements) on a screen (in a displayed state) of video packets.
In FIG. 1, reference numeral 1 designates a subtracter for receiving an external input signal (e.g., a luminance signal and two chrominance signals in the shown example) as its first input. The output of the subtracter 1 is inputted through a DCT (Discrete Cosine Transformer) 2 and a quantizer 3 to a DC/AC predictor 4, for predicting the quantized values of a DC component and an AC component, and to a dequantizer 6. The output of the DC/AC predictor 4 is fed to the first input of a variable length coder 5, which outputs a bitstream.
On the other hand, the output of the dequantizer 6, to which the output of the quantizer 3 is inputted, is fed through an IDCT 7 (IDCT: Inverse DCT) to the first input of an adder 8. The output of this adder 8 is fed to a memory 9, the output of which is fed to the first input of a predicted image former 10 and the first input of a motion detector 11.
An external input signal is fed to the second input of the motion detector 11, the output of which is fed to the second input of the predicted image former 10 and to a motion vector predictor 12.
The output of the motion vector predictor 12 is fed to the second input of the variable length coder 5. On the other hand, the output of the predicted image former 10 is fed to the second input of the subtracter 1 and the second input of the adder 8.
Here will be described the operations. First of all, the video signals are divided into macroblocks, the basic processing units, as shown in FIG. 2, and are inputted as external input signals (the external input signals are basically inputted as macroblocks, which may be inputted directly or may be converted into macroblocks by a preprocessing unit).
Where the video signals inputted are in the 4:2:0 format, 16 pixels×16 lines of the luminance signal (Y) are as large in the screen as 8 pixels×8 lines of each of the two chrominance signals (Cb, Cr). Therefore, six blocks of 8 pixels×8 lines (i.e., four blocks for the luminance signal and two blocks for the chrominance signals) construct one macroblock.
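The 4:2:0 block arithmetic can be sketched as follows (a minimal illustration; the constant and function names are our own, not from the patent):

```python
# Minimal sketch of 4:2:0 macroblock geometry (illustrative names).
MB_SIZE = 16      # luminance macroblock: 16 pixels x 16 lines
BLOCK_SIZE = 8    # DCT block: 8 pixels x 8 lines

def blocks_per_macroblock_420():
    """Count the 8x8 blocks that make up one 4:2:0 macroblock."""
    y_blocks = (MB_SIZE // BLOCK_SIZE) ** 2   # 4 luminance blocks
    # In 4:2:0, Cb and Cr are subsampled 2:1 both horizontally and
    # vertically, so each covers the macroblock with a single 8x8 block.
    chroma_blocks = 2                          # one Cb block + one Cr block
    return y_blocks + chroma_blocks            # 6 blocks in total
```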
Here, it is premised that the video object plane (VOP: a unit image) to be inputted as an external input has a rectangular shape identical to the frame.
Each block is quantized in the quantizer 3 after being subjected to the discrete cosine transform (DCT). The DCT coefficients thus quantized are transformed, together with additional information such as a quantizing parameter, into variable length codes after the coefficients of the individual DC and AC components have been predicted in the DC/AC predictor 4.
This is the intra coding (also called the “in-frame encoding”). A VOP in which all the macroblocks are coded by intra coding is called an “I-VOP (Intra-VOP)”.
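The quantize/dequantize pair around the DCT can be illustrated with a simple uniform quantizer (a hedged sketch; the actual quantization rules of the coder are standard-specific and not reproduced here):

```python
# Illustrative uniform quantizer standing in for the real one.
def quantize(coeffs, qp):
    """Quantize DCT coefficients with quantizing parameter qp."""
    return [int(round(c / (2 * qp))) for c in coeffs]

def dequantize(levels, qp):
    """Inverse operation, as performed in the dequantizer before the IDCT."""
    return [lvl * 2 * qp for lvl in levels]

levels = quantize([105.0, -33.0, 7.0, 0.5], qp=4)   # -> [13, -4, 1, 0]
recon  = dequantize(levels, qp=4)                   # -> [104, -32, 8, 0]
```

Note that the round trip is lossy: the reconstructed coefficients only approximate the originals, which is why the encoder keeps its own decoded copy in the memory rather than the raw input.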
On the other hand, the quantized DCT coefficients are dequantized in the dequantizer 6 and are decoded by the IDCT 7 so that the decoded image is stored in the memory 9. The decoded signal stored in this memory 9 is used in inter coding (which may be called the “inter-frame encoding”).
In the inter coding case, the motion detector 11 detects the motion vectors indicating the motions of the macroblocks which are inputted as the external input signals. A motion vector indicates the position in the decoded image stored in the memory 9 that gives the minimum difference from the inputted macroblock.
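The minimum-difference search can be sketched as full-search block matching over a displacement window (a hedged illustration; the matching cost shown is the sum of absolute differences, and the names are our own, since the patent does not specify the cost function):

```python
# Full-search block matching (illustrative; SAD cost is an assumption).
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def find_motion_vector(cur_mb, ref, mb_x, mb_y, size, search):
    """Return the (dx, dy) minimizing SAD within +/-search pixels."""
    best, best_cost = (0, 0), float("inf")
    h, w = len(ref), len(ref[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = mb_x + dx, mb_y + dy
            if x < 0 or y < 0 or x + size > w or y + size > h:
                continue  # candidate block falls outside the reference image
            cand = [row[x:x + size] for row in ref[y:y + size]]
            cost = sad(cur_mb, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```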
The predicted image former 10 forms a predicted image on the basis of the motion vector which is detected in the motion detector 11.
Subsequently, a differential signal is determined between the inputted macroblock and the predicted image formed in the predicted image former 10, is subjected to the DCT in the DCT 2, and is quantized in the quantizer 3.
The quantized DCT coefficients are converted, together with additional information such as the predicted motion vector and the quantizing parameter, into variable length codes. On the other hand, the quantized DCT coefficients are dequantized in the dequantizer 6 and subjected to the IDCT in the IDCT 7. The output of the IDCT 7 is added to the predicted image by the adder 8 so that the sum is stored in the memory 9.
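This local decoding loop (dequantize, inverse transform, add the prediction, store) keeps the encoder's reference image in step with the decoder's. A hedged one-dimensional sketch, with the DCT/IDCT stage omitted for brevity and a simple uniform quantizer standing in for the real one:

```python
# Illustrative inter-coding residual path (names and the uniform
# quantizer are assumptions; the DCT/IDCT stage is omitted for brevity).
def encode_residual(cur, pred, qp):
    """Quantize the difference between the input and the prediction."""
    return [int(round((c - p) / (2 * qp))) for c, p in zip(cur, pred)]

def reconstruct(levels, pred, qp):
    """Dequantize and add back the prediction, as the adder/memory path does."""
    return [lvl * 2 * qp + p for lvl, p in zip(levels, pred)]

levels = encode_residual([100, 50], [90, 48], qp=1)   # -> [5, 1]
recon  = reconstruct(levels, [90, 48], qp=1)          # -> [100, 50]
```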
For the inter coding, there are two types of prediction. One type is forward prediction, which is made, in the display order of the images, only from the VOP preceding in time; the other type is bidirectional prediction, which is made from both the preceding VOP and the succeeding VOP. The VOP to be encoded by forward prediction is called the “P-VOP (Predictive VOP)”, and the VOP to be encoded by bidirectional prediction is called the “B-VOP (Bidirectionally Predictive VOP)”.
With reference to FIG. 3, here will be described the construction of the bitstream to be outputted from the variable length coder 5. The bitstream of one VOP is constructed of one or more video packets, as shown in FIG. 3a.
Here, one video packet is composed of the encoding data of one or more macroblocks, and the first video packet of the VOP is assigned the VOP header at its head and stuffing bits for byte alignment at its tail (as shown in FIG. 3b).
The second and subsequent video packets are assigned a Resync Marker, for detecting the head of the video packet, and a video packet header at their heads, and the stuffing bits at their tails (as shown in FIG. 3c).
Here, the stuffing bits are added in units of 1 to 8 bits to the terminal end of the video packet for adjusting the byte alignment, and are distinguished in meaning from the stuffing data described in the following.
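The "1 to 8 bits" rule for byte alignment can be made concrete (a sketch; the actual bit pattern of the stuffing is standard-specific and not shown):

```python
# Number of stuffing bits needed to pad a video packet to a byte
# boundary; at least one bit is always inserted, so an already-aligned
# packet receives a full byte of stuffing (hence the 1-to-8 range).
def stuffing_bit_count(packet_bits):
    remainder = packet_bits % 8
    return 8 - remainder if remainder else 8
```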
On the other hand, the stuffing data can be introduced in an arbitrary number into a video packet, as shown in FIG. 3d. In the case of MPEG-4 Video, for example, the stuffing data is called the “stuffing macroblock”, which can be introduced, like a macroblock, into an arbitrary video packet. This stuffing data is discarded (not substantially used) on the side of the decoding system.
The stuffing data, as defined herein, is used as words of 9 or 10 bits for increasing the number of bits, independently of the byte alignment (for adjusting the terminal end of the video packet, for example), and is used between macroblocks, so that its meaning is distinguished from the aforementioned stuffing bits.
The number of macroblocks to be inserted into one video packet is arbitrary, but the packets may generally be so constructed, if error propagation is considered, that each video packet has a substantially constant number of bits. Where the number of bits in each video packet is thus substantially constant, the area occupied by each video packet in one VOP is not constant, as shown in FIG. 4.
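Packetizing macroblocks under a roughly constant bit budget, as the error-resilience rationale suggests, can be sketched as follows (a hedged illustration; the budget value and names are hypothetical):

```python
# Group macroblock bit lengths into video packets of roughly budget_bits,
# closing a packet before it would overflow the budget (illustrative).
def packetize(mb_bit_lengths, budget_bits):
    packets, current, used = [], [], 0
    for bits in mb_bit_lengths:
        if current and used + bits > budget_bits:
            packets.append(current)   # close the packet before overflowing
            current, used = [], 0
        current.append(bits)
        used += bits
    if current:
        packets.append(current)
    return packets
```

Because macroblock sizes vary, packets of near-constant bit length cover differing numbers of macroblocks, which is exactly why the packets occupy unequal areas of the screen in FIG. 4.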
With reference to FIG. 5, here will be detailed the operations of the DC/AC predictor 4 (i.e., on the luminance signal Y-component of the macroblock).
As described above, the DC/AC predictor 4 predicts the coefficients of the DC component and the AC components of the quantized DCT coefficients which are outputted from the quantizer 3 in the intra coding case. In the inter coding case, the DC component and the AC components are not predicted, but the quantized DCT coefficients, as outputted from the quantizer 3, are outputted as they are to the variable length coder 5. In this case, the luminance signal Y and the chrominance signals Cb, Cr are separ
