Method and apparatus for encoding enhancement and base layer...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Type: Reexamination Certificate

Status: active

Patent number: 06173013
ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image signal encoding method and apparatus, an image signal decoding method and apparatus, an image signal transmission method, and a recording medium. They are suitable for use in systems that record a moving image signal on a recording medium, such as a magneto-optical disk or a magnetic tape, and reproduce the moving image signal from the recording medium so as to display the reproduced image on a display device; in systems, such as video conference systems, video telephone systems, broadcasting equipment, and multimedia database retrieval systems, that transmit a moving image signal via a transmission line from a transmitting end to a receiving end so that the transmitted moving image is displayed on a display device at the receiving end; and in systems for editing and recording a moving image signal.
2. Description of the Related Art
In the art of moving-image transmission systems, such as video conference systems and video telephone systems, it is known to convert an image signal into a compressed code on the basis of line-to-line and/or frame-to-frame correlation of the image signal so that a transmission line is used in a highly efficient fashion.
The encoding technique according to the MPEG (Moving Picture Experts Group) standard provides high compression efficiency and is widely used. MPEG encoding is a hybrid of motion-compensated prediction encoding and DCT (discrete cosine transform) encoding.
In the MPEG standard, several profiles (sets of functions) are defined at various levels (associated with the image size and the like) so that the standard can be applied to a wide variety of applications. The most basic of these is the main profile at main level (MP@ML).
Referring to FIG. 44, an example of an encoder (image signal encoder) according to MP@ML of the MPEG standard will be described below. An input image signal is supplied to a set of frame memories 1 and stored therein in a predetermined order. The image data to be encoded is applied, in units of macroblocks, to a motion vector extraction circuit (ME) 2. The motion vector extraction circuit 2 processes the image data of each frame as an I-picture, a P-picture, or a B-picture according to a predetermined procedure: the processing mode is predefined for each frame of the sequence, and each frame is processed as an I-picture, a P-picture, or a B-picture corresponding to that predefined mode (for example, frames are processed in the order I, B, P, B, P, . . . , B, P). Basically, I-pictures are subjected to intraframe encoding, while P-pictures and B-pictures are subjected to interframe prediction encoding, although the encoding mode for P-pictures and B-pictures is varied adaptively, macroblock by macroblock, in accordance with the prediction mode, as will be described later.
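As an illustration of the picture-type assignment described above (not taken from the patent itself), the following Python sketch maps frame positions within one group to I-, B-, and P-picture modes in the example order I, B, P, B, P, . . . ; the gop_size parameter is a hypothetical name for the number of frames in one group.

```python
def picture_types(gop_size=9):
    """Assign a coding type to each frame position in one group:
    the first frame is an I-picture, and the rest alternate B, P, B, P, ..."""
    types = []
    for i in range(gop_size):
        if i == 0:
            types.append("I")   # intraframe-coded picture
        elif i % 2 == 1:
            types.append("B")   # bidirectionally predicted picture
        else:
            types.append("P")   # forward-predicted picture
    return types

print(picture_types())  # ['I', 'B', 'P', 'B', 'P', 'B', 'P', 'B', 'P']
```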
The motion vector extraction circuit 2 extracts a motion vector with reference to a predetermined reference frame so as to perform motion compensation (interframe prediction). The motion compensation (interframe prediction) is performed in one of three modes: the forward, backward, and forward-and-backward prediction modes. The prediction for a P-picture is performed only in the forward prediction mode, while the prediction for a B-picture is performed in one of the three modes. The motion vector extraction circuit 2 selects the prediction mode that leads to the minimum prediction error, and generates a prediction vector in the selected prediction mode.
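The selection of a motion vector that minimizes the prediction error can be illustrated by a simple exhaustive block-matching search. The sketch below is a minimal Python/NumPy example rather than the circuit's actual implementation; the sum of absolute differences is assumed as the prediction-error measure, and the search range is a hypothetical parameter.

```python
import numpy as np

def best_motion_vector(block, ref, top, left, search=4):
    """Exhaustive block-matching search: return the displacement (dy, dx)
    within +/- search pixels that minimises the sum of absolute differences
    between the macroblock and the reference frame."""
    h, w = block.shape
    best_vector, best_error = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            error = np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum()
            if error < best_error:
                best_error, best_vector = error, (dy, dx)
    return best_vector, best_error
```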
The prediction error is compared, for example, with the dispersion (variance) of the given macroblock to be encoded. If the dispersion of the macroblock is smaller than the prediction error, prediction-compensation encoding is not performed on that macroblock; instead, intraframe encoding is performed. In this case, the prediction mode is referred to as the intraframe encoding mode. The motion vector extracted by the motion vector extraction circuit 2 and the information indicating the employed prediction mode are supplied to a variable-length encoder 6 and a motion compensation circuit (MC) 12.
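A minimal sketch of this intra/inter decision, assuming the dispersion is measured as the variance of the macroblock and the prediction error as the mean squared difference against the predicted image (the text does not fix these particular measures):

```python
import numpy as np

def choose_macroblock_mode(macroblock, predicted):
    """Choose intraframe encoding when the macroblock's own dispersion is
    smaller than the prediction error; otherwise use prediction encoding."""
    mb = macroblock.astype(float)
    dispersion = np.var(mb)                                        # spread of the block itself
    prediction_error = np.mean((mb - predicted.astype(float)) ** 2)
    return "intra" if dispersion < prediction_error else "inter"
```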
The motion compensation circuit 12 generates a predicted image on the basis of the motion vector. The result is applied to arithmetic operation circuits 3 and 10. The arithmetic operation circuit 3 calculates the difference between the value of the given macroblock to be encoded and the value of the predicted image, and the result is supplied as a difference image signal to a DCT circuit 4. In the case of an intra macroblock, the arithmetic operation circuit 3 transfers the value of the given macroblock to be encoded directly to the DCT circuit 4 without performing any operation.
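The behaviour of the arithmetic operation circuit 3 amounts to a conditional subtraction; the function below is an illustrative stand-in, with intra as a hypothetical flag derived from the mode decision sketched above.

```python
import numpy as np

def difference_signal(macroblock, predicted, intra):
    """Pass an intra macroblock through unchanged; otherwise subtract the
    motion-compensated prediction to form the difference image signal."""
    mb = macroblock.astype(np.int16)
    return mb if intra else mb - predicted.astype(np.int16)
```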
The DCT circuit 4 performs a DCT (discrete cosine transform) operation on the input signal, thereby generating DCT coefficients. The resultant DCT coefficients are supplied to a quantization circuit (Q) 5. The quantization circuit 5 quantizes the DCT coefficients in accordance with a quantization step that depends on the amount of data stored in a transmission buffer 7. The quantized data is then supplied to the variable-length encoder 6.
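For illustration, the DCT and quantization steps can be sketched as follows. This sketch uses an orthonormal DCT-II matrix and a single uniform quantization step; the actual MPEG scheme applies an 8x8 quantization matrix scaled by the quantizer scale, which is simplified here.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix for an n-point transform."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    """Two-dimensional DCT of a square block (e.g. an 8x8 block of the
    difference image signal)."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def quantize(coefficients, step):
    """Uniform quantization of the DCT coefficients with a single step size."""
    return np.round(coefficients / step).astype(int)
```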
The variable-length encoder 6 converts the quantized data supplied from the quantization circuit 5 into a variable-length code, using, for example, the Huffman encoding technique, in accordance with the quantization step (scale) supplied from the quantization circuit 5. The obtained variable-length code is supplied to the transmission buffer 7.
The variable-length encoder 6 also receives the quantization step (scale) from the quantization circuit 5, and the motion vector as well as the information indicating the employed prediction mode (the intraframe prediction mode, the forward prediction mode, the backward prediction mode, or the forward-and-backward prediction mode in which the prediction has been performed) from the motion vector extraction circuit 2, and converts these received data into variable-length codes.
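The variable-length encoding step can be illustrated with a generic Huffman code built from symbol frequencies. The MPEG standard actually uses fixed, predefined VLC tables rather than codes derived from the data, so the sketch below only demonstrates the principle of giving shorter codes to more frequent symbols.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table (symbol -> bit string) from the observed
    symbol frequencies; shorter codes go to more frequent symbols."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate case: one distinct symbol
        return {next(iter(freq)): "0"}
    heap = [[count, i, [sym, ""]] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]         # extend codes on the low-frequency branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]         # extend codes on the high-frequency branch
        heapq.heappush(heap, [lo[0] + hi[0], next_id] + lo[2:] + hi[2:])
        next_id += 1
    return dict((sym, code) for sym, code in heapq.heappop(heap)[2:])

print(huffman_code([0, 0, 0, 0, 1, 1, 2]))  # {2: '00', 1: '01', 0: '1'} -- most frequent symbol gets the shortest code
```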
The transmission buffer 7 temporarily stores the received encoded image data. A quantization control signal corresponding to the amount of data stored in the transmission buffer 7 is fed back from the transmission buffer 7 to the quantization circuit 5.
If the amount of residual data stored in the transmission buffer 7 reaches an upper allowable limit, the transmission buffer 7 sends a quantization control signal to the quantization circuit 5 so that the following quantization operations are performed using an increased quantization scale, thereby decreasing the amount of quantized data. Conversely, if the amount of residual data decreases to a lower allowable limit, the transmission buffer 7 sends a quantization control signal to the quantization circuit 5 so that the following quantization operations are performed using a decreased quantization scale, thereby increasing the amount of quantized data. In this way, an overflow or underflow in the transmission buffer 7 is prevented.
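The feedback from the transmission buffer to the quantization circuit amounts to a simple rate-control rule. The following is a minimal sketch, assuming the buffer fullness is expressed as a fraction of its capacity and the quantizer scale is an integer in the usual MPEG range of 1 to 31; the thresholds are hypothetical.

```python
def adjust_quantizer_scale(scale, buffer_fullness, upper=0.9, lower=0.1,
                           scale_min=1, scale_max=31):
    """Raise the quantization scale when the buffer nears its upper limit
    (coarser quantization, fewer bits generated) and lower it when the
    buffer drains toward its lower limit (finer quantization, more bits)."""
    if buffer_fullness >= upper:
        return min(scale + 1, scale_max)
    if buffer_fullness <= lower:
        return max(scale - 1, scale_min)
    return scale
```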
The data stored in the transmission buffer 7 is read out at a specified time and is output over a transmission line or recorded on a recording medium.
The quantized data output by the quantization circuit 5 is also supplied to an inverse quantization circuit 8. The inverse quantization circuit 8 performs inverse quantization on the received data in accordance with the quantization step given by the quantization circuit 5. The data (the DCT coefficients generated by means of the inverse quantization) output by the inverse quantization circuit 8 are supplied to an IDCT (inverse DCT) circuit 9, which in turn performs an inverse DCT operation on the received data. The arithmetic operation circuit 10 adds the predicted image signal to the signal output from the IDCT circuit 9 for each macroblock, and stores the resultant signal into a set of frame memories (FM) 11 so that the stored image signal can be used as a reference image in subsequent prediction.
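This local decoding path (inverse quantization, inverse DCT, and addition of the predicted image) can be sketched as follows; it mirrors the simplified forward transform and uniform quantization of the earlier sketch, and the intra flag is again a hypothetical stand-in for the macroblock's encoding mode.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (same construction as the earlier sketch)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def local_decode(quantized, step, predicted, intra):
    """Rebuild the macroblock as the decoder would: inverse-quantize the
    coefficients, apply the inverse DCT, and add the predicted image back
    for inter macroblocks."""
    c = dct_matrix(quantized.shape[0])
    coefficients = quantized.astype(float) * step     # inverse quantization
    residual = c.T @ coefficients @ c                 # inverse DCT (IDCT)
    return residual if intra else residual + predicted.astype(float)
```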
