Detection of a change of scene in a motion estimator of a...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details


active

06480543

ABSTRACT:

FIELD OF THE INVENTION
The invention relates to the field of electronics, and, more particularly, to a video encoder.
BACKGROUND OF THE INVENTION
Motion estimation is based upon matching a set of pixels of a field of one picture with a position in the same field of the successive picture, i.e., by translating portions of the preceding picture. As objects move, their displacement may expose to the video camera parts of the scene that were not visible before, and their shape may change, e.g., through zooming.
The family of algorithms suitable to identify and associate these portions of images is generally referred to as motion estimation. Such an association permits calculation of a portion of the difference image by removing the redundant temporal information, which makes more effective the subsequent processing of the compression by DCT, quantization and entropy coding.
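As an illustration of the general technique (not the estimator claimed by this patent), block matching by minimizing the sum of absolute differences (SAD) over a search window can be sketched as follows; the function name, block size, and search radius are chosen here for the example:

```python
def full_search_sad(ref, cur, by, bx, block=8, radius=4):
    """Find the motion vector (dy, dx) that minimizes the SAD between
    the current block at (by, bx) and candidate blocks in the
    reference frame, searched exhaustively within +/- radius pixels."""
    h, w = len(ref), len(ref[0])
    best_mv, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + block > h or x0 + block > w:
                continue  # candidate block falls outside the reference frame
            sad = 0
            for r in range(block):
                for c in range(block):
                    sad += abs(cur[by + r][bx + c] - ref[y0 + r][x0 + c])
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

A matched block lets the coder transmit only the motion vector and the (small) prediction error instead of the full pixel values.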
The MPEG-2 standard provides an example of the method discussed above. A typical block diagram of an MPEG-2 video coder is depicted in FIG. 1. Such a system is made up of the following functional blocks.
1) Field ordinator. This block is formed of one or several field memories outputting the fields in the coding order required by the MPEG standard. For example, if the input sequence is I B B P B B P etc., the output order will be I P B B P B B . . . , where the I, P and B fields are described below.
I (Intra coded picture) is a field and/or a semifield that still contains its temporal redundancy.
P (Predicted picture) is a field and/or semifield from which the temporal redundancy with respect to the preceding I or P fields (previously coded/decoded) has been removed.
B (Bidirectionally predicted picture) is a field and/or a semifield whose temporal redundancy with respect to the preceding I and subsequent P fields (or preceding P and subsequent P fields) has been removed. In both cases the I and P pictures must be considered as already coded/decoded.
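The reordering performed by the field ordinator follows from these dependencies: each anchor (I or P) must be coded before the B pictures that are predicted from it. A minimal sketch of that display-to-coding reordering (illustrative only; the function name is an assumption):

```python
def display_to_coding_order(frames):
    """Reorder a display-order sequence such as 'IBBPBBP' into coding
    order: each anchor (I or P) is emitted before the B frames that
    precede it in display order, since they are predicted from it."""
    coding, pending_b = [], []
    for f in frames:
        if f in ('I', 'P'):
            coding.append(f)        # emit the anchor first,
            coding.extend(pending_b)  # then the B frames that needed it
            pending_b = []
        else:
            pending_b.append(f)     # B frame: wait for its future anchor
    coding.extend(pending_b)        # any trailing B frames
    return coding
```

Applied to the display sequence I B B P B B P, this yields the coding order I P B B P B B given in the text.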
Each frame buffer in the 4:2:0 format occupies the following memory space:

standard PAL:
720 × 576 × 8 for the luminance (Y)   = 3,317,760 bit
360 × 288 × 8 for the chrominance (U) =   829,440 bit
360 × 288 × 8 for the chrominance (V) =   829,440 bit
total Y + U + V                       = 4,976,640 bit

standard NTSC:
720 × 480 × 8 for the luminance (Y)   = 2,764,800 bit
360 × 240 × 8 for the chrominance (U) =   691,200 bit
360 × 240 × 8 for the chrominance (V) =   691,200 bit
total Y + U + V                       = 4,147,200 bit
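These totals follow directly from the 4:2:0 sampling structure: a full-resolution luminance plane plus two chrominance planes subsampled 2:1 in both directions. A small sketch of the arithmetic (illustrative; the function name is an assumption):

```python
def frame_bits_420(width, height, bits_per_sample=8):
    """Memory occupied by one 4:2:0 frame: full-resolution luminance (Y)
    plus two chrominance planes (U, V) subsampled 2:1 horizontally
    and vertically."""
    y = width * height * bits_per_sample
    u = (width // 2) * (height // 2) * bits_per_sample
    v = u
    return y + u + v
```

For PAL (720 × 576) this gives 4,976,640 bit and for NTSC (720 × 480) 4,147,200 bit, matching the figures above.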
2) Motion estimator. This block removes the temporal redundance from the P and B pictures.
3) DCT. This block implements the discrete-cosine transform (DCT) according to the MPEG-2 standard. The I picture and the error pictures P and B are divided into 8*8 blocks of Y, U, V pixels on which the DCT transform is performed.
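The 2-D DCT applied to each 8*8 block can be written in its textbook form; a minimal (unoptimized) sketch, not the patent's implementation:

```python
import math

def dct_8x8(block):
    """2-D DCT-II on an 8x8 block, in the direct O(n^4) textbook form:
    F(u,v) = 1/4 * C(u)C(v) * sum_x sum_y f(x,y)
             * cos((2x+1)u*pi/16) * cos((2y+1)v*pi/16),
    with C(0) = 1/sqrt(2) and C(k) = 1 otherwise."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out
```

A uniform block concentrates all its energy in the DC coefficient F(0,0); real coders use fast factorizations rather than this direct form.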
4) Quantizer Q. An 8*8 block resulting from the DCT transform is then divided by a quantizing matrix to reduce the magnitude of the DCT coefficients. The information associated with the highest frequencies less visible to human sight tends to be removed. The result is reordered and sent to the successive block.
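The element-wise division by the quantizing matrix can be sketched as follows (illustrative only; truncation toward zero stands in for the standard's exact rounding rules):

```python
def quantize(coeffs, qmatrix):
    """Divide each DCT coefficient by the corresponding quantizer step
    and truncate toward zero; large steps on high-frequency positions
    tend to collapse those coefficients to 0."""
    return [[int(coeffs[r][c] / qmatrix[r][c])
             for c in range(len(coeffs[r]))]
            for r in range(len(coeffs))]
```

The larger the step in a given position of the matrix, the more information is discarded there, which is why high frequencies, less visible to human sight, are quantized most coarsely.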
5) Variable Length Coding (VLC). The code words output from the quantizer tend to contain a large number of null coefficients followed by nonnull values. The null values preceding each nonnull value are counted, and that count forms the first portion of a codification word, whose second portion represents the nonnull coefficient.
These paired values tend to assume values more probable than others. The most probable ones are coded with relatively short words composed of 2, 3 or 4 bits while the least probable are coded with longer words. Statistically, the number of output bits is less than in the case that such methods are not implemented.
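The (run, level) pairing described above, applied to a scanned coefficient sequence before the variable-length codes are assigned, can be sketched as:

```python
def run_level_pairs(scan):
    """Turn a scanned coefficient sequence into (run, level) pairs:
    'run' counts the zeros preceding each nonzero 'level'. These pairs
    are what the VLC table then maps to short or long code words."""
    pairs, run = [], 0
    for coeff in scan:
        if coeff == 0:
            run += 1
        else:
            pairs.append((run, coeff))
            run = 0
    return pairs  # trailing zeros are signalled by an end-of-block code
```

Frequent pairs (short runs, small levels) receive the short 2-, 3- or 4-bit codes; rare pairs receive longer ones.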
6) Multiplexer and Buffer. Data generated by the variable length coder, the quantizing matrices, the motion vectors and other syntactic elements are assembled to construct the final syntax defined by the MPEG-2 standard. The resulting bit stream is stored in a memory buffer, the limit size of which is defined by the MPEG-2 standard and cannot be overfilled. The quantizer block Q enforces this limit by adjusting the divisors applied to the DCT 8*8 blocks, depending on the fullness of the memory buffer and on the energy of the 8*8 source block taken upstream of the motion estimation and DCT transform process.
7) Inverse Variable Length Coding (I-VLC). The variable length coding functions specified above are executed in an inverse order.
8) Inverse Quantization (IQ). The words output by the I-VLC block are reordered in the 8*8 block structure, which is multiplied by the same quantizing matrix used for its preceding coding.
9) Inverse DCT (I-DCT). The DCT transform function is inverted and applied to the 8*8 block output by the inverse quantization process. This permits changing from spatial frequency domain to the pixel domain.
10) Motion Compensation and Storage. At the output of the I-DCT block, one of the following may be present. A decoded I picture (or semipicture) to be stored in a respective memory buffer for removing the temporal redundancy with respect thereto from subsequent P and B pictures. Alternatively, a decoded prediction-error picture (semipicture) P or B that must be summed to the information removed previously during the motion estimation phase. In the case of a P picture, the resulting sum, stored in a dedicated memory buffer, is used during the motion estimation process for the successive P and B pictures. These field memories are generally distinct from the field memories used for re-arranging the blocks.
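The summation step for a P (or B) block — adding the decoded prediction error back to the motion-compensated predictor — amounts to an element-wise addition; a minimal sketch:

```python
def reconstruct(prediction, error):
    """Motion compensation: rebuild a block by summing the decoded
    prediction-error values with the motion-compensated predictor
    taken from the stored reference picture."""
    return [[p + e for p, e in zip(prow, erow)]
            for prow, erow in zip(prediction, error)]
```

For a P picture, the reconstructed block is then written into the reference buffer so that later P and B pictures can be predicted from it.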
11) Display Unit. This unit converts the pictures from the 4:2:0 format to the 4:2:2 format, and generates the interlaced format for displaying the images. The functional blocks depicted in FIG. 1 are provided in an architecture implementing the above described coder, as shown in FIG. 2a. The field ordinator block, the motion compensation and storage block for storing the already reconstructed P and I pictures, and the multiplexer and buffer blocks for storing the bitstream produced by the MPEG-2 coding are integrated in memory devices external to the integrated circuit forming the core of the coder. The decoder accesses the external memory (DRAM) through a single interface managed by an integrated controller.
Moreover, the preprocessing block converts the received images from the format 4:2:2 to the format 4:2:0 by filtering and subsampling the chrominance. The post-processing block implements a reverse function during the decoding and displaying phase of the images. The coding phase also uses a decoding step for generating the reference pictures to make the motion estimation operative. For example, the first I picture is coded, then it is decoded and stored as described above. The first I picture is used for calculating the prediction error that will be used to code the subsequent P and B pictures.
The playback phase of a data stream previously generated by the coding process uses only the inverse functional blocks (I-VLC, I-Q, I-DCT, etc.), not the direct ones. In other words, the coding and the decoding implemented for the subsequent displaying of the images are nonconcurrent processes within the integrated architecture. The purpose of the motion estimation algorithm is to predict images/semifields in a sequence as a composition of whole pixel blocks, referred to as predictors, originated from preceding or future images/semifields.
The MPEG-2 standard includes three types of pictures/semifields:
I pictures (Intra coded pictures) are pictures that are not submitted to motion estimation. They still contain their temporal redundancy and are fundamental for coding the other two picture types.
P pictures (Predicted pictures) are the pictures whose temporal redundancy has been removed through motion estimation with respect to the preceding I or P pictures.
