Method and model for regulating the computational and memory...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

C382S243000, C382S305000, C348S419100


active

06542549


BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and model for regulating the computational and memory requirements of a compressed bitstream in a video decoder. The invention is useful in the field of multimedia audio-visual coding and compression, where the encoder needs to regulate the complexity requirements of the bitstreams it generates. This ensures that decoders conforming to the complexity specification of the standard can successfully decode these bitstreams without running short of resources.
2. Description of the Related Art
In the past, implementers of video decoders that conform to a certain performance capability of a standard have been required to ensure that the decoders have sufficient resources to support the worst case scenario that is possible according to the specification. This is not good engineering practice, as the worst case usually represents a scenario that is almost impossible under normal operating conditions. It leads to over-engineering and a waste of resources.
Currently within the MPEG-4 standardization effort, there is work to specify complexity bounds for the decoder based on the complexity of the bitstream rather than on the worst case scenario. This is a statistical method based on a common unit of complexity measure. The standard specifies a fixed value for the maximum complexity allowed in a compliant bitstream. The decoder is required to provide resources sufficient to decode all compliant bitstreams, and the encoder is required to ensure that all bitstreams it generates do not exceed the maximum complexity bound and are therefore compliant.
FIG. 1 shows the above concept graphically. In FIG. 1, valid bitstreams are distributed to the left of the fixed value of the standard and all compliant decoders are distributed to the right of it. The complexity bound is indicated by the straight line in the graph; the abscissa is given in complexity units. On the left of the line are all the conformant bitstreams. The typical distribution of bitstreams is depicted here: the majority have a complexity much lower than the complexity bound, while a few approach it. A bitstream that exceeds this bound is no longer a conformant bitstream and is therefore not shown. On the right side of the complexity bound is the distribution of decoders. Most decoders would be designed as close to the complexity bound as possible in order to save cost; a few decoders may have more resources than required, and these lie further to the right of the graph. A decoder that does not have enough resources to satisfy the complexity requirements of a compliant bitstream would lie to the left of the complexity bound and be considered non-compliant.
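The compliance relationship of FIG. 1 can be expressed as a simple predicate. The following is an illustrative sketch only; the bound value and function names are hypothetical and not taken from any standard:

```python
# Hypothetical fixed maximum complexity from the standard, in complexity units.
COMPLEXITY_BOUND = 1000.0

def bitstream_compliant(bitstream_complexity):
    """A bitstream is compliant if it lies at or below the bound
    (the left side of FIG. 1)."""
    return bitstream_complexity <= COMPLEXITY_BOUND

def decoder_compliant(decoder_capability):
    """A decoder is compliant if its resources lie at or above the bound
    (the right side of FIG. 1)."""
    return decoder_capability >= COMPLEXITY_BOUND

def decodable(bitstream_complexity, decoder_capability):
    # Any compliant decoder can, by construction, decode any compliant bitstream.
    return (bitstream_compliant(bitstream_complexity)
            and decoder_compliant(decoder_capability))
```

The point of the fixed bound is exactly this transitivity: encoder and decoder never need to negotiate, since each only checks itself against the single standardized value.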
FIG. 2 shows a simple complexity measure method in which the encoder counts the number of times each macroblock type is selected and evaluates the complexity of the bitstream using a predefined cost function assigned to each macroblock type. The cost function of the bitstream being generated is counted by calculating the equivalent I-MB units. However, this method cannot give an instantaneous complexity measure and does not consider other resources such as memory. Information about the current macroblock is passed to the Macroblock Type Decision, module 201, where the decision to encode the macroblock with a particular method is made. This decision is then counted by the Cost Function Generator, module 202, which converts the information into a complexity cost function. The complexity cost function is then fed back to the Macroblock Type Decision module for future decisions.
Modules 203 to 210 are the typical modules required for a hybrid transform coder. The input picture is partitioned into blocks that are processed by the motion estimation and compensation modules 210 and 209, respectively. Note that this step is skipped if there is no motion prediction. The motion compensated difference signal is then processed by the DCT transform module 203. The transform coefficients are then quantized in the Quantization module 204. The quantized coefficients are then entropy coded, together with the overhead information of the macroblock type and motion vectors, in the Variable Length Coding module 205. The local decoder, comprising modules 206 to 209, reconstructs the coded picture for use in prediction of future pictures. The Inverse Quantization module 206 inverse quantizes the coefficients before they are fed into the Inverse DCT module 207, where the difference signal is recovered. The difference signal is then added to the motion prediction to form the reconstructed block. These blocks are then stored in the Frame Memory, module 208, for future use.
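The essence of this prediction loop can be sketched in a few lines. This is a toy illustration, not the patent's method: the DCT and entropy coding stages are elided, and only the quantize / inverse-quantize reconstruction path is shown, since it is what keeps encoder and local decoder in step. The quantizer step size is an arbitrary choice:

```python
QP = 4  # hypothetical quantizer step size

def encode_block(block, prediction):
    """Toy hybrid-coder step: quantize the motion-compensated difference and
    reconstruct the block exactly as the local decoder (modules 206-209) would."""
    # Motion-compensated difference signal.
    residual = [s - p for s, p in zip(block, prediction)]
    # Quantization (module 204): uniform quantizer with step QP.
    levels = [round(r / QP) for r in residual]
    # Local decoder: inverse quantize and add back the motion prediction.
    recon = [p + lv * QP for p, lv in zip(prediction, levels)]
    # `levels` would go to entropy coding; `recon` to the Frame Memory.
    return levels, recon
```

The design point is that the encoder predicts from the *reconstructed* picture, not the original, so that encoder and decoder drift cannot accumulate.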
Also, in video coding it is inherent that the compression process results in a variable bitrate bitstream. This bitstream is commonly sent over a constant bitrate channel. In order to absorb the instantaneous variation in the bitrate it is common to introduce buffers at the output of the encoder and at the input of the decoder. These buffers serve as reservoirs for bits and allow a constant bitrate channel to be connected to an encoder that generates variable bitrate bitstreams as well as to a decoder that consumes bitstreams at a variable bitrate.
The buffer occupancy changes in time, because the rate at which the buffer is being filled and the rate at which it is being emptied are different. However, over a long period of time, the average rate of filling the buffer and the average rate of emptying the buffer can be defined to be the same. Therefore, if a large enough buffer is allowed, steady state operation can be achieved. To work correctly the buffer must not become empty (underflow) or be totally filled up (overflow). In order to ensure this constraint, buffer models have been presented in standards such as MPEG-1 and MPEG-2, where the video buffer model describes the behaviour of a variable bitrate decoder connected to a constant bitrate channel. The remainder of the decoder does not need to be modelled because the video decoding method is defined at a constant frame rate with each frame having a constant size. The constant rate of decoding and the consumption of the buffer are therefore well defined in time, and the video buffering verifier (VBV) is used to verify whether the buffer memory required in a decoder is less than the defined buffer size by checking the bitstream against its delivery rate function, R(t).
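A verifier in the spirit of the VBV can be sketched as below. This is a deliberately simplified model under stated assumptions: delivery is a constant number of bits per frame interval (a constant R(t)), each coded frame is removed from the buffer instantaneously at its decode time, and overflow is treated as a violation rather than clamped:

```python
def verify_buffer(frame_sizes, rate_per_frame, buffer_size, initial_fullness):
    """Return True if decoding the given sequence of coded-frame sizes neither
    underflows nor overflows a decoder buffer of `buffer_size` bits."""
    fullness = initial_fullness
    for size in frame_sizes:
        if size > fullness:
            return False  # underflow: frame not fully delivered at decode time
        fullness -= size  # frame removed instantaneously for decoding
        fullness += rate_per_frame  # channel delivers bits during one frame interval
        if fullness > buffer_size:
            return False  # overflow: buffer cannot hold the delivered bits
    return True
```

In steady state the average frame size must equal the delivery rate per frame interval; the buffer exists only to absorb the instantaneous deviation around that average.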
Defining the complexity measure is not sufficient to ensure that the decoder can be designed in an unambiguous way. There are two reasons for this.
The first reason is that complexity is measured over a window of time. If the window is sufficiently large, it can accommodate several frames of pictures, and the complexity distribution may be such that the resources of the decoder are exhausted at some instant even though the average complexity remains below the set limit. Restricting the window to a shorter time would restrict the variability of the complexity of the pictures; in the limit, all pictures would have to have constant complexity. This is undesirable, since by the nature of the coding modes different picture types should have different complexities.
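The windowing problem described above is easy to demonstrate numerically. The helper below is an illustrative sketch with made-up numbers: a long-window average can sit comfortably below a limit while a single frame inside the window exceeds the decoder's per-frame resources.

```python
def window_averages(frame_costs, window):
    """Average complexity over each sliding window of `window` frames."""
    return [sum(frame_costs[i:i + window]) / window
            for i in range(len(frame_costs) - window + 1)]

# Hypothetical per-frame costs: four cheap frames and one expensive frame.
costs = [1, 1, 9, 1, 1]

# Over a 5-frame window the average is 2.6, below a limit of, say, 3 units,
# yet the third frame alone needs 9 units at that instant.
```

Shrinking the window to one frame catches the burst, but then every frame is held to the same limit, which defeats the purpose of letting I-, P-, and B-pictures have different complexities.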
The second reason is that complexity is not related only to computation time. A second element, which is not captured in the complexity measure, is the memory requirement.
The problem to be solved is therefore to invent a method for regulating the complexity requirements of the bitstream in terms of computational and memory requirements.
Also, in recent developments in the video compression process, a more flexible, object-oriented encoding method has been defined by MPEG. This flexible encoding scheme supports a variable number of macroblocks within a video picture and different picture rates, such that the rate of decoding and the rate of consumption of the memory are no longer constant.
