Methods and apparatus for reducing drift due to averaging in...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate

Details

Classification: C375S240250
Status: active
Patent number: 06539058

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to methods and apparatus for reducing and/or eliminating drift in luminance and/or chrominance values that can occur in various known reduced resolution, e.g., downsampling, video decoders.
BACKGROUND OF THE INVENTION
Various digital applications, such as digital video, involve the processing, storage, and transmission of relatively large amounts of digital data representing, e.g., one or more digital images. Each image normally comprises a large number of pixels. Each pixel is represented in digital form using one or more numerical values referred to herein as pixel values. A pixel value provides, e.g., luminance or chrominance information corresponding to a single pixel.
In order to reduce the amount of digital data that must be stored and transmitted in conjunction with digital applications, various digital coding techniques, e.g., transform encoding techniques, have been developed. Discrete cosine transform (DCT) encoding is a particularly common form of transform encoding. The data (pixel values) representing several pixels, e.g., an 8×8 block of pixels, is frequently encoded using DCT coding to generate a series of AC and DC DCT coefficient values. The DCT coefficient values represent the 8×8 block of pixels in encoded form.
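For concreteness, the following is a minimal sketch of this step (not taken from the patent; the helper names are hypothetical). It computes the orthonormal 2-D DCT-II coefficients of an 8×8 block with NumPy; the [0, 0] coefficient is the DC term and the remaining 63 coefficients are the AC terms.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n).reshape(-1, 1)           # frequency index
    i = np.arange(n).reshape(1, -1)           # sample index
    m = np.cos((2 * i + 1) * k * np.pi / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)                 # scale the DC row
    return m * np.sqrt(2 / n)

def dct2_8x8(block: np.ndarray) -> np.ndarray:
    """2-D DCT of an 8x8 block: coefficient [0, 0] is DC, the rest are AC."""
    c = dct_matrix(8)
    return c @ block.astype(float) @ c.T

# Example: a flat block produces a single non-zero DC coefficient.
flat = np.full((8, 8), 128.0)
coeffs = dct2_8x8(flat)
print(round(coeffs[0, 0], 2))   # 1024.0 (8 * 128); all AC terms are ~0
```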
DCT encoding is frequently used in combination with motion compensated prediction techniques which are used to further reduce the amount of data required to represent a series of digital images. Motion compensated prediction involves the coding of all or a portion of an image by referring to a portion of one or more other images, e.g., reference frames. Motion vectors are used when encoding images via reference to other frame(s). A motion vector identifies pixels of a reference frame to be used when making a motion compensated prediction to reconstruct an image. The pixels to be used are identified in a motion vector through the use of horizontal and vertical offsets which are interpreted relative to the location of a macroblock that is being decoded.
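As a rough sketch of how such offsets are applied (a hypothetical helper limited to full-pel offsets, not the patent's prediction module), the following fetches a 16×16 prediction from a stored reference frame using horizontal and vertical offsets interpreted relative to the macroblock's own position:

```python
import numpy as np

def predict_block(ref_frame: np.ndarray, mb_row: int, mb_col: int,
                  mv_y: int, mv_x: int, size: int = 16) -> np.ndarray:
    """Return the motion compensated prediction for one macroblock.

    mv_y / mv_x are full-pel vertical and horizontal offsets measured
    relative to the position of the macroblock being decoded.
    """
    top = mb_row * size + mv_y
    left = mb_col * size + mv_x
    return ref_frame[top:top + size, left:left + size].copy()

# Example: macroblock (2, 3) predicted from a region shifted 4 right, 1 up.
ref = np.random.randint(0, 256, (288, 352), dtype=np.uint8)
pred = predict_block(ref, mb_row=2, mb_col=3, mv_y=-1, mv_x=4)
print(pred.shape)   # (16, 16)
```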
One standard proposed for the coding of motion pictures, commonly referred to as the MPEG-2 standard and described in ISO/IEC 13818-2 (1996), Generic Coding of Moving Pictures and Associated Audio Information: Video (hereinafter the “MPEG-2” reference), relies heavily on the use of DCT and motion compensated prediction coding techniques. An earlier version of MPEG, referred to as MPEG-1, also supports the use of motion compensated prediction.
In accordance with MPEG-2, images, e.g., frames, can be coded as intra-coded (I) frames, predictively coded (P) frames, or bi-directionally coded (B) frames. I frames are encoded without the use of motion compensation. P frames are encoded using motion compensation and a reference to a single anchor frame. The single anchor frame is a preceding frame in the sequence of frames being decoded. B frames are encoded using a reference to two anchor frames, e.g., a preceding frame and a subsequent frame. Reference to the subsequent frame is achieved using a backward motion vector, while reference to the preceding frame is achieved using a forward motion vector. In MPEG, I and P frames may be used as anchor frames for prediction purposes. B frames are not used as anchor frames.
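For B frames the forward and backward predictions are combined by averaging. A minimal sketch of that combination, assuming the MPEG-style integer averaging in which halves are rounded up (discussed further below); the arrays are illustrative, not decoder code:

```python
import numpy as np

def average_predictions(forward: np.ndarray, backward: np.ndarray) -> np.ndarray:
    """Combine forward and backward predictions for a B frame.

    Integer average with halves rounded up (away from zero), as the MPEG
    prediction rules require for non-negative pixel values.
    """
    return (forward.astype(np.int32) + backward + 1) // 2

fwd = np.array([[100, 101], [102, 103]], dtype=np.uint8)
bwd = np.array([[101, 102], [103, 105]], dtype=np.uint8)
print(average_predictions(fwd, bwd))
# [[101 102]
#  [103 104]]
```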
MPEG-1 and MPEG-2 both support the specification of motion vector information, i.e., vertical and horizontal offsets, in half-pixel (half-pel) units. These standards specify that bilinear interpolation, which involves a division operation, is to be used to determine predicted pixel values when non-integer offsets are specified. They also specify that chrominance motion vector values are to be obtained by scaling the transmitted luminance motion vector values.
As will be discussed in detail below, the MPEG standards specify that the result of the division operation performed as part of a prediction should be rounded to the nearest integer. The MPEG standards further specify that when the result of the division operation has a fractional part of one half, the result is to be rounded away from zero. Since the quantities involved in prediction calculations are non-negative, this results in rounding a fractional part of one half up to the next highest integer. The specified rounding up results in an intentional biasing of pixel values.
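The sketch below shows half-pel bilinear prediction under this rounding convention: the +1 and +2 bias terms implement round-half-away-from-zero for the non-negative operands. The array handling and the helper name are illustrative assumptions, not text from the standard or the patent.

```python
import numpy as np

def half_pel_prediction(ref: np.ndarray, top: int, left: int,
                        half_y: bool, half_x: bool, size: int = 16) -> np.ndarray:
    """Bilinear half-pel interpolation with MPEG-style integer rounding."""
    r = ref.astype(np.int32)
    a = r[top:top + size, left:left + size]
    if not half_y and not half_x:
        return a                                   # full-pel position
    b = r[top:top + size, left + 1:left + size + 1]
    c = r[top + 1:top + size + 1, left:left + size]
    d = r[top + 1:top + size + 1, left + 1:left + size + 1]
    if half_x and not half_y:
        return (a + b + 1) // 2                    # horizontal half-pel
    if half_y and not half_x:
        return (a + c + 1) // 2                    # vertical half-pel
    return (a + b + c + d + 2) // 4                # both: +2 bias rounds halves up

ref = (np.arange(400) % 256).astype(np.uint8).reshape(20, 20)
print(half_pel_prediction(ref, top=2, left=3, half_y=True, half_x=True, size=4).shape)  # (4, 4)
```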
As a result of MPEG's integer rounding procedure, compliant motion compensated prediction modules normally generate integer pixel values as their output. In addition, pixel values generated by performing an inverse discrete cosine transform operation are normally output as integer values. This simplifies subsequent processing by eliminating the need to handle fractional values.
MPEG encoders are designed with the expectation that the data they generate will be decoded with the above-discussed MPEG-specified integer rounding operation being performed at decoding time. Because of the predictable nature of the rounding operation, MPEG encoders can encode data in such a manner that, when all the encoded data is decoded by a fully compliant MPEG decoder, the rounding that occurs over multiple sequential predictions will not cause unanticipated changes in brightness or color, sometimes referred to as drift.
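The toy simulation below (purely illustrative, not the patent's method) chains 30 half-pel predictions of a luminance ramp and compares a decoder that rounds halves up, as specified, with a hypothetical decoder that truncates. The gap that opens between them illustrates the kind of cumulative drift being described.

```python
import numpy as np

ramp = np.arange(64, dtype=np.int64)      # a simple luminance ramp
compliant = ramp.copy()                   # rounds halves up, per the standard
truncating = ramp.copy()                  # hypothetical non-compliant decoder

for _ in range(30):                       # 30 chained half-pel predictions
    compliant = (compliant[:-1] + compliant[1:] + 1) // 2
    truncating = (truncating[:-1] + truncating[1:]) // 2

# Every adjacent pair of a ramp sums to an odd number, so each prediction
# hits the worst case: the two decoders diverge by one gray level per step.
print(int(compliant.mean() - truncating.mean()))   # 30
```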
Various approaches have been taken to implement low cost video decoders capable of decoding and displaying digital video data. Many of these approaches involve one or more data reduction operations, e.g., downsampling, designed to reduce the amount of encoded video data that must be stored and processed by a video decoder. A video decoder which performs downsampling is referred to as a “downsampling” video decoder. Because such decoders produce reduced resolution images, they are also sometimes referred to as “reduced resolution” decoders. Downsampling video decoders are discussed in U.S. Pat. No. 5,635,985, which is hereby expressly incorporated by reference.
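As a generic illustration of such a data reduction operation (not the specific downsampler of the patent), an 8×8 block of pixel values can be reduced to 4×4 by averaging each 2×2 group:

```python
import numpy as np

def downsample_2x2(block: np.ndarray) -> np.ndarray:
    """Reduce resolution by averaging each 2x2 group of pixel values."""
    h, w = block.shape
    grouped = block.reshape(h // 2, 2, w // 2, 2).astype(np.int32)
    return (grouped.sum(axis=(1, 3)) + 2) // 4    # integer average, halves rounded up

block = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(downsample_2x2(block).shape)   # (4, 4)
```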
FIG. 1 illustrates a known downsampling video decoder 10. The decoder 10 includes preparser 12, a syntax parser and variable length decoding (VLD) circuit 14, an inverse quantization circuit 16, an inverse discrete cosine transform (IDCT) circuit 18, a downsampler 20, summer 22, switch 24, memory 30, a pair of motion compensated prediction modules 26, 27 and a select/average predictions circuit 28. The memory 30 includes a coded data buffer 32 and a reference frame store 34. The various components of the decoder 10 are coupled together as illustrated in FIG. 1.
The preparser 12 receives encoded video data and selectively discards portions of the received data prior to storage in the coded data buffer 32. The encoded data from the buffer 32 is supplied to the input of the syntax parser and VLD circuit 14. The circuit 14 provides motion data and other motion prediction information to the motion compensated prediction modules 26, 27. In addition, it parses and variable length decodes the received data. A data output of the syntax parser and VLD circuit 14 is coupled to an input of the inverse quantization circuit 16.
The inverse quantization circuit 16 generates a series of DCT coefficients which are supplied to the IDCT circuit 18. From the received DCT coefficients, the IDCT circuit 18 generates a plurality of integer pixel values. In the case of intra-coded images, e.g., I frames, these values fully represent the image being decoded. In the case of inter-coded images, e.g., P and B frames, the output of the IDCT circuit 18 represents image (difference) data which is to be combined with additional image data to form a complete representation of the image or image portion being decoded. The additional image data, with which the output of the IDCT circuit is to be combined, is generated through the use of one or more received motion vectors and stored reference frames. The reference frames are obtained by the MCP modules 26, 27 from the reference frame store 34.
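A minimal sketch of that combination step for an inter-coded block, assuming 8-bit pixels and clipping (the helper is illustrative, not the decoder's actual summer 22):

```python
import numpy as np

def reconstruct_inter_block(prediction: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Combine the motion compensated prediction with the IDCT output."""
    combined = prediction.astype(np.int32) + residual.astype(np.int32)
    return np.clip(combined, 0, 255).astype(np.uint8)   # keep pixels in the 8-bit range

pred = np.full((8, 8), 120, dtype=np.uint8)              # from a prediction module
residual = np.full((8, 8), -5, dtype=np.int16)           # from the IDCT circuit
print(reconstruct_inter_block(pred, residual)[0, 0])     # 115
```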
In order to reduce the amount of decoded video data that must be stored in the memory 30, the do
