Reducing the memory required for decompression by storing...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

C375S240250

Reexamination Certificate

active

06668019

ABSTRACT:

BACKGROUND
The present invention relates to the field of video decompression devices, and is more specifically directed to methods and circuits for reducing the memory required during decompression by storing compressed information using discrete cosine transform (DCT) based techniques.
The size of a digital representation of uncompressed video images depends on the resolution and color depth of the image. A movie composed of a sequence of uncompressed video images and accompanying audio signals quickly becomes too large to fit entirely onto a conventional recording medium, such as a compact disk (CD). Moreover, transmitting such an uncompressed movie over a communication link is prohibitively expensive because of the excessive quantity of data to be transmitted.
It is therefore advantageous to compress video and audio sequences before they are transmitted or stored. A great deal of effort is being expended to develop systems to compress these sequences. There are several coding standards currently in use that are based on the DCT algorithm, including MPEG-1, MPEG-2, H.261, and H.263. (MPEG is an acronym for “Moving Picture Experts Group”, a committee of the International Organization for Standardization, ISO.) The MPEG-1, MPEG-2, H.261 and H.263 standards include decompression protocols that describe how an encoded (i.e. compressed) bitstream is to be decoded (i.e. decompressed). The encoding can be done in any manner, as long as the resulting bitstream complies with the standard.
Video and/or audio compression devices (hereinafter encoders) are used to encode the video and/or audio sequence before the sequence is transmitted or stored. The resulting encoded bitstream is decoded by a video and/or audio decompression device (hereinafter decoder) before the video and/or audio sequence is output. However, a bitstream can only be decoded by a decoder that supports the standard used by the encoder. To be able to decode the bitstream on a large number of systems, it is advantageous to encode the video and/or audio sequences according to a well-accepted encoding/decoding standard. The MPEG standards are currently well-accepted standards for one-way communication; H.261 and H.263 are currently well-accepted standards for two-way communication, such as video telephony.
Once decoded, the video and audio sequences can be output on an electronic system dedicated to outputting video and audio, such as a television or a video cassette recorder (VCR), or on an electronic system where image display and audio are just one feature of the system, such as a computer. A decoder needs to be added to these electronic systems to allow them to decode the compressed bitstream into uncompressed data before it can be output. An encoder needs to be added to allow such electronic systems to compress video and/or audio sequences that are to be transmitted or stored. Both an encoder and a decoder need to be added for two-way communication.
FIG. 1A shows a block diagram of the architecture of a typical decoder, such as an MPEG-2 decoder 10. The decoder 10 can be both a video and audio decoder or just a video decoder, where the audio portion of the decoder 10 can be performed in any known conventional way. The encoded bitstream is received by an input buffer, typically a first-in-first-out (FIFO) buffer 30, hereinafter FIFO 30, although the buffer can be any type of memory. The FIFO 30 buffers the incoming encoded bitstream as previously received data is being decoded.
The encoded bitstream for video contains compressed frames. A frame is a data structure representing the encoded data for one displayable image in the video sequence. This data structure consists of one two-dimensional array of luminance pixels and two two-dimensional arrays of chrominance samples, i.e., color difference samples. The color difference samples are typically sampled at half the sampling rate of the luminance samples in both the vertical and horizontal directions, producing a sampling mode of 4:2:0 (luminance:chrominance:chrominance). The color difference samples can, however, also be sampled at other frequencies, for example at one-half the sampling rate of the luminance in the horizontal direction and the same sampling rate as the luminance in the vertical direction, producing a sampling mode of 4:2:2.
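The memory cost of these sampling modes can be illustrated with a short sketch. The helper below is not from the patent; the function name and the 720×480 frame dimensions are illustrative assumptions, and 8-bit samples are assumed throughout.

```python
def frame_bytes(width, height, mode):
    """Bytes for one frame of 8-bit samples: one luminance plane plus
    two chrominance planes subsampled according to the given mode."""
    luma = width * height
    if mode == "4:2:0":      # chroma halved horizontally and vertically
        chroma = (width // 2) * (height // 2)
    elif mode == "4:2:2":    # chroma halved horizontally only
        chroma = (width // 2) * height
    else:
        raise ValueError("unsupported sampling mode")
    return luma + 2 * chroma

# An illustrative 720x480 frame:
print(frame_bytes(720, 480, "4:2:0"))  # 518400 bytes (1.5 bytes/pixel)
print(frame_bytes(720, 480, "4:2:2"))  # 691200 bytes (2 bytes/pixel)
```

The comparison makes the storage motivation concrete: 4:2:0 sampling carries half as much chrominance data as 4:2:2 for the same image.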
A frame is typically further subdivided into smaller subunits, such as macroblocks. A macroblock is a data structure having a 16×16 array of luminance samples and two 8×8 arrays of adjacent chrominance samples. The macroblock contains a header portion having motion compensation information and six block data structures. A block is the basic unit for DCT-based transform coding and is a data structure encoding an 8×8 subarray of pixels. A macroblock represents four luminance blocks and two chrominance blocks.
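The subdivision of a macroblock's 16×16 luminance array into four 8×8 blocks can be sketched as follows; the function name and list-of-lists representation are illustrative assumptions, not part of the patent.

```python
def luma_blocks(mb):
    """Split a 16x16 luminance macroblock (a list of 16 rows of 16
    samples) into the four 8x8 blocks used for DCT-based coding."""
    blocks = []
    for by in (0, 8):            # top half, bottom half
        for bx in (0, 8):        # left half, right half
            blocks.append([row[bx:bx + 8] for row in mb[by:by + 8]])
    return blocks

# A toy macroblock whose sample value encodes its (row, column) position:
mb = [[y * 16 + x for x in range(16)] for y in range(16)]
blocks = luma_blocks(mb)
print(len(blocks))                         # 4 luminance blocks
print(len(blocks[0]), len(blocks[0][0]))   # each block is 8x8
```

The two 8×8 chrominance arrays are already block-sized, which is why a 4:2:0 macroblock contains six blocks in total.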
Both MPEG-1 and MPEG-2 support three types of coded frames: Intra (I) frames, Forward Predicted (P) frames, and Bidirectionally Predicted (B) frames. I frames contain only intrapicture coding. P and B frames may contain both intrapicture and interpicture coding. I and P frames are used as reference frames for interpicture coding.
In interpicture coding, the redundancy between two frames, the frame being decoded and a prediction frame, is eliminated as much as possible, and the residual differences between the two frames, i.e. the interpicture prediction errors, are transmitted. Motion vectors are also transmitted in interpicture coding that uses motion compensation. The motion vectors describe how far, and in what direction, the macroblock has moved compared to the prediction macroblock. Interpicture coding requires the decoder 10 to have access to the previous and/or future images, i.e. the I and/or P frames, that contain the information needed to decode or encode the current image. These previous and/or future images need to be stored and then used to decode the current image.
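The decoder side of this scheme can be sketched in a few lines. This is a minimal illustration assuming integer-pel motion vectors and a small toy reference frame; the function names are assumptions, not terms from the patent.

```python
def fetch_prediction(ref, mv_x, mv_y, x, y, size=4):
    """Fetch the prediction block: the block at (x, y) in the stored
    reference frame, displaced by the integer motion vector (mv_x, mv_y)."""
    return [row[x + mv_x:x + mv_x + size]
            for row in ref[y + mv_y:y + mv_y + size]]

def reconstruct(prediction, residual):
    """Interpicture decoding step: add the transmitted prediction
    errors back onto the fetched prediction."""
    return [[p + r for p, r in zip(prow, rrow)]
            for prow, rrow in zip(prediction, residual)]

ref = [[x + y for x in range(8)] for y in range(8)]   # toy reference frame
pred = fetch_prediction(ref, 2, 1, 0, 0)              # motion vector (2, 1)
recon = reconstruct(pred, [[1] * 4 for _ in range(4)])
```

The sketch shows why the reference frames must be kept in memory: `fetch_prediction` reads directly from a previously decoded image.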
Intrapicture coding for I frames involves the reduction of redundancy between the original pixels in the frame using block-based DCT techniques, although other coding techniques can be used. For P and B frames, intrapicture coding involves using the same DCT-based techniques to remove redundancy between the interpicture prediction error pixels.
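The block transform at the heart of both cases can be sketched as a naive 8×8 two-dimensional DCT-II. This is a textbook formulation for illustration, not the patent's implementation; real decoders use fast factorizations rather than the quadruple loop below.

```python
import math

def dct_2d(block):
    """Naive 8x8 2-D DCT-II, applied alike to intra pixel blocks and
    to interpicture prediction-error blocks."""
    N = 8
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for y in range(N):
                for x in range(N):
                    s += (block[y][x]
                          * math.cos((2 * x + 1) * v * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * u * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

flat = [[100] * 8 for _ in range(8)]   # a uniform (fully redundant) block
coeffs = dct_2d(flat)
# All of the block's energy lands in the DC coefficient coeffs[0][0];
# the AC coefficients are (numerically) zero.
```

This is precisely the redundancy-removal effect the paragraph describes: smooth blocks compact into a few significant coefficients, and the rest can be coded very cheaply.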
Referring again to FIG. 1A, the output of the FIFO 30 is coupled to a macroblock header parser 36. The header parser 36 parses the information into macroblocks, and then parses the macroblocks and sends the header portion of each macroblock to an address calculation circuit 96. The address calculation circuit 96 determines the type of prediction to be performed, which determines which prediction frames the motion compensation engine will need to access. Using the motion vector information, the address calculation circuit 96 also determines the address in memory 160 of the prediction frame, and of the prediction macroblock within that frame, that is needed to decode the motion-compensated prediction for the macroblock being decoded.
The prediction macroblock is obtained from memory 160 and input into the half-pel filter 78, which is coupled to the address calculation circuit 96. Typically there is a DMA engine 162 in the decoder that controls all of the interfaces with the memory 160. The half-pel filter 78 performs vertical and horizontal half-pixel interpolation on the fetched prediction macroblock, as dictated by the motion vectors, to obtain the prediction macroblocks.
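Half-pixel interpolation is conventionally a bilinear average of neighbouring integer pixels, which is what the sketch below assumes; the coordinate convention (motion displacement expressed in half-pel units) and the function name are illustrative choices, not details from the patent.

```python
def half_pel(ref, x2, y2, size=4):
    """Bilinear half-pixel interpolation. (x2, y2) is the block's
    position in half-pel units, so odd values fall between pixels."""
    out = []
    for j in range(size):
        row = []
        for i in range(size):
            fx, fy = (x2 + 2 * i) // 2, (y2 + 2 * j) // 2  # integer part
            hx, hy = (x2 + 2 * i) % 2, (y2 + 2 * j) % 2    # half-pel flags
            # Average the 1, 2 or 4 neighbouring integer pixels.
            s = n = 0
            for dy in range(hy + 1):
                for dx in range(hx + 1):
                    s += ref[fy + dy][fx + dx]
                    n += 1
            row.append((s + n // 2) // n)  # round to nearest
        out.append(row)
    return out

ref = [[x + 10 * y for x in range(8)] for y in range(8)]  # toy reference
full = half_pel(ref, 2, 0, size=2)   # even coordinates: a plain fetch
half = half_pel(ref, 1, 1, size=2)   # odd coordinates: 4-pixel averages
```

At even half-pel coordinates the filter degenerates to a plain fetch, so a single routine serves both full-pel and half-pel motion vectors.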
As explained earlier, pixel blocks in I frames and prediction error pixel blocks in P or B frames are encoded using DCT-based techniques. In this approach, the pixels are transformed using the DCT into DCT coefficients. These coefficients are then quantized in accordance with quantization tables. The quantized DCT coefficients are then further encoded as variable-length Huffman codes to maximize efficiency, with the most frequently repeated values given the smallest codes and the length of the codes increasing as the frequency of the values decreases. Codes other than the Huffman codes can also be used.
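The frequency-to-code-length relationship can be made concrete with a small Huffman construction over a toy run of quantized coefficients. This is a generic illustration of the principle, not the patent's or any MPEG standard's actual code tables, and the names below are assumptions.

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Compute Huffman code lengths for a symbol stream: the most
    frequent symbols end up with the shortest codes."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (weight, tiebreak, {symbol: depth-so-far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level down.
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# A toy run of quantized DCT coefficients, dominated by zeros:
coeffs = [0] * 20 + [1] * 8 + [-1] * 6 + [5] * 2
lengths = huffman_code_lengths(coeffs)
# The ubiquitous 0 gets a 1-bit code; the rare value 5 gets 3 bits.
```

Quantization makes most high-frequency coefficients zero, so this skewed distribution, and hence the efficiency of variable-length coding, is the typical case.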
