Electrical computers and digital processing systems: memory – Storage accessing and control – Shared memory area
Reexamination Certificate
1998-06-26
2004-03-09
Sparks, Donald (Department: 2187)
Electrical computers and digital processing systems: memory
Storage accessing and control
Shared memory area
Reexamination Certificate
active
06704846
ABSTRACT:
CROSS-REFERENCE TO RELATED APPLICATIONS
Not applicable.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to the field of digital video compression and particularly to memory arbitration within a digital video decoder. More particularly, the present invention relates to a video decoder including a memory arbitration scheme that combines the advantages of hardware-based memory arbitration with the advantages of software-based memory arbitration.
2. Background of the Invention
Real-time processing of full motion video sequences using a digital recording, playback, or transmission system requires a large number of numerical computations and data transactions in a relatively short amount of time. Motion pictures typically are constructed using multiple still pictures which are displayed one at a time in sequence. To record the video sequence, each still picture, or “frame,” must be digitally mapped onto a rectangular grid of pixels, each pixel representing the light intensity and color for a portion of the frame. In a Red-Green-Blue (RGB) system, each pixel includes three parameters which denote the intensity of the red, green, and blue light components, respectively, of that pixel. In accordance with the system defined by the National Television Standards Committee (NTSC), pixel data may also be described by a luminance parameter, which denotes the light intensity of the pixel, and two chrominance parameters, which describe the color of the pixel.
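The luminance/chrominance representation described above can be sketched with the standard BT.601 conversion matrix. This is a minimal illustration; the exact coefficients and full-range scaling are an assumption, since the text does not specify a particular conversion:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to a luminance parameter (Y) and
    two chrominance parameters (Cb, Cr), using the BT.601 full-range
    matrix (an assumption; broadcast systems use related variants)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

# A black pixel has zero luminance and neutral (mid-scale) chrominance.
black = rgb_to_ycbcr(0, 0, 0)
```

Separating luminance from chrominance matters for compression because the chrominance planes can then be subsampled with little visible loss.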
Although these systems specify only three parameters to describe each pixel, multiple frames must be displayed every second, each frame comprising hundreds of thousands of pixels if displayed on a typical computer monitor or television screen. In addition, it is usually desirable to include other multimedia information such as audio data along with the pixel data. As a result, a typical motion picture may involve many millions of data values that must be processed, stored, or transmitted each second. Because of the difficulty of building systems that can transmit and store audio and video data affordably at such high rates, various types of data compression algorithms have been introduced which allow the motion picture frames to be represented using a reduced amount of data. Video and audio systems which use these compression techniques require less storage space and transmission bandwidth, reducing the overall cost of the systems.
Video compression algorithms employ a number of techniques. Intraframe compression techniques seek to reduce the amount of data needed to describe a single picture frame, while interframe compression techniques reduce the amount of data needed to describe a sequence of pictures by exploiting redundancies between frames. The discrete cosine transform (DCT), used for intraframe compression, is a mathematical process for determining a set of coefficients that describe the frequency characteristics of the pixels in a given picture frame. Because DCT coefficients can be converted back to pixel values using a mathematical process known as the inverse DCT (IDCT), it is common in the art to represent frame data using DCT coefficients instead of the actual pixel values. Because the human eye is more responsive to lower frequencies in an image than to higher frequencies, a certain amount of high frequency picture information can be discarded or reduced without noticeably affecting the visual quality of a given frame. Once the DCT coefficients are determined, the high frequency coefficients can also be quantized, a process which reduces the number of binary digits (or “bits”) required to represent the coefficient values. Reducing the amount of high frequency information and/or quantizing the high frequency coefficients compresses the picture, reducing the amount of data needed to process, store, and transmit it.
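The DCT-plus-quantization step described above can be sketched as follows. This is a textbook implementation of the 8x8 forward DCT-II with a single uniform quantizer step, not an optimized decoder kernel, and the step size of 16 is an arbitrary assumption:

```python
import math

def dct2d(block):
    """Naive 8x8 forward DCT-II: maps pixel values to coefficients
    describing the block's frequency characteristics."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, step=16):
    """Uniform quantization: dividing and rounding reduces the number
    of bits needed to represent each coefficient."""
    return [[round(c / step) for c in row] for row in coeffs]

# A flat block concentrates all its energy in the DC (u=v=0) term;
# every other coefficient quantizes to zero.
flat = [[100] * 8 for _ in range(8)]
q = quantize(dct2d(flat))
```

The mostly-zero quantized output is what makes the run-level and zigzag techniques described next so effective.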
Other intraframe compression techniques include run-level encoding (RLE), zigzag ordering, and variable-length encoding. Run-level encoding expresses a data sequence in terms of ordered pairs, each consisting of the number of zeroes between nonzero coefficients and the value of the nonzero coefficient that terminates the run of zeroes. Zigzag ordering arranges the DCT coefficients according to frequency, so that coefficients representing similar frequencies are stored and transmitted together. Zigzag ordering increases the effectiveness of the RLE technique, since some frequency components, especially high frequency components, tend to have numerous zero values. Variable-length coding allows the data values to be represented using codewords which require, on average, fewer bits than the data values themselves. As a result, variable-length codes can be used to reduce the amount of storage space and transmission bandwidth required by the system. Examples of image compression formats are the Joint Photographic Experts Group (JPEG) format and the Graphics Interchange Format (GIF). It should be noted that compression techniques may be classified as either lossless (no image degradation) or lossy (some image degradation), and that some compression formats are capable of producing a wide range of image qualities, varying from no degradation (lossless) to moderate or extreme degradation (lossy). For more information on coding, refer to Digital Communications by Proakis (McGraw-Hill, 1995) or Elements of Information Theory by Cover and Thomas (John Wiley & Sons, 1991).
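The zigzag scan and run-level pairing described above can be sketched together. The scan order below reproduces the conventional JPEG-style zigzag (an assumption; MPEG-2 also defines an alternate scan), and the trailing-zero handling is simplified:

```python
def zigzag_order(n=8):
    """Index pairs for the standard zigzag scan of an n x n block.
    Coefficients on the same anti-diagonal (x + y constant) represent
    similar frequencies, so they end up adjacent in the scan."""
    return sorted(
        ((x, y) for x in range(n) for y in range(n)),
        key=lambda p: (p[0] + p[1],
                       p[0] if (p[0] + p[1]) % 2 else p[1]))

def run_level_encode(values):
    """Express a scanned sequence as (run, level) pairs: the number of
    zeroes before each nonzero coefficient, then that coefficient."""
    pairs, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs  # trailing zeroes are dropped (end-of-block in practice)

order = zigzag_order()
scanned = [50, 0, 0, -3, 0, 7, 0, 0]   # hypothetical quantized coefficients
pairs = run_level_encode(scanned)
```

In a real coder the (run, level) pairs would then be mapped to variable-length codewords, completing the intraframe pipeline sketched above.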
Interframe compression techniques exploit redundancies between consecutive video frames, known as temporal redundancies. Because moving pictures often involve either very little motion or motion primarily of foreground objects, consecutive frames often are highly similar, which increases the effectiveness of interframe compression. Interframe compression generally involves storing the differences between successive frames in the data file instead of the actual frame data itself. Interframe compression begins by storing the entire image of a reference frame, generally in a moderately compressed format. Successive frames are compared with the reference frame, and only the differences between the reference frame and the successive frames are stored. Periodically, such as when new scenes are displayed, new reference frames are stored, and subsequent comparisons begin from this new reference point. The level of interframe compression achieved, known as the compression ratio, may be content-dependent; i.e., if the video clip includes many abrupt scene transitions from one image to another, the compression is less efficient. Alternatively, the interframe compression ratio may be held constant at the cost of varying video quality. Examples of video compression techniques which use interframe compression include MPEG, DVI, and Indeo, among others. Using known techniques, the interframe-compressed pictures can later be reconstructed by a video decoder.
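The store-the-differences idea above can be sketched in its simplest form, plain frame differencing on a one-dimensional "frame" (a deliberate simplification: real interframe coders such as MPEG add block-based motion compensation rather than subtracting co-located pixels):

```python
def encode_delta(reference, frame):
    """Interframe sketch: store only the per-pixel differences from
    the reference frame instead of the frame itself."""
    return [f - r for f, r in zip(frame, reference)]

def decode_delta(reference, delta):
    """Decoder side: reconstruct the frame by adding the stored
    differences back onto the reference frame."""
    return [r + d for r, d in zip(reference, delta)]

reference = [10, 10, 10, 200, 200, 10]   # hypothetical reference frame
frame     = [10, 10, 10, 10, 200, 200]   # a foreground object has moved
delta = encode_delta(reference, frame)   # mostly zeroes, so it compresses well
```

The mostly-zero delta is why temporally similar frames compress so well, and why abrupt scene transitions (large, dense deltas) reduce the achievable ratio.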
The International Organization for Standardization (ISO) has developed a number of compression standards for audio/video systems, namely the Motion Pictures Experts Group (MPEG) standards, which include MPEG-1 and MPEG-2. The MPEG standards are developed by the working group formally designated ISO/IEC JTC1/SC29/WG11. The MPEG-1 standard defines data reduction techniques that include block-based motion-compensated prediction (MCP), which generally involves differential pulse code modulation (DPCM). The MPEG-2 standard is similar to the MPEG-1 standard but includes extensions to cover a wider range of applications, including interlaced digital video such as high definition television (HDTV).
An MPEG data stream includes three types of pictures, referred to as the Intraframe (or “I-frame”), the Predicted frame (or “P-frame”), and the Bi-directional Interpolated frame (or “B-frame”). The I-frames contain the video data for an entire frame of video and are typically placed every 10 to 15 frames. Intraframes generally are only moderately compressed. Predicted frames are encoded with r
Neuman Darren D.
Patwardhan Arvind B.
Wu Scarlett Z.
Chace Christian P.
Conley & Rose, P.C.
LSI Logic Corporation
Sparks Donald
Dynamic memory arbitration in an MPEG-2 decoding system