RAM-based search engine for orthogonal-sum block match...

Image analysis – Image compression or coding – Interframe coding

Reexamination Certificate


Details

Patent class: C375S240170
Type: Reexamination Certificate
Status: active
Patent number: 06360015

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates generally to digital video compression, and, more particularly, to a motion estimation search engine for a digital video encoder that is simpler, faster, and less expensive than the presently available technology permits.
Many different compression algorithms have been developed in the past for digitally encoding video and audio information (hereinafter referred to generically as a “digital video data stream”) in order to minimize the bandwidth required to transmit this digital video data stream for a given picture quality. Several multimedia specification committees have established and proposed standards for encoding/compressing and decoding/decompressing audio and video information. The most widely accepted international standards have been proposed by the Moving Picture Experts Group (MPEG), and are generally referred to as the MPEG-1 and MPEG-2 standards. Officially, the MPEG-1 standard is specified in the ISO/IEC 11172-2 standard specification document, which is herein incorporated by reference, and the MPEG-2 standard is specified in the ISO/IEC 13818-2 standard specification document, which is also herein incorporated by reference. These MPEG standards for moving picture compression are used in a variety of current video playback products, including digital versatile (or video) disk (DVD) players, multimedia PCs having DVD playback capability, and satellite broadcast digital video. More recently, the Advanced Television Systems Committee (ATSC) announced that the MPEG-2 standard will be used as the standard for digital HDTV transmission over terrestrial and cable television networks. The ATSC published the “Guide to the Use of the ATSC Digital Television Standard” on Oct. 4, 1995, and this publication is also herein incorporated by reference.
In general, in accordance with the MPEG standards, the audio and video data comprising a multimedia data stream (or “bit stream”) are encoded/compressed in an intelligent manner using a compression technique generally known as “motion coding”. More particularly, rather than transmitting each video frame in its entirety, MPEG uses motion estimation for only those parts of sequential pictures that vary due to motion, where possible. In general, the picture elements or “pixels” of a picture are specified relative to those of a previously transmitted reference or “anchor” picture using differential or “residual” video, as well as so-called “motion vectors” that specify the location of a 16-by-16 array of pixels or “macroblock” within the current picture relative to its original location within the anchor picture. Three main types of video frames or pictures are specified by MPEG, namely, I-type, P-type, and B-type pictures.
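As a rough illustration of the motion-coding idea described above, the sketch below shows how a decoder might rebuild one 16-by-16 macroblock from the anchor picture, a motion vector, and the transmitted residual. The flat luma-plane layout, the function name, and the omission of bounds checking are assumptions made for illustration only; this is not the patent's hardware or any particular encoder's code.

/*
 * Minimal sketch (illustrative, not from the patent): reconstruct one
 * 16x16 macroblock by copying the 16x16 array that the motion vector
 * points to in the anchor picture and adding the transmitted residual.
 * Pictures are assumed to be flat arrays of 8-bit luma samples with a
 * row stride equal to 'width'; bounds checking is omitted for brevity.
 */
#include <stdint.h>

#define MB_SIZE 16

static inline uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Rebuild the macroblock at (mb_x, mb_y) in 'cur' from the anchor
 * picture plus the decoded residual, using motion vector (mv_x, mv_y). */
void reconstruct_macroblock(uint8_t *cur, const uint8_t *anchor,
                            const int16_t *residual,
                            int width, int mb_x, int mb_y,
                            int mv_x, int mv_y)
{
    for (int y = 0; y < MB_SIZE; y++) {
        for (int x = 0; x < MB_SIZE; x++) {
            int cur_idx = (mb_y + y) * width + (mb_x + x);
            int ref_idx = (mb_y + mv_y + y) * width + (mb_x + mv_x + x);
            cur[cur_idx] = clamp_u8(anchor[ref_idx] +
                                    residual[y * MB_SIZE + x]);
        }
    }
}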
An I-type picture is coded using only the information contained in that picture, and hence, is referred to as an “intra-coded” or simply, “intra” picture.
A P-type picture is coded/compressed using motion compensated prediction (or “motion estimation”) based upon information from a past reference (or “anchor”) picture (either I-type or P-type), and hence, is referred to as a “predictive” or “predicted” picture.
A B-type picture is coded/compressed using motion compensated prediction (or “motion estimation”) based upon information from a past reference picture, a future reference picture (each either I-type or P-type), or both, and hence, is referred to as a “bidirectional” picture. B-type pictures are usually inserted between I-type and P-type pictures, or combinations of the two.
The term “intra picture” is used herein to refer to I-type pictures, and the term “non-intra picture” is used herein to refer to both P-type and B-type pictures. It should be mentioned that although the frame rate of the video data represented by an MPEG bit stream is constant, the amount of data required to represent each frame can be different, e.g., so that one frame of video data (e.g., 1/30 of a second of playback time) can be represented by x bytes of encoded data, while another frame of video data can be represented by only a fraction (e.g., 5%) of x bytes of encoded data. Since the frame update rate is constant during playback, the data rate is variable.
In general, the encoding of an MPEG video data stream requires a number of steps. The first of these steps consists of partitioning each picture into macroblocks. Next, in theory, each macroblock of each “non-intra” picture in the MPEG video data stream is compared with all possible 16-by-16 pixel arrays located within specified vertical and horizontal search ranges of the current macroblock's corresponding location in the anchor picture(s). This theoretical “full search algorithm” (i.e., searching through every possible block in the search region for the best match) always produces the best match, but is seldom used in real-world applications because of the tremendous amount of calculations that would be required, e.g., for a block size of N×N and a search region of (N+2w) by (N+2w), the distortion function MAE has to be calculated (2w+1)² times for each block. Rather, it is used only as a reference or benchmark to enable comparison of different, more practical motion estimation algorithms that can be executed far faster and with far fewer computations. These more practical motion estimation algorithms are generally referred to as “fast search algorithms”.
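The cost of the full search can be seen directly in a brute-force sketch like the one below, which evaluates all (2w+1)² candidate displacements in the range [-w, +w] using a sum-of-absolute-differences measure. The function names, flat picture layout, and omitted edge handling are illustrative assumptions, not the patent's search engine.

/*
 * Illustrative full-search sketch: for an N x N block and search
 * parameter w, every one of the (2w+1)^2 candidate displacements in
 * [-w, +w] is scored and the best one is returned as the motion vector.
 * Flat-array picture layout is assumed; bounds checking against the
 * picture edges is omitted for brevity.
 */
#include <stdint.h>
#include <stdlib.h>
#include <limits.h>

/* Sum of absolute differences between the N x N block at (bx, by) in
 * 'cur' and the block displaced by (dx, dy) in 'anchor'. */
static long block_sad(const uint8_t *cur, const uint8_t *anchor,
                      int width, int n,
                      int bx, int by, int dx, int dy)
{
    long sad = 0;
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
            sad += labs((long)cur[(by + y) * width + (bx + x)] -
                        (long)anchor[(by + dy + y) * width + (bx + dx + x)]);
    return sad;
}

/* Exhaustive search over the (2w+1) x (2w+1) displacement grid. */
void full_search(const uint8_t *cur, const uint8_t *anchor,
                 int width, int n, int w,
                 int bx, int by, int *best_dx, int *best_dy)
{
    long best = LONG_MAX;
    for (int dy = -w; dy <= w; dy++) {          /* 2w+1 vertical offsets   */
        for (int dx = -w; dx <= w; dx++) {      /* 2w+1 horizontal offsets */
            long sad = block_sad(cur, anchor, width, n, bx, by, dx, dy);
            if (sad < best) {
                best = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
    }
}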
The aforementioned search or “motion estimation” procedure, for a given prediction mode, results in a motion vector that corresponds to the position of the closest-matching macroblock (according to a specified matching criterion) in the anchor picture within the specified search range. Once the prediction mode and motion vector(s) have been determined, the pixel values of the closest-matching macroblock are subtracted from the corresponding pixels of the current macroblock, and the resulting 16-by-16 array of differential pixels is then divided into 8-by-8 “blocks”; a discrete cosine transform (DCT) is performed on each block, and the resulting coefficients are quantized and Huffman-encoded (as are the prediction type, motion vectors, and other information pertaining to the macroblock) to generate the MPEG bit stream. If no adequate macroblock match is detected in the anchor picture, or if the current picture is an intra, or “I-” picture, the above procedures are performed on the actual pixels of the current macroblock (i.e., no difference is taken with respect to pixels in any other picture), and the macroblock is designated an “intra” macroblock.
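A minimal sketch of the subtraction and block-partitioning step described above follows: the best-matching anchor macroblock is subtracted pixel-by-pixel from the current macroblock, and the 16-by-16 residual is split into four 8-by-8 blocks for the DCT. The DCT, quantization, and Huffman stages themselves are omitted, and the names and stride-based layout are assumptions made for illustration.

/*
 * Illustrative sketch of forming the differential (residual) macroblock
 * and partitioning it into the four 8x8 blocks on which the DCT is
 * performed.  'cur_mb' and 'pred_mb' point at the top-left pixel of the
 * current and best-matching macroblocks, each with row stride 'stride'.
 */
#include <stdint.h>

#define MB_SIZE  16
#define BLK_SIZE 8

/* Residual: current macroblock minus the motion-compensated predictor. */
void macroblock_residual(const uint8_t *cur_mb, const uint8_t *pred_mb,
                         int stride, int16_t residual[MB_SIZE][MB_SIZE])
{
    for (int y = 0; y < MB_SIZE; y++)
        for (int x = 0; x < MB_SIZE; x++)
            residual[y][x] = (int16_t)((int)cur_mb[y * stride + x] -
                                       (int)pred_mb[y * stride + x]);
}

/* Split the 16x16 residual into four 8x8 blocks (raster order), the
 * unit on which the DCT would then be performed. */
void split_into_blocks(const int16_t residual[MB_SIZE][MB_SIZE],
                       int16_t blocks[4][BLK_SIZE][BLK_SIZE])
{
    for (int b = 0; b < 4; b++) {
        int oy = (b / 2) * BLK_SIZE;   /* block row offset    */
        int ox = (b % 2) * BLK_SIZE;   /* block column offset */
        for (int y = 0; y < BLK_SIZE; y++)
            for (int x = 0; x < BLK_SIZE; x++)
                blocks[b][y][x] = residual[oy + y][ox + x];
    }
}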
For all MPEG-2 prediction modes, the fundamental technique of motion estimation consists of comparing the current macroblock with a given 16-by-16 pixel array in the anchor picture, estimating the quality of the match according to the specified metric, and repeating this procedure for every such 16-by-16 pixel array located within the search range. The hardware or software apparatus that performs this search is usually termed the “search engine,” and there exist a number of well-known criteria for determining the quality of the match. Among the best-known criteria are the Minimum Absolute Error (MAE), in which the metric consists of the sum of the absolute values of the differences between each of the 256 pixels in the macroblock and the corresponding pixel in the matching anchor picture macroblock; and the Minimum Square Error (MSE), in which the metric consists of the sum of the squares of the above pixel differences. In either case, the match having the smallest value of the corresponding sum is selected as the best match within the specified search range, and its horizontal and vertical positions relative to the current macroblock therefore constitute the motion vector. If the resulting minimum sum is nevertheless deemed too large, a suitable match does not exist for the current macroblock, and it is coded as an intra macroblock. For the purposes of the present invention, either of the above two criteria, or any other suitable criterion
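The two matching criteria named above can be written out as follows for a single 16-by-16 candidate comparison; the candidate with the smallest sum over the whole search range is taken as the best match. The pointer-and-stride conventions are illustrative assumptions, not the patent's implementation.

/*
 * Illustrative sketch of the two matching criteria for one macroblock
 * comparison: MAE sums the absolute pixel differences over the 256
 * pixels, MSE sums the squared pixel differences.  'cur_mb' and
 * 'cand_mb' point at the top-left pixel of the current and candidate
 * macroblocks, each with row stride 'stride'.
 */
#include <stdint.h>
#include <stdlib.h>

#define MB_SIZE 16

/* Sum of absolute differences (the MAE criterion's sum). */
long mae_cost(const uint8_t *cur_mb, const uint8_t *cand_mb, int stride)
{
    long sum = 0;
    for (int y = 0; y < MB_SIZE; y++)
        for (int x = 0; x < MB_SIZE; x++)
            sum += labs((long)cur_mb[y * stride + x] -
                        (long)cand_mb[y * stride + x]);
    return sum;
}

/* Sum of squared differences (the MSE criterion's sum). */
long mse_cost(const uint8_t *cur_mb, const uint8_t *cand_mb, int stride)
{
    long sum = 0;
    for (int y = 0; y < MB_SIZE; y++)
        for (int x = 0; x < MB_SIZE; x++) {
            long d = (long)cur_mb[y * stride + x] -
                     (long)cand_mb[y * stride + x];
            sum += d * d;
        }
    return sum;
}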
