Method and apparatus for de-interlacing interlaced content...

Interactive video distribution systems – Video distribution system components – Receiver

Reexamination Certificate


Details

Classifications: C348S448000, C348S452000, C348S699000
Type: Reexamination Certificate
Status: active
Patent number: 06269484

ABSTRACT:

The invention relates generally to methods and devices for de-interlacing video for display on a progressive display and more particularly to methods and apparatus for de-interlacing interlaced content using decoding motion vector data from compressed video streams.
BACKGROUND OF THE INVENTION
Computer monitors are mostly non-interlaced, or progressive, display devices: video or graphic images are displayed by drawing each successive line of pixel data in sequence for a frame of an image. One frame of data is typically displayed on the screen 60 times per second. In contrast, interlaced display devices, such as television displays, typically display images using even and odd line interlacing. For example, where a frame of interlaced video consists of one field of even line data and one field of odd line data, the two fields are alternately displayed, each 30 times per second, so that a complete frame is refreshed 30 times a second. A progressive display, however, needs to display a complete frame 60 times per second. Therefore, when interlaced video is the input to a progressive display, the video rendering system has to generate pixel data for the scan lines that were not received in time for the next frame update. This process is called de-interlacing. When such interlaced signals are displayed on a progressive computer display, picture quality problems can arise, especially where motion occurs in the picture and inferior methods of de-interlacing are used.
The problem exists particularly for personal computers having multimedia capabilities, since interlaced video information received from conventional video tapes, cable television broadcasters (CATV), digital video disks (DVDs) and direct broadcast satellite (DBS) systems must be de-interlaced for suitable display on a progressive (non-interlaced) display device.
A current video compression standard, known as MPEG-2 and hereby incorporated by reference, specifies the compression format and decoding format for interlaced and non-interlaced video picture information. MPEG-2 video streams have picture data divided into blocks of data, referred to as macroblocks in the MPEG-2 standard. Generally, a macroblock of data is a collection of Y, Cr, Cb (color space) blocks that have common motion parameters. Therefore, a macroblock of data contains a section of the luminance component and the spatially corresponding chrominance components. A macroblock of data can refer either to source or decoded data or to the corresponding coded data elements. Typically, a macroblock consists of a block of 16 pixels by 16 pixels of Y data and 8 by 8, or 16 by 16, pixels of Cr and Cb data in one field or frame of picture data.
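The macroblock layout described above can be sketched as a simple data structure. This is an illustrative sketch only, assuming the common 4:2:0 chroma subsampling (8 by 8 chrominance blocks); the class and field names are ours, not taken from the MPEG-2 specification.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class Macroblock:
    """Sketch of one MPEG-2 macroblock under 4:2:0 subsampling (assumption)."""
    y: np.ndarray   # 16x16 luminance (Y) samples
    cb: np.ndarray  # 8x8 blue-difference chrominance (Cb) samples
    cr: np.ndarray  # 8x8 red-difference chrominance (Cr) samples

    def __post_init__(self):
        # Enforce the block sizes quoted in the text for the 4:2:0 case.
        assert self.y.shape == (16, 16)
        assert self.cb.shape == (8, 8)
        assert self.cr.shape == (8, 8)

mb = Macroblock(y=np.zeros((16, 16), dtype=np.uint8),
                cb=np.zeros((8, 8), dtype=np.uint8),
                cr=np.zeros((8, 8), dtype=np.uint8))
```

Under 4:2:2 or 4:4:4 subsampling the chrominance blocks would be larger, matching the "8 by 8, or 16 by 16" range given in the text.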
Generally, in MPEG-2 systems, two fields of a frame may be coded separately to form two field pictures. Alternatively, the two fields can be coded together as a frame. This is known generally as a frame picture. Both frame pictures and field pictures may be used in a single video sequence. A picture consists of a luminance matrix Y, and two chrominance matrices (Cb and Cr).
MPEG-2 video streams also include data known as motion vector data, which is used solely by a decoder to efficiently decompress the encoded macroblock of data. A motion vector, referred to herein as a decoding motion vector, is a two-dimensional vector used for motion compensation that provides an offset from a coordinate position in the current picture to the coordinates in a reference picture. The decoder uses the decoding motion vector data to reference pixel data from frames already decoded, so that more compact difference data can be sent instead of absolute data for the referenced pixels or macroblocks. In other words, the motion vector data is used to decompress the picture data in the video stream. A zero decoding motion vector may indicate that there was no change in pixel data from a previously decoded picture.
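The decoder's use of a decoding motion vector can be sketched as follows: the vector (dx, dy) offsets the current block's position into an already-decoded reference picture, and the transmitted difference (residual) data is added to the referenced pixels. This is a hedged, simplified sketch; the function and parameter names are illustrative and real MPEG-2 motion compensation also involves half-pel interpolation and per-field prediction, omitted here.

```python
import numpy as np

def motion_compensate(reference, x, y, mv, residual, block=16):
    """Reconstruct one block from a reference picture plus residual data.

    mv is the decoding motion vector (dx, dy): an offset from the block's
    position (x, y) in the current picture into the reference picture.
    """
    dx, dy = mv
    ref_block = reference[y + dy : y + dy + block, x + dx : x + dx + block]
    return ref_block + residual  # compact difference data + referenced pixels

# Toy example: an 8x8 reference picture and a 4x4 block at (0, 0)
reference = np.arange(64, dtype=np.int32).reshape(8, 8)
residual = np.ones((4, 4), dtype=np.int32)
out = motion_compensate(reference, x=0, y=0, mv=(2, 1), residual=residual, block=4)
```

A zero motion vector with a zero residual simply copies the co-located reference block, which is the "no change in pixel data" case mentioned above.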
In MPEG-2 video streams, decoding motion vectors are typically assigned to a high percentage of macroblocks. A macroblock can be in either a field picture or a frame picture. A macroblock in a field picture is field predicted; a macroblock in a frame picture can be either field predicted or frame predicted.
A macroblock of data defined in the MPEG-2 standard includes among other things, macroblock mode data, decoding motion vector data and coded block pattern data. Macroblock mode data are bits that are analyzed for de-interlacing purposes. For example, macroblock mode data can include bits indicating whether the data is intracoded. Coded block pattern data are bits indicating which blocks are coded.
Intracoded macroblocks are blocks of data that are not temporally predicted from a previously reconstructed picture. Non-intracoded macroblocks have one or more decoding motion vectors and are temporally predicted from a previously reconstructed picture. In an MPEG-2 video stream, a picture structure can be either field coded or frame coded.
Several basic ways of de-interlacing interlaced video information include the “weave” method and the “bob” method. With the “weave,” or merge, method, successive even and odd fields are merged: each frame to be displayed is constructed by interleaving the scan lines of a pair of fields. As a result, the frame rate is one-half the field display rate. The “weave” method is generally most effective in areas of a picture that have no motion over successive frames, because it preserves full pixel detail for non-moving objects. When motion does occur, however, artifacts appear in the form of double images of a moving object, and jagged edges around the periphery of the moving object cause poor image quality.
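The weave merge described above can be sketched in a few lines: the scan lines of an even field and an odd field are interleaved into one progressive frame. A minimal sketch, with illustrative names:

```python
import numpy as np

def weave(even_field, odd_field):
    """Merge two fields into one progressive frame by interleaving scan lines."""
    h, w = even_field.shape
    frame = np.empty((2 * h, w), dtype=even_field.dtype)
    frame[0::2] = even_field  # even scan lines come from the even field
    frame[1::2] = odd_field   # odd scan lines come from the odd field
    return frame

# Toy example: two 2-line fields become one 4-line frame
even = np.full((2, 4), 0)
odd = np.full((2, 4), 1)
frame = weave(even, odd)
# frame rows alternate between even-field and odd-field data: 0, 1, 0, 1
```

Because the two fields were captured at different instants, a moving object sits at two positions in the woven frame, which is exactly the double-image artifact described above.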
In contrast to the “weave” method, the “bob” method displays single fields as frames. The missing scan lines are interpolated from the available lines in the field, making the frame rate the same as the original field rate. This is sometimes referred to as intraframe de-interlacing. The most often used interpolation methods are line repetition, line averaging and edge-adaptive spatial interpolation. When this de-interlacing method is used without some form of motion detection, non-moving images can appear blurry from loss of image detail, resulting from inaccurate interpolation of pixel data.
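The line-averaging variant of bob can be sketched as follows: each missing scan line is interpolated as the mean of the two available lines around it, with line repetition at the bottom edge where no line below exists. A minimal sketch for an even field; an odd field would be handled symmetrically.

```python
import numpy as np

def bob(field):
    """Expand one even field to a full frame using line averaging."""
    h, w = field.shape
    frame = np.empty((2 * h, w), dtype=np.float64)
    frame[0::2] = field                              # lines actually received
    frame[1:-1:2] = (field[:-1] + field[1:]) / 2.0   # average the neighbors
    frame[-1] = field[-1]                            # bottom edge: repetition
    return frame

# Toy example: a 2-line field becomes a 4-line frame
field = np.array([[0.0, 0.0],
                  [2.0, 2.0]])
frame = bob(field)
# interpolated line 1 is the average of lines 0 and 2: value 1.0
```

Since every frame is built from a single field, half the vertical detail is discarded, which is why static picture areas look soft compared with weaving.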
Another method of de-interlacing is known as motion adaptive filtering, wherein different filtering (de-interlacing) strategies or algorithms are used in picture areas with and without motion; this is equivalent to using different filtering methods on different picture areas. Generally, intraframe de-interlacing is used in picture areas with motion, and field merging (weaving) is used in picture areas without motion. Coefficients in the adaptive filters are based on motion detection functions. However, such systems typically have to determine motion on a pixel by pixel basis from decoded picture information, which can add unnecessary computation time and cost. Additional discussion of video processing techniques can be found in the book “Digital Video Processing” by A. Murat Tekalp, published by Prentice Hall.
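The per-pixel selection at the heart of motion adaptive filtering can be sketched as follows: where a motion mask is set, the missing line is interpolated within the current field (intraframe, as in bob); where it is clear, the line from the opposite-parity field is merged in (weave). This is a hedged sketch; the motion mask would normally come from a motion detection function applied to decoded pictures, and is simply supplied as an input here.

```python
import numpy as np

def motion_adaptive_missing_lines(curr_field, prev_opposite_field, motion_mask):
    """Compute the missing scan lines, choosing per pixel between
    intraframe interpolation (motion) and field merging (no motion)."""
    bobbed = np.empty_like(curr_field, dtype=np.float64)
    bobbed[:-1] = (curr_field[:-1] + curr_field[1:]) / 2.0  # line averaging
    bobbed[-1] = curr_field[-1]                             # edge: repetition
    # motion_mask True -> interpolated value; False -> opposite field's line
    return np.where(motion_mask, bobbed, prev_opposite_field)

# Toy example: one column, two missing lines
curr = np.array([[0.0], [2.0]])
prev_opp = np.array([[5.0], [5.0]])
mask = np.array([[True], [False]])  # motion in line 0 only
missing = motion_adaptive_missing_lines(curr, prev_opp, mask)
```

The pixel-by-pixel mask evaluation in this sketch is precisely the per-pixel motion determination that the text identifies as the main computational cost of such systems.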
One proposed type of de-interlacing system adds an additional assistance signal to the encoded stream, which is then decoded in addition to the decoding motion information. The additional assistance signal is transmitted to a special decoder in a vertical blanking interval and enables the decoder to choose from a number of predetermined de-interlacing modes. The use of an additional signal requires modification of the encoder and a corresponding modification of the decoder.
Consequently, there exists a need for a de-interlacing system for displaying interlaced content on a progressive display device that does not require the generation and sending of additional information about motion. There also exists a need for an MPEG-2 de-interlacing system that has comparatively low computational complexity and that can cost effectively imp
