Method and system for video compression

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

Classification: C375S240190
Type: Reexamination Certificate
Status: active
Patent number: 06532265

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to video compression techniques.
2. Description of the Related Art
A video information stream comprises a sequence of video frames, each of which can be considered a still image. In a digital system the video frames are represented as arrays of pixels. Each pixel carries luminance (light intensity) and chrominance (color) information, which is stored in a memory of the digital system, with a number of bits reserved for each pixel. From a programming point of view, each video frame can be considered a two-dimensional data type. Note that fields from an interlaced video sequence can also be considered video frames.
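The frame representation described above can be sketched as a two-dimensional array of pixel values. The following minimal Python illustration assumes a luminance-only frame with 8-bit samples; the dimensions are arbitrary:

```python
# A video frame as a two-dimensional array of pixels.
# Each pixel here stores only an 8-bit luminance value (0-255);
# a full representation would also carry chrominance components.

WIDTH, HEIGHT = 4, 3  # illustrative frame dimensions

# Build a HEIGHT x WIDTH frame initialised to mid-grey.
frame = [[128 for _ in range(WIDTH)] for _ in range(HEIGHT)]

frame[0][0] = 255  # a bright pixel at the top-left corner
frame[2][3] = 0    # a dark pixel at the bottom-right corner

print(frame[0][0], frame[2][3])  # 255 0
```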
In principle, when the video information stream must be transmitted between two digital systems, this can be realized by sending the video frames sequentially in time, for instance by sending the pixels, and thus the bits, sequentially in time.
There exist, however, more elaborate transmission schemes that enable faster and more reliable communication between the two digital systems. These schemes are based on encoding the video information stream in the transmitting digital system and decoding the encoded stream in the receiving digital system. Note that the same principles can be exploited for storage purposes.
During encoding, the original video information stream is transformed into another digital representation, and that representation is transmitted. The goal of decoding is to reconstruct the original video information stream from the digital representation: completely when lossless compression is used, or approximately when lossy compression is used.
The encoding is based on the fact that temporally nearby video frames are often quite similar up to some motion. The pixel arrays of temporally nearby video frames often contain the same luminance and chrominance information, except that the coordinate places, or pixel positions, of that information are shifted or displaced. A shift or displacement in position as a function of time defines a motion, and the motion is characterized by a motion vector. Note that although the described similarity up to some motion appears only in ideal cases, it forms the basis of encoding based on a translational motion model. The transformation between a video frame and a temporally nearby video frame can also be more complicated, and such a transformation can form the basis of a more complicated encoding method.
Encoding of the video information stream is done by encoding the video frames of the sequence with respect to other video frames of the sequence. The other video frames are denoted reference video frames.
The encoding is in principle based on estimation of the motion between a video frame under consideration and a reference video frame. The motion estimation defines a motion vector. Motion estimation relies on calculating an error norm, determined by a norm of the difference between two video frames. Often the sum of absolute differences of the pixel values of the reference frame and the video frame under consideration is used as the error norm, although other error norms can also be used. In the prior art, essentially all error norms are based on differences between pixel values of the two frames.
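The sum-of-absolute-differences error norm mentioned above can be sketched as follows; the block contents are illustrative:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return sum(
        abs(a - b)
        for row_a, row_b in zip(block_a, block_b)
        for a, b in zip(row_a, row_b)
    )

ref_block = [[10, 20], [30, 40]]
cur_block = [[12, 18], [33, 40]]
print(sad(ref_block, cur_block))  # |10-12| + |20-18| + |30-33| + |40-40| = 7
```

A lower SAD value indicates a better match between the two blocks; a value of zero means the blocks are identical.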
After the motion is estimated, motion compensation is performed. Motion compensation comprises constructing a new, motion-compensated video frame from the reference video frame by applying the motion defined by the motion vector. The motion-compensated video frame comprises the pixels of the reference video frame, but located at different coordinate places. The motion-compensated video frame can then be subtracted from the video frame under consideration, resulting in an error video frame. Due to the temporal relation between the video frames, the error video frame contains less information. This error video frame and the motion vectors are then transmitted, possibly after some additional coding. The combination of motion estimation, motion compensation, subtraction and additional coding is further denoted interframe encoding.
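The compensation and subtraction steps can be sketched as below. This is a simplified illustration with a single global motion vector applied to the whole frame (the text explains that the operations are in practice performed per block), and border pixels are simply clamped; the frame contents are illustrative assumptions:

```python
def motion_compensate(reference, mv):
    """Build a motion-compensated frame by displacing the reference frame
    by the motion vector mv = (dx, dy); positions that would fall outside
    the frame are clamped to the nearest border pixel (an assumption made
    for this sketch)."""
    dx, dy = mv
    h, w = len(reference), len(reference[0])
    compensated = []
    for y in range(h):
        row = []
        for x in range(w):
            sy = min(max(y - dy, 0), h - 1)
            sx = min(max(x - dx, 0), w - 1)
            row.append(reference[sy][sx])
        compensated.append(row)
    return compensated

def error_frame(current, compensated):
    """Subtract the motion-compensated frame from the current frame."""
    return [[c - p for c, p in zip(cur_row, comp_row)]
            for cur_row, comp_row in zip(current, compensated)]

reference = [[1, 2, 3],
             [4, 5, 6],
             [7, 8, 9]]
# The current frame equals the reference shifted one pixel to the right.
current = [[1, 1, 2],
           [4, 4, 5],
           [7, 7, 8]]

compensated = motion_compensate(reference, (1, 0))
residual = error_frame(current, compensated)
print(residual)  # all zeros: the motion vector (1, 0) models the shift exactly
```

When the motion model captures the true displacement, the error frame is (near) zero and therefore cheap to code.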
Interframe encoding can be limited to a part of a video frame. Moreover, interframe encoding is typically not performed on the video frame as a whole but on pieces of the video frame: the video frame is divided into non-overlapping, or even overlapping, blocks. Each block defines a region in the video frame. The blocks can be of arbitrary shape: rectangular, triangular, hexagonal or any other shape, regular or irregular.
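For the common rectangular case, dividing a frame into non-overlapping blocks can be sketched as follows (the frame dimensions are assumed to be multiples of the block size in this illustration):

```python
def partition(frame, n):
    """Divide a frame into non-overlapping n x n blocks. Each block is
    returned together with the (top, left) position it occupies in the
    frame. Frame dimensions are assumed to be multiples of n."""
    blocks = []
    for top in range(0, len(frame), n):
        for left in range(0, len(frame[0]), n):
            block = [row[left:left + n] for row in frame[top:top + n]]
            blocks.append(((top, left), block))
    return blocks

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
for (top, left), block in partition(frame, 2):
    print((top, left), block)
# (0, 0) [[1, 2], [5, 6]]
# (0, 2) [[3, 4], [7, 8]]
```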
The blocks are thus also arrays of pixels, but of smaller size than the video frame array. The interframe encoding operations are then performed on essentially all the blocks of the video frame. As the encoding of a video frame is performed with respect to a reference video frame, a relation is implicitly defined between the blocks of the video frame under consideration and the blocks of the reference video frame. Indeed, the calculation of the sum of absolute differences, or any other error norm, is only performed between a block of the video frame under consideration and nearby located blocks of the reference video frame. These locations are defined by the minimum and maximum component values of the motion vector; in the case of a pure translational motion model, these correspond to the search ranges. The resulting locations define the search area in the reference video frame.
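The search over nearby locations described above can be sketched as an exhaustive (full-search) block matcher. This is one common strategy, used here only for illustration; the frame contents and search range are assumptions:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur_block, reference, top, left, search_range):
    """Exhaustive block matching: evaluate every candidate displacement
    within +/- search_range pixels of the block's position (top, left)
    and keep the motion vector (dx, dy) with the smallest SAD. The
    candidate locations that pass the boundary check form the search
    area in the reference frame."""
    n = len(cur_block)
    h, w = len(reference), len(reference[0])
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if not (0 <= y <= h - n and 0 <= x <= w - n):
                continue  # candidate block falls outside the reference frame
            candidate = [row[x:x + n] for row in reference[y:y + n]]
            err = sad(cur_block, candidate)
            if err < best_sad:
                best_sad, best_mv = err, (dx, dy)
    return best_mv, best_sad

reference = [[0, 0, 0, 0],
             [0, 9, 9, 0],
             [0, 9, 9, 0],
             [0, 0, 0, 0]]
cur_block = [[9, 9],
             [9, 9]]  # this content sits at position (1, 1) in the reference
mv, err = full_search(cur_block, reference, 0, 0, 2)
print(mv, err)  # (1, 1) 0
```

The search range bounds the motion vector components and hence the size of the search area; larger ranges capture faster motion at a higher computational cost.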
Wavelets have proven to be successful in compressing still images. Compared to the classical DCT approach (JPEG), the wavelet-based compression schemes offer the advantage of a much better image quality obtained at very high compression ratios. Still image compression via the wavelet transform leads to graceful image degradation at increased compression ratios, and does not suffer from the annoying blocking artefacts, which are typical for JPEG at very low bit rates. Another advantage of wavelets over DCT is the inherent multiresolution nature of the transformation, so that progressive transmission based on scalability in quality and/or resolution of images comes in a natural way. These advantages can be efficiently exploited for sequences of video frames, especially in very low bit rate applications that can benefit from the improved image quality. Moreover, the progressive transmission capability is important to support variable channel bandwidths.
A straightforward approach to building a wavelet-based video codec is to replace the DCT in a classical video coder by the discrete wavelet transform [Dufaux F., Moccagatta I. and Kunt M. “Motion-Compensated Generic Coding of Video Based on a Multiresolution Data Structure”. Optical Engineering, 32(7):1559-1570, 1993.][Martucci S., Sodagar I. and Zhang Y.-Q. “A Zerotree Wavelet Video Coder”. IEEE Trans. on Circ. and Syst. for Video Techn., 7(1):109-118, 1997.]. A drawback of this implementation is that for interframe encoding the wavelet transform is applied to the complete error video frame, which contains blocking artefacts. These artificial discontinuities, introduced by the motion vector field, lead to undesirable high-frequency subband coefficients that reduce the compression efficiency.
To avoid this limitation, the discrete wavelet transform is taken out of the temporal prediction loop, which results in the video encoder depicted in FIG. 1 [Zhang Y.-Q. and Zafar S. “Motion-Compensated Wavelet Transform Coding for Color Video Compression”. IEEE Trans. on Circ. and Syst. for Video Techn., 2(3):285-296, 1992.]. Before the motion estimation (ME) and motion compensation (MC), the discrete wavelet transform (DWT) is calculated on the video frames, obtaining for each of the video frames an average subimage and detail subimages (FIG. 12).
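The decomposition into an average subimage and detail subimages can be sketched with a single level of a 2D Haar transform. Haar is used here purely because it is the simplest wavelet; the patent does not prescribe a particular filter, and the averaging normalization is an assumption of this sketch:

```python
def haar_2d(frame):
    """One level of a 2D Haar wavelet transform (illustrative choice).
    The frame, whose dimensions are assumed even, is split into an
    average subimage LL and three detail subimages LH, HL, HH."""
    h, w = len(frame), len(frame[0])
    ll, lh, hl, hh = [], [], [], []
    for y in range(0, h, 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for x in range(0, w, 2):
            a, b = frame[y][x], frame[y][x + 1]
            c, d = frame[y + 1][x], frame[y + 1][x + 1]
            ll_row.append((a + b + c + d) / 4)  # average subimage
            hl_row.append((a - b + c - d) / 4)  # detail: differences along x
            lh_row.append((a + b - c - d) / 4)  # detail: differences along y
            hh_row.append((a - b - c + d) / 4)  # detail: diagonal differences
        ll.append(ll_row); lh.append(lh_row)
        hl.append(hl_row); hh.append(hh_row)
    return ll, lh, hl, hh

flat = [[10, 10], [10, 10]]
ll, lh, hl, hh = haar_2d(flat)
# A flat frame has all its energy in the average subimage.
print(ll, lh, hl, hh)  # [[10.0]] [[0.0]] [[0.0]] [[0.0]]
```

Recursively reapplying the transform to the average subimage yields the multiresolution pyramid that underlies the scalability properties mentioned above.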
Both the motion estimation
