Method and system for encoding and decoding moving and still...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

Classification: C375S240100
Type: Reexamination Certificate
Status: active
Patent number: 06621865

ABSTRACT:

CROSS-REFERENCE TO RELATED APPLICATIONS

BACKGROUND

1. Field of the Invention

The present invention relates generally to video encoding and decoding for moving and still pictures and more specifically to multi-dimensional-scalable video compression and decompression of high resolution moving and still pictures.

2. Description of the Related Art
As high definition television begins to make its way into the market, the installed base of existing television systems and video storage systems that operate at reduced definition must not be ignored. To address the complex problem of different resolutions and standards, several techniques are available. One of these techniques, scalable video coding, provides for two or more resolutions simultaneously in the video coding scheme to support both the installed base of standard resolution systems and new systems with higher resolution.
One scalable video coding technique is spatial scalability, which seeks to provide two or more coded bit streams that permit the transmission or storage of a lower resolution and a higher resolution image. One stream, a lower resolution encoded image stream, contains the lower resolution image data and the other stream, an encoded difference image stream, contains the data needed for forming a higher resolution image when combined with the lower resolution image. An encoded image stream is a time sequence of frame pictures or field pictures, some of which may be difference frames, that are encoded in accordance with a particular standard such as JPEG, MPEG-1, MPEG-2 or MPEG-4 or other similar standard. A source image stream is a time-ordered sequence of frame pictures or field pictures F_1-F_n, each containing a number of pixel blocks, that are presented to an encoder for coding or generated from a decoder for viewing.
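For concreteness, the sketch below (Python with NumPy, not part of the patent text) shows one way a frame of such a source image stream could be divided into the pixel blocks referred to here. The 8x8 block size, the 64x64 frame size and the helper name split_into_blocks are illustrative assumptions.

    import numpy as np

    def split_into_blocks(frame, block=8):
        # Split a 2-D luminance frame into non-overlapping (block x block) pixel
        # blocks; height and width are assumed to be multiples of the block size.
        h, w = frame.shape
        return (frame.reshape(h // block, block, w // block, block)
                     .swapaxes(1, 2)
                     .reshape(-1, block, block))

    # Example: a short source image stream F_1..F_3 of 64x64 luminance frames
    # (the frame size and content are made up purely for illustration).
    rng = np.random.default_rng(0)
    F = [rng.integers(0, 256, size=(64, 64)) for _ in range(3)]
    print(split_into_blocks(F[0]).shape)   # -> (64, 8, 8): 64 blocks of 8x8 pixels
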
FIG. 1 shows a standard MPEG-2 encoder 10, which is modified in FIG. 3 to become a spatial scalable codec. In FIG. 1, the standard encoder 10 has a first adder (subtractor) 12 that receives an input frame sequence F_n and a predicted frame sequence P′_n and forms the difference between the two, (F_n − P′_n). A discrete cosine transform (DCT) coder 14 next transforms the difference (F_n − P′_n) into the frequency domain to generate (F_n − P′_n)^T. A quantizer (Q) 16 receives the transformed difference and quantizes its values to generate (F_n − P′_n)^TQ, and a variable length coder (VLC) 18 entropy encodes the result to create the output bit stream (F_n − P′_n)^TQE.
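A minimal sketch of this forward path is given below, assuming 8x8 blocks, an orthonormal DCT and a single flat quantizer step rather than the per-frequency quantization matrices MPEG-2 actually uses. The helper names dct_matrix and encode_block are invented for illustration, and the entropy coding (VLC) stage is only noted in a comment.

    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis: T @ block @ T.T gives the 2-D DCT of an n x n block.
        k = np.arange(n)
        T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
        T[0, :] = np.sqrt(1.0 / n)
        return T

    T8 = dct_matrix(8)

    def encode_block(f_block, p_block, qstep=16):
        # (F_n - P'_n) -> DCT -> quantization for one 8x8 block.  The flat
        # quantizer step qstep is an illustrative stand-in for MPEG-2's
        # quantization matrices and intra DC handling.
        diff = f_block.astype(np.float64) - p_block.astype(np.float64)   # F_n - P'_n
        coeffs = T8 @ diff @ T8.T                                        # (F_n - P'_n)^T
        return np.rint(coeffs / qstep).astype(np.int16)                  # (F_n - P'_n)^TQ

    # A variable length coder (VLC) would then entropy encode the quantized
    # coefficients (typically after zig-zag scanning) into the output bit
    # stream (F_n - P'_n)^TQE; that step is omitted here.
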
To generate a predicted frame sequence P′_n, a local decoder loop is used (where primed symbols indicate a decoded or reconstructed signal). The predicted frame P′_n can be either a forward or a forward and backward predicted frame. The local decoder starts at an inverse quantizer (IQ) 20, which receives (F_n − P′_n)^TQ and forms a sequence of transformed difference frames (F′_n − P′_n)^T. An inverse DCT coder 22 receives the transformed difference frames (F′_n − P′_n)^T and generates the difference sequence (F′_n − P′_n), after which a second adder 24 sums the difference sequence (F′_n − P′_n) with the predicted frame P′_n so that the output of the adder is a reconstructed original frame sequence F′_n. A frame store (FS) 26 captures the recovered frame sequence F′_n and produces a delayed frame sequence F′_(n−1). A motion estimator (ME) block 28 receives the original frame sequence F_n and the delayed frame sequence F′_(n−1) from the local decoder loop and compares the two to estimate any motion or change between the frame sequences in the form of displaced blocks. The ME generates a motion vector mv_n which stores information about the displacement of blocks between F_n and F′_(n−1). A motion compensation predictor (MCP) 30 receives the motion vectors and the delayed frame sequence F′_(n−1) and generates the predicted frame P′_n, which completes the loop.
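The following sketch shows, under simplifying assumptions, what the motion estimation and motion compensation steps amount to for a single block: a full-search match of a block of F_n against the delayed reconstructed frame F′_(n−1), followed by fetching the displaced block as the prediction. The function names, the +/-7 pixel search range and the sum-of-absolute-differences criterion are illustrative choices, not details taken from the patent.

    import numpy as np

    def motion_estimate(block, ref, top, left, search=7):
        # Full-search motion estimation for one block of F_n against the delayed
        # reconstructed frame F'_(n-1) (ref).  Returns the (dy, dx) displacement
        # with the smallest sum of absolute differences (SAD).
        bh, bw = block.shape
        blk = block.astype(np.int32)
        best_sad, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                    continue
                sad = np.abs(blk - ref[y:y + bh, x:x + bw].astype(np.int32)).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv

    def motion_compensate(ref, mv, top, left, bh=8, bw=8):
        # The MCP fetches the displaced block from F'_(n-1); the collection of
        # such blocks forms the predicted frame P'_n.
        dy, dx = mv
        return ref[top + dy:top + dy + bh, left + dx:left + dx + bw]
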
The encoding process starts without any initial prediction, i.e., P′_n = 0, which permits the frame store FS 26 to develop a first stored frame F′_n = F′_(n−1). On the next input frame, a prediction P′_n is made by the MCP 30 and the encoder begins to generate encoded, quantized, transformed and motion compensated frame difference sequences.
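A toy version of this start-up behaviour is sketched below. It deliberately collapses the transform, entropy coding and per-block motion search into a flat quantizer and a zero-motion prediction, so that only the role of the initial P′_n = 0 frame and the frame store delay remains visible; the function name and qstep value are assumptions.

    import numpy as np

    def toy_encoder_loop(frames, qstep=16):
        # Simplified start-up behaviour: the first frame is coded with P'_n = 0,
        # after which the previous reconstructed frame F'_(n-1) serves as a
        # zero-motion prediction.  DCT, VLC and motion search are omitted so
        # that only the local decoder loop and frame store remain.
        prev_recon = None                      # the frame store FS 26 starts empty
        reconstructed = []
        for f in frames:
            f = f.astype(np.float64)
            pred = np.zeros_like(f) if prev_recon is None else prev_recon   # P'_n
            q = np.rint((f - pred) / qstep)    # quantized difference (transform omitted)
            recon = q * qstep + pred           # local decoder: IQ, then add P'_n -> F'_n
            reconstructed.append(recon)
            prev_recon = recon                 # frame store delay -> F'_(n-1)
        return reconstructed
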
FIG. 2 shows an MPEG-2 decoder 32. The decoder is similar to the local decoder loop of the encoder in FIG. 1. The encoded bit stream (F_n − P′_n)^TQE and the encoded motion vectors are decoded by the IVLC block 34. The motion vectors are sent directly from the IVLC block to the motion compensation prediction block (MCP) 36. The transformed and quantized image stream is then inverse quantized by the IQ block 38 and transformed back to the spatial domain by the IDCT block 40 to create the reconstructed difference image stream (F′_n − P′_n). To recover a representation of the original image stream, F′_n, the predicted frames P′_n must be added, in the summation block 42, to the recovered difference image stream. These predicted frames P′_n are formed by applying the recovered motion vectors, in the motion compensation prediction block, to the output of a frame store 44, which creates the delayed stream F′_(n−1) from the reconstructed image stream F′_n. To get the decoder started, an image stream without P′_n is decoded. This allows the frame store to obtain the F′_n image and to store it for use in subsequent predictions.
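For one 8x8 block, the decoder-side path can be sketched as follows, reusing the same orthonormal DCT basis and flat quantizer step assumed in the encoder sketch above; entropy (IVLC) decoding and motion vector recovery are omitted, and decode_block is an invented name.

    import numpy as np

    def dct_matrix(n=8):
        # Same orthonormal DCT-II basis helper as in the encoder sketch above.
        k = np.arange(n)
        T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
        T[0, :] = np.sqrt(1.0 / n)
        return T

    T8 = dct_matrix(8)

    def decode_block(q_coeffs, p_block, qstep=16):
        # One 8x8 block through the decoder of FIG. 2: inverse quantize (IQ 38),
        # inverse DCT (IDCT 40), then add the motion compensated prediction P'_n
        # (adder 42) to recover F'_n.  qstep must match the encoder's assumption.
        coeffs = q_coeffs.astype(np.float64) * qstep      # (F'_n - P'_n)^T
        diff = T8.T @ coeffs @ T8                         # (F'_n - P'_n)
        return diff + p_block                             # F'_n
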
FIG. 3 shows a prior art system 48 for encoding an image stream with spatial-scalable video coding. This system includes a spatial decimator 50 that receives the source image stream and generates a lower resolution image stream from the source image stream; a lower layer encoder 52 that receives the lower resolution image stream and encodes a bit stream for the lower layer using an encoder similar to that of FIG. 1; a spatial interpolator 54 that receives a decoded lower layer image stream from the lower layer encoder and generates a spatially interpolated image stream; and an upper layer encoder 56, similar to that of FIG. 1, which receives the source image stream and the spatially interpolated image stream to generate the upper layer image stream. Finally, a multiplexor 58 is included to combine the lower and upper layer streams into a composite stream for subsequent transmission or storage.
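A compact sketch of this two-layer arrangement is shown below. It substitutes simple average-pooling decimation, nearest-neighbour interpolation and a bare quantizer for the real decimator, interpolator and FIG. 1 encoders, and uses a factor-of-2 resolution change instead of the 1920x1080 to 720x480 example; all function names are illustrative.

    import numpy as np

    def decimate(frame, factor=2):
        # Spatial decimator 50 (sketch): average-pooling downsample.  Frame
        # dimensions are assumed to be multiples of the factor.
        h, w = frame.shape
        return (frame.astype(np.float64)
                     .reshape(h // factor, factor, w // factor, factor)
                     .mean(axis=(1, 3)))

    def interpolate(frame, factor=2):
        # Spatial interpolator 54 (sketch): nearest-neighbour upsample.
        return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

    def spatial_scalable_encode(source_frames, qstep=16):
        # Two-layer encode in the spirit of FIG. 3: the lower layer codes the
        # decimated stream and the upper layer codes the residual against the
        # spatially interpolated lower-layer reconstruction.  Plain quantization
        # stands in for the full FIG. 1 encoders, and pairing the two layers
        # stands in for the multiplexor 58.
        layers = []
        for f in source_frames:
            low = decimate(f)                              # lower resolution stream
            low_q = np.rint(low / qstep)                   # lower layer "encode"
            low_recon = low_q * qstep                      # decoded lower layer
            up_pred = interpolate(low_recon)               # spatially interpolated stream
            up_q = np.rint((f.astype(np.float64) - up_pred) / qstep)   # upper layer residual
            layers.append((low_q, up_q))
        return layers
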
The spatial decimator 50 reduces the spatial resolution of a source image stream to form the lower layer image stream. For example, if the source image stream is 1920 by 1080 luminance pixels, the spatial decimator may reduce the image to 720 by 480 luminance pixels. The lower layer encoder 52 then encodes the lower resolution image stream according to a specified standard such as MPEG-2, MPEG-4 or JPEG, depending on whether motion or still pictures are being encoded. Internally, the lower layer encoder 52 also creates a decoded image stream, and this image stream is sent to the spatial interpolator 54, which approximately reproduces the source video stream. Next, the upper layer encoder 56 encodes a bit stream based on the difference between the source image stream and the spatially interpolated lower layer decoded image stream, or the difference between the source image stream and a motion compensated predicted image stream derived within the upper layer encoder, or some weighted combination of the two. The goal is to choose either the motion compensated predicted frames or the spatially interpolated frames (or a weighted combination thereof) to produce a difference image stream that has the smallest error energy.
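That selection by error energy might look like the following sketch for a single block; the candidate weights and the function name are assumptions made for illustration rather than values prescribed by any standard.

    import numpy as np

    def choose_upper_layer_prediction(block, mc_pred, spatial_pred,
                                      weights=(0.0, 0.5, 1.0)):
        # Form candidate predictions w * spatial_pred + (1 - w) * mc_pred and
        # keep the one whose residual has the smallest error energy.
        blk = block.astype(np.float64)
        best_pred, best_energy = None, None
        for w in weights:
            pred = w * spatial_pred + (1.0 - w) * mc_pred
            energy = np.sum((blk - pred) ** 2)             # residual error energy
            if best_energy is None or energy < best_energy:
                best_pred, best_energy = pred, energy
        return best_pred
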
A spatial-scalable system, such as the one above, can offer both a standard television resolution of 720 by 480 pixels and a high definition resolution of 1920 by 1080 pixels. Also, scalability coding has other desirable charac...
