Method of improved contour coding of image sequences, and...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Details

Classification: C375S240150
Type: Reexamination Certificate
Status: active
Patent number: 06678325

ABSTRACT:

BACKGROUND INFORMATION
A context-based arithmetic encoding (CAE) for binary shape coding of I-VOPs (intra-frame coded), P-VOPs (unilaterally motion-compensated predicted) and B-VOPs (bidirectionally interpolated), where VOP stands for video object plane, is used in the MPEG-4 verification model version 8.0 (VM 8.0), ISO/IEC JTC1/SC29/WG11, MPEG97/N1796.
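For illustration only, the following Python sketch shows the general principle behind context-based arithmetic encoding of a binary alpha (shape) map: a context index is formed from already-coded neighboring shape pixels and then selects the probability handed to a binary arithmetic coder. The four-pixel causal template, the probability table prob_zero and the coder object are placeholder assumptions; the normative templates and tables are those defined in VM 8.0 and are not reproduced here.

# Illustrative sketch of context-based arithmetic encoding (CAE) of a
# binary shape map. The neighbor template and probability table are
# placeholders, not the normative MPEG-4 VM 8.0 definitions.

def context_index(alpha, x, y):
    """Build a context index from already-coded neighbors of pixel (x, y).
    Pixels outside the alpha map are treated as transparent (0)."""
    template = [(-1, 0), (-1, -1), (0, -1), (1, -1)]   # hypothetical causal template
    ctx = 0
    for dx, dy in template:
        nx, ny = x + dx, y + dy
        inside = 0 <= ny < len(alpha) and 0 <= nx < len(alpha[0])
        ctx = (ctx << 1) | (alpha[ny][nx] if inside else 0)
    return ctx

def encode_shape(alpha, coder, prob_zero):
    """Encode every binary alpha pixel with a context-dependent probability.
    `coder` is assumed to expose encode_bit(bit, p_zero); `prob_zero` maps a
    context index to the probability that the pixel is 0 (both are assumptions)."""
    for y in range(len(alpha)):
        for x in range(len(alpha[0])):
            ctx = context_index(alpha, x, y)
            coder.encode_bit(alpha[y][x], prob_zero[ctx])

# Example: context of pixel (2, 1) in a small alpha map.
alpha = [[0, 0, 1, 1],
         [0, 1, 1, 1],
         [0, 1, 1, 0]]
print(context_index(alpha, 2, 1))   # -> 11 (binary 1011 from left, upper-left, up, upper-right)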
FIG. 1 shows an example of a GOF (group of frames) structure, as in the MPEG-1 standard, for an image sequence composed of I-, P- and B-VOPs. VOPs coded exclusively with the intra-frame technique are labeled I, unilaterally motion-compensated predicted VOPs are labeled P, and the bidirectionally interpolated VOPs between them are labeled B. The example according to FIG. 1 shows four P-VOPs per GOF and two B-VOPs between successive I- and/or P-VOPs. The length of the GOF is not fixed; it is determined instead by the number of the different image types. One essential feature of the CAE algorithm is that the chronologically following VOP (the backward reference VOP) is always used as the reference in inter-CAE shape coding of a current B-VOP, i.e. both as the binary shape reference and as the shape mode reference. However, since arbitrarily shaped B-VOPs sometimes have access only to image content of the chronologically preceding VOP (the forward reference VOP), the binary shape of such a B-VOP must be coded in the same manner as that of an I-VOP under the current CAE algorithm. This leads to a considerable degradation of coding efficiency for such B-VOPs.
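To make the baseline reference structure concrete, the following sketch (an illustration built on the description above, not code from the patent) locates the forward and backward reference VOPs of a B-VOP within a display-ordered GOF such as the one in FIG. 1 and mimics the baseline rule: inter-CAE shape coding of a B-VOP always uses the backward reference, and the shape falls back to intra-style coding when that reference is not accessible.

# Illustrative reference selection under the baseline CAE rule described
# above. VOP types and accessibility flags are simplified placeholders.

def references_for_b_vop(gof_types, b_index):
    """Return indices of the nearest preceding and following I/P VOPs
    (forward and backward reference) for the B-VOP at `b_index`."""
    forward = next((i for i in range(b_index - 1, -1, -1)
                    if gof_types[i] in ("I", "P")), None)
    backward = next((i for i in range(b_index + 1, len(gof_types))
                     if gof_types[i] in ("I", "P")), None)
    return forward, backward

def baseline_shape_mode(backward_ref_accessible):
    """Baseline CAE: B-VOP shape is inter-coded against the backward
    reference only; otherwise it must be coded like an I-VOP shape."""
    return "inter_CAE_backward" if backward_ref_accessible else "intra_CAE"

# GOF as described for FIG. 1: four P-VOPs per GOF, two B-VOPs in between.
gof = list("IBBPBBPBBPBBP")
print(references_for_b_vop(gof, 4))    # -> (3, 6)
print(baseline_shape_mode(False))      # -> 'intra_CAE' (the inefficient case)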
SUMMARY OF THE INVENTION
According to the present invention, coding efficiency can be improved in particular for shape coding in bidirectionally interpolated VOPs (B-VOPs). The method according to the present invention provides time-adaptive shape coding for B-VOPs: during context-based shape coding, a B-VOP can access either a chronologically preceding I- or P-VOP (forward reference VOP) or a chronologically following I- or P-VOP (backward reference VOP) as the reference for shape coding.
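A minimal sketch of the time-adaptive idea follows. Only the ability to use either the forward or the backward reference is taken from the text; the decision rule shown (prefer the backward reference when both are usable, otherwise take whichever is accessible) is an assumption for illustration, since the excerpt does not state how the encoder chooses between the two references.

# Illustrative time-adaptive reference selection for B-VOP shape coding.
# The preference order is a placeholder assumption, not the patented rule.

def adaptive_shape_reference(forward_accessible, backward_accessible):
    """Pick the reference VOP used for inter-CAE shape coding of a B-VOP."""
    if backward_accessible:
        return "backward"      # behaves like the baseline when possible
    if forward_accessible:
        return "forward"       # the case the baseline forced into intra coding
    return "intra"             # no usable reference at all

print(adaptive_shape_reference(forward_accessible=True, backward_accessible=False))
# -> 'forward': the B-VOP shape can now be inter-coded instead of intra-coded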
The method according to the present invention requires no changes in syntax, and it can therefore be used easily with existing standards such as the MPEG-4 standard.


REFERENCES:
patent: 5317397 (1994-05-01), Odaka et al.
patent: 5642166 (1997-06-01), Shin et al.
patent: 5978510 (1999-11-01), Chung et al.
patent: 6026195 (2000-02-01), Eifrig et al.
patent: 6057884 (2000-05-01), Chen et al.
patent: 6075576 (2000-06-01), Tan et al.
patent: 6148026 (2000-11-01), Puri et al.
patent: 6205260 (2001-03-01), Crinon et al.
patent: 6404813 (2002-06-01), Haskell et al.
patent: 0 577 337 (1994-01-01), None
patent: 0 880 286 (1998-11-01), None
patent: WO 97 29595 (1997-08-01), None
Ferman et al., “Motion and shape signatures for object-based indexing of MPEG-4 compressed video”, ICASSP-97, vol. 4, pp. 2601-2604, Apr. 1997.*
Brady, N., et al., "Context-Based Arithmetic Encoding of 2D Shape Sequences", Proceedings, International Conference on Image Processing, Oct. 26, 1997, pp. 29-32.
