Apparatus for integrated cascade encoding

Coded data generation or conversion – Digital code to digital code converters – Adaptive coding

Details

US classifications: C341S050000, C375S240140
Type: Reexamination Certificate
Status: active
Patent number: 06788227

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to digital encoding of images and, more particularly, to encoding, with compression, of sequences of images to be reproduced in rapid succession to produce the illusion of motion, such as for digital transmission of motion pictures or animated graphics.
2. Description of the Prior Art
For purposes of communication, digital signalling is currently much preferred to analog signalling in most environments and applications. Consequently, communications infrastructure is rapidly being converted to carry digital signals. The reasons for this strong preference include greatly increased bandwidth and transmission capacity, decreased susceptibility to noise, and the possibility of strong error correction to compensate for transmission losses. Accordingly, it is now possible to transmit relatively massive amounts of data economically and in short periods of time.
One such application which is rapidly becoming familiar and a source of substantial economic interest is the digital transmission of pictorial images and graphics. In particular, the transmission of images at high data rates sufficient to achieve the illusion of motion such as is encountered in animated graphics and motion pictures is now commercially feasible and coming into relatively widespread use. However, to do so, a sequence of images must be presented at rates above the so-called flicker fusion frequency of human visual perception, generally accepted as being about twenty-four to thirty images per second.
Further, digital image data must contain a very large amount of information to achieve good image quality and fidelity. A single image may contain several million image points or “pixels”, each of which must be encoded to represent fine gradations of both color and intensity. Thus it can be seen that even a single, very short sequence of a digitized motion picture could require the equivalent of billions of bytes of data to be transmitted and/or stored.
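To put those figures in perspective, the following back-of-the-envelope calculation estimates the raw, uncompressed data volume of a short image sequence. The resolution, bit depth, frame rate and clip length are illustrative assumptions, not values taken from this patent.

```python
# Back-of-the-envelope estimate of uncompressed video data volume.
# The resolution, bit depth, frame rate, and clip length below are
# illustrative assumptions, not figures taken from the patent.

width, height = 1920, 1080        # pixels per frame (assumed HD resolution)
bits_per_pixel = 24               # 8 bits each for three color components
frames_per_second = 30            # above the flicker-fusion threshold
seconds = 120                     # a two-minute clip

bytes_per_frame = width * height * bits_per_pixel // 8
total_bytes = bytes_per_frame * frames_per_second * seconds

print(f"per frame  : {bytes_per_frame / 1e6:.1f} MB")
print(f"two minutes: {total_bytes / 1e9:.1f} GB uncompressed")
```

Under these assumptions a single frame occupies roughly 6 MB and two minutes of video exceeds 20 GB, which is why compression is indispensable.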
In order to accommodate such massive amounts of information with commercially available and sufficiently inexpensive hardware to be used by persons desiring such information or the general public at large and to efficiently and economically utilize the communication infrastructure, it is necessary to reduce the volume of data by compression. Several standards for image data compression have been proposed and widely adopted. Among the more well-accepted standards for compression of image data are the JPEG (Joint Photographic Experts Group) standard and the MPEG (Motion Picture Experts Group) standard, both of which are known in several versions at the present time.
The JPEG standard allows optimal resolution and fidelity to be maintained for any arbitrary degree of data compression, and compression by a factor of twenty or more often produces no generally perceptible loss of image quality or fidelity. The MPEG standard is similar to the JPEG standard in many respects but also allows redundancy of portions of the image from frame to frame to be exploited for additional data compression. This process is enhanced by applying different encoding and decoding techniques to independent frames (I-frames), which are compressed without reference to data in other temporally proximate frames; predicted frames (P-frames), which are compressed in terms of changes from a preceding I or P frame; and bidirectionally interpolated frames (B-frames), which are interpolated between preceding and following I or P frames.
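As a rough illustration of that frame-type structure, the sketch below assigns MPEG-style I, P and B labels to a sequence of frames. The group-of-pictures length and B-frame spacing are assumed values chosen only for illustration; real encoders select them adaptively, and this is not the coding method claimed here.

```python
# Illustrative sketch of assigning MPEG-style frame types within a group
# of pictures (GOP).  The GOP length and B-frame spacing are assumptions
# chosen for illustration; real encoders choose them adaptively.

def frame_types(num_frames, gop_length=12, b_frames=2):
    """Return a list of 'I', 'P', 'B' labels in display order."""
    types = []
    for n in range(num_frames):
        pos = n % gop_length
        if pos == 0:
            types.append('I')            # coded independently
        elif pos % (b_frames + 1) == 0:
            types.append('P')            # predicted from preceding I/P frame
        else:
            types.append('B')            # interpolated between I/P anchors
    return types

print(''.join(frame_types(24)))
# -> 'IBBPBBPBBPBBIBBPBBPBBPBB' (two 12-frame GOPs)
```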
The high degree of compression with minimal loss of fidelity is enhanced, in accordance with these and other standards, by providing flexibility of coding in dependence upon image content. A powerful concept in this process is entropy coding, so called because, in a manner somewhat parallel to the concept of entropy in the more familiar thermodynamic context, it uses a measure of the disorder within the image as a metric for assigning particular codes to particular image values. The assignment rests on the well-founded assumption that less common values carry greater amounts of information, justifying greater numbers of bits, while more common image values carry relatively less information and can (and should) be represented by smaller numbers of bits in the coded data. However, to determine how the image values in a given image (or portion thereof, since coding tables can be changed within an image) should be encoded, statistics concerning those image values must be accumulated before codes can be analyzed and efficient codes assigned to the respective values. In other words, a substantial portion of the encoding process must be completed and the results analyzed before it can be known which codes can be most efficiently assigned to the image values representing regions within the image.
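The dependence of code assignment on accumulated statistics can be illustrated with a minimal Huffman-style sketch: value statistics are gathered in a first pass, and only then are shorter codes assigned to the more frequent values. The actual JPEG and MPEG standards use standardized variable-length or arithmetic codes rather than this ad hoc construction; the sketch only shows why the statistics must be known before coding can proceed.

```python
import heapq
from collections import Counter

# Minimal Huffman-style sketch: accumulate value statistics first, then
# assign shorter codes to more frequent values.  Actual JPEG/MPEG entropy
# coding uses standardized variable-length or arithmetic codes; this only
# illustrates why statistics must be known before codes are assigned.

def build_code_table(values):
    freq = Counter(values)                       # pass 1: gather statistics
    heap = [[count, i, {sym: ''}] for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)                 # two least frequent groups
        hi = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in lo[2].items()}
        merged.update({s: '1' + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak, merged])
        tiebreak += 1
    return heap[0][2]                            # symbol -> bit string

data = [0] * 50 + [1] * 20 + [2] * 20 + [3] * 10   # skewed value distribution
table = build_code_table(data)
print(table)   # the most common value receives the shortest code
```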
(As a matter of terminology, it will be understood by those skilled in the art that “pixel values” such as the luminance and chrominance of individual pixels are transformed in groups, called macroblocks, by an orthogonal transform process such as a discrete cosine transformation to yield values which represent the image in terms of spatial frequency and which are referred to herein as “image values”. This processing yields image values which may have a reduced number of significant bits and which may often be small or zero, so that bits can be removed by truncation without perceptible reduction in image fidelity, since human visual perception is relatively less sensitive to high spatial frequencies. At the same time, image values representing low spatial frequencies, to which the human eye is also somewhat insensitive, may be more common but can be represented by fewer bits through entropy encoding. However, the particular preprocessing is not important beyond the fact that substantial preprocessing must be performed and the results analyzed before the details of a relatively optimal encoding process can be determined.)
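A minimal sketch of that preprocessing, assuming a plain 8x8 block, a 2-D discrete cosine transform and a flat quantization step, is given below; the quantization step of 16 and the test block are illustrative assumptions, not values from any standard table.

```python
import numpy as np

# Sketch of the preprocessing described above: an 8x8 block of pixel
# values is transformed to spatial-frequency "image values" with a 2-D
# DCT and then quantized, so that many high-frequency coefficients become
# zero and can be dropped.  The flat quantization step of 16 is an
# assumption for illustration; real coders use standardized tables.

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def encode_block(block, q_step=16):
    d = dct_matrix()
    coeffs = d @ (block - 128.0) @ d.T              # level shift, then 2-D DCT
    return np.round(coeffs / q_step).astype(int)    # quantize (lossy step)

rng = np.random.default_rng(0)
# A smooth gradient block: mostly low-frequency content.
block = np.tile(np.linspace(60, 200, N), (N, 1)) + rng.normal(0, 2, (N, N))
quantized = encode_block(block)
print(quantized)
print("nonzero coefficients:", np.count_nonzero(quantized))
```

For a smooth block like this one, only a handful of low-frequency coefficients survive quantization, which is what makes the subsequent entropy coding effective.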
In the past, it has been the practice to perform encoding in a pipelined fashion, with each discrete processing step being performed on the results of a preceding step. However, this approach may require a process to be performed for an entire frame before a following process can be started, and it thus introduces latency in the data which may cause synchronization problems. Encoders adequate for television data rates (which are of lower resolution than may be desired) and using pipelined architectures have been developed and are currently available, but they exhibit such latency and may cause such synchronization problems, particularly where the encoding requires extra bits to be used or quantization table(s) to be changed, both of which increase the number of bits which must be transmitted. However, conditions such as extra bits and frequent changes of quantization tables are more likely to occur when increased image quality, fidelity and/or resolution is required.
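The latency penalty of a strictly per-frame pipeline can be illustrated with a toy model in which each stage must finish an entire frame before the next stage may begin. The stage names and per-frame processing times below are assumptions chosen only for illustration.

```python
# Toy illustration of latency in a per-frame pipeline: each stage must
# finish an entire frame before the next stage may start on it, so the
# first compressed frame only emerges after every stage has run once.
# Stage names and per-frame processing times are illustrative assumptions.

stages = [("transform", 8.0), ("statistics", 4.0), ("entropy code", 6.0)]  # ms per frame

def first_frame_latency(stages):
    # Frame 1 passes through the stages strictly one after another.
    return sum(t for _, t in stages)

def steady_state_rate(stages):
    # Once the pipeline is full, throughput is limited by the slowest stage.
    return 1000.0 / max(t for _, t in stages)   # frames per second

print(f"latency of first frame : {first_frame_latency(stages):.1f} ms")
print(f"steady-state throughput: {steady_state_rate(stages):.1f} frames/s")
```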
Preprocessing of the image values is thus often used to predict encoding options for optimized picture quality. Since encoder output provides the most accurate information concerning the image content, encoders themselves can be used as preprocessors, and cascade encoding using a plurality of encoders arranged in stages has been used to improve picture quality. The silicon/chip size, circuit power and evenness of picture quality depend on the amount of information and output statistics that are provided to the second encoding stage; in such an environment, statistics must be extracted and collected from the first stage encoder/preprocessor, converted to the host interface data format and fed to the second stage encoder. Such a system is often referred to as a two-pass system and supports the use of image value statistics in choosing encoding options for the same frame (as distinct from a so-called one-pass system, which uses statistics from one frame for coding of a following frame, for which they may not be optimal or even appropriate, and which thus cannot optimize encoding of any frame or field based on its own statistics).
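A schematic sketch of such a two-pass arrangement is given below: a first-stage pass gathers statistics for a frame, and those statistics are then used to choose coding options for the same frame in the second stage. The function names, bit-budget figure and quantization rule are all hypothetical and serve only to illustrate the data flow, not the apparatus claimed here.

```python
# Schematic sketch of a two-pass (cascade) arrangement: a first-stage
# encoder is run to gather statistics for a frame, and those statistics
# are used to pick coding options for the *same* frame in the second
# stage.  All names, the bit budget, and the quantization rule are
# assumptions for illustration only.

def first_pass(frame):
    """Stand-in for the first-stage encoder/preprocessor: return per-frame
    statistics (here just a crude 'complexity' measure)."""
    return {"complexity": sum(abs(v) for v in frame) / len(frame)}

def second_pass(frame, stats, bit_budget=1000):
    """Stand-in for the second-stage encoder: choose a quantization step
    for this frame from its own statistics, then 'encode' it."""
    q_step = max(1, round(stats["complexity"] * len(frame) / bit_budget))
    return [round(v / q_step) for v in frame], q_step

frames = [
    [10, 12, 11, 13] * 8,            # low-complexity frame
    [5, 250, -180, 90] * 8,          # high-complexity frame
]
for frame in frames:
    stats = first_pass(frame)                 # pass 1: statistics only
    coded, q = second_pass(frame, stats)      # pass 2: encode the same frame
    print(f"complexity={stats['complexity']:.1f}  chosen q_step={q}")
```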
