Image scaling
Computer graphics processing and selective visual display system – Computer graphics processing – Graphic manipulation
Type: Reexamination Certificate
Date filed: 2002-01-18
Date issued: 2004-11-30
Examiner: Tung, Kee M. (Department: 2676)
US Classification: C345S505000
Status: active
Patent Number: 06825857
ABSTRACT:
FIELD OF THE INVENTION
The invention relates to the field of signal processing, with applications in computer graphics, and in particular to the field of 2D image processing. Broadly speaking, computer images, whether video or still images, are normally stored as pixel intensity values, usually in the form of digital information, arranged in a succession of rows.
The invention relates particularly to the scaling of a digital image, for example to produce a different output format and/or size. It has many industrial applications, such as real-time manipulation of an on-screen image (for instance, resizing within an arbitrarily sized window) or conversion of images to different output formats. The invention is particularly suitable for applications in video, broadcasting and HDTV.
DESCRIPTION OF THE PRIOR ART
The process of scaling an image generally consists of three steps: reading or capturing the input data, performing the transformation (by sampling and any necessary corrections) and storing the resultant image.
For an analogue input, a pixel representation of the image is usually obtained by sampling a continuous input signal associated with a real object (the signal could be an analogue output of a video camera or a mathematical representation of an object) at a specific sampling rate. This allows conversion of the continuous (analogue) signal into its discrete (digital) representation. Digital input signals may also be resampled further (possibly with a different sampling rate) to change the size and/or resolution of the images which they represent.
The problems of scaling an analogue or digital image can be viewed in the broader context of signal processing theory. The sampling procedure may lead to a loss of information contained in the image. Mathematically, the minimum frequency at which the input signal must be sampled in order to retain all the frequencies contained within it is twice the highest frequency component present in the signal. This minimum sampling frequency is known as the Nyquist rate.
If the higher frequencies are undersampled (that is, the sampling frequency is too low) they will be misrepresented in the output as lower harmonics; this is known as aliasing. One way to eliminate aliasing is to increase the sampling frequency. Where this is not possible, the high frequencies that would be misrepresented must be removed from the input signal. This can be achieved by performing a Fourier Transform on the input signal, limiting the frequency spectrum to at most half of the sampling frequency that will actually be used, and performing the Inverse Fourier Transform to return to the spatial domain. However, in the case of real-time systems, performing a Fourier Transform may be computationally too time consuming.
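By way of illustration only (and not as a description of the claimed invention), the sketch below band-limits a one-dimensional signal in the frequency domain before decimating it, using NumPy; the function name and the integer `factor` parameter are assumptions made for the example.

```python
# Illustrative sketch only: band-limit a 1-D signal in the frequency
# domain, then decimate it, to avoid aliasing.
import numpy as np

def bandlimit_and_downsample(signal, factor):
    """Zero all frequency components above the new Nyquist limit,
    transform back, then keep every `factor`-th sample.
    `factor` is assumed to be a positive integer."""
    spectrum = np.fft.rfft(signal)
    # The new sampling rate is fs/factor, so the highest frequency that
    # can still be represented drops by the same factor.
    cutoff = len(spectrum) // factor
    spectrum[cutoff:] = 0
    filtered = np.fft.irfft(spectrum, n=len(signal))
    return filtered[::factor]

# Example: a signal containing a component too fast to survive 4:1
# downsampling; the band-limiting step removes it before decimation.
t = np.arange(256)
x = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 3)
y = bandlimit_and_downsample(x, 4)
```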
Another way of removing the high frequency components is through the use of digital filters in the spatial domain. The term “digital filter” refers to a computational process or algorithm by which a digital signal or sequence of numbers (acting as an input) is transformed into a second sequence of numbers termed the output digital signal. There are two broad classes of such filters: Infinite Impulse Response (IIR) and Finite Impulse Response (FIR) filters. Both are well known.
In digital image scaling the general purpose of an FIR filter is to work out a weighted sum of contributions from source pixels to a target pixel.
The output of an FIR filter can be defined by the convolution of the filtering function (F) with the signal intensity function (I):
$$\xi(x) \;=\; \sum_{t=-Fw_{1/2}}^{Fw_{1/2}} I(x-t)\cdot F(t),$$
where $Fw_{1/2}$ represents half of the filter width expressed in pixel units. A convention may be adopted in which the filter is centred on the midpoint of the central pixel of its support range (the pixels it filters); the total filter width Fw is therefore given as $2\cdot Fw_{1/2}+1$ (pixels). However, other conventions are equally valid.
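As a rough illustration of the weighted sum above, the following sketch applies a one-dimensional FIR filter to a row of pixel intensities. The function name, the example tap weights and the border-clamping policy are assumptions made for the example, not details taken from the patent.

```python
# Minimal sketch of the FIR weighted sum: each output value xi(x) is a
# weighted combination of source pixels centred on x.
import numpy as np

def fir_filter_row(I, F):
    """Apply a symmetric FIR filter F (odd length 2*Fw_half + 1) to a
    row of pixel intensities I, clamping indices at the image borders."""
    fw_half = len(F) // 2
    out = np.empty_like(I, dtype=float)
    for x in range(len(I)):
        acc = 0.0
        for t in range(-fw_half, fw_half + 1):
            src = min(max(x - t, 0), len(I) - 1)  # clamp to valid pixels
            acc += I[src] * F[t + fw_half]        # I(x - t) * F(t)
        out[x] = acc
    return out

# Example: a 5-tap low-pass filter whose weights sum to 1.
row = np.array([10, 10, 200, 10, 10, 10, 10], dtype=float)
taps = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
print(fir_filter_row(row, taps))
```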
Digital image scaling may be defined as the (re)sampling of an input digital signal representing a digital image, possibly using a different sampling frequency from the original to give a different resolution. The target image may be smaller or larger than the original (or source) image and/or have a different aspect ratio. Downscaling (reduction of image size) gives a target smaller than the source image; upscaling (increase of image size) gives a larger one.
There are many scaling methods available. The simplest and fastest scaling method is probably the pixel decimation/replication technique. Here, some of the original sampled pixels are simply omitted for downscaling and replicated for upscaling. The image quality produced is, however, often poor. Additional measures aimed at improving the image quality, such as replicating original samples prior to resampling, are often employed (U.S. Pat. No. 5,825,367). A possible problem with this approach is that it not only ignores any frequency considerations, which leads to the presence of aliasing, but also introduces other artifacts (image distortions) such as unwanted, often jagged lines and/or large blocks of equally coloured pixels in the image.
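The following sketch shows the decimation/replication idea in its simplest one-dimensional form; the function name and example data are illustrative assumptions rather than material from the cited patents.

```python
# Rough sketch of decimation/replication: each target pixel simply takes
# the nearest source pixel. Fast, but it ignores frequency content and so
# introduces aliasing and blocky artifacts.
import numpy as np

def scale_row_nearest(src_row, target_width):
    src_width = len(src_row)
    # Map each target index back to a source index (drops samples when
    # downscaling, repeats them when upscaling).
    indices = (np.arange(target_width) * src_width) // target_width
    return src_row[indices]

row = np.array([0, 50, 100, 150, 200, 250])
print(scale_row_nearest(row, 3))   # downscaling: samples are dropped
print(scale_row_nearest(row, 12))  # upscaling: samples are repeated
```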
Partial improvement may be achieved through interpolation. In this technique, broadly speaking, rather than replicating source pixels to arrive at the additional pixel values during upscaling, there is interpolation between the values of two or more source pixels (for example using higher-order polynomial interpolation). While aliasing artifacts are still likely to be present, the overall image quality is improved and the image is smoothed. Such smoothing may, however, lead to a loss of contrast, and the interpolated images often look blurry (U.S. Pat. No. 5,793,379). There are a number of possible refinements to interpolation in its simplest one-dimensional form; probably the most advanced of these is three-dimensional interpolation as described, for example, in U.S. Pat. No. 5,384,904.
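For comparison with the replication sketch above, a minimal sketch of one-dimensional linear interpolation is given below; the function name and example values are again assumptions made for illustration.

```python
# Sketch of simple linear interpolation between neighbouring source
# pixels. The output is smoother than replication, but high frequencies
# are still not treated, so aliasing can remain.
import numpy as np

def scale_row_linear(src_row, target_width):
    src_width = len(src_row)
    # Fractional source coordinate for each target pixel.
    coords = np.arange(target_width) * (src_width - 1) / max(target_width - 1, 1)
    left = np.floor(coords).astype(int)
    right = np.minimum(left + 1, src_width - 1)
    frac = coords - left
    return (1.0 - frac) * src_row[left] + frac * src_row[right]

row = np.array([0.0, 100.0, 0.0])
print(scale_row_linear(row, 5))  # [0, 50, 100, 50, 0]
```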
All the above approaches suffer from the same basic drawback: they do not provide high frequency adjustments and thus inevitably lead to the introduction of aliasing (and therefore artifacts).
As explained in the preceding paragraphs, the application of FIR filters removes this problem to some extent. However, although not always as computationally expensive as the Fourier Transform, FIR filters still pose serious challenges for use in real-time environments. When implemented in hardware, FIR filters tend to occupy a large area of silicon in order to ensure that a sufficiently large number of sample points, or filter taps, is taken into account in the computation. The FIR filter computes the value of the convolution of the filtering and the image intensity functions. The larger the number of sample points, the sharper the frequency cut-off of the filter and the smaller the spectrum of offending high frequencies passing through it. The number of points at which the convolution has to be evaluated (i.e. the number of filter taps) increases with the scaling ratio. Thus there is a threshold value above which the number of input pixels required for filter support exceeds the number of taps available in silicon. To allow for higher scaling ratios, some method of limiting the number of input points, or of simulating wider filters using narrower ones, must be implemented. An example of such an implementation, using decimating filters, can be found in U.S. Pat. No. 5,550,764. Unfortunately, as with all decimation, some of the input information is discarded and the quality of the output is thus degraded.
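The following back-of-the-envelope sketch illustrates how the required filter support can outgrow a fixed number of hardware taps as the downscaling ratio rises; the specific tap counts and the growth rule are illustrative assumptions, not figures taken from the patent or from U.S. Pat. No. 5,550,764.

```python
# Illustration of the tap-count problem: assume the anti-alias filter
# needs roughly `base_taps` source pixels per output pixel at 1:1, and
# that it must widen in proportion to the downscaling ratio to keep its
# cut-off below the new Nyquist limit.
def taps_required(base_taps, ratio):
    return int(round(base_taps * max(ratio, 1.0)))

hardware_taps = 8  # hypothetical number of taps available in silicon
for ratio in (1.0, 2.0, 4.0, 8.0):
    needed = taps_required(base_taps=5, ratio=ratio)
    fits = "fits" if needed <= hardware_taps else "exceeds hardware taps"
    print(f"downscale {ratio:>3}:1 -> {needed:2d} taps ({fits})")
```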
Software implementations do not exhibit these constraints, but, owing to the potentially large amount of input data required to generate a single output pixel, their performance sometimes renders them unsuitable for real-time processing.
The present invention aims to overcome or mitigate at least some of the disadvantages inherent in the prior art.
According to a first aspect of the invention there is provided a parallel processing method and system for scaling an image.
Clearspeed Technology Limited
Potomac Patent Group PLLC
Richer Aaron M.