Up-sampling decimated color plane data

Image analysis – Image transformation or preprocessing – Changing the image coordinates

Reexamination Certificate


Details

Classification codes: C382S264000, C382S267000, C345S606000, C345S616000

Type: Reexamination Certificate (active)

Patent number: 06580837


FIELD OF THE INVENTION
The invention generally relates to discrete sampled image data, and more particularly to up-sampling discrete sampled checkerboard decimated image data, where an interpolated pixel element (PEL) is determined according to neighboring discrete image samples.
BACKGROUND
Processing digitally captured input images is more accurate when separate color sensors are used to measure contributions of base colors used to represent color image data. Typical base colors include Red, Green, Blue (RGB), Cyan, Magenta, Yellow (CYM), or other color bases that can be combined to represent visible colors. To capture a particular input image, sensors attuned to particular base colors are used to identify each base color's contribution to the image. In a digital context, the sensors break the input image into discrete pixel elements (PELs), and the term “resolution” indicates the number of pixels available to a sensor for receiving input image color data. Common resolutions are 640×480, 800×600, 1600×1200, etc. If separate color sensors are available, then the entire resolution of one sensor is available to receive a particular color plane's input color data, i.e., that color's contribution to the input image.
Unfortunately, multiple sensors increase cost and complexity. The complexity arises from having to precisely split incoming light into the various base colors, so that the same pixel element of each sensor receives its appropriate portion of the incoming light defining the input image. (If the input light is incorrectly split, color distortion and mis-registration blur result.) In addition, even if light is precisely split, the splitter unavoidably absorbs input light and reduces the overall light intensity reaching each of the sensors. Because the sensitivity of the individual sensors does not change with their application, the available light must be increased by a factor equal to the number of sensors used, or a loss of light sensitivity must be accepted in the implementation. Additionally, more light is lost to the subsequent color filters used to key a sensor to one of the base colors. Thus, to minimize these losses, large, heavy, optically precise and pure (e.g., more expensive) lenses are required to maximize the light available to each sensor. The higher costs can make the image capture device prohibitively expensive, and the additional weight can narrow the range of applications for such an implementation.
To reduce cost, complexity, and weight, and also to better utilize the light available to sensors through inexpensive optics, capturing devices are instead being developed with a single sensor. With a single sensor, no light splitter is required, so more light reaches the sensor. Since there is only one sensor, the available sensor PELs must be assigned to receive one of the base colors, normally according to a regular pattern that divides the PELs among the desired color planes. Thus, for each base color, there will be gaps in the received color data at sensor pixels that have been assigned to measure a different base color. To compensate for these gaps, received image data is interpolated, or up-sampled, to estimate the missing image intensity values.
One well-known regular-pattern decimation method is the Bayer technique (U.S. Pat. No. 3,971,065). This technique results in one color plane arranged in a checkerboard pattern, using 50% of the PELs, with the other two planes linearly decimated, using 25% of the PELs each. In U.S. Pat. No. 4,630,307, Cok teaches a pattern-recognition method for up-sampling the resultant data. These patents assume an RGB color scheme and teach sampling an image with the checkerboard PELs dedicated to receiving green color intensities, while red and blue are assigned the remaining linearly sampled sensor PELs (e.g., the sensor pixels define a grid having receptors keyed as GRGB . . . both horizontally and vertically from the leading G; see FIG. 6 of U.S. Pat. No. 3,971,065). The ratio accentuating green values corresponds to research indicating the human perception system is more attuned to changes in green values, and therefore a disproportionate amount of the pixel data is directed toward receiving green image values.
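The checkerboard/linear decimation described above can be sketched as a set of sampling masks. The particular G/R/B placement below is one common arrangement chosen purely for illustration; actual Bayer sensors differ in which color leads the grid.

```python
import numpy as np

def bayer_masks(height, width):
    """Boolean sampling masks for a Bayer-style CFA: green on a
    checkerboard (50% of the PELs), red and blue linearly decimated
    (25% each).  The exact G/R/B placement here is an assumption
    made for illustration."""
    rows = np.arange(height)[:, None]
    cols = np.arange(width)[None, :]
    green = (rows + cols) % 2 == 0           # checkerboard pattern
    red = (rows % 2 == 0) & (cols % 2 == 1)  # even rows, odd columns
    blue = (rows % 2 == 1) & (cols % 2 == 0) # odd rows, even columns
    return red, green, blue

red, green, blue = bayer_masks(4, 4)
assert green.sum() == 8 and red.sum() == 4 and blue.sum() == 4
assert np.all(red | green | blue)  # every PEL feeds exactly one plane
```

Each color plane then has gaps wherever the sensor PEL was assigned to a different plane, and those gaps are what the up-sampling step must fill.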
To determine missing pixel values, Cok teaches assigning an unknown pixel T a value derived from contributions from values in an immediately surrounding pixel neighborhood. For example, consider a 3×3 pixel neighborhood from an image sensor's green components, in which the unknown pixel T is surrounded by four known green samples B1-B4.
In Cok, the unknown pixel T is assigned a value expressed as a combination of the known pixels B1-B4. The exact contribution of each B is decided by averaging B1-B4, and then comparing the average against each B value to determine a binary pattern of high-low values based on the comparison. The pattern is looked up in a table that maintains a concordance between particular patterns and classifies the pattern found as either a stripe, corner, or edge. The contributions to be ascribed to each neighborhood pixel are then defined by one of three specific formulas for computing T.
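The average-threshold-classify procedure can be sketched as follows. The text does not give Cok's actual concordance table or the three formulas, so the classes and per-class formulas below are hypothetical placeholders; only the overall structure (average, threshold into a binary pattern, look up a class, apply a class-specific formula) follows the description above.

```python
import numpy as np

def cok_style_interpolate(b):
    """Estimate the unknown pixel T from its four known green
    neighbors b = (up, right, down, left).  The pattern classes and
    per-class formulas are hypothetical stand-ins, not the patented
    ones."""
    b = np.asarray(b, dtype=float)
    avg = b.mean()
    pattern = tuple(int(v > avg) for v in b)  # 4-bit high/low pattern
    highs = sum(pattern)
    if pattern in ((1, 0, 1, 0), (0, 1, 0, 1)):
        cls = "stripe"    # opposite neighbors agree against the other pair
    elif highs in (1, 3):
        cls = "corner"    # a lone outlier neighbor
    else:
        cls = "edge"      # adjacent highs, or a flat neighborhood
    if cls == "stripe":
        t = np.median(b)  # placeholder formula
    elif cls == "corner":
        majority = b[b > avg] if highs == 3 else b[b <= avg]
        t = majority.mean()  # placeholder: follow the majority side
    else:
        t = avg              # placeholder: plain average
    return float(t), cls

t, cls = cok_style_interpolate([20, 0, 20, 0])  # a vertical "stripe"
```

For the neighborhood (20, 0, 0, 0) the lone high value is treated as a corner outlier, and T follows the three low neighbors.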
A significant problem with this and related techniques, however, is that T is defined with respect to a very limited context area. Consequently, when analyzing ambiguous or high-detail image data, the guessed value may be sub-optimal. As the resolution of today's sensors increases, the differences between adjacent PELs change. Combined with consumers' expectations of sharper images and the falling cost of applying complex algorithms, this means the technique described by Cok is no longer satisfactory.
SUMMARY
In one embodiment of the invention, a method is disclosed for determining a value for an unknown pixel T. An immediate neighborhood of pixel values and an extended neighborhood of pixel values are selected. An average pixel value of the immediate neighborhood is computed. The average pixel value is compared to the immediate and extended neighborhood pixel values, and a binary pattern is determined based on the comparison. The binary pattern is reduced, and a set of coefficients is identified for the reduced binary pattern.
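The steps just summarized can be sketched as follows. The summary does not specify how the binary pattern is reduced or what the coefficient table contains, so the rotation-based reduction and the uniform fallback coefficients below are assumptions made purely for illustration.

```python
import numpy as np

def upsample_pel(immediate, extended, coeff_table):
    """Sketch of the summarized method for the unknown pixel T.
    `immediate` holds the nearest known samples, `extended` the
    farther ones; the rotation-based pattern reduction and the
    uniform fallback coefficients are illustrative assumptions."""
    samples = np.asarray(list(immediate) + list(extended), dtype=float)
    avg = np.asarray(immediate, dtype=float).mean()  # average of the immediate neighborhood
    pattern = tuple(int(v > avg) for v in samples)   # compare avg to both neighborhoods
    reduced = min(pattern[i:] + pattern[:i]          # reduce the pattern (assumed: rotation)
                  for i in range(len(pattern)))
    coeffs = coeff_table.get(reduced,                # coefficients for the reduced pattern
                             np.full(len(samples), 1.0 / len(samples)))
    return float(np.dot(coeffs, samples))            # T as a weighted combination
```

With an empty table every pattern falls back to a plain average, so `upsample_pel([0, 0, 0, 0], [8, 8, 8, 8], {})` yields 4.0; a populated table would weight the immediate and extended samples differently per pattern class.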


REFERENCES:
patent: 4630307 (1986-12-01), Cok
patent: 4843631 (1989-06-01), Steinpichler et al.
patent: 5040064 (1991-08-01), Cok
patent: 5187755 (1993-02-01), Aragaki
patent: 5339172 (1994-08-01), Robinson
patent: 5384643 (1995-01-01), Inga et al.
patent: 5420693 (1995-05-01), Horiuchi et al.
patent: 5739841 (1998-04-01), Ng et al.
patent: 5799112 (1998-08-01), de Queiroz et al.
patent: 5859920 (1999-01-01), Daly et al.
patent: 5875268 (1999-02-01), Miyake
patent: 5990950 (1999-11-01), Addison
patent: 6125212 (2000-09-01), Kresch et al.
patent: 6332030 (2001-12-01), Manjunath et al.
patent: 6343159 (2002-01-01), Cuciurean-Zapan et al.

