Colorimetric imaging system

Image analysis – Color image processing
Reexamination Certificate
Filed: 2000-10-13
Issued: 2004-07-06
Examiner: Johns, Andrew W. (Department: 2621)
US Classes: C382S260000, C359S578000
Status: active
Patent number: 6,760,475
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention pertains to electronic color image acquisition, as well as digital image storage, retrieval and transmission.
2. Description of the Related Art
Digital images are normally captured in RGB color space, meaning that each pixel or point in the image is characterized by three values indicating the amount of red, green, and blue light present at that point. Typically, the filters used (either in a mosaic or in a sequential color system) partition the light energy into three bands according to wavelength, so the shortest wavelengths are recorded as blue, the middle range as green, and the longest range as red. Typically, the blue range is 400-490 nm, green is 490-580 nm, and red is 590-660 nm. The overall signal levels of red, green, and blue are then adjusted to achieve a white balance. Often, a color-correction matrix is applied to increase color separation, normally in the form of a non-diagonal 3×3 matrix that is multiplied by the raw [R, G, B] data for each pixel (expressed as a column vector) to produce the improved [R, G, B] output data for that pixel.
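For illustration only, a minimal sketch of how such a per-pixel correction might be applied follows; the white-balance gains and the 3×3 matrix entries are hypothetical placeholders, not values taken from any particular camera or from this text.

    import numpy as np

    # Hypothetical white-balance gains and 3x3 color-correction matrix;
    # real values are derived per camera and are not specified in the text.
    wb_gains = np.array([1.8, 1.0, 1.5])             # per-channel gains for R, G, B
    ccm = np.array([[ 1.6, -0.4, -0.2],              # non-diagonal correction matrix
                    [-0.3,  1.5, -0.2],
                    [-0.1, -0.5,  1.6]])

    def correct_pixel(raw_rgb):
        """White-balance a raw [R, G, B] pixel, then apply the 3x3
        color-correction matrix to the resulting column vector."""
        balanced = wb_gains * np.asarray(raw_rgb, dtype=float)
        return ccm @ balanced                        # improved [R, G, B] output

    # Example: correct one raw pixel reading
    print(correct_pixel([0.20, 0.35, 0.15]))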
There are problems with this approach. First, the definition of the three primaries is not universal among manufacturers and types of equipment. Second, and more fundamentally, the color basis used to acquire images is not colorimetric in nature. This means that two objects in a scene can appear to differ in hue or brightness from one another, yet an RGB camera would record them as having the same RGB reading. Conversely, two objects can present the same visual appearance to the eye, yet be recorded as having different RGB color values.
The reason for this problem is that the color response of the camera weights different wavelengths of light differently from how the human eye does. As a result, the ‘redness’ of an object recorded by the camera is not equivalent to the redness perceived by the eye, and similarly for other hues.
There is no way to retrieve the actual color from the digital image, once it has been captured in non-colorimetric terms. The loss of color information occurs at the time the image is recorded, due to the difference between the camera's color weighting and the human eye's color perception. No cameras or scanners in the prior art record the colors using the same color response that the human eye affords. As a result, color fidelity of digital images is lost at the time the image is captured.
The color error can be quantified for a given camera using measures such as the difference, in L*a*b units, between the color as recorded and the true color of the object. A recent paper determined the relative spectral response of the Kodak DCS-200 and DCS-420 cameras, from which the color error may be determined for objects with various color spectra. In some cases, errors of up to 20 L*a*b units are found, where a difference of 1 unit represents the limit of human perceptibility.
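As a point of reference (standard colorimetry, not text from this document), the simplest such measure is the 1976 CIE ΔE*ab, the Euclidean distance between the recorded and true colors in L*a*b coordinates; the sample values below are hypothetical.

    import math

    def delta_e_ab(lab_recorded, lab_true):
        """CIE 1976 color difference (Delta E*ab): Euclidean distance
        between two colors expressed in L*, a*, b* coordinates."""
        dL = lab_recorded[0] - lab_true[0]
        da = lab_recorded[1] - lab_true[1]
        db = lab_recorded[2] - lab_true[2]
        return math.sqrt(dL * dL + da * da + db * db)

    # Hypothetical example: a recorded color roughly 20 units from the
    # true color, far above the ~1-unit perceptibility limit.
    print(delta_e_ab((52.0, 30.0, 18.0), (50.0, 42.0, 2.0)))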
Another problem occurs when one seeks to transform between an RGB color representation of a scene and other color representations, such as the cyan-magenta-yellow-black (CMYK) representation used in printing. This can lead to further degradation of color fidelity. Part of the reason is that the primaries in the other color representation space may not be well defined, i.e. the standard yellow, cyan, and magenta are not agreed upon. If there were agreement on this point, one might expect it to be possible to transform between the various color representations by means of a simple 3×3 linear algebraic matrix without loss of color fidelity, using techniques known in the art of color science. However, the use of a transformation matrix presumes that the original color representation weighted the various color components in the same fashion as the human eye, or in some linear algebraic transform thereof. Since the RGB color image is ambiguous, in the sense that the camera or scanner used to record it has a color response that differs from the color perception response of the eye, the color error present in the original color space can be increased when transforming to another color space.
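To illustrate the kind of 3×3 linear transform meant here (a well-known example, not one given in this text), the standard mapping from linear-light sRGB with a D65 white point to CIE XYZ is a fixed matrix multiplication.

    import numpy as np

    # Standard linear transform from linear-light sRGB (D65 white point)
    # to CIE XYZ: a well-defined 3x3 mapping between two colorimetric
    # spaces, shown only to illustrate the kind of transform the text
    # describes. Gamma decoding is assumed to have been done already.
    SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                            [0.2126, 0.7152, 0.0722],
                            [0.0193, 0.1192, 0.9505]])

    def srgb_linear_to_xyz(rgb):
        """Map a linear sRGB pixel (column vector) to CIE XYZ tristimulus values."""
        return SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)

    print(srgb_linear_to_xyz([1.0, 1.0, 1.0]))   # D65 white: approx [0.9505, 1.0, 1.089]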
Another problem relates to the fact that the color error of different cameras is not the same, and further depends on the hues being captured. If the transformation matrix is optimized for a certain range of colors or wavelengths, other colors will not be transformed well. Similarly, a transformation that works adequately for one camera (with its attendant color error properties) may not work well for another camera, which has a different set of color errors. This has led to a profusion of ad hoc methods to ‘calibrate’ various cameras and scanners, as are evident in software packages such as Apple's ColorSync and Agfa's FotoTune.
There is a well-established field of colorimetry, described in standard texts such as MacAdam, Color Measurement, or Hunter and Harold, Measurement of Visual Appearance. It encompasses the specification of color, the human perception of hues and intensity, and the visual appearance of objects. However, there is at present no equipment or technique for the photographic or digital photographic recording of images in colorimetric terms. That is, while it is recognized that the principles of colorimetry should be applicable to every point in an image, the prior art equipment for measuring the colorimetric properties of light only records a single point at a time, or at most a line at a time. It would be possible, by adding scanning means and taking a sequence of line or point readings, to assemble a complete image with such equipment, but it is not practical except in a research laboratory environment. No practical system exists for recording an entire image in colorimetric terms, rapidly and with high spatial resolution.
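For context, and as standard colorimetry rather than text from this document, the colorimetric description of the light at an image point is given by the CIE tristimulus integrals, which weight the spectral power $S(\lambda)$ by the standard observer's color-matching functions:

$$X = k \int S(\lambda)\,\bar{x}(\lambda)\,d\lambda, \qquad Y = k \int S(\lambda)\,\bar{y}(\lambda)\,d\lambda, \qquad Z = k \int S(\lambda)\,\bar{z}(\lambda)\,d\lambda,$$

where $k$ is a normalization constant. The imaging approach described below amounts to evaluating these integrals, in discrete form, at every pixel.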
Related to this, there are various measurement tools, including colorimeters, spectrometers, and the like, which are used to check the color of printed materials and the appearance of luminous displays such as cathode-ray tubes (CRTs). Some of these devices are placed near or in contact with a CRT display, and its color is read by a computer while various color signals are fed to it. In this way, the color distortions and other properties of the display are learned, and that information is used by color management software to correct for deficiencies in the display. Similar technology exists for printers, LCD displays, and other graphic output devices. However, there is no quantitative basis for ensuring end-to-end color management unless acquisition and display alike are put on a quantitative basis and given high fidelity. The present practice may be termed an open-loop approach, with control over only a portion of the process.
CRI (Boston, Mass.) makes a tunable filter, termed the ‘VariSpec’, which enables one to acquire an image at any specified wavelength. By using this filter to take many images that span the visible spectrum, multiplying the pixel intensity values of each image by the value of the X colorimetric weighting function at that image's wavelength, and summing the readings of all images at each pixel, one can obtain the exact colorimetric value for the X response at each point in the image. The weighting and summing may then be repeated to obtain the Y and Z colorimetric values, at which point one has a high-resolution image of the scene with colorimetric color rendering.
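A minimal sketch of that weighted sum, assuming a stack of narrow-band images (one per wavelength) and the CIE color-matching functions sampled at the same wavelengths; the function name, array shapes, and placeholder data below are illustrative, not taken from this text.

    import numpy as np

    def spectral_stack_to_xyz(stack, xbar, ybar, zbar, delta_lambda=10.0):
        """Convert a stack of narrow-band images into a colorimetric XYZ image.

        stack : array of shape (n_wavelengths, height, width), one image per band
        xbar, ybar, zbar : CIE color-matching functions sampled at the same
                           wavelengths as the stack (length n_wavelengths)
        delta_lambda : wavelength spacing in nm (the integration step)
        """
        # Weight each band image by the matching-function value for its
        # wavelength, then sum over bands: a discrete version of the
        # tristimulus integrals.
        X = np.tensordot(xbar, stack, axes=1) * delta_lambda
        Y = np.tensordot(ybar, stack, axes=1) * delta_lambda
        Z = np.tensordot(zbar, stack, axes=1) * delta_lambda
        return np.stack([X, Y, Z], axis=-1)          # shape (height, width, 3)

    # Example with random data standing in for 31 bands from 400-700 nm;
    # real use would load measured band images and the CIE 1931 tables.
    rng = np.random.default_rng(0)
    stack = rng.random((31, 4, 4))
    cmf = rng.random((3, 31))                        # placeholder for CIE tables
    xyz_image = spectral_stack_to_xyz(stack, *cmf)
    print(xyz_image.shape)                           # (4, 4, 3)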
Gemex (Mequon, Wis.) has made and marketed a gem-grading system which uses this approach to quantify the color of valuable gems and to produce colorimetric-quality images. However, many exposures are required, typically 20 or more, from which the spectral data is extracted. The amount of data required, the computing burden, and the time involved (approximately one minute per complete image) render this impractical for most uses.
Koehler, in U.S. Pat. No. 5,142,414, teaches a micro-mechanical etalon which purports to produce an electrically-variable optical transmission or reflection.
Parties of record: Alavi Amir; Cambridge Research & Instrumentation Inc.; Cohen & Pontani, Lieberman & Pavane; Johns, Andrew W. (examiner).