Title: Image synthesis by illuminating a virtual deviation-mapped...
Patent Number: 06674918
Classification: Image analysis – Image transformation or preprocessing – Combining image portions
U.S. Class: C345S426000
Type: Reexamination Certificate
Status: active
Filed: 2000-02-16
Issued: 2004-01-06
Examiner: Mehta, Bhavesh M. (Department: 2625)
TECHNICAL FIELD
This invention relates to image synthesis techniques and apparatus. More particularly, the invention concerns synthesizing an image from two or more images.
BACKGROUND
Image synthesis techniques are computer-implemented techniques that render an image, such as a digital image, from two or more other images. Although image synthesis can also refer to techniques in computer graphics for generating realistic images of geometric models (such as textured 3-dimensional objects represented in 2-dimensional space), the present invention is concerned with image synthesis in the sense of image fusion, that is, the combining of two existing images to generate a third image.
Images that are generated by a computer using image fusion/combination techniques are typically realistic and/or artistic images with some special effect, such as texturing, added by the fusion/combination technique. A continuing challenge for those who design and use image synthesis systems is to provide rendered images that are realistic and meaningful and that reflect the creativity of the author who rendered them. Presently, however, known image synthesis techniques fall short of the goal of providing a truly versatile, realistic, and meaningful rendering.
Exemplary techniques that are currently used include so-called “blending” techniques. Blending typically involves simply merging one image into another using any of a number of known merging techniques. Blending is one of the simplest methods of image combination or synthesis, but typically results in a rendered image that has no meaning, or even less meaning than the images from which it was rendered. For example, the merging of the two images may reduce the resolution, clarity or character of the resultant formed image. Needless to say, this is highly undesirable.
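For illustration only (not taken from the patent), the simplest blending operation is a per-pixel weighted average of two equally sized images. The following minimal NumPy sketch, with hypothetical arrays img_a and img_b, shows the operation and hints at why detail from both sources is washed out.

```python
import numpy as np

def blend(img_a: np.ndarray, img_b: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Naive per-pixel blend: out = w*A + (1-w)*B.

    img_a, img_b: float arrays in [0, 1] with identical shape (H, W, 3).
    Both images contribute everywhere, so contrast and detail from each
    source are attenuated -- the loss of clarity noted above.
    """
    return weight * img_a + (1.0 - weight) * img_b
```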
Other, more complicated image synthesis techniques do exist, but they continue to be plagued by problems with the quality and character of the images they ultimately render. In addition, many of these techniques are time-consuming and computationally complicated. Such techniques include pixel-wise operations (e.g., simple color blending) and window operations (e.g., filtering operations), to name just a few. Among these particular techniques is the digital image synthesis method proposed by Porter and Duff in "Compositing Digital Images," SIGGRAPH '84, pp. 253-259 (1984). The proposed method generates a resultant image in a pixel-wise manner. According to the method, each pixel value of an original image contains RGB (Red, Green, Blue) components as well as an additional α component, which specifies the transparency (diaphaneity) of the pixel. The result of the image fusion or synthesis is a linear combination of the images involved. Each component (the RGB components and the α component) of a resulting pixel is a weighted average of the corresponding components of the two images, where the weights are linear functions of the α components according to the combination type, which may be "A over B", "A XOR B", and so on. The method is useful for computer animation, e.g., fade in/out and dissolve effects. Despite continuing work in the field, however, techniques such as the one described above, and others, still fall short of providing rendered images that are realistically artistic and that accurately reflect the intentions of their author.
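As a point of reference, the Porter-Duff "A over B" operator described above can be sketched as follows. This is a minimal NumPy illustration assuming straight (non-premultiplied) RGBA arrays in [0, 1]; it is not code from the patent.

```python
import numpy as np

def over(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Porter-Duff "A over B" with straight (non-premultiplied) alpha.

    a, b: float arrays of shape (H, W, 4) holding RGBA in [0, 1].
    Every output component is a weighted average of the corresponding
    input components, with weights that are linear in the alpha values.
    """
    rgb_a, alpha_a = a[..., :3], a[..., 3:4]
    rgb_b, alpha_b = b[..., :3], b[..., 3:4]

    alpha_out = alpha_a + alpha_b * (1.0 - alpha_a)
    # Avoid division by zero where both pixels are fully transparent.
    rgb_out = np.where(
        alpha_out > 0,
        (rgb_a * alpha_a + rgb_b * alpha_b * (1.0 - alpha_a))
        / np.maximum(alpha_out, 1e-8),
        0.0,
    )
    return np.concatenate([rgb_out, alpha_out], axis=-1)
```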
Accordingly, this invention arose out of concerns associated with improving the techniques and apparatus associated with image synthesis.
SUMMARY
Methods and apparatus for synthesizing images from two or more existing images are described. The described embodiment makes use of an illumination model as a mathematical model to combine the images. A first of the images is utilized as an object color or color source (i.e., the foreground) for a resultant image that is to be formed. A second of the images serves as the background or texture and is utilized as a perturbation source. In accordance with the described embodiment, the first image is represented by a plane that has a plurality of surface normal vectors. Aspects of the second image are utilized to perturb or adjust the surface normal vectors of the plane that represents the first image. Perturbation occurs, in the described embodiment, by determining individual intensity values for corresponding pixels of the second image. The intensity values are mapped to corresponding angular displacement values, which are then used to angularly adjust or deviate the surface normal vectors for corresponding image pixels of the plane that represents the first image. This yields a virtual surface whose normal vectors are not fully specified, but are constrained only by the angles between the original surface normal vectors and the perturbed normal vectors. In the described embodiment, after some assumptions concerning the viewing direction and the lighting source direction, an illumination model is applied to the virtual surface to yield a resultant synthesized image.
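The following NumPy sketch is one possible reading of the pipeline summarized above, not the patented method itself: texture intensities are mapped linearly to tilt angles, the tilted normals are shaded with a simple Lambertian term under a light assumed to lie along the original surface normal, and the resulting shading modulates the color-source image. The parameters max_tilt_deg and ambient are illustrative assumptions.

```python
import numpy as np

def synthesize(color_img: np.ndarray, texture_img: np.ndarray,
               max_tilt_deg: float = 60.0, ambient: float = 0.2) -> np.ndarray:
    """Illustrative deviation-mapping pipeline (assumptions noted above).

    color_img:   (H, W, 3) float RGB in [0, 1]; supplies the object color.
    texture_img: (H, W) float intensities in [0, 1]; supplies the perturbation.
    max_tilt_deg, ambient: assumed parameters, not taken from the patent.
    """
    # 1. Map each texture intensity to an angular displacement of the
    #    corresponding surface normal.
    theta = np.deg2rad(max_tilt_deg) * texture_img           # (H, W)

    # 2. Shade the virtual surface. With the light assumed to lie along the
    #    original (unperturbed) normal, a Lambertian term depends only on the
    #    tilt angle: N . L = cos(theta), regardless of the tilt's azimuth,
    #    which matches the constraint that only the angles are specified.
    diffuse = np.clip(np.cos(theta), 0.0, 1.0)

    # 3. Apply the illumination to the color source to obtain the result.
    shading = ambient + (1.0 - ambient) * diffuse             # (H, W)
    return np.clip(color_img * shading[..., None], 0.0, 1.0)
```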
REFERENCES:
patent: 4800539 (1989-01-01), Corn et al.
patent: 6057850 (2000-05-01), Kichury
patent: 6061065 (2000-05-01), Nagasawa
patent: 6226007 (2001-05-01), Brown
patent: 6407744 (2002-06-01), Van Overveld
Horn, Berthold Klaus Paul, "Shape From Shading: A Method for Obtaining the Shape of a Smooth Opaque Object From One View," submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology, Jun. 1970, pp. 1-196.
Phong, Bui Tuong, “Illumination for Computer Generated Pictures”, Communications of the ACM, Jun. 1975, vol. 18, No. 6, pp. 311-317.
Cook, et al., “The Reyes Image Rendering Architecture”, Computer Graphics, Jul. 1987, vol. 21, No. 4, pp. 95-102.
Witkin, Andrew P., "Recovering Surface Shape and Orientation from Texture", Artificial Intelligence, 1981, pp. 17-45.
Blinn, James F., "Simulation of Wrinkled Surfaces", Computer Graphics (SIGGRAPH '78 Proceedings), vol. 12, no. 3, Aug. 1978, pp. 286-292.
Inventors: Liu Wen-Yin; Xu Ying-Qing; Zhong Hua
Examiners: Mehta Bhavesh M.; Patel Kanji
Attorney/Agent: Lee & Hayes PLLC
Assignee: Microsoft Corporation