Methods and apparatus for rendering images using 3D warping techniques

Patent number: 06756993
Type: Reexamination Certificate (active)
Filed: 2002-01-17
Issued: 2004-06-29
Examiner: Bella, Matthew C. (Department: 2676)
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Graphic manipulation
U.S. classes: 345/422; 345/423; 345/581; 345/609; 345/647; 382/173; 382/300
BACKGROUND
The present invention relates to methods and apparatus for rendering images, and more particularly to methods and apparatus for rendering images using 3D warping techniques.
An important task in the field of computer graphics is creating (or rendering) new images of a 3D scene using reference information that describes the scene. Rendering methods are generally classified into two groups: geometry-based rendering (GBR) and image-based rendering (IBR). Using GBR, 3D images are generated by projecting scene information as it would appear from various viewing positions. The projected scene information may include parameters such as geometric modeling data, surface properties, and lighting parameters of the scene.
Nearly all conventional computer graphics systems use some form of GBR to render 3D images. Using GBR, the data defining the 3D objects that make up a particular scene are explicitly included in the graphics system, making it a relatively simple task for the graphics system to manipulate the scene objects. As such, GBR-based systems excel at data-manipulative tasks such as moving the desired viewpoint of a 3D scene and simulating a phenomenon known as collision detection. GBR-based systems, however, have a limited ability to represent complexly shaped objects or objects that include micro-structure. As a result, it is difficult to construct a photorealistic virtual environment using GBR.
IBR provides a solution to this problem. Rather than defining a scene using geometric modeling data, IBR-based systems use actual scene images (or reference images), taken at various viewing positions, to render the desired 3D image. A typical IBR process flow is shown in FIG. 1. Novel images (i.e., images different from the original reference images) are generated by first warping, or transforming, the reference image data to the novel image space. The warped reference images are then blended together to render the desired 3D scene. It is impractical to consider all the samples of the different reference images when rendering the novel image. Instead, a subset of samples must be determined that suffices to render an image of sufficient quality. Ideally, the number of samples in such a set should depend only on the number of pixels in the desired image and not on the overall scene complexity.
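As a rough sketch of the per-sample warping step, the following C++ fragment unprojects a reference-image sample with depth into world space and reprojects it into the novel view. The pinhole-camera model and all names here are illustrative assumptions, not the patent's own formulation:

```cpp
#include <array>

// A minimal sketch of the per-sample 3D warp, assuming a simple pinhole
// camera model (hypothetical conventions, not the patent's notation).
struct Camera {
    double fx, fy, cx, cy;    // intrinsics: focal lengths and principal point
    std::array<double, 9> R;  // world-to-camera rotation, row-major
    std::array<double, 3> t;  // world-to-camera translation
};

// Warp a reference-image sample (u, v) with depth d into the novel view,
// returning its projected pixel coordinates there.
std::array<double, 2> warpSample(const Camera& ref, const Camera& novel,
                                 double u, double v, double d) {
    // Back-project the sample to a point in the reference camera's space.
    double xc = (u - ref.cx) / ref.fx * d;
    double yc = (v - ref.cy) / ref.fy * d;
    double zc = d;

    // Reference camera space -> world space: X = R^T * (x - t).
    double px = xc - ref.t[0], py = yc - ref.t[1], pz = zc - ref.t[2];
    double X = ref.R[0] * px + ref.R[3] * py + ref.R[6] * pz;
    double Y = ref.R[1] * px + ref.R[4] * py + ref.R[7] * pz;
    double Z = ref.R[2] * px + ref.R[5] * py + ref.R[8] * pz;

    // World space -> novel camera space, then project to the image plane.
    double nx = novel.R[0] * X + novel.R[1] * Y + novel.R[2] * Z + novel.t[0];
    double ny = novel.R[3] * X + novel.R[4] * Y + novel.R[5] * Z + novel.t[1];
    double nz = novel.R[6] * X + novel.R[7] * Y + novel.R[8] * Z + novel.t[2];
    return { novel.fx * nx / nz + novel.cx, novel.fy * ny / nz + novel.cy };
}
```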
IBR offers several advantages over GBR. First, IBR avoids the often tedious and time-consuming task of modeling (or sampling) an object to form the modeling database. IBR-based systems are instead capable of directly accepting the captured image data of the various reference images into an image database. Second, the complexity of IBR algorithms is generally independent of the complexity of the scene, allowing the viewpoint of complex 3D scenes to be changed interactively in real time.
In addition to the reference image data, different approaches to rendering images using IBR may require additional information in order to adequately render the desired 3D scene. This additional information may include depth maps, viewing parameters, and correspondence information that interrelates the various reference image data. Typically, at least the additional depth information is needed for the warping process to produce acceptable results.
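As a concrete illustration, the per-reference-image data such a system might carry could be bundled as follows; the field names and layout are assumptions for exposition, not the patent's data format:

```cpp
#include <array>
#include <cstdint>
#include <vector>

// Illustrative per-reference-image record for an IBRW renderer (hypothetical).
struct ReferenceImage {
    int width = 0, height = 0;
    std::vector<uint32_t> color;      // packed RGBA color samples, one per pixel
    std::vector<float> depth;         // per-pixel depth map required by the warp
    std::array<double, 12> camera{};  // viewing parameters, e.g. a 3x4 projection matrix
};
```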
Although image-based rendering of 3D images by warping with depth information (IBRW) promises to produce images of much greater quality than GBR, until now the only IBRW method that has approached this goal has been the so-called polygonal mesh method. Using the mesh method, reference images are first partitioned into a mesh of micro-triangles. After partitioning, the mesh is transformed (or warped) into a new image having the desired viewing position. The warped mesh is then fed to a polygon-rendering engine that renders the desired 3D image.
FIGS. 2A through 2D illustrate the rendering of images using the mesh method. First, as shown in FIG. 2A, the reference image samples are warped into the desired image space. As the samples are warped to the desired image space, they move apart from one another, leaving "gaps" of image information that must be filled. To fill in this information, the four neighboring samples are connected with two triangles, as shown in FIG. 2B. Once connected, the triangles are rasterized to create sub-samples between the warped image samples, as indicated by the shading in FIG. 2C. As other samples are warped and the corresponding triangles rasterized, the continuity of the surface must be maintained, as illustrated in FIG. 2D. On average, a "cost" of two triangles per sample may be assigned to the rendering method.
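To make that triangle "cost" concrete, here is a minimal C++ sketch of connecting each 2×2 neighborhood of warped samples with two triangles, as in FIG. 2B; the row-major sample indexing is an assumed convention:

```cpp
#include <array>
#include <vector>

// Connect each 2x2 block of neighboring warped samples with two triangles.
// Returns index triples into a row-major grid of width x height samples.
std::vector<std::array<int, 3>> buildMesh(int width, int height) {
    std::vector<std::array<int, 3>> tris;
    tris.reserve(2 * (width - 1) * (height - 1));
    for (int y = 0; y + 1 < height; ++y) {
        for (int x = 0; x + 1 < width; ++x) {
            int i = y * width + x;                               // top-left sample
            tris.push_back({ i, i + 1, i + width });             // upper triangle
            tris.push_back({ i + 1, i + width + 1, i + width }); // lower triangle
        }
    }
    return tris;
}
```

For a width × height image this produces 2(width − 1)(height − 1) triangles, i.e. roughly two triangles per warped sample, which matches the cost noted above.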
Using mesh IBRW systems, a high degree of image quality may be achieved through minute scan-conversion of the micro-triangles that comprise the mesh. Unfortunately, not all types of images may be rendered in this manner. For example, polygon-rendering produces unacceptable results when attempting to render multiple reference images, each having image data at redundant locations. Polygon-rendering of these multiple reference images causes the corresponding triangles at the redundant locations to interpenetrate and coincide. This, in turn, causes a flashing (or flickering) to occur in the image at the redundant locations as the image viewpoint is changed.
One solution to the flashing problem is to pre-process the image data in order to build a single mesh. This eliminates any redundant triangles in the final mesh. The pre-processing, however, is not only difficult to perform but can also be extremely time-consuming. The added delay needed to pre-process the mesh data can inhibit the ability to warp the image data and render novel images in real time. In addition to the flashing problem, the setup costs associated with the polygonal mesh approach using traditional polygonal rasterization limit the performance of mesh-based image rendering hardware.
FIGS. 3A through 3F illustrate the steps involved in performing the traditional triangle rasterization process. The process begins by defining locations in the image plane where parameters for rasterization are to be evaluated. Typically, these locations are defined to be the pixel centers for the various rasterization triangles, as shown in FIG. 3A. In order to determine the parameter values at these particular image plane locations, a backward mapping from the desired-image plane to the surface modeled by the triangle, as shown in FIG. 3B, is needed. This backward mapping must be computed at setup, and can be quite time-consuming and expensive in terms of required computational power.
The time-consuming computations required to compute the backward mapping are illustrated in FIGS. 3C through 3F. The object of computing the mapping is to determine the corresponding parameter plane for each desired parameter. In the exemplary illustration shown in FIG. 3C, the desired parameter is z. The first step in the calculation process is to compute the plane normal as the cross-product of the two difference vectors, P2 − P1 and P3 − P1. This computation is shown in FIG. 3D. The computed normal forms the plane equation n_a x + n_b y + n_c z + D = 0, shown in FIG. 3E, which is then used during rasterization to evaluate the parameter at the various pixel centers, as shown in FIG. 3F.
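The setup and evaluation steps of FIGS. 3C through 3F can be sketched in a few lines of C++. This is a generic illustration of the cross-product plane construction, not code from the patent:

```cpp
#include <array>

struct Vec3 { double x, y, z; };

// Setup step (FIGS. 3C-3E): build the plane through the triangle's vertices
// as the cross product of the two edge vectors P2 - P1 and P3 - P1.
// Returns {n_a, n_b, n_c, D} such that n_a*x + n_b*y + n_c*z + D = 0.
std::array<double, 4> parameterPlane(const Vec3& p1, const Vec3& p2, const Vec3& p3) {
    Vec3 e1 { p2.x - p1.x, p2.y - p1.y, p2.z - p1.z };  // P2 - P1
    Vec3 e2 { p3.x - p1.x, p3.y - p1.y, p3.z - p1.z };  // P3 - P1
    double na = e1.y * e2.z - e1.z * e2.y;              // cross product e1 x e2
    double nb = e1.z * e2.x - e1.x * e2.z;
    double nc = e1.x * e2.y - e1.y * e2.x;
    double D  = -(na * p1.x + nb * p1.y + nc * p1.z);   // plane passes through P1
    return { na, nb, nc, D };
}

// Rasterization step (FIG. 3F): evaluate the parameter (here z) at a pixel
// center (x, y) by solving the plane equation for z.
double parameterAtPixel(const std::array<double, 4>& plane, double x, double y) {
    return -(plane[3] + plane[0] * x + plane[1] * y) / plane[2];
}
```

Note that each desired parameter needs its own plane setup, so for micro-triangles that cover only a few pixels this per-triangle setup work dominates the rasterization cost.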
As an example of the number of computations required to perform the rasterization, assume that it is desired to render an image having a targeted resolution of 1280×1024 pixels. On average, samples will be warped twice at the desired resolution. Also, recall that polygon rendering requires, on average, that two mesh triangles be rendered for every warped sample. Thus, the average number of triangles, N, that must be rendered per second in order to sustain a frame rate of 30 Hz is:

N ≈ 1280 × 1024 × 2 × 2 × 30 ≈ 157 M triangles/sec
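The arithmetic can be verified directly, as in this small C++ program:

```cpp
#include <cstdio>

int main() {
    // 1280x1024 pixels, samples warped twice on average, two triangles per
    // warped sample, 30 frames per second.
    long long n = 1280LL * 1024 * 2 * 2 * 30;
    std::printf("%lld triangles/sec (~%.0f M)\n", n, n / 1e6);  // 157286400 (~157 M)
    return 0;
}
```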
Conventional graphics hardware is incapable of achieving this level of computational performance. Indeed, it is believed that it will be years before such sustained levels of graphics performance are achieved.
Inventors: Eyles, John; Lastra, Anselmo; Popescu, Voicu
Examiners: Bella, Matthew C.; Caschera, Antonio A.
Attorney/Agent: Burns Doane Swecker & Mathis L.L.P.
Assignee: The University of North Carolina at Chapel Hill