Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
Reexamination Certificate
1999-05-11
2004-06-15
Nguyen, Phu K. (Department: 2671)
active
06750860
ABSTRACT:
BACKGROUND
1. Technical Field
The invention relates to an image-based rendering system and process for rendering novel views of a 3D scene, and more particularly to a system and process for rendering such novel views using concentric mosaics.
2. Background Art
Traditional rendering approaches often involve constructing or recovering a complete geometric and photometric model of a 3D scene. These approaches are attractive because they allow graphics elements (such as new lights, shadows, etc.) to be readily added to the model. However, they are also complex and computationally intensive. In many cases it is desirable to render views of a 3D scene more simply, without the complexity and heavy processing requirements of a geometric/photometric rendering approach. In such cases a different rendering method can be used; specifically, an image-based rendering process can be employed. One such image-based rendering approach foregoes the need for a geometric/photometric model of a scene and instead uses Adelson and Bergen's plenoptic function [AB91] to describe the scene. The original Adelson and Bergen work defined a 7D plenoptic function as the intensity of light rays passing through the camera center at every location (Vx, Vy, Vz), at every possible angle (θ, φ), for every wavelength λ, at every time t, i.e.,

P7 = P(Vx, Vy, Vz, θ, φ, λ, t)   (1)
It is also possible to add light source directions to the plenoptic function [WHON97].
In recent years a number of image-based rendering techniques have been proposed to model and render real or synthetic scenes and objects by simplifying the plenoptic function. For example, McMillan and Bishop [MB95] proposed constructing a complete 5D plenoptic function:

P5 = P(Vx, Vy, Vz, θ, φ)   (2)
In this work, two of the variables in the original equation are dropped: time t (thereby assuming a static environment) and light wavelength λ (thereby assuming fixed lighting conditions). In addition, this work introduced the concept of interpolating plenoptic functions from a few discrete samples using cylindrical panoramas.
A 4D parameterization of the plenoptic function has also been proposed. It was observed in both the Lightfield [LH96] and Lumigraph [GGSC96] systems that, by staying outside a convex hull or bounding box of an object, the complete 5D plenoptic function can be simplified to a 4D lightfield plenoptic function, i.e.,

P4 = P(u, v, s, t)   (3)

where (u, v) and (s, t) parameterize two bounding planes of the convex hull. The reverse is also true if camera views are restricted to the inside of a convex hull.
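As a concrete, purely illustrative reading of this two-plane parameterization, the short Python sketch below maps a ray to (u, v, s, t) coordinates by intersecting it with two parallel planes. The plane positions, the function name, and its parameters are assumptions introduced here for clarity; they are not taken from the patent or the cited papers.

```python
def ray_to_lightfield_coords(origin, direction, uv_plane_z=0.0, st_plane_z=1.0):
    """Map a ray to two-plane lightfield coordinates (u, v, s, t).

    Illustrative only: the planes z = uv_plane_z and z = st_plane_z stand in
    for the two bounding planes mentioned above.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    if abs(dz) < 1e-12:
        raise ValueError("ray is parallel to the parameterization planes")
    # Intersection with the (u, v) plane.
    a = (uv_plane_z - oz) / dz
    u, v = ox + a * dx, oy + a * dy
    # Intersection with the (s, t) plane.
    b = (st_plane_z - oz) / dz
    s, t = ox + b * dx, oy + b * dy
    return u, v, s, t
```

In this reading, every ray that leaves the convex hull is reduced to the four numbers (u, v, s, t), which is why the lightfield needs only four dimensions once time and wavelength are fixed.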
There have even been 2D simplifications proposed (e.g., cylindrical panoramas [Chen95] and spherical panoramas [SS97]) in which the viewpoint in the scene is fixed and only the viewing directions can be changed, i.e.,

P2 = P(θ, φ)   (4)
The 2D embodiment of the plenoptic function is the easiest to construct. However, the 2D parameterization does not allow novel views to be rendered from different viewpoints within the scene. It would be possible to render novel views using the 5D or 4D embodiments of the plenoptic function. It is, however, very difficult to construct a complete 5D plenoptic function [MB95, KS96] because feature correspondence is a difficult problem. In addition, while the 4D embodiment is not too difficult to construct because of its unique parameterization, the resulting lightfield is large even for a small object (and therefore a small convex hull). This presents problems when trying to render novel views, and to date a real-time walk-through of a real 3D scene has not been fully demonstrated.
It is noted that in the preceding paragraphs, as well as in the remainder of this specification, the description refers to various individual publications identified by an alphanumeric designator contained within a pair of brackets. For example, such a reference may be identified by reciting, “reference [AB91]” or simply “[AB91]”. Multiple references will be identified by a pair of brackets containing more than one designator, for example, [PH97, W+97, RG98, RPIA98, GH97]. A listing of the publications corresponding to each designator can be found at the end of the Detailed Description section.
SUMMARY
The invention is directed toward an image-based rendering system and process for rendering novel views of a real or synthesized 3D scene based on a series of concentric mosaics depicting the scene.
In one embodiment of the present invention, each concentric mosaic represents a collection of consecutive slit images of the surrounding 3D scene, each taken in a direction tangent to a circle lying on a plane within the scene, at a viewpoint on that circle. The mosaics are concentric in that the aforementioned circles on the plane are concentric. This system and process is based on a unique 3D parameterization of the previously described plenoptic function, hereafter referred to as concentric mosaics. Essentially, this parameterization requires that the concentric mosaics be created by constraining “camera” motion to concentric circles in the circle plane within the 3D scene. Once created, the concentric mosaics can be used to render novel views of the 3D scene without explicitly recovering the scene geometry.
Specifically, each concentric mosaic is generated by capturing slit-shaped views of the 3D scene from different viewpoints along one of the concentric circles. Each captured slit image is uniquely indexed by its radial location (i.e., which concentric mosaic it belongs to) and its angle of rotation (i.e., where on that mosaic it lies). The radial location is the radius of the concentric circle on which the slit image was captured, and the angle of rotation is defined as the number of degrees from a prescribed starting point on the concentric circle to the viewpoint from which the slit image was captured. The height of the slit image reflects the vertical field of view of the capturing camera. Preferably, slit images are captured at each viewpoint on the concentric circle in both directions tangent to the circle. If so, the unique identification of each slit image also includes an indication of which of the two tangential directions the slit image was captured in.
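As an illustration only, and not the patent's implementation, the following Python sketch shows one way such a collection of slit images could be stored and indexed by circle radius, rotation angle, and tangential capture direction. The class name, array layout, and sampling resolutions are assumptions introduced here for clarity.

```python
import numpy as np

class ConcentricMosaics:
    """Illustrative container for slit images captured on concentric circles.

    Each slit image is indexed by (circle index, rotation angle, tangential
    direction), mirroring the indexing described above.  All names and array
    shapes are assumptions, not the patent's data structures.
    """

    def __init__(self, radii, angles_per_circle, slit_height):
        self.radii = np.asarray(radii, dtype=float)   # radius of each concentric circle, ascending
        self.n_angles = angles_per_circle             # number of angular samples per circle
        # slits[c, a, d] holds the slit (a column of RGB pixels) captured on
        # circle c, at angle index a, in tangential direction d (0 or 1).
        self.slits = np.zeros(
            (len(self.radii), angles_per_circle, 2, slit_height, 3), dtype=np.uint8
        )

    def _angle_index(self, angle_rad):
        """Quantize a rotation angle (radians) to the nearest captured angle index."""
        return int(round(angle_rad / (2.0 * np.pi) * self.n_angles)) % self.n_angles

    def store(self, circle, angle_rad, direction, slit):
        """Record a captured slit image at its (radius, rotation angle, direction) index."""
        self.slits[circle, self._angle_index(angle_rad), direction] = slit

    def lookup(self, circle, angle_rad, direction):
        """Return the captured slit image nearest to the requested rotation angle."""
        return self.slits[circle, self._angle_index(angle_rad), direction]
```

For example, ConcentricMosaics(radii=np.linspace(0.2, 1.0, 20), angles_per_circle=1440, slit_height=240) would describe twenty mosaics sampled every quarter degree, each slit being a 240-pixel RGB column.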
Novel views from viewpoints on the aforementioned circle plane are rendered using the concentric mosaics composed of slit images. Specifically, the viewpoint of a novel view may be anywhere within the circular region defined by the intersection of the outermost concentric mosaic with the circle plane. The novel view image consists of a collection of side-by-side slit images, each of which is identified by a ray emanating from the viewpoint of the novel view on the circle plane and extending toward a unique one of the slit images needed to construct the novel view. Each of the rays associated with the slit images needed to construct the novel view will either coincide with a ray identifying one of the previously captured slit images in the concentric mosaics, or it will pass between two of the concentric circles on the circle plane. If it coincides, the previously captured slit image associated with the coinciding ray can be used directly to construct part of the novel view. If, however, the ray passes between two of the concentric circles of the plane, a new slit image is formed by interpolating between the two previously captured slit images associated with the rays originating from the adjacent concentric circles that are parallel to the non-coinciding ray of the novel view. This interpolation process preferably includes determining the distance separating the non-coinciding ray from each of the two adjacent parallel rays. Once these distances are determined, a ratio is computed for each slit image associated with the parallel rays based on the relative distance between the non-coinciding ray and that slit image's parallel ray, and the two captured slit images are blended in proportion to these ratios to form the new slit image.
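To make the ray-to-slit mapping concrete, the sketch below continues the hypothetical ConcentricMosaics container above and renders a single column of a novel view. It is only an illustrative reading of the process described here, not the patent's implementation: the ray's perpendicular distance from the common center identifies the concentric circle it is tangent to, and when that radius falls between two captured circles the two parallel slit images are blended by their relative distances.

```python
import numpy as np

def render_column(mosaics, viewpoint, direction):
    """Render one slit (column) of a novel view from a viewpoint on the circle plane.

    `mosaics` is a ConcentricMosaics instance (see the sketch above); `viewpoint`
    is a 2-D point relative to the common circle centre; `direction` is a unit
    2-D viewing direction in the circle plane.
    """
    px, py = viewpoint
    dx, dy = direction

    # Signed perpendicular distance from the centre to the ray: the ray is
    # tangent to the concentric circle of this radius.
    signed_r = px * dy - py * dx
    side = 0 if signed_r >= 0 else 1      # which tangential capture direction matches the ray
    r = abs(signed_r)

    # Rotation angle of the tangent point (foot of the perpendicular from the centre).
    t = -(px * dx + py * dy)
    angle = np.arctan2(py + t * dy, px + t * dx) % (2.0 * np.pi)

    radii = mosaics.radii                 # assumed sorted in ascending order
    k = int(np.searchsorted(radii, r))
    if k == 0:                            # tangent circle lies inside the innermost mosaic
        return mosaics.lookup(0, angle, side)
    if k >= len(radii):                   # tangent circle lies at or beyond the outermost mosaic
        return mosaics.lookup(len(radii) - 1, angle, side)

    # The query ray passes between circles k-1 and k: blend the two captured
    # slit images whose parallel rays bracket it, weighted by relative distance.
    w = (r - radii[k - 1]) / (radii[k] - radii[k - 1])
    inner = mosaics.lookup(k - 1, angle, side).astype(np.float32)
    outer = mosaics.lookup(k, angle, side).astype(np.float32)
    return ((1.0 - w) * inner + w * outer).astype(np.uint8)
```

A full novel view would then be assembled by calling render_column once for each viewing direction spanning the desired horizontal field of view and placing the returned columns side by side.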
He Li-Wei
Ke Qifa
Shum Heung-Yeung
Klarquist & Sparkman, LLP
Microsoft Corporation
Nguyen Phu K.