Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
Reexamination Certificate
2000-07-19
2004-07-06
Bella, Matthew C. (Department: 2671)
C345S422000, C345S426000, C345S427000, C345S582000
active
06760024
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to the field of computer graphics, and, more specifically, to graphical rendering of shadows.
2. Background Art
In computer graphics, images are often created from three-dimensional objects modeled within a computer. The process of transforming the three-dimensional object data within the computer into viewable images is referred to as rendering. Single still images may be rendered, or sequences of images may be rendered for an animated presentation. One aspect of rendering involves the determination of lighting effects on the surface of an object, and in particular, the accurate representation of shadows within the rendered image. Unfortunately, typical shadow rendering techniques do not satisfactorily support rendering of finely detailed elements, such as fur or hair. Also, because surfaces are generally classified as either “lit” or “unlit,” shadows from semitransparent surfaces and volumes, such as fog, cannot be accurately represented. To illustrate these problems with known shadowing techniques, a general description of image rendering is provided below with reference to a common method for rendering shadows known as “shadowmaps.”
Image Rendering
Typically, rendering is performed by establishing a viewpoint or viewing camera location within an artificial “world space” containing the three-dimensional objects to be rendered. A “view plane,” comprising a two-dimensional array of pixel regions, is defined between the viewing camera location and the objects to be rendered (also referred to herein as the “object scene”). To render a given pixel for an image, a ray is cast from the viewing camera, through the pixel region of the view plane associated with that pixel, to intersect a surface of the object scene. Image data associated with the surface at that point or region is computed based on shading properties of the surface, such as color, texture and lighting characteristics. Multiple points, sampled from within a region of the object scene defined by the projection of the pixel region along the ray, may be used to compute the image data for that pixel (e.g., by applying a filtering function to the samples obtained over the pixel region). As a result of rendering, image data (e.g., RGB color data) is associated with each pixel. The pixel array of image data may be output to a display device, or stored for later viewing or further processing.
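The per-pixel sampling described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: `scene_hit` and `shade` are hypothetical placeholders for scene intersection and surface shading, and a simple box filter averages jittered samples over the pixel region.

```python
import random

def render_pixel(cam_origin, pixel_corner, pixel_size, scene_hit, shade, n_samples=4):
    """Average several jittered samples over one pixel region (box filter).

    cam_origin: (x, y, z) viewing-camera location.
    pixel_corner: (x, y, z) lower-left corner of the pixel region on the view plane.
    scene_hit(origin, direction) -> surface point or None (hypothetical helper).
    shade(point) -> (r, g, b) image data for that surface point (hypothetical helper).
    """
    total = [0.0, 0.0, 0.0]
    for _ in range(n_samples):
        # Jitter the sample position within the pixel region.
        px = pixel_corner[0] + random.random() * pixel_size
        py = pixel_corner[1] + random.random() * pixel_size
        # Cast a ray from the camera through the sampled view-plane point.
        direction = (px - cam_origin[0], py - cam_origin[1], pixel_corner[2] - cam_origin[2])
        point = scene_hit(cam_origin, direction)
        color = shade(point) if point is not None else (0.0, 0.0, 0.0)
        total = [t + c for t, c in zip(total, color)]
    # Filter: average the samples to produce the pixel's image data.
    return tuple(t / n_samples for t in total)
```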
In photorealistic rendering, as part of the determination of lighting characteristics of a point or points on a surface, shadowing effects are considered. That is, a determination is made of whether each light source in the object scene contributes to the computed color value of the pixel. This entails identifying whether the light emitted from each light source is transmitted unoccluded to the given point on the surface or whether the light is blocked by some other element of the object scene, i.e., whether the given point is shadowed by another object. Note that a light source may be any type of modeled light source or other source of illumination, such as the reflective surface of an object.
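The occlusion test described above can be sketched as a "shadow ray" query toward the light source. This is an illustrative sketch, not the patent's method; `intersect_any` is a hypothetical scene-intersection routine.

```python
def is_lit(point, light_pos, intersect_any, eps=1e-4):
    """Return True if light travels unoccluded from light_pos to point.

    intersect_any(origin, direction, max_t) -> True if any surface lies
    between origin and origin + max_t * direction (hypothetical helper).
    """
    # Direction and distance from the surface point toward the light.
    d = tuple(l - p for l, p in zip(light_pos, point))
    dist = sum(c * c for c in d) ** 0.5
    direction = tuple(c / dist for c in d)
    # Offset the origin slightly along the ray to avoid the surface
    # shadowing itself, then test the segment up to the light.
    origin = tuple(p + eps * c for p, c in zip(point, direction))
    return not intersect_any(origin, direction, dist - 2 * eps)
```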
An example of a rendering scenario is illustrated in the diagram of FIG. 1. A camera location 100 (or viewpoint) is identified adjacent to an object scene comprising objects 104 and 105. A light source 101 is positioned above the object scene such that object 104 casts shadow 106 upon the surface of object 105. Camera location 100 and light source 101 have different perspectives of the object scene based on their respective locations and view/projection directions. These differing perspectives are shown in FIG. 1 as separate coordinate systems (x, y, z) and (x′, y′, z′), respectively. For the rendering operation, a view plane 102 is positioned between the camera location and the object scene. View plane 102 is two-dimensional in x and y with finite dimensions, and comprises an array of pixel regions (e.g., pixel regions 103A and 103B). Each pixel region corresponds to a pixel of the output image.
To sample the object scene for pixel region 103A, ray 107A is projected from camera location 100, through pixel region 103A, onto surface 105 at sample point 108A. Similarly, for pixel region 103B, ray 107B is traced from camera location 100, through pixel region 103B, onto surface 105 at sample point 108B. The surface properties at the sample point are evaluated to determine the image data to associate with the corresponding pixel. As part of this evaluation, the rendering process determines whether the sample point is lit or shadowed with respect to each light source in the scene.
In the example of FIG. 1, sample point 108A lies within shadow 106 cast by object 104, and is therefore unlit by light source 101. Thus, the surface properties evaluated for sample point 108A do not consider a lighting contribution from light source 101. In contrast, sample point 108B is not shadowed. The surface properties evaluated for sample point 108B must therefore account for a lighting contribution from light source 101. As previously indicated, multiple samples may be taken from within each projected pixel region and combined within a filter function to obtain image data for the corresponding pixel. In this case, some samples may lie within a shadow while other samples within the same pixel region are lit by the light source.
Shadow Maps
To improve rendering efficiency, the process of determining shadows within an object scene may be performed as part of a separate pre-rendering process that generates depth maps known as “shadow maps.” A later rendering process is then able to use simple lookup functions of the shadow map to determine whether a particular sample point is lit or unlit with respect to a light source.
As shown in FIG. 2, a shadow map is a two-dimensional array of depth or z-values (e.g., Z0, Z1, Z2, etc.) computed from the perspective of a given light source. The shadow map is similar to an image rendered with the light source acting as the camera location, where depth values are stored at each array location rather than pixel color data. For each (x,y) index pair of the shadow map, a single z value is stored that specifies the depth at which the light emitted by that light source is blocked by a surface in the object scene. Elements having depth values greater than the stored z value are therefore shadowed, whereas elements having depth values less than the stored z value are lit.
FIG. 3 illustrates how a shadow map is created for a given light source (represented herein as a point source for the sake of illustration). Where multiple light sources are present, this technique is repeated for each light source. A finite map plane 300 is positioned between a light source 301 and the object scene comprising surface 302. Map plane 300 represents the two-dimensional (x,y) array of sample points or regions. A ray 305 is cast from light source 301 through sample region 303 to find a point 304 on surface 302 that projects onto the sample region. Sample point 304 is selected as the point on the first encountered surface (i.e., surface 302) that projects onto the sample region. For sample point 304, the z value (ZMAP) is determined in the light source's coordinate system and stored in the shadow map at the (x,y) location corresponding to sample region 303 of map plane 300. Objects that intersect the sample region are considered fully lit (value of "1") for z values less than ZMAP (i.e., objects in front of surface 302), and considered completely unlit (value of "0") for z values greater than ZMAP (i.e., objects behind surface 302).
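Constructing the map amounts to storing, for each sample region of the map plane, the depth of the first surface hit by a ray cast from the light. The sketch below assumes a hypothetical `first_hit_depth` ray-casting helper and is not the patent's implementation.

```python
def build_shadow_map(width, height, first_hit_depth, far=float("inf")):
    """Build a width x height shadow map from the light's perspective.

    first_hit_depth(x, y) -> z value (in the light's coordinate system)
    of the nearest surface visible through map-plane sample region (x, y),
    or None if the ray escapes the scene (hypothetical helper).
    """
    shadow_map = []
    for y in range(height):
        row = []
        for x in range(width):
            z = first_hit_depth(x, y)
            # Unblocked rays store an effectively infinite depth so that
            # every object along them later tests as lit.
            row.append(z if z is not None else far)
        shadow_map.append(row)
    return shadow_map
```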
A process for creating a shadow map, known as ray casting, is shown in FIG. 4. As shown, a sample location in the shadow map is selected for pre-rendering in step 400. In step 401, the pre-rendering process traces a ray from the light source location through the corresponding sample region of the map plane
Lokovic Thomas David
Veach Eric Hugh
Bella Matthew C.
Nguyen Kimbinh T.
Pixar