Incremental frustum-cache acceleration of line integrals for...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

Reexamination Certificate (active)
Patent number: 06677947

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to depiction of objects through the use of a computer and more specifically to the depiction of objects in computer animation. The present invention is also directed to a technique for efficiently computing effects of participating media, such as volumetric effects and atmospheric effects, in the context of a high-quality renderer, such as a Reyes-based renderer or a ray tracer.
2. Description of the Related Art
Traditional animation techniques allow the animator to create the apparent motion of animated characters, to be viewed by a viewer of the animation. The use of computers to simplify the animation process has provided many benefits to the traditional hand-drawn process. Computer animated scenes are well known in the prior art and have been used in many different capacities. Such animation is utilized in traditional movies, videos and online streaming of moving pictures, as well as interactive movies where the motion of characters is often initiated by a user.
In computer graphics, an image can be created from three-dimensional objects modeled within the computer. The process of transforming the three-dimensional object data within the computer into viewable images is referred to as rendering. Single still images may be rendered, or sequences of images may be rendered for an animation presentation.
Typically, rendering is performed by establishing a viewpoint of a viewing camera location 10 within an artificial “world space” containing the three-dimensional objects to be rendered. This is illustrated in FIG. 1. A “view plane,” comprising a two-dimensional array of pixel regions, is defined between the viewing camera location and the objects to be rendered (also referred to herein as the “object scene”). To render a given pixel for an image, a ray is cast from the viewing camera 10, through the pixel region of the view plane associated with that pixel, to intersect a surface of the object scene 12. Image data associated with the surface at that point or region is computed based on shading properties of the surface, such as color, texture and lighting characteristics. Multiple points, sampled from within a region of the object scene defined by the projection of the pixel region along the ray, may be used to compute the image data for that pixel (e.g., by applying a filtering function to the samples obtained over the pixel region). As a result of rendering, image data (e.g., RGB color data) is associated with each pixel. The pixel array of image data may be output to a display device, or stored for later viewing or further processing.
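As a rough illustration of the ray-casting loop just described, the following sketch shows one way a renderer might sample each pixel region of the view plane and filter the samples into a single RGB value. It is not taken from the patent; Vec3, Hit, traceRay() and the box filter are assumptions made for the example.

```cpp
// Sketch of the per-pixel ray-casting loop described above, assuming a
// pinhole camera.  Vec3, Hit and traceRay() are illustrative stand-ins
// (not from the patent); traceRay() is assumed to cast one ray through
// the view plane and shade the nearest surface of the object scene.
#include <vector>

struct Vec3 { float x, y, z; };

struct Hit {
    bool found;   // did the ray intersect the object scene?
    Vec3 color;   // shaded RGB value at the hit point
};

// Assumed renderer interface: cast a ray from the camera through the
// view-plane location (viewX, viewY) and shade what it hits.
Hit traceRay(const Vec3& cameraPos, float viewX, float viewY);

// Render an image by sampling each pixel region of the view plane and
// filtering the samples (here a simple box filter / average) into one
// RGB value per pixel.
std::vector<Vec3> renderImage(const Vec3& cameraPos, int width, int height,
                              int samplesPerPixel)
{
    std::vector<Vec3> image(width * height, Vec3{0.0f, 0.0f, 0.0f});
    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            Vec3 sum{0.0f, 0.0f, 0.0f};
            for (int s = 0; s < samplesPerPixel; ++s) {
                // Place this sample inside the pixel's region of the view
                // plane (stratified along x only, for brevity).
                float u = (px + (s + 0.5f) / samplesPerPixel) / width;
                float v = (py + 0.5f) / height;
                Hit hit = traceRay(cameraPos, u, v);
                if (hit.found) {
                    sum.x += hit.color.x;
                    sum.y += hit.color.y;
                    sum.z += hit.color.z;
                }
            }
            float inv = 1.0f / samplesPerPixel;
            image[py * width + px] = Vec3{sum.x * inv, sum.y * inv, sum.z * inv};
        }
    }
    return image;
}
```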
One effect that is often desirable in an animation scene is the depiction of atmospheric effects such as fog or smoke. A Reyes image rendering architecture is often used to provide fast, high-quality rendering of a scene. (See “The Reyes Image Rendering Architecture”, R. L. Cook et al., Computer Graphics, Vol. 21, No. 4, 1987.) While the Reyes algorithm is primarily designed to resolve the visibility and appearance of surfaces, it also provides a framework for computing atmospheric effects such as fog or smoke. At each surface point being shaded, illustrated in FIG. 1, the contribution of the atmosphere can be computed (usually as definite line integrals of scattering and absorption) and combined with the surface shading value. Computing these line integrals generally involves sampling many points along each line, and can be quite costly in terms of time and computer processing.
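Such a line integral is typically evaluated numerically by marching along the ray and sampling many points, which is where the cost arises. The sketch below shows a minimal ray-marching integration under a simple single-scattering fog model; density() and lightColor() are hypothetical stand-ins for the medium description and are not part of the patent text.

```cpp
// Ray-marching sketch of the definite line integral of scattering and
// absorption from the camera to a surface point, assuming a simple
// single-scattering fog model.  density() and lightColor() are
// hypothetical stand-ins for the medium description; they are not part
// of the patent text.
#include <cmath>

struct Vec3 { float x, y, z; };

float density(const Vec3& p);      // extinction coefficient at p (assumed)
Vec3  lightColor(const Vec3& p);   // in-scattered radiance at p (assumed)

struct FogResult {
    Vec3  inscatter;       // light scattered toward the camera
    float transmittance;   // fraction of surface radiance that survives
};

// Numerically integrate along the segment camera -> surfacePoint by
// sampling numSamples points (the per-shading-point cost noted above).
FogResult integrateFog(const Vec3& camera, const Vec3& surfacePoint,
                       int numSamples)
{
    Vec3 d{surfacePoint.x - camera.x,
           surfacePoint.y - camera.y,
           surfacePoint.z - camera.z};
    float length = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float dt = length / numSamples;

    FogResult r{{0.0f, 0.0f, 0.0f}, 1.0f};
    for (int i = 0; i < numSamples; ++i) {
        float t = (i + 0.5f) / numSamples;   // midpoint of the i-th step
        Vec3 p{camera.x + d.x * t, camera.y + d.y * t, camera.z + d.z * t};
        float sigma = density(p);
        Vec3  light = lightColor(p);
        // Beer-Lambert attenuation over this step, composited front to back.
        float stepTrans = std::exp(-sigma * dt);
        float w = r.transmittance * (1.0f - stepTrans);
        r.inscatter.x += w * light.x;
        r.inscatter.y += w * light.y;
        r.inscatter.z += w * light.z;
        r.transmittance *= stepTrans;
    }
    return r;
}

// The result is combined with the surface shading value as
//   finalColor = r.transmittance * surfaceColor + r.inscatter
```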
Because atmospheric computations are performed once for every surface shading point, the cost of the atmospheric computations is tied to the surface shading rate. This means that even simple, low-detail fog, or another volumetric effect, can be very expensive if the scene geometry is complex. Often the desired effects cannot be achieved within reasonable constraints of time and cost. Thus, there is a need for an improved method of rendering atmospheric effects in computer graphics and animation.
SUMMARY OF THE INVENTION
The present invention is directed to methods for rendering participating media effects. The invention reduces the computational cost of volumetric effects such as fog and smoke as implemented in many high-quality renderers. It reduces the number of expensive line integrals that must be computed by caching a small set of integral solutions and deriving new integrals by filtering the cached ones. More generally, the invention provides a way to sample volumetric effects at a rate based on the nature of the atmospheric effect, rather than at a rate determined by the underlying rendering algorithm.
A method for rendering participating media effects is disclosed in one embodiment of the present invention. At least one object having a surface is defined, and a lattice aligned with a camera and encompassing the at least one object is also defined. Volumetric line integrals are computed from the camera to lattice points in a neighborhood of a particular point on the surface of the object to obtain a set of values. The set of values is then filtered to obtain a volumetric line integral value for the particular point on the surface. Additionally, the set of values may be cached in memory and reused, where applicable, in computing additional volumetric line integrals.
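A minimal sketch of how such a camera-aligned lattice cache might be organized is shown below. It assumes the lattice is indexed in camera/frustum space and that cached integrals are combined with a trilinear filter; the class names, hashing scheme and the lineIntegralToLatticePoint() helper are assumptions made for the example, not the patent's actual data structures.

```cpp
// Illustrative sketch of a camera-aligned lattice cache of line-integral
// values, filtered (here trilinearly) to approximate the integral at an
// arbitrary shading point.  The class names, hashing scheme and the
// lineIntegralToLatticePoint() helper are assumptions for this example,
// not the patent's actual data structures.
#include <cmath>
#include <cstddef>
#include <unordered_map>

struct LatticeKey {
    int i, j, k;   // integer lattice coordinates in camera/frustum space
    bool operator==(const LatticeKey& o) const {
        return i == o.i && j == o.j && k == o.k;
    }
};

struct LatticeKeyHash {
    std::size_t operator()(const LatticeKey& k) const {
        return (std::size_t)k.i * 73856093u ^
               (std::size_t)k.j * 19349663u ^
               (std::size_t)k.k * 83492791u;
    }
};

// Expensive volumetric line integral from the camera to one lattice point
// (e.g. a ray-marching routine); assumed to be supplied by the renderer.
float lineIntegralToLatticePoint(const LatticeKey& key);

class FrustumCache {
public:
    // 'scale' sets the lattice spacing; it is chosen from the level of
    // detail required by the participating-media effect, not from the
    // surface shading rate.
    explicit FrustumCache(float scale) : scale_(scale) {}

    // Approximate the volumetric line integral for a shading point given
    // in camera/frustum-space coordinates.
    float lookup(float x, float y, float z) {
        float lx = x / scale_, ly = y / scale_, lz = z / scale_;
        int i0 = (int)std::floor(lx);
        int j0 = (int)std::floor(ly);
        int k0 = (int)std::floor(lz);
        float fx = lx - i0, fy = ly - j0, fz = lz - k0;
        float result = 0.0f;
        // Filter the eight cached lattice values in the point's neighborhood.
        for (int di = 0; di <= 1; ++di)
            for (int dj = 0; dj <= 1; ++dj)
                for (int dk = 0; dk <= 1; ++dk) {
                    float w = (di ? fx : 1.0f - fx) *
                              (dj ? fy : 1.0f - fy) *
                              (dk ? fz : 1.0f - fz);
                    result += w * cachedIntegral({i0 + di, j0 + dj, k0 + dk});
                }
        return result;
    }

private:
    // Each lattice integral is computed at most once, then reused by every
    // shading point whose neighborhood touches that lattice point.
    float cachedIntegral(const LatticeKey& key) {
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;
        float v = lineIntegralToLatticePoint(key);
        cache_.emplace(key, v);
        return v;
    }

    float scale_;
    std::unordered_map<LatticeKey, float, LatticeKeyHash> cache_;
};
```

In a sketch of this kind, each lattice integral is computed at most once and then reused, so the number of expensive line integrals depends on the lattice spacing (i.e., the level of detail of the effect) rather than on the surface shading rate.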
The method for rendering volumetric effects is repeated for additional particular points until the volumetric effects are rendered for all selected points on the surface. The scale of the lattice is dependent on a level of detail required for the participating media effects. The method may be used to approximate volumetric effects or atmospheric effects between the camera and the at least one object.
Another embodiment of the present invention is directed to an apparatus for rendering volumetric effects in computer graphics. The apparatus includes means for providing at least one object having a surface and means for defining a lattice aligned with a camera and encompassing the at least one object. Means for computing a volumetric line integral from the camera to lattice points in a neighborhood of a particular point on the surface of the object is used to obtain a set of values, and means for filtering the set of values is used to obtain a volumetric line integral value for the particular point on the surface.
In another embodiment of the present invention, a computer program product is disclosed. A computer readable medium has computer program code embodied therein for rendering volumetric effects, the program code configured to cause a processor to provide at least one object having a surface and define a lattice aligned with a camera and encompassing the at least one object. The processor also computes a volumetric line integral from the camera to lattice points in a neighborhood of a particular point on the surface of the object to obtain a set of values. The processor then filters the set of values to obtain a volumetric line integral value for the particular point on the surface.
The above and other objects, features and advantages of the invention will become apparent from the following description of the preferred embodiment taken in conjunction with the accompanying drawings.


REFERENCES:
Sabella, "A Rendering Algorithm for Visualizing 3D Scalar Fields", ACM, 1988.
Kajiya et al., "Ray Tracing Volume Densities", ACM, 1984.
Molnar et al., "PixelFlow: High-Speed Rendering Using Image Composition", ACM, 1992.
Cook et al., "The Reyes Image Rendering Architecture", ACM, 1987.
Duff, "Compositing 3-D Rendered Images", ACM, 1985.
