Volumetric three-dimensional fog rendering technique

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

Class: C345S440000
Type: Reexamination Certificate
Status: active
Patent number: 06268861

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to systems for computer graphics. More specifically, the present invention includes a method and apparatus for rendering three-dimensional fog effects in simulation environments, such as flight simulators.
BACKGROUND OF THE INVENTION
Fog effects are an important part of realistic simulation environments. As an example, flight simulators often use fog effects to simulate adverse weather conditions. This allows air crews to be safely familiarized with difficult operational scenarios, such as landing on fog-obscured runways. To be realistic, fog effects must closely model the appearance and behavior of real fog conditions. This means that fog effects must be capable of modeling patchy or otherwise non-uniform fog or haze. Realistic fog effects must also be capable of animation. This allows fog to swirl or move in a manner that mimics natural fog.
Computer systems (and related devices) typically create three-dimensional images using a sequence of stages known as a graphics pipeline. During early pipeline stages, images are modeled using a mosaic-like approach where each image is composed of a collection of individual points, lines and polygons. These points, lines and polygons are known as primitives, and a single image may require thousands, or even millions, of primitives.
In the past, several techniques have been developed which can be used within the stages of a graphics pipeline to create fog effects. These techniques include color blending, texture mapping and volumetric rendering.
For color blending, a fog color is selected or defined. During the fogging process, the graphics pipeline blends the fog color into each of the pixels within the image being fogged. The graphics pipeline determines the amount of fog color to add to the image's pixels by calculating a weighting factor for each pixel. Each pixel's weighting factor is a linear or exponential function of the distance between the pixel and the eye-point. The graphics pipeline blends the predefined fog color into each pixel in accordance with the pixel's weighting factor.
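As a concrete illustration of this distance-based weighting, the following sketch computes a linear and an exponential fog factor for a pixel and blends a predefined fog color accordingly. The function names, the clamping convention, and the parameters are illustrative assumptions, not the implementation of any particular pipeline.

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

// Linear fog weight: 1.0 at fogStart, 0.0 at fogEnd (assumed convention).
float linearFogFactor(float eyeDistance, float fogStart, float fogEnd) {
    float f = (fogEnd - eyeDistance) / (fogEnd - fogStart);
    return std::clamp(f, 0.0f, 1.0f);
}

// Exponential fog weight: falls off with distance at the given density.
float exponentialFogFactor(float eyeDistance, float density) {
    return std::exp(-density * eyeDistance);
}

// Blend the predefined fog color into a pixel according to its weight.
// f == 1 keeps the original pixel; f == 0 yields pure fog color.
Color blendFog(const Color& pixel, const Color& fog, float f) {
    return { f * pixel.r + (1.0f - f) * fog.r,
             f * pixel.g + (1.0f - f) * fog.g,
             f * pixel.b + (1.0f - f) * fog.b };
}
```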
For texture mapping, a series of texture maps is generated. Each texture map is configured to make objects appear as if they are being viewed through a specific fog depth. Thus, a first texture map might make objects appear as if they were obscured by one meter of fog and a second texture map might make objects appear as if they were obscured by ten meters of fog. In some cases, the series of textures is encoded within a three-dimensional texture map. During the fogging process, the graphics pipeline selects the appropriate texture map for each primitive within the image being fogged. The selection of texture maps is based on the distance between the primitives and the eye-point. The pipeline then textures each primitive with the appropriate texture map.
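The sketch below shows one way a pipeline might select a texture from such a pre-generated series based on a primitive's distance from the eye-point. The FogTexture structure and the "first map at or beyond the distance" selection rule are assumptions made for this example.

```cpp
#include <vector>

struct FogTexture {
    float fogDepthMeters;  // depth of fog this map simulates (e.g. 1 m, 10 m)
    unsigned textureId;    // handle to the texture in the graphics system
};

// Choose the texture whose fog depth best matches the primitive's distance
// from the eye-point. Assumes 'maps' is non-empty and sorted by fogDepthMeters.
unsigned selectFogTexture(const std::vector<FogTexture>& maps,
                          float primitiveEyeDistance) {
    unsigned best = maps.front().textureId;
    for (const FogTexture& m : maps) {
        best = m.textureId;
        if (m.fogDepthMeters >= primitiveEyeDistance)
            break;  // first map at or beyond this distance
    }
    return best;
}
```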
For volumetric rendering, a three-dimensional volume is generated to model the desired fog effect. During the fogging process, the graphics pipeline applies the three-dimensional volume to the space between the primitives included in an image and the eye-point. Typically, this is accomplished through the use of a volumetric rendering technique such as ray-casting, voxel rendering, or three-dimensional texture-slice composition.
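The following sketch illustrates ray-casting, one of the volumetric techniques mentioned above: it marches from the eye-point toward a point on a primitive, accumulating density from an arbitrary field and converting the result to an opacity with the Beer-Lambert extinction model. The density callback, step count, and Vec3 type are assumptions for the example.

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// Accumulate fog opacity along the segment from the eye-point to a point
// on a primitive by sampling a density field at fixed steps (ray-casting).
// Returns opacity in [0, 1].
float marchFogOpacity(const Vec3& eye, const Vec3& point,
                      const std::function<float(const Vec3&)>& densityAt,
                      int steps = 32) {
    Vec3 d = { point.x - eye.x, point.y - eye.y, point.z - eye.z };
    float length = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    float dt = length / steps;             // physical step length along the ray
    float opticalDepth = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) / steps;      // sample at step midpoints
        Vec3 p = { eye.x + d.x * t, eye.y + d.y * t, eye.z + d.z * t };
        opticalDepth += densityAt(p) * dt;
    }
    return 1.0f - std::exp(-opticalDepth); // convert optical depth to opacity
}
```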
Color blending, texture mapping and volumetric rendering are all effective techniques for rendering uniform fog effects. Unfortunately, each of these techniques is subject to disadvantages that make them less than optimal for flight simulators and other simulation environments. For example, when color blending is used to create fog effects, the amount of fog color is controlled strictly by pixel distance. This means that fog effects produced by this technique tend to be uniform (non-patchy) and constant over time (inanimate). Texture mapping and volumetric rendering each allow for the creation of non-uniform (patchy) fog effects. For this reason, both of these techniques offer a higher degree of realism than is possible with color blending. Unfortunately, neither texture mapping nor volumetric rendering provides animated fog effects. Both of these techniques are also limited in other ways. Texture mapping, for example, fails to provide a mechanism that allows the appearance of a fog effect to change as the eye-point moves in relation to an image. Volumetric rendering overcomes this difficulty but generally requires expensive hardware support within the graphics processor.
Thus, a need exists for a fog rendering method that provides non-uniform, animated fog effects. The fog effects created must be accurately portrayed as the position of the eye-point changes within an image. Such a method must also provide high-performance graphics throughput and be relatively inexpensive to implement. This need is especially important for simulation environments, such as flight simulators, and for highly realistic virtual reality systems.
SUMMARY OF THE INVENTION
An embodiment of the present invention includes a method and apparatus for rendering fog effects. A representative environment for the present invention includes a computer system having at least one host processor and a graphics processor. The computer system includes a graphics pipeline. Initial pipeline stages within the graphics pipeline are performed by the host processor. Later pipeline stages are performed by the graphics processor.
For the purposes of the invention, it is assumed that each image to be rendered includes a volumetric fog definition. The volumetric fog definition defines the density of fog at all locations within the image. It is also assumed that the volumetric fog definition may change between frames. This corresponds to wind-blown, swirling or other animated fog effects.
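The patent text does not prescribe a data structure for the volumetric fog definition; one plausible representation, offered only as an assumption, is a regular three-dimensional grid of density samples whose values can be rewritten between frames to produce animated fog.

```cpp
#include <vector>

// A hypothetical volumetric fog definition: a regular grid of density
// values covering the scene. Updating the samples between frames yields
// swirling, wind-blown or otherwise animated fog.
struct VolumetricFogDefinition {
    int nx, ny, nz;                 // grid resolution
    std::vector<float> density;     // nx * ny * nz samples, row-major

    float densityAt(int i, int j, int k) const {
        return density[(k * ny + j) * nx + i];
    }
};
```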
To add fog effects to an image, the host processor calculates the position of the eye-point relative to the image to be drawn. Using the calculated position, the host processor generates a three-dimensional fog texture from the volumetric fog definition. The host processor performs this generation by volumetric rendering of the volumetric fog definition. The host processor then downloads the three-dimensional fog texture to the graphics pipeline. Alternatively, in cases where the host and graphics processors share a common memory, the downloading step may be eliminated.
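A minimal sketch of this generation step is shown below, reusing the hypothetical VolumetricFogDefinition, Vec3, and marchFogOpacity helpers from the earlier sketches: each texel of the three-dimensional fog texture receives the opacity accumulated between the eye-point and that texel's world-space position. The bounding box, the nearest-neighbour sampling rule, and the texture layout are all assumptions made for illustration, not the patented procedure itself.

```cpp
#include <cstddef>
#include <vector>

// Assumed world-space extent covered by both the fog grid and the texture.
struct Box { Vec3 min, max; };

// Nearest-neighbour density lookup in the fog grid (an assumed sampling rule).
float sampleDensity(const VolumetricFogDefinition& fog, const Box& box, const Vec3& p) {
    auto toIndex = [](float v, float lo, float hi, int n) {
        int i = static_cast<int>((v - lo) / (hi - lo) * n);
        return i < 0 ? 0 : (i >= n ? n - 1 : i);
    };
    return fog.densityAt(toIndex(p.x, box.min.x, box.max.x, fog.nx),
                         toIndex(p.y, box.min.y, box.max.y, fog.ny),
                         toIndex(p.z, box.min.z, box.max.z, fog.nz));
}

// Build a 3D fog texture whose texels hold the opacity accumulated from the
// eye-point to each texel's world position (one reading of the summary above).
std::vector<float> buildFogTexture(const VolumetricFogDefinition& fog, const Box& box,
                                   const Vec3& eye, int tx, int ty, int tz) {
    std::vector<float> texels(static_cast<std::size_t>(tx) * ty * tz);
    for (int k = 0; k < tz; ++k)
        for (int j = 0; j < ty; ++j)
            for (int i = 0; i < tx; ++i) {
                Vec3 p = { box.min.x + (box.max.x - box.min.x) * (i + 0.5f) / tx,
                           box.min.y + (box.max.y - box.min.y) * (j + 0.5f) / ty,
                           box.min.z + (box.max.z - box.min.z) * (k + 0.5f) / tz };
                texels[(static_cast<std::size_t>(k) * ty + j) * tx + i] =
                    marchFogOpacity(eye, p, [&](const Vec3& q) {
                        return sampleDensity(fog, box, q);
                    });
            }
    return texels;  // downloaded to (or shared with) the graphics processor
}
```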
With the three-dimensional texture in place, the graphics processor renders the primitives that make up the image. Typically, the rendering process will be accomplished using one or more rendering passes through one or more stages within the graphics pipeline. At some time before or after the completion of the rendering process, the host processor supplies a blending function to the graphics processor. The blending function defines how the three-dimensional texture will be applied to the image.
Once the host processor has supplied the blending function and the graphics processor has completed rendering the image, the graphics processor performs an additional texturing pass. The texturing pass applies the three-dimensional texture to the primitives that are included in the image being rendered.
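As an assumed example of such a blending function (reusing the Color type from the first sketch), the texturing pass might combine each rendered pixel with the fog color in proportion to the opacity looked up from the three-dimensional fog texture at that pixel. The linear blend shown here is only one possible choice; the patent leaves the blending function to the host processor.

```cpp
// Final texturing pass for one pixel: mix the rendered color with the fog
// color according to the opacity sampled from the 3D fog texture.
Color applyFogPass(const Color& renderedPixel, const Color& fogColor,
                   float fogOpacity /* sampled from the 3D fog texture */) {
    return { (1.0f - fogOpacity) * renderedPixel.r + fogOpacity * fogColor.r,
             (1.0f - fogOpacity) * renderedPixel.g + fogOpacity * fogColor.g,
             (1.0f - fogOpacity) * renderedPixel.b + fogOpacity * fogColor.b };
}
```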
The entire sequence of steps is then repeated, preferably on a frame-by-frame basis. Repetition ensures that the fog effect is accurately portrayed as the position of the eye-point changes relative to the image being rendered. Repetition also provides animated fog effects in cases where the volumetric fog definition of an image changes between frames. This allows the present invention to simulate swirling, wind-driven or other animated fog effects.
Advantages of the invention will be set forth, in part, in the description that follows and, in part, will be understood by those skilled in the art from the description herein. The advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims and equivalents.



