Fog simulation for partially transparent objects

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

C345S440000

active

06184891

ABSTRACT:

FIELD OF THE INVENTION
The invention relates to a technique used in three-dimensional (3D) graphics rendering to simulate fog.
BACKGROUND OF THE INVENTION
In 3D graphics rendering, fog simulation is a technique used to simulate atmospheric effects. Fog is a general term here that encompasses a variety of such effects, including haze, mist, smoke, smog and pollution. Computer-generated images tend to appear unrealistically sharp and well defined; fog can make an image look more natural by making objects fade into the distance. When fog is simulated in a graphics scene, objects that are farther from the viewpoint start to fade into the color of the fog.
In conventional graphics rendering architectures, such as OpenGL, the author of a graphics scene can control both the density and the color of the fog. The density of the fog is controlled by a parameter called the fog blend factor or fog dissolve factor, ƒ. The fog color is typically represented as a vector value such as F = [F_red, F_green, F_blue, 1]. The color F is expressed in conventional vector algebraic notation, with four components: red, green, blue and alpha. Alpha is the opacity of the pixel, and typically ranges from 0 (totally transparent) to 1 (totally opaque). Note that the alpha value of the fog is set to 1, which means that the fog is simulated as being totally opaque. The extent to which an object appears to fade into the fog depends on the fog factor, and specifically on how much fog exists between the viewpoint and the object. Both the fog color and the fog factor are terms in a fog model used to compute the value of a fogged pixel given the color and opacity of an input pixel, such as A = [A_red, A_green, A_blue, A_alpha].
The fog factor, ƒ, represents an amount of fog. One way to describe the amount of fog at a given location in a graphics scene is to define it as the fog influence between the viewpoint of the graphics scene and the depth of an object relative to the viewpoint. More specifically, the fog factor is typically calculated as a function of z, the depth of an object from the viewpoint. For example, one expression for ƒ is:

ƒ = 1 − e^(−τ·Z_A),

where τ is the optical depth and Z_A is the depth of the object from the viewpoint. The value of ƒ can be computed by a variety of other functions in the OpenGL graphics programming interface from Silicon Graphics and in the Direct3D graphics programming interface from Microsoft. It is important to note that the notation for the dissolve factor differs slightly among graphics programming interfaces; for example, the value of ƒ in the above equation actually corresponds to (1−ƒ) in OpenGL notation.
FIG. 1 is a block diagram illustrating how fog is typically applied to pixels in a conventional 3D graphics rendering pipeline. The rasterizer 20 is the stage in the graphics rendering pipeline where a geometric primitive used to model the surface of an object is converted into pixel values. In a geometric processing stage, the objects in a scene are clipped to a view volume and geometrically transformed to a view space corresponding to a display screen. The rasterizer 20 takes the transformed geometric primitives (e.g., polygons) as input and computes the color of each pixel within the polygon. Typically, conventional rasterizers interpolate color values at a polygon's vertices to compute the color values at pixel locations within the polygon. The fog applicator 22 then modifies the color values of a pixel by applying an amount of fog of a predetermined color to the interpolated color values. The blend unit 24 is responsible for blending pixels from the rasterizer and fog applicator with pixel values at corresponding pixel locations in the frame buffer 26. The frame buffer is memory that stores an array of pixel values corresponding to the picture elements of a display device. When the graphics rendering pipeline completes the rendering of a graphics scene, the frame buffer holds an array of pixels representing an output image for display on a display screen.
During rendering, the rasterizer 20 processes a stream of geometric primitives from the objects in a graphics scene. In some cases, geometric primitives can overlap the same pixel location. For example, a graphical object representing a foreground object can have polygons that occlude polygons of a background object. Graphics rendering systems employ a method called hidden surface removal to determine which surfaces are actually visible in a scene. One technique for hidden surface removal is to rasterize unsorted polygons into pixels with depth values. The blend unit 24 then determines whether an input pixel generated by the rasterizer occludes a previously generated pixel in the frame buffer at the same pixel location. If it does, the blend unit 24 replaces the pixel in the frame buffer with the new pixel; if it does not, it discards the new pixel. An alternative technique is to sort the polygons in depth order and rasterize them in front-to-back order.
The process of computing pixel values in the frame buffer becomes more complicated for partially transparent pixels, since it is often necessary to combine the frontmost opaque pixel with the partially transparent pixels in front of it. In some architectures that support partially transparent pixels, the blend unit 24 includes logic to combine partially transparent pixel values (sometimes called pixel fragments) at a pixel location into a final output pixel. Some architectures also rasterize geometric primitives at a subpixel resolution and then blend the pixel values of the subpixels in the neighborhood of each pixel location to compute final pixel values at each pixel location.
As illustrated in FIG. 1, the fog applicator 22 receives input pixels from the rasterizer and modifies them to simulate fog. The manner in which the fog applicator modifies a pixel (or pixel fragment) to simulate fog depends on the fog model or models implemented in it.
The conventional model for fogging a pixel A by an amount ƒ of fog of color F is:

ƒF + (1−ƒ)A,

where A is the color of the pixel being fogged.
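This blend is applied per channel; the sketch below evaluates ƒF + (1−ƒ)A on an RGBA pixel (the function name, default fog color, and sample values are hypothetical):

```python
def apply_fog(pixel, f, fog_color=(0.5, 0.5, 0.5, 1.0)):
    """Conventional fog blend, f*F + (1-f)*A, applied per channel.

    Because the fog's alpha is 1, fogging also raises the alpha of a
    partially transparent pixel.
    """
    return tuple(f * F + (1.0 - f) * A for A, F in zip(pixel, fog_color))

red = (1.0, 0.0, 0.0, 0.5)    # half-transparent red pixel
no_fog = apply_fog(red, 0.0)  # f = 0: pixel unchanged
all_fog = apply_fog(red, 1.0) # f = 1: pure (opaque) fog color
```

The two limiting cases show the role of ƒ: at ƒ = 0 the input pixel passes through untouched, and at ƒ = 1 the output is exactly the fog color.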
The conventional formula for fog simulation will generate the wrong pixel color if two fogged surfaces overlap on the screen and the frontmost surface is partially transparent. A fogged surface refers to a rendering of the surface of a 3D object into pixel values, where the pixel values are modified due to the influence of fog on the object's surface.
There are two primary cases where a surface is partially transparent: 1) some of the geometric primitives (e.g., polygons) used to model the surface of an object may only partially cover a pixel location in screen space; and 2) some of the geometric primitives used to model the surface of an object may be translucent. In both cases, the pixels generated by the rasterizer may have an opacity value (alpha) indicating that the pixel is not opaque.
The best way to illustrate the problem with the conventional approach in these cases is to consider an example. FIG. 2 is a diagram illustrating how the conventional fog model produces incorrect results when two fogged pixels are combined. FIG. 2 shows a series of magnified square regions representing pixels A and B. The pixels are the points at the center of each square, and the square regions extend ±½ pixel spacing from the pixels. Pixels A and B each have a partially covered region (hatched regions 40 and 42) and transparent regions (white areas 44 and 46). Assume in this example that pixel A is closer to the viewpoint than B (Z_A < Z_B). The fog is represented as a scattering of dots (e.g., 48) of color F and an amount ƒ(z) corresponding to the fog between the viewpoint and the depth value (z) of the pixel.
Using the conventional formula, the fogged pixels A and B (50, 52) appear as shown in FIG. 2.