Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension (C345S426000)
Reexamination Certificate (status: active)
Filed: 2001-04-19
Issued: 2004-02-03
Examiner: Luu, Matthew (Department: 2672)
Assignee: Webtv Networks, Inc.
Law firm: Workman Nydegger
Patent number: 06686915
ABSTRACT:
BACKGROUND OF THE INVENTION
1. The Field of the Invention
The present invention relates to systems and methods for rendering visual effects that are a function of depth. More specifically, the present invention is directed to systems and methods for accurately and realistically rendering visual effects such as fog, colored liquids, gels, smoke, mists, and the like for which the visual appearance of the effects can change with respect to depth and for which the effects are rendered on an output device so as to be generally contained.
2. Background and Related Art
Computer systems exist that are capable of generating and displaying three-dimensional objects through the use of an output device, such as a monitor or printer. The output device provides a two-dimensional space on which the three-dimensional objects are displayed. The three-dimensional objects are created using an X-Y scale to provide the objects with the properties of width and height, and an imaginary Z-axis to further provide the property of depth. On the X-Y-Z scale, a rendering pipeline models the objects through the use of primitives or polygons that provide the overall structure for the objects and upon which textures, such as color and shade, may be applied.
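By way of illustration only (the patent provides no source code), a minimal C++ sketch of such a primitive might look as follows; the Vertex and Triangle names and fields are assumptions made for this example, not terms from the patent.

    // Hypothetical illustration of a vertex carrying width (x), height (y),
    // and depth (z) coordinates, and a triangle primitive built from three
    // such vertices.
    struct Vertex {
        float x, y, z;   // position on the X-Y scale plus the imaginary Z-axis
        float r, g, b;   // a simple per-vertex color attribute (texture/shade)
    };

    struct Triangle {
        Vertex v[3];     // one polygon contributing to an object's structure
    };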
Conventional rendering techniques add visual effects to computer graphics to provide realism. The visual effects may utilize depth by layering an opaque object in such a way as to partially hide another object that is behind the opaque object so as to give the appearance of depth. The visual effects may also utilize depth by rendering an opaque object through a partially transparent or translucent item, such as when fog or mist is rendered to surround an opaque object.
To render a partially transparent or translucent visual effect, such as fog, mist, haze, colored liquid, etc., conventional rendering techniques utilize a linear or exponential algorithm that renders visibility as a function of depth. By way of example, and with reference to FIG. 1A, a visibility curve 10 is illustrated that corresponds to an algorithm for rendering a partially transparent or translucent visual effect. The curve divides the visibility that may be rendered due to a partially transparent or translucent visual effect into three visibility regions. The visibility regions include transparent region 11a, partially transparent region 11b, and opaque region 11c, which model the way that objects are perceived in the real world.
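As a rough C++ sketch only, an exponential visibility curve of this kind and its three regions might be expressed as follows; the visibility() function, the density parameter, and the region thresholds are illustrative assumptions rather than the patent's own algorithm.

    #include <algorithm>
    #include <cmath>

    // Illustrative exponential visibility curve: returns 1.0 near the
    // viewpoint (object fully visible) and falls toward 0.0 (object fully
    // obscured) as depth increases. 'density' is an assumed tuning constant.
    float visibility(float depth, float density) {
        return std::clamp(std::exp(-density * depth), 0.0f, 1.0f);
    }

    enum class Region { Transparent, PartiallyTransparent, Opaque };

    // Classify a depth into the three regions of the curve; the 0.99 and
    // 0.01 thresholds are arbitrary example values, not from the patent.
    Region classify(float depth, float density) {
        float v = visibility(depth, density);
        if (v >= 0.99f) return Region::Transparent;   // region 11a
        if (v <= 0.01f) return Region::Opaque;        // region 11c
        return Region::PartiallyTransparent;          // region 11b
    }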
When a partially transparent or translucent visual effect is to be rendered on an output device so as to appear to surround an opaque object, the viewpoint, or camera, 12 is placed at the vertical axis (i.e., depth=0), and the opaque object 13 is placed at some depth along the horizontal depth axis. The rendering of the visual effect is performed by projecting a point, such as a front primitive point of opaque object 13, onto the visibility curve 10 to obtain a visibility value (e.g., 0.4), which indicates the amount of the transparent or translucent effect that is to be blended with the front primitive point. This process is performed for each of the pixels in order to render the desired effect, as illustrated in FIG. 1A as object 13b of display screen 15b. As illustrated, when the visibility value is between 0.0 (completely opaque) and 1.0 (completely transparent), the object is rendered as being partially transparent, as illustrated by object 13b. The surrounding background 14b is rendered to have a greater depth value than object 13b, and thus the background 14b is illustrated as being completely opaque. Therefore, the foregoing process partially obscures opaque object 13b when the opaque object is positioned in the partially transparent region 11b.
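A minimal sketch of that per-pixel blend, reusing the illustrative visibility value and assuming a hypothetical Color type and applyEffect() helper, might be:

    struct Color { float r, g, b; };

    // Blend the translucent effect color into the object's color using the
    // visibility value v in [0, 1]: v = 1.0 leaves the object untouched,
    // v = 0.0 replaces it entirely with the effect (fog) color.
    Color applyEffect(const Color& object, const Color& effect, float v) {
        return { v * object.r + (1.0f - v) * effect.r,
                 v * object.g + (1.0f - v) * effect.g,
                 v * object.b + (1.0f - v) * effect.b };
    }

    // Example: a front primitive point whose depth projects to v = 0.4 keeps
    // 40% of its own color and takes 60% of the fog color.
    // Color shaded = applyEffect(objectColor, fogColor, 0.4f);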
Alternatively, if the opaque object were placed within the transparent region 11a, the points projected onto visibility curve 10 would obtain a visibility value of 1.0 for all of the projected points of the object, indicating that there is full visibility of the opaque object. As such, an output device would render the opaque object without rendering any of the visual effect. This is illustrated in FIG. 1A as opaque object 13a that is rendered on display screen 15a. However, since the surrounding background 14a is rendered to have a greater depth value than object 13a, background 14a is illustrated as being completely opaque. Thus, the foregoing process does not obscure an object 13a that is positioned in the transparent region 11a.
If the opaque object were placed within the opaque region 11c, the points projected onto visibility curve 10 would yield a visibility value of 0.0 for all of the projected points of the object, indicating that there is no visibility of the opaque object. As such, the visibility of the opaque object would be completely obscured. An output device would render the visual effect in a maximum/opaque manner, thereby preventing the rendering of the opaque object. This is illustrated in FIG. 1A as display screen 15c, which includes background 14c that is also opaque due to depth. Therefore, the foregoing process completely obscures an object that is positioned in the opaque region 11c.
Thus, as illustrated in FIG. 1A, the extent to which a partially transparent or translucent visual effect is rendered varies with depth. The linear or exponential algorithm applies the effect to the primitives having a depth value between where the effect is to start and where the effect is to end. No effect is applied to primitives having a depth value less than where the effect is to start, in order to yield full visibility. Similarly, a maximum effect is applied to primitives having a depth value greater than where the effect is to end, in order to completely obscure visibility.
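For the linear case, a sketch of this start/end behavior, with fogStart and fogEnd as assumed parameters, could be:

    #include <algorithm>

    // Linear variant: full visibility (1.0) for depths before fogStart, no
    // visibility (0.0) for depths past fogEnd, linear falloff in between.
    float linearVisibility(float depth, float fogStart, float fogEnd) {
        return std::clamp((fogEnd - depth) / (fogEnd - fogStart), 0.0f, 1.0f);
    }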
While conventional techniques adequately render the desired perception where the visual effect extends linearly or exponentially throughout the entire field of view, existing techniques have not been able to accurately render situations where the visual effect is generally contained to a specific area. Such situations arise when the visual effect is present, for example, within a translucent object, such as when fog or colored liquid is to be rendered inside an enclosed glass bottle.
By way of example, and with reference to FIG. 1B, if an opaque object 13 in a translucent bottle 18, which is filled with colored liquid (not shown), were to be rendered using existing techniques, the following steps would occur. First, the primitive on the visible surface of the opaque object 13 would be applied to the frame buffer under the assumption that the colored liquid exists between the primitive and the viewpoint 17. Next, a primitive at the front surface of the translucent bottle 18 would be applied to the same pixel of the frame buffer.
This process generates an excess contribution of the translucent visual effect (the colored liquid) that is applied to the primitive on the visible surface of object 13, since there is no colored liquid that is to be rendered between the front of the translucent bottle 18 and the viewpoint 17. Furthermore, since the primitive of the front surface of the opaque object has already been blended into the pixel value stored in the frame buffer, there is no convenient way of subtracting the excess contribution of the translucent effect.
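A small worked sketch of that excess contribution, reusing the illustrative visibility() and applyEffect() helpers above together with made-up depth values, shows why the error cannot be undone after blending:

    // Assumed, illustrative values: the bottle's front surface at depth 2.0,
    // the opaque object inside it at depth 5.0.
    float depthBottle = 2.0f;
    float depthObject = 5.0f;
    float density     = 0.3f;
    Color objectColor = { 0.8f, 0.2f, 0.2f };   // hypothetical object color
    Color liquidColor = { 0.1f, 0.4f, 0.9f };   // hypothetical colored liquid

    // Conventional technique: the visibility value is taken from the curve
    // at the object's depth, as if the colored liquid filled the entire span
    // [0, depthObject] between the viewpoint and the object.
    float v = visibility(depthObject, density);

    // No liquid actually exists in [0, depthBottle], so the fraction (1 - v)
    // of liquid color blended here is too large; once this blend is written
    // to the frame buffer, the excess cannot be subtracted back out.
    Color pixel = applyEffect(objectColor, liquidColor, v);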
Furthermore, this problem cannot be fully solved by subtracting the depth value of the opaque object from the depth value of the translucent bottle prior to generating the translucent effect value, since the translucent effect is generally not linear between the viewpoint 17 and the object 13. By way of example, and with reference to FIG. 1B, the depth value of the front surface of opaque object 13 is a known value and is illustrated as "DEPTH1." Similarly, the depth value of the front surface of the translucent bottle 18 is a known value and is illustrated as "DEPTH2." The difference (ΔD) between the depth value of the front surface of the opaque object 13 and th