Multi-resolution depth buffer
Reexamination Certificate
Date filed: 2001-04-20
Date issued: 2004-01-13
Examiner: Zimmerman, Mark (Department: 2671)
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension (C345S419000)
Patent number: 06677945
Status: active
ABSTRACT:
The present invention relates to computer graphics systems, and more specifically, to computer graphics systems that render primitives utilizing at least one frame buffer and depth buffer.
BACKGROUND OF THE INVENTION
Rendering of three-dimensional scenes requires a realistic representation of multiple objects in the field of view. Depending on the distance of each object from a given point of view (also known in 3D graphics as camera position), objects may occlude or be occluded by other objects. Even in the case of a single object, some of its parts may occlude or be occluded by other parts. Methods and apparatus used to resolve occlusions and eliminate hidden surfaces play an important role in creating realistic images of three-dimensional scenes.
To work effectively, methods of eliminating hidden surfaces have to utilize a depth resolution that is better than the minimal distance between the occluding object and the occluded object in the scene. Such methods also have to be simple enough to be implemented in low-cost graphics hardware that accelerates three-dimensional rendering, or in software-only rendering in cases where a hardware accelerator is not available.
Most popular algorithms for hidden surface elimination utilize depth buffers, or Z-buffers. Typically, a pixel rendered at a two-dimensional screen location X,Y is also associated with a particular depth value, Z. A visibility test compares this new depth value with the depth value stored in a special buffer at the location corresponding to the same X,Y coordinates. If the test passes, the pixel is considered visible and the value in the depth buffer is updated accordingly.
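The per-pixel visibility test described above can be sketched as follows. This is an illustrative fragment, not code from the patent; it assumes the common convention that smaller stored values lie closer to the camera:

```python
# Minimal sketch of a Z-buffer visibility test (illustrative; assumes
# smaller depth values are closer to the camera).

def depth_test(depth_buffer, x, y, z_new):
    """Return True (and update the buffer) if the incoming fragment at
    (x, y) is closer than the depth already stored there."""
    z_stored = depth_buffer[y][x]
    if z_new < z_stored:            # visibility test: new fragment is closer
        depth_buffer[y][x] = z_new  # update the stored depth
        return True                 # pixel is visible; color may be written
    return False                    # fragment is occluded; discard it

# Tiny 2x2 buffer initialized to the far plane (Z = 1.0).
buf = [[1.0, 1.0], [1.0, 1.0]]
depth_test(buf, 0, 0, 0.5)   # passes; buffer now holds 0.5 at (0, 0)
depth_test(buf, 0, 0, 0.8)   # fails; 0.8 is behind the stored 0.5
```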
Values of X,Y,Z are generally computed for each vertex of a triangle by transforming three-dimensional vertex coordinates from the view space (a regular three-dimensional space with the origin of the coordinates aligned with the camera position) to the screen space (a three-dimensional space with the X,Y plane parallel to the screen and distorted as a result of perspective projection). During this transformation, the actual depth of the object in the camera field of view (Zv) is mapped to the depth (Zs) in the screen space. After values of Zs are computed for every vertex of the triangle, they are linearly interpolated for every pixel during triangle rasterization. Interpolation results are compared with Zs values stored in the Z-buffer at corresponding locations to test visibility of the current pixel.
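The per-pixel interpolation step can be sketched as below. This is an illustrative fragment, not the patent's code; the function name and the single-scanline framing are assumptions:

```python
# Illustrative sketch: screen-space depth Zs is computed per vertex and
# then linearly interpolated per pixel, e.g. along one rasterized scanline.

def interpolate_zs(zs_left, zs_right, x_left, x_right):
    """Yield (x, Zs) for each pixel center on one scanline span."""
    span = x_right - x_left
    for x in range(x_left, x_right + 1):
        t = (x - x_left) / span if span else 0.0
        # Zs is linear in screen space, so a simple lerp suffices per pixel.
        yield x, zs_left + t * (zs_right - zs_left)

# Zs = 0.2 at x = 0 and Zs = 0.6 at x = 4; interior pixels get 0.3, 0.4, 0.5.
samples = dict(interpolate_zs(0.2, 0.6, 0, 4))
```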
The most popular form of mapping between Zv and Zs, which is supported by practically all hardware graphics accelerators, is known as the “screen Z-buffer” and is described in detail by W. M. Newman and R. F. Sproull in Principles of Interactive Computer Graphics, published by McGraw-Hill in 1981 and incorporated herein by reference. Newman and Sproull define the mapping between Zv and Zs as follows:
Zs = Zf/(Zf - Zn) * (1 - Zn/Zv)    (1)
where Zf and Zn are the distances from the camera to the far (Zf) and near (Zn) clipping planes that bound the view volume in the screen space.
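The mapping can be written out directly; the sketch below assumes Zs is normalized to the [0, 1] range before quantization:

```python
# Screen Z-buffer mapping: Zs = Zf/(Zf - Zn) * (1 - Zn/Zv).
# z_v is the eye-space depth; z_n and z_f are the near/far clip distances.

def screen_z(z_v, z_n, z_f):
    """Map eye-space depth Zv in [Zn, Zf] to screen-space Zs in [0, 1]."""
    return z_f / (z_f - z_n) * (1.0 - z_n / z_v)

# Sanity checks: the near plane maps to 0 and the far plane maps to 1.
print(screen_z(1.0, 1.0, 100.0))    # 0.0
print(screen_z(100.0, 1.0, 100.0))  # ≈ 1.0
# Half-way through the view volume already maps far beyond Zs = 0.5,
# illustrating the non-linearity discussed below:
print(screen_z(50.5, 1.0, 100.0))   # ≈ 0.99
```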
Non-linear mapping between Zv and Zs makes Zs less sensitive to changes in Zv that occur close to the far end of the view volume than to changes in Zv that occur close to the near end of the view volume. For example, in an instance where the ratio of distances to the far and near planes is equal to 100, a small change in Zv close to the near plane causes a larger change in Zs, by a factor of 10,000 or so, than the same amount of change in Zv close to the far plane. For example, a flight simulator application may have a range of visual distances from 1 mile for the closest point on the ground or a plane in the same formation to a 100-mile distance to the mountains near the horizon. If the total resolution of the depth buffer is 16 bits (the Zs range from 0 to 65,535, or 2^16 values), changing Zs from zero to one (i.e., close to the camera) corresponds to changing the object's distance by 0.95 inches, while changing Zs from 65,534 to 65,535 (i.e., far from the camera) corresponds to changing the object's distance by 797 feet. If the total resolution of the depth buffer is 24 bits, changes in the object's distance close to and far from the camera are decreased by a factor of 256. However, increasing the depth buffer from 16 to 24 bits increases the memory bandwidth required for depth buffer access.
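The step sizes quoted above can be reproduced numerically by inverting equation (1). The sketch below assumes a near plane at 1 mile, a far plane at 100 miles, and a 16-bit buffer, which together yield step sizes of about an inch near the camera and several hundred feet near the far plane:

```python
# Numeric check of the precision example, assuming Zn = 1 mile,
# Zf = 100 miles, and a 16-bit depth buffer (integer codes 0..65535).

def eye_depth(z_s, z_n, z_f):
    """Invert the screen Z-buffer mapping: recover Zv from Zs in [0, 1]."""
    return z_n * z_f / (z_f - z_s * (z_f - z_n))

Z_N, Z_F, LEVELS = 1.0, 100.0, 2**16 - 1

# Depth change covered by the first code step (near the camera), in inches:
near_step = (eye_depth(1 / LEVELS, Z_N, Z_F) - eye_depth(0.0, Z_N, Z_F)) * 63360
# Depth change covered by the last code step (near the far plane), in feet:
far_step = (eye_depth(1.0, Z_N, Z_F) - eye_depth((LEVELS - 1) / LEVELS, Z_N, Z_F)) * 5280

print(near_step)  # just under one inch per code step near the camera
print(far_step)   # roughly 800 feet per code step near the far plane
```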
The bandwidth required to access external buffers that store color and depth values is a scarce resource, limiting the performance of modern 3D graphics accelerators. Usually, bandwidth consumed by a depth buffer is significantly larger than that consumed by the color buffer. For instance, if 50% of the pixels are rejected after a visibility test, the depth buffer may need three times more bandwidth than the color buffer. The depth values are read for all the pixels, and are written for 50% of the pixels, while the color values are only written for 50% of the pixels.
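The 3x figure follows from simple traffic accounting; the sketch below assumes one depth read per pixel, plus one depth write and one color write for each pixel that survives the visibility test:

```python
# Back-of-the-envelope check of the bandwidth claim (equal-sized depth
# and color values assumed).

pixels = 1_000_000
pass_rate = 0.5                       # 50% of pixels pass the depth test

depth_traffic = pixels * 1.0 + pixels * pass_rate   # reads + writes
color_traffic = pixels * pass_rate                  # writes only

print(depth_traffic / color_traffic)  # 3.0: depth needs 3x the bandwidth
```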
Several conventional approaches have been developed to relax the trade-off between the resolution of the depth buffer (i.e., 16-bit/24-bit) and rendering performance. In one approach, the mapping between the distance from the camera Zv and the stored depth values is modified to increase effective depth resolution. The use of a mapping equation different from that described above can be accompanied by the use of a different storage format. For example, a depth buffer having increased resolution (for example, a 1/W buffer) is described in U.S. Pat. No. 6,046,746, entitled “Method and Apparatus Implementing High Resolution Rendition of Z-buffered Primitives”, which is incorporated herein by reference. Such a depth buffer stores values proportional to 1/Zv in floating-point format. Another type of depth buffer (a complementary Z-buffer) is described in a paper by E. Lapidous and G. Jiao, entitled “Quasi-Linear Z-buffer”, published in 1999 in the Proceedings of SIGGRAPH 99, and incorporated herein by reference. The complementary Z-buffer further increases depth buffer resolution for objects positioned far from the camera by storing values equivalent to 1−Zs in floating-point format. Yet another type of depth buffer (a W-buffer) is used by Microsoft Corporation as a reference for measuring the quality of 3D graphics accelerators; it stores values proportional to Zv, providing a constant depth precision across the view volume.
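The stored quantities for these depth-buffer variants can be compared side by side. The sketch below is illustrative only; the clip distances and the linear normalization chosen for the W-buffer are assumptions, not values from the patent:

```python
# Compare what each depth-buffer variant would store for one eye-space
# depth value (Zn = 1, Zf = 100, Zv = 50 assumed for illustration).

z_n, z_f, z_v = 1.0, 100.0, 50.0

screen_zs = z_f / (z_f - z_n) * (1.0 - z_n / z_v)  # classic screen Z-buffer
complementary = 1.0 - screen_zs                    # 1 - Zs, kept as a float
w_buffer = (z_v - z_n) / (z_f - z_n)               # linear in Zv (normalization assumed)
inv_w = 1.0 / z_v                                  # 1/W-style value, 1/Zv

print(screen_zs)      # ≈ 0.99: most Z-buffer codes are spent near the camera
print(complementary)  # ≈ 0.01: small floats keep precision far from the camera
print(w_buffer)       # ≈ 0.49: uniform precision across the view volume
```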
The utility of these depth buffers in 3D software applications is limited in that such depth buffers are optional components and may not be supported by widely available graphics hardware. Developers desiring maximum exposure of their product(s) typically design 3D applications so that they can be rendered correctly using a de-facto standard supported by almost all 3D graphics accelerators, which to date is a 24-bit screen Z-buffer conforming to equation (1) above. Even if a different type of depth buffer is supported by a particular 3D hardware accelerator, the lack of applications that exploit the improved precision limits the usefulness of this feature. Further, the use of the W-buffer as a reference also limits development of depth buffers having increased precision, since a depth buffer with precision increased relative to the W-buffer may produce a different image and be treated as erroneous by the quality-control software. Thus, 3D applications are designed to correctly render images using either the 24-bit W-buffer or the 24-bit screen Z-buffer, limiting the minimally required depth precision at each point of the view volume to the lesser of the two precision values available at that point.
In another approach, data exchange used during access to the depth buffer is modified to decrease the required bandwidth. For instance, U.S. Pat. No. 5,844,571, entitled “Z-buffer Bandwidth Reductions Via Split Transactions”, which is incorporated herein by reference, describes Z-buffer bandwidth reductions via split transactions, where the least significant
Inventors: Jiao, Guofang; Lapidous, Eugene; Zhang, Jianbo; Albertt, David L.
Attorneys: Gray Cary Ware & Freidenrich LLP; Sealey, Lance W.
Assignee: XGI Cayman, Ltd.