Ray intersection reduction using directionally classified...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension


Details

Class: C345S421000

Type: Reexamination Certificate

Status: active

Patent number: 06489955

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention generally relates to computer graphics display systems. More specifically, the invention relates to using ray tracing and backface culling techniques to reduce the number of polygon intersection tests required to effectively test a ray against a set of polygons.
Backface Culling
Backface culling is a method of reducing the number of polygons rendered by a scan converting rendering architecture. The basic premise is simple: If we assume that the polygons we render are planar and only visible from one side, then we can easily detect when a polygon is facing away from the camera and eliminate it from consideration. The end result is that the time and computational resources which would have been wasted rendering invisible polygons can be used more efficiently on visible polygons. Since most computer graphics databases consist of polygon meshes of convex objects, approximately half of the polygons are backfacing when viewed from a single perspective. Therefore, the use of this technique effectively doubles the number of polygons processed by a scan converting rendering architecture in a given amount of time.
The traditional technique for culling backfacing polygons involves computing the normal vector of the plane in which each polygon lies and computing the dot product of this normal vector with the view vector from the camera focal point to a point on the surface of the polygon. If the sign of the dot product is positive, then the polygon is facing away from the camera (backfacing) and as such can be culled.
If the operation is performed in “camera coordinates” in which the virtual camera's center of projection is the origin of the coordinate system and the virtual camera's view direction vector is equal to the positive-Z axis of the coordinate system, then the computation of the dot product reduces to a simple sign check of the Z component of the polygon's plane normal vector. If the sign of the Z component is positive, then the polygon is backfacing and can be culled, otherwise the polygon must be drawn.
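For illustration, here is a minimal C++ sketch of both forms of the test, assuming a simple Vec3 vector type (all names are illustrative rather than taken from the patent):

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // World-space test: the polygon is backfacing when its plane normal
    // points away from the camera, i.e. when the dot product with the view
    // vector (camera focal point to a point on the polygon) is positive.
    bool isBackfacingWorld(const Vec3& normal, const Vec3& focalPoint,
                           const Vec3& pointOnPolygon) {
        Vec3 view = { pointOnPolygon.x - focalPoint.x,
                      pointOnPolygon.y - focalPoint.y,
                      pointOnPolygon.z - focalPoint.z };
        return dot(normal, view) > 0.0;
    }

    // Camera-space shortcut: with the center of projection at the origin
    // and the view direction along positive Z, the dot product reduces to
    // a sign check on the Z component of the plane normal.
    bool isBackfacingCamera(const Vec3& normalInCameraSpace) {
        return normalInCameraSpace.z > 0.0;
    }

The camera-space form trades a full dot product per polygon for a single sign check, which is why culling is commonly performed after the transform into camera coordinates.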
Recent articles disclose procedures that attempt to improve the efficiency of backface culling in a scan converting rendering architecture. These articles include “Fast Backface Culling Using Normal Masks,” Zhang and Hoff, ACM Interactive 3D Graphics Conference, 1997 (Zhang, et al.), and “Hierarchical Back-Face Computation,” Kumar, Subodh et al., Proceedings of 7th Eurographics Workshop on Rendering, June 1996, pp. 231-240 (Kumar, et al.).
Zhang, et al. transforms unit normal vectors from 3D Cartesian coordinates (x, y, z) into polar coordinates (theta, phi) with an implied rho of 1.0. These 2D coordinates are used to generate the address of a single bit within a backfacing mask: a two-dimensional grid of single-bit elements, each of which corresponds to a solid angle on the unit sphere and represents all the unit normal vectors oriented within that solid angle. Any given unit vector maps to one and only one bit in the 2D mask array. All of the normals mapped to a given bit are said to belong to the “normal cluster” represented by that bit.
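One plausible realization of this mapping, assuming a square mask of MASK_DIM by MASK_DIM bits (the resolution, the boundary clamping, and all names are illustrative) and reusing the Vec3 type from the sketch above:

    #include <cmath>

    const int MASK_DIM = 64; // illustrative mask resolution

    // Map a unit normal to polar coordinates (theta, phi) with an implied
    // rho of 1.0, then quantize each angle to the row and column of the
    // single mask bit whose normal cluster contains the vector.
    void normalToClusterBit(const Vec3& n, int& row, int& col) {
        const double PI = 3.14159265358979323846;
        double theta = std::atan2(n.y, n.x); // range [-pi, pi]
        double phi   = std::acos(n.z);       // range [0, pi] for a unit vector
        col = (int)((theta + PI) / (2.0 * PI) * MASK_DIM);
        row = (int)(phi / PI * MASK_DIM);
        if (col >= MASK_DIM) col = MASK_DIM - 1; // clamp boundary values
        if (row >= MASK_DIM) row = MASK_DIM - 1;
    }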
Each time the camera changes orientation, a backfacing mask is constructed by determining, for each bit in the mask, whether all of the normals lying within the cluster are backfacing. This determination is performed by computing the dot products between the camera view vector and each of the normals at the four corners of the represented solid angle. If all of the dot products are positive, then the bit is set in the backfacing mask, indicating that all normals in the cluster would be backfacing. This process is repeated for each cluster in the backfacing mask. After the backfacing mask has been generated, the polygons can be processed in turn. Each polygon's normal vector is computed from the cross product of its first and last edge vectors and is mapped to a normal cluster on the backfacing mask. If the corresponding backfacing mask bit is set, then the polygon is culled; otherwise the polygon is rendered. The mask technique described in Zhang, et al. offers a linear improvement in performance (forty to eighty percent faster) over traditional dot product evaluation, but cannot achieve more than a one hundred percent increase in speed because each polygon must still be fetched and tested.
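The per-frame flow might then look like the following sketch, where clusterCornerNormal is a hypothetical helper standing in for the corner-normal lookup described above:

    #include <bitset>

    // Hypothetical helper: returns the unit normal at one of the four
    // corners of the solid angle covered by the cluster at (row, col).
    Vec3 clusterCornerNormal(int row, int col, int corner);

    std::bitset<MASK_DIM * MASK_DIM> backfacingMask;

    // Rebuilt each time the camera changes orientation: a bit is set only
    // when all four corner normals of its cluster face away from the camera.
    void rebuildBackfacingMask(const Vec3& viewDir) {
        for (int row = 0; row < MASK_DIM; ++row) {
            for (int col = 0; col < MASK_DIM; ++col) {
                bool allBackfacing = true;
                for (int corner = 0; corner < 4; ++corner) {
                    Vec3 n = clusterCornerNormal(row, col, corner);
                    if (dot(n, viewDir) <= 0.0) { allBackfacing = false; break; }
                }
                backfacingMask[row * MASK_DIM + col] = allBackfacing;
            }
        }
    }

    // Per polygon: map its normal to a cluster and consult the mask.
    bool cullPolygon(const Vec3& polygonNormal) {
        int row, col;
        normalToClusterBit(polygonNormal, row, col);
        return backfacingMask[row * MASK_DIM + col];
    }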
An approach advocated in Kumar, et al. groups normal vectors into a hierarchical tree of clusters based on position and orientation of polygons and their normal vectors. Each cluster divides space into three regions—the front, the back, and a mixed region—using separation planes. If the camera view point lies in the front region of a cluster, then all the polygons in the cluster are front facing. If the camera view point lies in the back region of a cluster, then all the polygons in the cluster are back facing. If the camera view point lies in the mixed region of a cluster, then sub clusters within the cluster must be evaluated because some of the polygons are front facing while others are backfacing.
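A sketch of this classification, under the simplifying assumption that each cluster stores one front and one back separating plane in the form dot(n, p) + d = 0 (the actual hierarchy in Kumar, et al. is more elaborate; all names are illustrative):

    enum Region { FRONT, BACK, MIXED };

    struct Plane { Vec3 n; double d; }; // plane equation: dot(n, p) + d = 0

    struct Cluster {
        Plane frontPlane; // eye on positive side: every polygon front facing
        Plane backPlane;  // eye on positive side: every polygon back facing
    };

    // Classify the camera position against a cluster's separating planes.
    Region classifyCluster(const Cluster& c, const Vec3& eye) {
        if (dot(c.frontPlane.n, eye) + c.frontPlane.d > 0.0) return FRONT; // draw all
        if (dot(c.backPlane.n, eye) + c.backPlane.d > 0.0)  return BACK;   // cull all
        return MIXED; // descend into the sub clusters and repeat the test
    }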
This technique tests each cluster as a whole against the camera position and direction vectors without requiring that each triangle be explicitly fetched. In addition, this algorithm attempts to make use of frame-to-frame coherence. This algorithm does not eliminate one hundred percent of the backfacing polygons, but it eliminates between sixty and one hundred percent of these polygons, depending upon the polygon database.
Because the technique described in Kumar, et al. does not require each triangle to be tested, it is said to be a sublinear algorithm and as such has the potential to achieve an increase in speed of greater than one hundred percent. In practice, the algorithm achieves an increase in speed of between thirty and seventy percent when employed in a scan converting rendering architecture. This is because this algorithm significantly limits other optimizations, such as state sorting and vertex sharing, which are of critical importance to a scan converting architecture.
Ray Tracing
Ray tracing, also referred to as ray casting, is a technique employed in the field of computer graphics for determining what is visible from a vantage point along a particular line of sight. It was first reported as a technique for generating images in “Some Techniques for Shading Machine Renderings of Solids,” Appel, AFIPS 1968 Spring Joint Computer Conference, 32, 37-45 (1968) (Appel). Many improvements have been published, including support for reflections and shadows, soft shadows and motion blur, and indirect illumination and caustics. These improvements are discussed in “An Improved Illumination Model for Shaded Display,” Whitted, Communications of the ACM, Volume 23, Number 6, June 1980 (Whitted); “Distributed Ray Tracing,” Cook, Porter and Carpenter, Computer Graphics 18(3), July 1984, pp. 137-145 (Cook, et al.); and “The Rendering Equation,” Kajiya, Computer Graphics 20(4), August 1986, pp. 269 (Kajiya).
Ray tracing has also been used to compute form factors for iterative thermal transfer and radiosity computations [Wallace89]. Ray tracing is the most sophisticated visibility technique in the field of computer graphics, but it is also the most computationally expensive.
A ray is a half line of infinite length: it originates at a point in space described by a position vector and travels from that point along a direction vector. Ray tracing is used in computer graphics to determine visibility by directing one or more rays from a vantage point described by the ray's position vector along a line of sight described by the ray's direction vector. Determining the location of the nearest visible surface along that line of sight requires that the ray be effectively tested for intersection against all the geometry within the virtual scene, retaining the nearest intersection.
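In code, such a ray and the exhaustive nearest-intersection search could be sketched as follows, where intersect is a hypothetical stand-in for any ray/polygon intersection routine and Vec3 is reused from the earlier sketches:

    #include <vector>
    #include <limits>

    struct Polygon { std::vector<Vec3> vertices; }; // placeholder geometry

    struct Ray {
        Vec3 origin;    // position vector: where the half line begins
        Vec3 direction; // direction vector: the line of sight
    };

    // Hypothetical routine: returns true on a hit and writes the parametric
    // distance t along the ray to the intersection point.
    bool intersect(const Ray& ray, const Polygon& poly, double& t);

    // Exhaustive visibility: test the ray against every polygon in the
    // scene and retain only the nearest intersection in front of the origin.
    const Polygon* nearestVisible(const Ray& ray, const std::vector<Polygon>& scene) {
        const Polygon* nearest = nullptr;
        double tMin = std::numeric_limits<double>::infinity();
        for (const Polygon& poly : scene) {
            double t;
            if (intersect(ray, poly, t) && t > 0.0 && t < tMin) {
                tMin = t;
                nearest = &poly;
            }
        }
        return nearest;
    }

Testing every polygon this way is what makes ray tracing so expensive, which is why techniques that shrink the candidate set before any intersection test runs are valuable.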
An alternative to scan conversion for rendering an image involves directing one or more eye rays through each pixel in the image from the center of projection or points on
