Bucket-sorting graphical rendering apparatus and method

Computer graphics processing and selective visual display system – Computer graphics display memory system – Memory allocation

Reexamination Certificate


Details

U.S. classes: C345S581000, C345S591000, C345S686000
Type: Reexamination Certificate
Status: Active
Patent number: 06828978


BACKGROUND OF THE INVENTION
1. The Field of the Invention
The present invention relates generally to graphical rendering devices and systems. Specifically, the invention relates to devices and systems for conducting highly realistic three-dimensional graphical renderings.
2. The Relevant Art
Graphical rendering involves the conversion of one or more object descriptions to a set of pixels that are displayed on an output device such as a video display or image printer. Object descriptions are generally mathematical representations that model or represent the shape and surface characteristics of the displayed objects. Graphical object descriptions may be created by sampling real world objects and/or by creating computer-generated objects using various editors.
In geometric terms, rendering requires representing or capturing the details of graphical objects from the viewer's perspective to create a two-dimensional scene or projection representing the viewer's perspective in three-dimensional space. The two-dimensional rendering facilitates viewing the scene on a display device or means such as a video monitor or printed page.
A primary objective of object modeling and graphical rendering is realism, i.e., a visually realistic, life-like representation. Many factors impact realism, including surface detail, lighting effects, display resolution, display rate, and the like. Due to the complexity of real-world scenes, graphical rendering systems are known to have an insatiable thirst for processing power and data throughput. Currently available rendering systems lack the performance necessary to produce photo-realistic renderings in real time.
To increase rendering quality and reduce storage requirements, surface details are often separated from the object shape and are mapped onto the surfaces of the object during rendering. The object descriptions including surface details are typically stored digitally within a computer memory or storage medium and referenced when needed.
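By way of illustration only, the following C++ sketch shows one way separately stored surface detail might be referenced at render time with a simple nearest-texel lookup; the Texture structure, the normalized (u, v) parameterization, and the field names are assumptions made for this example and are not taken from the patent.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Illustrative color and texture types (assumed, not the patent's representation).
    struct Color { std::uint8_t r, g, b; };

    struct Texture {
        int width = 0, height = 0;
        std::vector<Color> texels;   // surface detail stored separately from the object shape

        // Sample the stored surface detail at normalized (u, v) coordinates in [0, 1].
        Color sample(float u, float v) const {
            int x = std::clamp(static_cast<int>(u * width), 0, width - 1);
            int y = std::clamp(static_cast<int>(v * height), 0, height - 1);
            return texels[y * width + x];
        }
    };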
One common method of representing three-dimensional objects involves combining simple graphical objects into a more realistic composite model or object. The simple graphical objects, from which composite objects are built, are often referred to as primitives. Examples of primitives include triangles, surface patches such as bezier patches, and voxels.
Voxels are volume elements, typically cubic in shape, that represent a finite, three-dimensional space similar to bitmaps in two-dimensional space. Three-dimensional objects may be represented using a primitive comprising a three-dimensional array of voxels. A voxel object is created by assigning a color and a surface normal to certain voxel locations within the voxel array while marking other locations as transparent.
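A minimal sketch of such a voxel array, assuming a dense three-dimensional layout in C++, appears below; the field names, the 8-bit color, and the use of an opacity flag to mark transparent locations are illustrative assumptions rather than the specific structure used by the invention.

    #include <cstdint>
    #include <vector>

    // Illustrative voxel: a color, a surface normal, and a flag marking the
    // location as occupied or transparent (assumed layout, not from the patent).
    struct Voxel {
        std::uint8_t r = 0, g = 0, b = 0;    // assigned voxel color
        float nx = 0.f, ny = 0.f, nz = 0.f;  // assigned surface normal
        bool opaque = false;                 // false marks the location as transparent
    };

    // Dense three-dimensional array of voxels indexed by (x, y, z).
    struct VoxelObject {
        int dim;                   // voxels per side
        std::vector<Voxel> cells;  // dim * dim * dim entries

        explicit VoxelObject(int n) : dim(n), cells(n * n * n) {}

        Voxel& at(int x, int y, int z) { return cells[(z * dim + y) * dim + x]; }
        const Voxel& at(int x, int y, int z) const { return cells[(z * dim + y) * dim + x]; }
    };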
Voxel objects reduce the geometry bandwidth and processing requirements associated with rendering. For example, objects represented with voxels typically have smaller geometry transform requirements than similar objects constructed from triangles. Despite this advantage, existing voxel rendering algorithms are typically complex and extremely hardware-intensive. A fast algorithm for rendering voxel objects with low hardware requirements would reduce the geometry processing and geometry bandwidth requirements of rendering by allowing certain objects to be represented by voxel objects instead of many small triangles.
As mentioned, rendering involves creating a two-dimensional projection representing the viewer's perspective in a three-dimensional space. One common method of creating a two-dimensional projection involves performing a geometric transform on the primitives that comprise the various graphical objects within a scene. A geometric transform converts the coordinates representing objects from an abstract space, known as world space, into actual device coordinates such as screen coordinates.
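For illustration, one conventional form of such a transform is sketched below in C++, assuming a combined 4x4 world-view-projection matrix in row-major order and a simple viewport mapping; the matrix convention and parameter names are assumptions for this example only.

    #include <array>

    struct Vec3 { float x, y, z; };

    // Assumed combined world-view-projection matrix, stored row-major.
    using Mat4 = std::array<float, 16>;

    // Transform a world-space point into device (screen) coordinates for a
    // width x height output device.
    Vec3 worldToScreen(const Mat4& m, Vec3 p, int width, int height) {
        float x = m[0] * p.x + m[1] * p.y + m[2]  * p.z + m[3];
        float y = m[4] * p.x + m[5] * p.y + m[6]  * p.z + m[7];
        float z = m[8] * p.x + m[9] * p.y + m[10] * p.z + m[11];
        float w = m[12] * p.x + m[13] * p.y + m[14] * p.z + m[15];
        x /= w; y /= w; z /= w;  // perspective divide to normalized coordinates
        return { (x * 0.5f + 0.5f) * width,            // map [-1, 1] to pixel columns
                 (1.0f - (y * 0.5f + 0.5f)) * height,  // flip y for screen space
                 z };                                  // retained as the pixel z-depth
    }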
After a primitive such as a triangle has been transformed to a device coordinate system, pixels are generated for each pixel location covered by that primitive. The process of converting graphical objects to pixels is sometimes referred to as rasterization or pixelization. Texture information may be accessed in conjunction with pixelization to determine the color of each pixel. Because more than one primitive may cover any given location, a z-depth is also calculated for each generated pixel and is used to determine which pixels are visible to the viewer.
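The visibility determination described above is often carried out with a depth (z) buffer; a minimal sketch, assuming a simple per-pixel color and depth store with a closer-wins comparison, is given below. The buffer layout and names are illustrative assumptions and do not describe the bucket-sorting apparatus itself.

    #include <cstdint>
    #include <limits>
    #include <vector>

    struct Rgb { std::uint8_t r, g, b; };

    // Per-pixel color and depth storage for a width x height device (assumed layout).
    struct FrameBuffer {
        int width, height;
        std::vector<Rgb> color;
        std::vector<float> depth;  // z-depth of the closest pixel written so far

        FrameBuffer(int w, int h)
            : width(w), height(h), color(w * h, Rgb{0, 0, 0}),
              depth(w * h, std::numeric_limits<float>::infinity()) {}

        // Write a generated pixel only if it is closer to the viewer than whatever
        // primitive previously covered this location.
        void writePixel(int x, int y, float z, Rgb c) {
            int i = y * width + x;
            if (z < depth[i]) {
                depth[i] = z;
                color[i] = c;
            }
        }
    };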
FIGS. 1a and 1b depict a simplified example of graphical rendering. Referring to FIG. 1a, a graphical object 100 may be rendered by sampling attributes such as object color, texture, and reflectivity at discrete points on the object. The sampled points correspond to device-oriented regions, typically round or rectangular in shape, known as pixels 102. The distance between the sampled points is referred to herein as a sampling interval 104. The sampled attributes, along with surface orientation (i.e., a surface normal), are used to compute a rendered color 108 for each pixel 102. The rendered colors 108 of the pixels 102 preferably represent what a perspective viewer 106 would see from a particular distance and orientation relative to the graphical object 100.
As mentioned, the attributes collected by sampling the graphical object 100 are used to compute the rendered color 108 for each pixel 102. The rendered color 108 differs from the object color due to shading, lighting, and other effects that change what is seen from the perspective of the viewer 106. The rendered color 108 may also be constrained by the selected rendering device. The rendered color may be represented by a set of numbers 110 designating the intensity of each of the component colors of the selected rendering device, such as red, green, and blue on a video display or cyan, magenta, yellow, and black on an inkjet printer.
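As a simple illustration, the set of numbers 110 might be stored as per-channel intensities for the selected device; the 8-bit range below is an assumed encoding and not a limitation of the patent.

    #include <cstdint>

    // Assumed 8-bit-per-channel encodings of a rendered color for two devices.
    struct RgbColor  { std::uint8_t r, g, b; };     // video display components
    struct CmykColor { std::uint8_t c, m, y, k; };  // inkjet printer components

    // Example: the same rendered mid-gray expressed for each device.
    const RgbColor  displayGray = { 128, 128, 128 };
    const CmykColor printerGray = { 0, 0, 0, 128 };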
As the graphical object 100 is rendered with each frame, the positioning and spacing of the discrete sampling points (i.e., the pixels 102) projected onto the graphical object 100 determine what is seen by the perspective viewer 106. One method of rendering, referred to as ray tracing, involves determining the position of the discrete sampling points by extending a grid 111 of rays 112 from a focal point 114 to find the closest primitive each ray intersects. Since the rays 112 are diverging, the spacing between the rays 112, and therefore the size of the grid 111, increases with increasing distance. Ray tracing, while precise and accurate, is generally not used in real-time rendering systems due to the computational complexity of currently available ray tracing algorithms.
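A minimal sketch of generating such a diverging grid of rays from a focal point is given below, assuming a pinhole camera looking down the negative z axis with a symmetric field of view; the camera model and parameter names are assumptions made for illustration.

    #include <cmath>
    #include <vector>

    struct Vec3f { float x, y, z; };
    struct Ray { Vec3f origin, dir; };

    // Build an nx x ny grid of rays through an image plane one unit in front of
    // the focal point 114 (assumed pinhole camera looking down -z).
    std::vector<Ray> makeRayGrid(Vec3f focalPoint, float fovRadians, int nx, int ny) {
        std::vector<Ray> rays;
        float halfWidth = std::tan(fovRadians * 0.5f);
        float halfHeight = halfWidth * static_cast<float>(ny) / static_cast<float>(nx);
        for (int j = 0; j < ny; ++j) {
            for (int i = 0; i < nx; ++i) {
                // Regularly spaced sample positions on the image plane; because the
                // directions diverge, ray spacing grows with distance from the focal point.
                float u = ((i + 0.5f) / nx * 2.0f - 1.0f) * halfWidth;
                float v = ((j + 0.5f) / ny * 2.0f - 1.0f) * halfHeight;
                rays.push_back({ focalPoint, { u, v, -1.0f } });
            }
        }
        return rays;
    }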
The grid 111, depicted in FIG. 1a, is a set of regularly spaced points corresponding to the pixels 102. The points of the grid 111 lie in an image plane perpendicular to a ray axis 115. The distance of each pixel 102 from a reference plane perpendicular to the ray axis 115, such as the grid 111, is known as the pixel depth or z-depth. The distance or depth of the graphical object 100 changes the level of detail seen by the perspective viewer 106. Relatively distant objects cover a smaller rendering area on the display device, resulting in a reduced number of rays 112 that reach the graphical object 100, and an increased sampling interval 104.
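Under the assumed pinhole geometry sketched above, the world-space sampling interval grows linearly with depth; a small sketch of this relationship, with fov and nx taken as illustrative parameters, follows.

    #include <cmath>

    // World-space sampling interval 104 at depth z for an assumed pinhole camera:
    // the image plane spans 2 * z * tan(fov / 2) horizontally at depth z, divided
    // among nx samples, so doubling the distance doubles the interval.
    float samplingInterval(float z, float fovRadians, int nx) {
        return 2.0f * z * std::tan(fovRadians * 0.5f) / static_cast<float>(nx);
    }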
Visual artifacts occur when the spacing between the rays 112 results in the sampling interval 104 being too large to faithfully capture the details of the graphical object 100. A number of methods have been developed to eliminate visual artifacts related to large sampling intervals. One method, known as super-sampling, involves rendering the scene at a higher resolution than the resolution used by the output device, followed by a smoothing or averaging operation to combine multiple rendered pixels into a single output pixel.
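The averaging step of super-sampling can be sketched as a simple box filter, assuming the scene was rendered at factor x factor times the output resolution; the floating-point buffer layout below is an assumption for illustration.

    #include <vector>

    struct Rgbf { float r, g, b; };

    // Combine each factor x factor block of high-resolution pixels into a single
    // output pixel by averaging (a simple box filter; other filters are possible).
    std::vector<Rgbf> downsample(const std::vector<Rgbf>& hi, int hiW, int hiH, int factor) {
        int loW = hiW / factor, loH = hiH / factor;
        std::vector<Rgbf> lo(loW * loH, Rgbf{0.f, 0.f, 0.f});
        for (int y = 0; y < loH; ++y) {
            for (int x = 0; x < loW; ++x) {
                Rgbf sum{0.f, 0.f, 0.f};
                for (int dy = 0; dy < factor; ++dy)
                    for (int dx = 0; dx < factor; ++dx) {
                        const Rgbf& p = hi[(y * factor + dy) * hiW + (x * factor + dx)];
                        sum.r += p.r; sum.g += p.g; sum.b += p.b;
                    }
                float n = static_cast<float>(factor * factor);
                lo[y * loW + x] = { sum.r / n, sum.g / n, sum.b / n };
            }
        }
        return lo;
    }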
Another method, developed to represent objects at various distances and sampling intervals faithfully, involves creating multiple models of a given object
