System and method for producing an antialiased image using a...

Computer graphics processing and selective visual display system – Computer graphic processing system – Plural graphics processors

Reexamination Certificate


Details

US classification: C345S419000, C345S421000, C345S423000, C345S629000, C345S582000
Status: active
Patent number: 06633297

ABSTRACT:

The present invention relates generally to computer graphics, and more particularly to a system and method for reducing memory and processing bandwidth requirements of a computer graphics system by using a buffer in a graphics pipeline to merge selected image fragments before they reach a frame buffer.
BACKGROUND OF THE INVENTION
Many computer graphics systems use pixels to define images. The pixels are arranged on a display screen, such as a raster display, as a rectangular array of points. Two-dimensional (2D) and three-dimensional (3D) scenes are drawn on the display by selecting the light intensity and the color of each of the display's pixels; such drawing is referred to as rendering.
Rendering a scene involves many steps. One rendering step is rasterization. A scene is made up of objects. For example, in a scene of a kitchen, the objects include a refrigerator, counters, stove, etc. Rasterization is the process that, for each object in the scene, (1) identifies the subset of the display's pixels that lie within the object and then, for each pixel in this subset, (2) identifies the information that is later used to determine the color and intensity to assign to that pixel. Rasterization of an object generates a fragment for each pixel the object either fully or partially covers, and the information identified in (2) above is called fragment data.
A scene may be composed of arbitrarily complex objects. Before such a scene is rendered by a computer system, a process called tessellation decomposes the complex objects into simpler (primitive), planar objects. Typically, systems decompose the complex objects into triangles. For example, polygons with four or more vertices are decomposed into two or more triangles, and curved surfaces, such as the surface of a sphere, are approximated by a set of triangles. These triangles are then rasterized. Although with minor modifications the invention could work with primitives having more sides, for example quadrilaterals, hereafter we assume that all surfaces are tessellated into triangles. “Primitives” with more sides will arise only as a consequence of merging fragments from two or more triangles.
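To make the decomposition step concrete, the sketch below shows one simple way a convex polygon can be split into triangles by fanning from its first vertex. The function and types are hypothetical illustrations, not the tessellator described in this patent.

```cpp
#include <vector>

struct Vertex { float x, y, z; };
struct Triangle { Vertex a, b, c; };

// Hypothetical fan triangulation: splits a convex polygon with n >= 3
// vertices into n - 2 triangles that all share the first vertex.
std::vector<Triangle> tessellateConvexPolygon(const std::vector<Vertex>& poly) {
    std::vector<Triangle> tris;
    for (std::size_t i = 1; i + 1 < poly.size(); ++i) {
        tris.push_back({poly[0], poly[i], poly[i + 1]});
    }
    return tris;
}
```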
In FIG. 1, a tessellated surface 30 has three primitive objects: triangle one 32-1, triangle two 32-2 and triangle three 32-3. The edges of the tessellated surface 30 are depicted with wide lines. To illustrate the rasterization process, the tessellated surface 30 is superimposed on an exemplary pixel grid 40. Each pixel 42 of the pixel grid 40 is represented by a square. The rasterization process generates a fragment for each primitive object that is superimposed on a pixel 42.
In the rasterization process, a finite array of discrete points, each point representing the center of a pixel of the display device, is used to construct a regular grid, for example the pixel grid 40. To construct such a grid, a filter kernel is placed over each of the discrete points. The two-dimensional bounding shape of the portion of the filter that has non-zero weight is sometimes called the support in signal processing theory, but is commonly referred to as the footprint. In the general case, the filter footprints of neighboring pixels overlap each other and thus intersect. Typically, hardware-based rasterizers use filter footprints that are 1×1 pixel squares and thus do not overlap. Such a filter was used to create pixel grid 40. Each square in pixel grid 40 is the filter footprint of a 1×1 pixel square filter placed over the discrete pixel point at the center of the square. This pixel grid 40 is used to generate fragments.
The fragments of an object are obtained by projecting the object onto the pixel grid. A fragment is then generated for a given pixel if the footprint of the filter located over the pixel intersects the object. To illustrate the rasterization process, rasterization of the three triangles 32 yields a number of fragments for each triangle 32. Within each pixel 42, the number enclosed by a circle is the number of fragments that are generated for that pixel on behalf of one or more primitive objects. For example, since tessellated surface 30 does not cover pixel 42-1, no fragments are associated with pixel 42-1. Since triangle 32-2 partially covers pixel 42-2, one fragment 44 is associated with pixel 42-2. Since all three triangles 32-1, 32-2 and 32-3 partially cover pixel 42-3, three fragments 46 are generated for pixel 42-3. Because none of the three fragments 46-1, 46-2, 46-3 fully cover pixel 42-3, pixel 42-3 is displayed with a color that is a combination of the three fragments 46-1, 46-2, 46-3 and the background color.
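The following C++ sketch illustrates this kind of per-pixel fragment generation under simplifying assumptions: it tests only the pixel center against the triangle's edge functions, whereas the rasterizer described above tests the pixel's full 1×1 filter footprint, so partially covered border pixels would also receive fragments. All names and types are hypothetical.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Point { float x, y; };
struct Fragment { int px, py; };   // hypothetical fragment carrying only its pixel coordinates

// Signed edge function for edge (a -> b); positive when p is to the left of the edge.
static float edge(const Point& a, const Point& b, const Point& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Simplified rasterizer sketch: walks the triangle's bounding box and emits a
// fragment for every pixel whose center lies inside the triangle.
std::vector<Fragment> rasterize(Point v0, Point v1, Point v2, int width, int height) {
    std::vector<Fragment> frags;
    int xmin = std::max(0, (int)std::floor(std::min({v0.x, v1.x, v2.x})));
    int xmax = std::min(width - 1, (int)std::ceil(std::max({v0.x, v1.x, v2.x})));
    int ymin = std::max(0, (int)std::floor(std::min({v0.y, v1.y, v2.y})));
    int ymax = std::min(height - 1, (int)std::ceil(std::max({v0.y, v1.y, v2.y})));
    for (int y = ymin; y <= ymax; ++y) {
        for (int x = xmin; x <= xmax; ++x) {
            Point c{x + 0.5f, y + 0.5f};                    // pixel center
            float e0 = edge(v0, v1, c), e1 = edge(v1, v2, c), e2 = edge(v2, v0, c);
            bool inside = (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                          (e0 <= 0 && e1 <= 0 && e2 <= 0);  // accept either winding order
            if (inside) frags.push_back({x, y});
        }
    }
    return frags;
}
```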
The grid 40 depicts the filter footprints obtained by locating a filter with a 1×1 pixel square footprint over each pixel center in the pixel grid. For example, square 48 in grid 40 represents the footprint of the filter that is centered over the point in the pixel grid that corresponds to pixel 50. The color and intensity of a fragment is obtained by sampling the object's color and intensity at each point of intersection with the pixel's filter footprint, weighing each sample by the value of the filter at the corresponding point, and accumulating the results.
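As a rough illustration of that accumulation step (assuming, for simplicity, that the result is normalized by the total filter weight, which the text above does not specify), a sketch might look like this:

```cpp
// One color sample taken inside the filter footprint, together with the
// filter's weight at that sample location.
struct Sample { float r, g, b, weight; };

// Hypothetical accumulation of a fragment's color: each sample is scaled by
// its filter weight, the results are summed, and the sum is normalized by the
// total weight (normalization is an assumption made for this sketch).
Sample accumulate(const Sample* samples, int count) {
    Sample out{0.0f, 0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i) {
        out.r += samples[i].r * samples[i].weight;
        out.g += samples[i].g * samples[i].weight;
        out.b += samples[i].b * samples[i].weight;
        out.weight += samples[i].weight;
    }
    if (out.weight > 0.0f) {
        out.r /= out.weight;
        out.g /= out.weight;
        out.b /= out.weight;
    }
    return out;
}
```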
After rasterization, texture mapping is typically applied. Texture mapping is a technique for shading surfaces of objects with texture patterns, thereby increasing the realism of the rendered scene. Texture mapping is applied to the fragments that correspond to objects for which texture mapping has been specified by the person who designed the scene. Texture mapping produces color information that either is combined with the fragment's existing color information or replaces it.
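A minimal sketch of that combine-or-replace choice, using modulation as one common way of combining texture and fragment color (the mode names and function here are assumptions, not the patent's texture pipeline):

```cpp
struct Color { float r, g, b; };

// Hypothetical texture application modes: replace the fragment color outright,
// or combine with it (here, by component-wise multiplication).
enum class TexMode { Replace, Modulate };

Color applyTexture(Color fragment, Color texel, TexMode mode) {
    if (mode == TexMode::Replace) {
        return texel;                                  // texture replaces the fragment color
    }
    return {fragment.r * texel.r,                      // modulate: one common combining rule
            fragment.g * texel.g,
            fragment.b * texel.b};
}
```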
Once the color information is known for a fragment, the frame buffer is updated. In this step, each newly-generated fragment is either added to or blended with previously-generated fragments that correspond to the same pixel. The frame buffer stores up to N fragments per pixel, where N is greater than or equal to one. When a new fragment f is generated for a pixel P, the frame buffer replaces one of pixel P's existing fragments with the new fragment f, blends fragment f with one of the existing fragments, or stores fragment f with the existing fragments if fewer than N fragments are currently stored. In such systems, the displayed color of a pixel is obtained by blending together the new fragment f with up to N stored fragments.
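The sketch below illustrates such a per-pixel store holding at most N fragments. The victim-selection and blending policies are placeholders chosen only to show the three possible outcomes (store, blend, replace); the actual rules of the described system are not reproduced here.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Frag { float r, g, b, a, depth; };

// Hypothetical per-pixel fragment store: a new fragment is stored if fewer
// than N fragments are present; otherwise it either blends with or replaces
// an existing fragment.
struct PixelFragments {
    static constexpr std::size_t N = 4;   // maximum fragments kept per pixel (N >= 1)
    std::vector<Frag> frags;

    void insert(const Frag& f) {
        if (frags.size() < N) {           // store: room remains for another fragment
            frags.push_back(f);
            return;
        }
        std::size_t victim = 0;           // placeholder policy: always use the first slot
        if (f.a < 1.0f) {                 // translucent: blend with an existing fragment
            Frag& e = frags[victim];
            e.r = f.r * f.a + e.r * (1.0f - f.a);
            e.g = f.g * f.a + e.g * (1.0f - f.a);
            e.b = f.b * f.a + e.b * (1.0f - f.a);
            e.depth = std::min(e.depth, f.depth);
        } else {
            frags[victim] = f;            // opaque: replace an existing fragment
        }
    }
};
```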
Because rasterization of a scene typically yields many fragments for each pixel, the texture-mapping stage and frame buffer often process multiple fragments for the same pixel. In many cases, fragments from two or more adjoining triangles that cover the same pixel may have nearly identical color and depth values because the fragments belong to the same tessellated surface.
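A decision of the kind motivated here, that two fragments are "nearly identical", could, purely as an illustration, compare color and depth against small tolerances. The tolerance values and structure below are assumptions, not the criteria used by the described system.

```cpp
#include <cmath>

struct FragSample { float r, g, b, depth; };

// Generic sketch of a "nearly identical" test between two fragments from
// adjoining triangles that cover the same pixel. The tolerances are arbitrary
// illustration values.
bool nearlyIdentical(const FragSample& a, const FragSample& b,
                     float colorTol = 1.0f / 255.0f, float depthTol = 1e-5f) {
    return std::fabs(a.r - b.r) <= colorTol &&
           std::fabs(a.g - b.g) <= colorTol &&
           std::fabs(a.b - b.b) <= colorTol &&
           std::fabs(a.depth - b.depth) <= depthTol;
}
```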
Artifacts are distortions in the displayed image. One source of artifacts is aliasing. Aliasing occurs because the image is sampled at discrete pixel locations. Artifacts can appear when an entire pixel is given a light intensity or color based on an insufficient sampling of points within that pixel. To reduce aliasing effects, a pixel can be sampled at several subpixel locations, each of which contributes color data used to generate the composite color of that pixel.
As shown in FIG. 2, the filter is typically evaluated at a predefined number of discrete points 56 within the footprint. Typically, from four to thirty-two sample points are used. In one approach to sampling, sparse supersampling, these points are “staggered” on a fine grid. For example, the filter for the pixel 50 is sampled at four points 56, labeled S1, S2, S3, and S4, chosen from a 4×4 array 60 aligned to the center 62 of the pixel 50. The term coverage mask refers to the data that records, for the sample points 56 assoc
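Although the paragraph above is cut off, the coverage-mask idea can be sketched as follows: for each of a pixel's sample points, set a bit when that point falls inside the triangle. The four sample offsets below are illustrative staggered positions on a subgrid, not the specific pattern of FIG. 2, and all names are hypothetical.

```cpp
#include <cstdint>

struct Pt { float x, y; };

// Signed edge function: positive when p is to the left of the edge (a -> b).
static float edgeFn(Pt a, Pt b, Pt p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Sketch of building a four-bit coverage mask for one pixel: bit i is set when
// sample point i falls inside the triangle (v0, v1, v2). Offsets are in pixel
// units, relative to the pixel's lower-left corner.
std::uint8_t coverageMask(Pt v0, Pt v1, Pt v2, int px, int py) {
    const Pt offsets[4] = { {0.125f, 0.375f}, {0.375f, 0.875f},
                            {0.625f, 0.125f}, {0.875f, 0.625f} };
    std::uint8_t mask = 0;
    for (int i = 0; i < 4; ++i) {
        Pt s{px + offsets[i].x, py + offsets[i].y};
        float e0 = edgeFn(v0, v1, s), e1 = edgeFn(v1, v2, s), e2 = edgeFn(v2, v0, s);
        bool inside = (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                      (e0 <= 0 && e1 <= 0 && e2 <= 0);   // accept either winding order
        if (inside) mask |= static_cast<std::uint8_t>(1u << i);
    }
    return mask;
}
```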
