Title: System, method and article of manufacture for pixel shaders...
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
Type: Reexamination Certificate
Filed: 2000-05-31
Issued: 2003-03-11
Examiner: Vo, Cliff N. (Department: 2671)
Status: active
Patent number: 06532013
ABSTRACT:
RELATED APPLICATION(S)
This application is related to a co-pending application entitled “GRAPHICS PIPELINE INCLUDING COMBINER STAGES,” filed Mar. 22, 1999, naming David B. Kirk, Matthew Papakipos, Shaun Ho, Walter Donovan, and Curtis Priem as inventors and issued as U.S. Pat. No. 6,333,744, which is incorporated herein by reference in its entirety.
1. Field of the Invention
The present invention relates to computer graphics, and more particularly to texture sampling in a computer graphics processing pipeline.
2. Background of the Invention
Recent advances in computer performance have enabled graphic systems to provide more realistic graphical images using personal computers and home video game computers. In such graphic systems, some procedure must be implemented to “render” or draw graphic primitives to the screen of the system. A “graphic primitive” is a basic component of a graphic picture, such as a polygon, e.g., a triangle, or a vector. All graphic pictures are formed with combinations of these graphic primitives. Many procedures may be utilized to perform graphic primitive rendering.
Early graphic systems displayed images representing objects having extremely smooth surfaces. That is, textures, bumps, scratches, or other surface features were not modeled. In order to improve the quality of the image, texture mapping was developed to model the complexity of real world surface images. In general, texture mapping is the mapping of an image or a function onto a surface in three dimensions. Texture mapping is a relatively efficient technique for creating the appearance of a complex image without the tedium and the high computational cost of rendering the actual three dimensional detail that might be found on a surface of an object.
Prior Art FIG. 1 illustrates a graphics pipeline with which texture mapping may be performed. As shown, included are a transform engine 100, a set-up module 102, a rasterizer 104, a texture math module 106, a level of detail (LOD) calculator 108, a texture fetch module 110, a texture filter 112, and a texture combination engine 114. It should be noted that the transform engine 100 and set-up module 102 need not necessarily be required in the graphics pipeline of a graphics integrated circuit.
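To make the data flow concrete, the following is a minimal sketch of one pass through a pipeline with the topology of Prior Art FIG. 1. The stage functions, types, and placeholder bodies are illustrative assumptions and do not appear in the patent; only the ordering of the stages follows the figure.

```cpp
// A minimal sketch of the linear pipeline of FIG. 1. Stage comments follow the
// figure's reference numerals; all stage bodies are placeholder assumptions,
// kept only to show that each pass makes exactly one trip through the chain.
#include <vector>

struct Vertex   { float x, y, z, w; };
struct Fragment { float sx, sy, u, v; };
struct Color    { float r, g, b, a; };

// 100: transform engine -- placeholder identity transform
std::vector<Vertex> transformStage(std::vector<Vertex> v) { return v; }
// 102: triangle setup -- placeholder pass-through
std::vector<Vertex> setupStage(std::vector<Vertex> v) { return v; }
// 104: rasterizer -- placeholder producing one fragment per vertex
std::vector<Fragment> rasterizeStage(const std::vector<Vertex>& verts) {
    std::vector<Fragment> out;
    for (const Vertex& p : verts) out.push_back({p.x, p.y, 0.0f, 0.0f});
    return out;
}
// 106/108/110/112: texture address math, LOD, fetch, filter -- constant texel
Color fetchAndFilter(const Fragment&) { return {1.0f, 1.0f, 1.0f, 1.0f}; }
// 114: combiner -- placeholder that simply returns the filtered texel
Color combineStage(const Fragment&, const Color& tex) { return tex; }

void renderPass(const std::vector<Vertex>& modelVerts) {
    auto fragments = rasterizeStage(setupStage(transformStage(modelVerts)));
    for (const Fragment& f : fragments) {
        Color texel  = fetchAndFilter(f);        // one texture fetch per pass
        Color result = combineStage(f, texel);
        (void)result;                            // framebuffer write omitted
    }
}
```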
During operation, the transform engine 100 may be used to perform scaling, rotation, and projection of a set of three dimensional vertices from their local or model coordinates to the two dimensional window that will be used to display the rendered object. The setup module 102 utilizes the world space coordinates provided for each triangle to determine the two dimensional coordinates at which those vertices are to appear on the two dimensional window. Prior Art FIG. 2 illustrates the coordinates 200 of the vertices 201 which define a triangle 202. Once the vertices 201 of the triangle 202 are known in screen space, the pixel positions along the scan lines within the triangle 202 may be determined.
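The kind of computation performed by the transform and setup stages can be sketched as follows, assuming a conventional combined model-view-projection matrix, a perspective divide, and a viewport mapping; these particular formulas are illustrative assumptions rather than text from the patent.

```cpp
// Sketch: model-space vertex -> screen-space position (assumed conventional math).
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // row-major

// Apply a combined scale/rotate/project matrix (transform engine 100).
Vec4 transformVertex(const Mat4& mvp, const Vec4& v) {
    Vec4 r;
    r.x = mvp.m[0][0]*v.x + mvp.m[0][1]*v.y + mvp.m[0][2]*v.z + mvp.m[0][3]*v.w;
    r.y = mvp.m[1][0]*v.x + mvp.m[1][1]*v.y + mvp.m[1][2]*v.z + mvp.m[1][3]*v.w;
    r.z = mvp.m[2][0]*v.x + mvp.m[2][1]*v.y + mvp.m[2][2]*v.z + mvp.m[2][3]*v.w;
    r.w = mvp.m[3][0]*v.x + mvp.m[3][1]*v.y + mvp.m[3][2]*v.z + mvp.m[3][3]*v.w;
    return r;
}

// Setup (102): perspective divide, then map to a width x height window.
void toScreen(const Vec4& clip, int width, int height, float& sx, float& sy) {
    float invW = 1.0f / clip.w;                        // perspective divide
    float ndcX = clip.x * invW, ndcY = clip.y * invW;  // normalized device coords
    sx = (ndcX * 0.5f + 0.5f) * static_cast<float>(width);
    sy = (1.0f - (ndcY * 0.5f + 0.5f)) * static_cast<float>(height);  // y-down window
}
```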
The setup module 102 and the rasterizer module 104 together use the three dimensional world coordinates to determine the position of each pixel contained inside each of the triangles. Prior Art FIG. 3 illustrates a plurality of pixels 300 identified within the triangle 202 in such a manner. The color values of the pixels in the triangle 202 vary from pixel to pixel in world space. During use, the setup module 102 and the rasterizer module 104 generate interpolated colors, depth and texture coordinates.
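A common way to identify the pixels inside a triangle and to interpolate their colors, depth, and texture coordinates is with edge functions and barycentric weights. The sketch below assumes that technique; the patent does not prescribe this exact formulation.

```cpp
// Sketch: edge-function rasterization with barycentric interpolation of the
// per-vertex color, depth, and texture coordinates.
struct ScreenVertex { float x, y, z, r, g, b, u, v; };

// Signed edge function: proportional to twice the area of triangle (a, b, p).
static float edge(float ax, float ay, float bx, float by, float px, float py) {
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

// Returns true and fills the interpolated attributes if pixel (px, py) is inside.
bool shadePixel(const ScreenVertex& a, const ScreenVertex& b, const ScreenVertex& c,
                float px, float py, ScreenVertex& out) {
    float area = edge(a.x, a.y, b.x, b.y, c.x, c.y);
    if (area == 0.0f) return false;                         // degenerate triangle
    float w0 = edge(b.x, b.y, c.x, c.y, px, py) / area;     // barycentric weights
    float w1 = edge(c.x, c.y, a.x, a.y, px, py) / area;
    float w2 = edge(a.x, a.y, b.x, b.y, px, py) / area;
    if (w0 < 0.0f || w1 < 0.0f || w2 < 0.0f) return false;  // outside the triangle
    out.x = px;  out.y = py;
    out.z = w0*a.z + w1*b.z + w2*c.z;                       // interpolated depth
    out.r = w0*a.r + w1*b.r + w2*c.r;                       // interpolated color
    out.g = w0*a.g + w1*b.g + w2*c.g;
    out.b = w0*a.b + w1*b.b + w2*c.b;
    out.u = w0*a.u + w1*b.u + w2*c.u;                       // interpolated texcoords
    out.v = w0*a.v + w1*b.v + w2*c.v;
    return true;
}
```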
The setup module 102 and the rasterizer module 104 then feed the pixel texture coordinates to the texture math module 106 to determine the appropriate texture map colors. In particular, texture coordinates are generated that reference a texture map using texture coordinate interpolation, which is commonly known to those of ordinary skill in the art. This is done for each of the pixels 300 identified in the triangle 202. Prior Art FIG. 3 illustrates texture coordinates 302 for the pixels 300 identified within the triangle 202.
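In practice, the texture coordinate interpolation mentioned above is often performed perspective-correctly by interpolating u/w, v/w, and 1/w linearly in screen space and dividing per pixel. The sketch below assumes that convention; it is not taken from the patent.

```cpp
// Sketch: perspective-correct texture coordinate interpolation (assumed
// convention: interpolate u/w, v/w, and 1/w linearly in screen space).
struct TexVertex { float u, v, invW; };   // invW = 1/w from the projection

void interpolateTexCoords(const TexVertex& a, const TexVertex& b, const TexVertex& c,
                          float w0, float w1, float w2,      // barycentric weights
                          float& u, float& v) {
    float uOverW = w0 * a.u * a.invW + w1 * b.u * b.invW + w2 * c.u * c.invW;
    float vOverW = w0 * a.v * a.invW + w1 * b.v * b.invW + w2 * c.v * c.invW;
    float invW   = w0 * a.invW       + w1 * b.invW       + w2 * c.invW;
    u = uOverW / invW;                    // undo the 1/w weighting per pixel
    v = vOverW / invW;
}
```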
Next, a LOD calculation is performed by the LOD calculator 108. Occasionally during rendering, one texel, or texture element, will correspond directly to a single pixel that is displayed on a monitor. In this situation the level of detail (LOD) is defined to be equal to zero (0) and the texel is neither magnified nor minified. However, the displayed image can be a magnified or minified representation of the texture map. If the texture map is magnified, multiple pixels will represent a single texel. A magnified texture map corresponds to a negative LOD value. If the texture map is minified, a single pixel represents multiple texels. A minified texture map corresponds to a positive LOD value. In general, the LOD value corresponds to the number of texels in the texture map ‘covered’ by a single pixel.
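One common way to compute such an LOD value is to estimate the texel footprint of a pixel from the screen-space derivatives of the texel coordinates and take its base-2 logarithm. The sketch below assumes that formulation (a footprint of one texel per pixel yields LOD 0); it is an illustration, not the patent's required math.

```cpp
// Sketch: LOD from screen-space derivatives of the texel coordinates.
// LOD 0 => one texel per pixel; negative => magnified; positive => minified.
#include <algorithm>
#include <cmath>

float computeLod(float dudx, float dvdx,   // change of texel coords per pixel in x
                 float dudy, float dvdy) { // change of texel coords per pixel in y
    float lenX = std::sqrt(dudx * dudx + dvdx * dvdx);
    float lenY = std::sqrt(dudy * dudy + dvdy * dvdy);
    float footprint = std::max(lenX, lenY);   // texels covered by this pixel
    return std::log2(footprint);              // e.g. footprint of 1 -> LOD 0
}
```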
The amount of detail stored in different LOD representations may be appreciated by drawing an analogy to the detail perceived by an observer while viewing a texture map. For example, very little detail may be perceived by an observer viewing an automobile from a distance. On the other hand, several details such as doors, windows, and mirrors will be perceived if the observer is sufficiently close to the automobile. A finer LOD will include such additional details, and a coarser LOD will not.
Once the appropriate level of detail of the texture map is selected based on the calculated LOD value, the texture coordinates generated by the texture math module 106 are used to fetch the appropriate texture map colors using the texture fetch module 110. These texture map colors are then filtered by the texture filter module 112. The combiner engine 114 combines together the various colors and textures fetched by the texture fetch module 110 and filtered by the texture filter module 112.
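The patent does not mandate a particular filter, but bilinear filtering is a typical choice for the texture filter stage 112. The following sketch assumes a simple row-major texture with clamped addressing; the types and layout are illustrative only.

```cpp
// Sketch: bilinear texture filtering (a typical choice for the filter stage 112;
// the texture layout and Color type here are assumptions for illustration).
#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b, a; };

struct Texture {
    int width, height;
    std::vector<Color> texels;                       // row-major texel storage
    Color at(int x, int y) const {
        x = std::min(std::max(x, 0), width - 1);     // clamp addressing mode
        y = std::min(std::max(y, 0), height - 1);
        return texels[y * width + x];
    }
};

static Color lerp(const Color& a, const Color& b, float t) {
    return { a.r + (b.r - a.r) * t, a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t, a.a + (b.a - a.a) * t };
}

Color bilinearSample(const Texture& tex, float u, float v) {
    float x = u * tex.width  - 0.5f;                 // texel-space position
    float y = v * tex.height - 0.5f;
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    float fx = x - x0, fy = y - y0;                  // fractional offsets
    Color top    = lerp(tex.at(x0, y0),     tex.at(x0 + 1, y0),     fx);
    Color bottom = lerp(tex.at(x0, y0 + 1), tex.at(x0 + 1, y0 + 1), fx);
    return lerp(top, bottom, fy);                    // blend of the four nearest texels
}
```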
It is important to note that the pipeline described hereinabove has a linear topology. This and other similarly simple linear pipelines enable only one texture fetch and one texture calculation per rendering pass. This is a limited design that is static in nature. There is thus a need for a pipeline that allows for more dynamic texture fetches and shading calculations, and in particular, the ability to feed filter results back to influence subsequent texture address calculations.
DISCLOSURE OF THE INVENTION
A system, method and article of manufacture are provided for interweaving shading calculations and texture retrieval operations during texture sampling in a graphics pipeline. First, a shading calculation is performed in order to generate output, i.e. colors or texture coordinates. Next, texture information is retrieved, and another shading calculation is performed using the texture information in order to generate additional output. Texture information may be retrieved and shading calculations may then be repeated as desired. Thereafter, the generated output may be combined. As such, the repeated texture information retrieval and shading calculations may be carried out in an iterative, programmable manner.
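The interleaving described above can be pictured as a programmable loop in which each shading calculation may consume the result of the preceding texture retrieval, and the retrieval in turn is driven by the shading output. The program representation, types, and the placeholder combine step below are assumptions made purely for illustration.

```cpp
// Sketch: interleaved shading calculations and texture retrievals.
// The "program" representation is an assumption made for illustration only.
#include <functional>
#include <vector>

struct Vec4 { float x, y, z, w; };                 // colors or texture coordinates

using ShadeOp = std::function<Vec4(const Vec4& prevOutput, const Vec4& fetched)>;
using FetchOp = std::function<Vec4(const Vec4& coords)>;   // texture lookup

Vec4 runFragmentProgram(const std::vector<ShadeOp>& shades, const FetchOp& fetch) {
    std::vector<Vec4> outputs;
    Vec4 prev = {0, 0, 0, 0}, fetched = {0, 0, 0, 0};
    for (const ShadeOp& shade : shades) {
        prev = shade(prev, fetched);   // shading calculation -> colors or texcoords
        outputs.push_back(prev);
        fetched = fetch(prev);         // texture retrieval driven by that output
        // (the result of this fetch feeds the next shading calculation, if any)
    }
    // Thereafter, combine the generated outputs (placeholder: simple sum).
    Vec4 combined = {0, 0, 0, 0};
    for (const Vec4& o : outputs) {
        combined.x += o.x; combined.y += o.y;
        combined.z += o.z; combined.w += o.w;
    }
    return combined;
}
```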
In one embodiment of the present invention, edge distances of a primitive may be calculated, and at least one of the shading calculations involves the edge distances. Further, the shading calculation may include the calculation of a plurality of weighted coefficients from the edge distances. As an option, such weighted coefficients may include barycentric weights which use parameter values of the primitive to perform parameter interpolation.
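For illustration, one way to turn a pixel's three signed edge distances into barycentric weights is to normalize them by their sum, assuming the distances are the signed edge-function values produced during rasterization (each proportional to twice the area of the sub-triangle opposite a vertex). This formulation is an assumption, not a requirement of the patent.

```cpp
// Sketch: barycentric weights derived from a pixel's three signed edge
// distances, then used to interpolate a per-vertex parameter of the primitive.
struct Barycentric { float w0, w1, w2; };

Barycentric weightsFromEdgeDistances(float d0, float d1, float d2) {
    // For a pixel inside the primitive all three distances share one sign,
    // so dividing by their sum yields weights that add up to one.
    float sum = d0 + d1 + d2;
    return { d0 / sum, d1 / sum, d2 / sum };
}

// Parameter interpolation with the weights (e.g. a color channel or texcoord).
float interpolate(const Barycentric& w, float p0, float p1, float p2) {
    return w.w0 * p0 + w.w1 * p1 + w.w2 * p2;
}
```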
In another embodiment of the present invention, the texture information may include filtered texture color information. As an option, the filtered texture value may be used as texture coordinates for use in retrieving further texture information when the texture information retrieval is repeated. Further, the repeated shading calculation may also use the output in order to generate additional output.
In still another embodiment of the present invention, the output may include diffuse output colors, fog output values,
Inventors: Harold Robert Feldman Zatz; David B. Kirk; Matthew N. Papakipos; Liang Peng
Assignee: NVIDIA Corporation
Attorney/Agent: Silicon Valley IP Group, LLC; Kevin J. Zilka
Examiner: Vo, Cliff N.