Title: System, method and computer program product for programmable...
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
U.S. Classes: C345S506000, C345S582000, C345S586000, C345S587000
Type: Reexamination Certificate (active)
Patent number: 06664963
Filed: 2001-10-12
Issued: 2003-12-16
Examiner: Zimmerman, Mark (Department: 2671)
FIELD OF THE INVENTION
The present invention relates to computer graphics, and more particularly to texture sampling in a computer graphics processing pipeline.
BACKGROUND OF THE INVENTION
Recent advances in computer performance have enabled graphic systems to provide more realistic graphical images using personal computers and home video game computers. In such graphic systems, some procedure must be implemented to “render” or draw graphic primitives to the screen of the system. A “graphic primitive” is a basic component of a graphic picture, such as a polygon, e.g., a triangle, or a vector. All graphic pictures are formed with combinations of these graphic primitives. Many procedures may be utilized to perform graphic primitive rendering.
Early graphic systems displayed images representing objects having extremely smooth surfaces. That is, textures, bumps, scratches, or other surface features were not modeled. In order to improve the quality of the image, texture mapping was developed to model the complexity of real world surface images. In general, texture mapping is the mapping of an image or a function onto a surface in three dimensions. Texture mapping is a relatively efficient technique for creating the appearance of a complex image without the tedium and the high computational cost of rendering the actual three dimensional detail that might be found on a surface of an object.
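By way of a concrete, hypothetical illustration of this idea, the C++ sketch below maps a normalized (u, v) coordinate pair onto a small texture image using a nearest-neighbor lookup; the Texture structure and SampleNearest helper are names invented for this example, not part of any particular graphics system.

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical RGBA8 texture: a flat array of width*height texels (0xAARRGGBB).
struct Texture {
    int width, height;
    std::vector<uint32_t> texels;
};

// Nearest-neighbor texture lookup: map normalized (u, v) in [0, 1]
// onto integer texel indices and return the stored color.
uint32_t SampleNearest(const Texture& tex, float u, float v) {
    int x = std::min(tex.width  - 1, std::max(0, int(u * tex.width)));
    int y = std::min(tex.height - 1, std::max(0, int(v * tex.height)));
    return tex.texels[y * tex.width + x];
}

int main() {
    Texture checker{2, 2, {0xFFFFFFFF, 0xFF000000, 0xFF000000, 0xFFFFFFFF}};
    std::printf("%08X\n", SampleNearest(checker, 0.75f, 0.25f));  // top-right texel of the 2x2 checker
}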
Prior Art FIG. 1 illustrates a graphics pipeline with which texture mapping may be performed. As shown, included are a transform engine 100, a set-up module 102, a rasterizer 104, a texture math module 106, a level of detail (LOD) calculator 108, a texture fetch module 110, a texture filter 112, and a texture combination engine 114. It should be noted that the transform engine 100 and set-up module 102 need not necessarily be required in the graphics pipeline of a graphics integrated circuit.
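The linear topology of this prior art pipeline can be pictured, purely as a software sketch with invented stage names mirroring FIG. 1, as a fixed chain of calls with a single texture fetch per fragment and no feedback path:

#include <cstdio>

// Minimal sketch of the fixed, linear pipeline of Prior Art FIG. 1.
// Each stage is a stub; the point is the one-way flow of data.
struct Fragment { float s, t, lod; };

Fragment TextureMath(float s, float t)  { return {s, t, 0.0f}; }           // texture math module 106
float    ComputeLod(const Fragment&)    { return 0.0f; }                   // LOD calculator 108 (placeholder)
unsigned TextureFetch(const Fragment&)  { return 0x8040C0u; }              // texture fetch module 110 (placeholder texel)
unsigned TextureFilter(unsigned texel)  { return texel; }                  // texture filter 112 (pass-through)
unsigned Combine(unsigned tex, unsigned diffuse) { return tex & diffuse; } // combiner 114 (bitwise AND as a toy blend)

int main() {
    Fragment f = TextureMath(0.5f, 0.5f);
    f.lod = ComputeLod(f);
    unsigned color = Combine(TextureFilter(TextureFetch(f)), 0xFFFFFFu);
    std::printf("final color %06X\n", color);
}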
During operation, the transform engine 100 may be used to perform scaling, rotation, and projection of a set of three dimensional vertices from their local or model coordinates to the two dimensional window that will be used to display the rendered object. The setup module 102 utilizes the world space coordinates provided for each triangle to determine the two dimensional coordinates at which those vertices are to appear on the two dimensional window. Prior Art FIG. 2 illustrates the coordinates 200 of the vertices 201 which define a triangle 202. If the vertices 201 of the triangle 202 are known in screen space, the pixel positions along the scan lines within the triangle 202 may be determined.
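A rough, simplified sketch of what the transform and setup stages conceptually do follows; the matrix and window size are placeholders, and perspective-correct setup details are omitted:

#include <cstdio>

// Transform a model-space vertex by a combined model-view-projection matrix,
// perform the perspective divide, and map the result to window coordinates.
struct Vec4 { float x, y, z, w; };

Vec4 MulMat4Vec4(const float m[16], Vec4 v) {
    return { m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w,
             m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w,
             m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w,
             m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w };
}

// Map a clip-space vertex to a width x height window.
void ToWindow(Vec4 clip, int width, int height, float* sx, float* sy) {
    float ndcX = clip.x / clip.w, ndcY = clip.y / clip.w;   // perspective divide
    *sx = (ndcX * 0.5f + 0.5f) * width;
    *sy = (1.0f - (ndcY * 0.5f + 0.5f)) * height;           // flip y for screen space
}

int main() {
    const float mvp[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}; // identity placeholder, not a real projection
    Vec4 clip = MulMat4Vec4(mvp, {0.25f, -0.5f, 0.0f, 1.0f});
    float sx, sy;
    ToWindow(clip, 640, 480, &sx, &sy);
    std::printf("screen position: (%.1f, %.1f)\n", sx, sy);
}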
The setup module 102 and the rasterizer module 104 together use the three dimensional world coordinates to determine the position of each pixel contained inside each of the triangles. Prior Art FIG. 3 illustrates a plurality of pixels 300 identified within the triangle 202 in such a manner. The color values of the pixels in the triangle 202 vary from pixel to pixel in world space. During use, the setup module 102 and the rasterizer module 104 generate interpolated colors, depth, and texture coordinates.
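The rasterization and interpolation step might be sketched, in simplified form and without perspective correction, as a barycentric coverage test that interpolates a per-vertex attribute such as depth:

#include <cstdio>

// Walk the pixels of a small grid, use barycentric weights to test whether a
// pixel center lies inside the triangle, and interpolate a per-vertex depth.
struct Vert { float x, y, depth; };

static float Edge(const Vert& a, const Vert& b, float px, float py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

int main() {
    Vert v0{1, 1, 0.1f}, v1{8, 2, 0.5f}, v2{3, 7, 0.9f};
    float area = Edge(v0, v1, v2.x, v2.y);          // signed area of the triangle (times 2)
    for (int y = 0; y < 9; ++y) {
        for (int x = 0; x < 9; ++x) {
            float px = x + 0.5f, py = y + 0.5f;     // sample at pixel centers
            float w0 = Edge(v1, v2, px, py) / area;
            float w1 = Edge(v2, v0, px, py) / area;
            float w2 = Edge(v0, v1, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;   // outside the triangle
            float depth = w0 * v0.depth + w1 * v1.depth + w2 * v2.depth;
            std::printf("pixel (%d,%d) depth %.2f\n", x, y, depth);
        }
    }
}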
The setup module 102 and the rasterizer module 104 then feed the pixel texture coordinates to the texture math module 106 to determine the appropriate texture map colors. In particular, texture coordinates are generated that reference a texture map using texture coordinate interpolation, which is commonly known to those of ordinary skill in the art. This is done for each of the pixels 300 identified in the triangle 202. Prior Art FIG. 3 illustrates texture coordinates 302 for the pixels 300 identified within the triangle 202.
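Conceptually, the texture math step turns each interpolated (s, t) pair into an address within the texture map. The following minimal sketch assumes a repeat-style wrap mode; the function name and addressing choice are illustrative only:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Turn an interpolated texture coordinate into integer texel indices,
// applying a repeat-style wrap so coordinates outside [0, 1) tile.
void TexelAddress(float s, float t, int width, int height, int* ix, int* iy) {
    float ws = s - std::floor(s);   // wrap into [0, 1)
    float wt = t - std::floor(t);
    *ix = std::min(width  - 1, int(ws * width));
    *iy = std::min(height - 1, int(wt * height));
}

int main() {
    int ix, iy;
    TexelAddress(1.25f, -0.1f, 256, 256, &ix, &iy);
    std::printf("texel (%d, %d)\n", ix, iy);   // expect (64, 230)
}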
Next, a LOD calculation is performed by the LOD calculator 108. Occasionally during rendering, one texel, or texture element, will correspond directly to a single pixel that is displayed on a monitor. In this situation, the level of detail (LOD) is defined to be equal to zero (0) and the texel is neither magnified nor minified. However, the displayed image can be a magnified or minified representation of the texture map. If the texture map is magnified, multiple pixels will represent a single texel. A magnified texture map corresponds to a negative LOD value. If the texture map is minified, a single pixel represents multiple texels. A minified texture map corresponds to a positive LOD value. In general, the LOD value corresponds to the number of texels in the texture map ‘covered’ by a single pixel.
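One common way to picture the LOD value, assuming screen-space derivatives of the texel coordinates are available, is as the base-2 logarithm of the pixel's texel footprint; the helper below is an illustrative approximation, not the patent's LOD calculator:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Estimate how many texels a single pixel covers from the screen-space
// derivatives of the texel coordinates, and take log2 of that footprint.
// LOD 0 means one texel per pixel; negative values indicate magnification,
// positive values indicate minification.
float ComputeLod(float du_dx, float dv_dx, float du_dy, float dv_dy) {
    float footprintX = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);
    float footprintY = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);
    float rho = std::max(footprintX, footprintY);   // texels covered per pixel
    return std::log2(rho);
}

int main() {
    // Texel coordinates advancing by 4 texels per screen pixel (minification):
    std::printf("LOD = %.2f\n", ComputeLod(4.0f, 0.0f, 0.0f, 4.0f));    // ~2.0
    // One texel spread over four pixels (magnification):
    std::printf("LOD = %.2f\n", ComputeLod(0.25f, 0.0f, 0.0f, 0.25f));  // ~-2.0
}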
The amount of detail stored in different LOD representations may be appreciated by drawing an analogy to the detail perceived by an observer while observing a texture map. For example, very little detail may be perceived by an observer while watching an automobile from a distance. On the other hand, several details, such as doors, windows, and mirrors, will be perceived if the observer is sufficiently close to the automobile. A finer LOD will include such additional details, and a coarser LOD will not.
Once the appropriate level of detail of the texture map is selected based on the calculated LOD value, the texture coordinates generated by the texture math module 106 are used to fetch the appropriate texture map colors using the texture fetch module 110. These texture map colors are then filtered by the texture filter module 112. The combiner engine 114 combines the various colors and textures fetched by the texture fetch module 110 and filtered by the texture filter module 112.
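As an illustration of what such a texture filter might compute, the sketch below performs bilinear filtering over a single-channel texture; the data layout is simplified and not meant to reflect any specific hardware format:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Tex { int w, h; std::vector<float> lum; };  // single-channel texture for brevity

float Texel(const Tex& t, int x, int y) {
    x = std::clamp(x, 0, t.w - 1);
    y = std::clamp(y, 0, t.h - 1);
    return t.lum[y * t.w + x];
}

// Bilinear filter: blend the four nearest texels by their fractional distances.
float SampleBilinear(const Tex& t, float u, float v) {
    float x = u * t.w - 0.5f, y = v * t.h - 0.5f;
    int x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float fx = x - x0, fy = y - y0;
    float top    = Texel(t, x0, y0)     * (1 - fx) + Texel(t, x0 + 1, y0)     * fx;
    float bottom = Texel(t, x0, y0 + 1) * (1 - fx) + Texel(t, x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bottom * fy;
}

int main() {
    Tex ramp{2, 2, {0.0f, 1.0f, 0.0f, 1.0f}};
    std::printf("%.2f\n", SampleBilinear(ramp, 0.5f, 0.5f));  // 0.50: halfway up the ramp
}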
It is important to note that the pipeline described hereinabove has a linear topology. These and other simplistic linear pipelines only enable one texture fetch and texture calculation per rendering pass. This is a limited design that is static in nature. There is thus a need for a pipeline that allows for more dynamic texture fetches and shading calculations, and in particular, the ability for feeding filter results back to influence subsequent texture address calculations.
DISCLOSURE OF THE INVENTION
A system, method and computer program product are provided for performing shader calculations in a graphics pipeline. Initially, a shading calculation is performed in order to generate output. Thereafter, an additional shading calculation is carried out. Such additional shading calculation includes converting the output of the shading calculation into a floating point format. Further, a dot product is calculated utilizing the converted output and texture coordinates. The dot product is then clamped. Next, the clamped dot product is stored in a plurality of color components.
In one embodiment, the output may be converted into the floating point format utilizing a remapping operation. Further, the dot product may be clamped to [0.0, 1.0]. As yet another option, the clamped dot product may be stored in the color components utilizing a smearing operation. Still yet, the color components may include an A-component, an R-component, a G-component, and/or a B-component.
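A minimal sketch of this remap, dot-product, clamp, and smear sequence follows. It assumes the familiar signed expansion of an 8-bit color into [-1, 1] (as used for normal maps) and invents the structure names; it illustrates the described steps rather than the claimed hardware:

#include <algorithm>
#include <cstdint>
#include <cstdio>

struct Float3 { float x, y, z; };
struct Color  { float a, r, g, b; };

// Remap an 8-bit-per-channel shader result into floating point, using a
// signed [-1, 1] expansion (assumed here for illustration).
Float3 RemapToFloat(uint8_t r, uint8_t g, uint8_t b) {
    auto expand = [](uint8_t c) { return c / 255.0f * 2.0f - 1.0f; };
    return { expand(r), expand(g), expand(b) };
}

// Dot the remapped output against the interpolated texture coordinates,
// clamp the result to [0.0, 1.0], and smear it into all four color components.
Color DotClampSmear(Float3 prev, Float3 texCoord) {
    float dp = prev.x * texCoord.x + prev.y * texCoord.y + prev.z * texCoord.z;
    dp = std::clamp(dp, 0.0f, 1.0f);
    return { dp, dp, dp, dp };   // A, R, G, B all receive the clamped dot product
}

int main() {
    Float3 n = RemapToFloat(255, 128, 128);            // roughly (1, 0, 0)
    Color  c = DotClampSmear(n, {0.6f, 0.0f, 0.8f});   // illustrative texture coordinates
    std::printf("ARGB = %.2f %.2f %.2f %.2f\n", c.a, c.r, c.g, c.b);
}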
In another embodiment, the additional shading calculation may be repeated. Further, the output of the shading calculations may be combined. As an option, texture information may be retrieved using the texture coordinates which are associated with the output of the shading calculation. As such, the additional shading calculation may be performed using the texture information.
Another system, method and computer program product are provided for performing shader calculations in a graphics pipeline. Initially, a shading calculation is performed in order to generate output. Next, an additional shading calculation is carried out. Such additional shading calculation converts the output of the shading calculation into a floating point format. Further, a dot product (dp) is calculated utilizing the converted output and texture coordinates. Still yet, texture information is retrieved utilizing the dot product (dp).
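The dependent-fetch variant can be sketched by letting the dot product itself act as a texture coordinate; here a simple 1-D lookup table stands in for the texture, and all names are illustrative:

#include <algorithm>
#include <cstdio>
#include <vector>

// The dot product becomes the texture address: here it indexes a 1-D lookup
// table, loosely analogous to a dot-product-driven dependent texture fetch.
float DependentFetch(const std::vector<float>& table, const float prev[3], const float coord[3]) {
    float dp = prev[0]*coord[0] + prev[1]*coord[1] + prev[2]*coord[2];
    dp = std::clamp(dp, 0.0f, 1.0f);                        // keep the address in range
    int idx = int(dp * (table.size() - 1));
    return table[idx];                                      // the retrieved texture value
}

int main() {
    std::vector<float> ramp{0.0f, 0.25f, 0.5f, 0.75f, 1.0f};  // toy 1-D texture
    float prev[3]  = {0.0f, 1.0f, 0.0f};
    float coord[3] = {0.0f, 0.6f, 0.0f};
    std::printf("fetched %.2f\n", DependentFetch(ramp, prev, coord));  // dp = 0.6 -> 0.50
}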
In one embodiment, the output may be converted into the floating point format utilizing a remapping operation. Still yet, the texture information…
Attorney, Agent, or Firm: Cooley & Godward LLP
Examiners: Nguyen, Kimbinh T.; Zimmerman, Mark
Assignee: NVIDIA Corporation