System for generating a synthetic scene

Patent number: 6,781,583
Type: Reexamination Certificate (active)
Filed: 2001-10-31
Issued: 2004-08-24
Examiner: Phu K. Nguyen (Department: 2671)
Assignee: Hewlett-Packard Development Company, L.P.
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
FIELD OF THE INVENTION
The present invention pertains to the field of computer graphics systems. More particularly, this invention relates to a computer graphics system and method of rendering a scene based upon synthetically generated texture maps.
BACKGROUND OF THE INVENTION
A typical computer graphics system includes a display device having a two-dimensional (2D) array of light emitting areas. The light emitting areas are usually referred to as pixels. Such a computer graphics system typically implements hardware and/or software for generating a 2D array of color values that determine the colors that are to be emitted from the corresponding pixels of the display device.
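To make the idea concrete, here is a minimal sketch of the 2D array of color values described above. The names, the 640x480 resolution, and the RGB encoding are illustrative assumptions, not details from the patent.

```python
# Minimal sketch: a 2D array of color values, one per pixel of the display.
# The resolution and RGB encoding are illustrative assumptions.
WIDTH, HEIGHT = 640, 480

# One RGB triple per pixel; each component in [0, 255].
framebuffer = [[(0, 0, 0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def set_pixel(x, y, color):
    """Store the color value that the pixel at (x, y) should emit."""
    framebuffer[y][x] = color

set_pixel(320, 240, (255, 128, 0))  # an orange pixel at the center
```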
Such computer graphics systems are commonly employed for the display of three-dimensional (3D) objects. Typically, such a computer graphics system generates what appears to be a 3D object on a 2D display device by generating 2D views of the 3D object. The 2D view of a 3D object that is generated at a particular time usually depends on a spatial relationship between the 3D object and a viewer of the 3D object at that time. This spatial relationship may be referred to as the view direction.
The U.S. utility application entitled "DIRECTION-DEPENDENT TEXTURE MAPS IN A GRAPHICS SYSTEM," having Ser. No. 09/329,553, filed Jun. 10, 1999, now U.S. Pat. No. 6,297,834, discloses a method for generating texture maps in a graphics system and is hereby incorporated by reference. The process by which a computer graphics system generates the color values for a 2D view of a 3D object is commonly referred to as image rendering. A computer graphics system usually renders a 3D object by subdividing it into a set of polygons and rendering each of the polygons individually, as the following sketch illustrates.
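The sketch below illustrates the per-polygon rendering loop just described. The mesh representation, type names, and the placeholder `render_polygon` body are illustrative assumptions.

```python
# Illustrative sketch: a 3D object represented as a set of triangles,
# rendered one polygon at a time. The data layout is hypothetical.
from typing import List, Tuple

Vertex = Tuple[float, float, float]      # (x, y, z) in object space
Triangle = Tuple[Vertex, Vertex, Vertex]

def render_polygon(tri: Triangle) -> None:
    # Placeholder for projection, scan conversion, and shading of one polygon.
    print("rasterizing", tri)

def render_object(mesh: List[Triangle]) -> None:
    for tri in mesh:  # each polygon is rendered individually
        render_polygon(tri)

unit_quad = [((0, 0, 0), (1, 0, 0), (1, 1, 0)),
             ((0, 0, 0), (1, 1, 0), (0, 1, 0))]
render_object(unit_quad)
```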
The color values for a polygon that are rendered for a particular view direction usually depend on the surface features of the polygon and the effects of lighting on the polygon. The surface features include features such as surface colors and surface structures. The effects of lighting usually depend on a spatial relationship between the polygon and one or more light sources. This spatial relationship may be referred to as the light source direction.
Typically, the evaluation of the effects of lighting on an individual pixel in a polygon for a particular view direction involves a number of 3D vector calculations. These calculations usually include floating-point square root and divide operations. Such calculations are usually time consuming and expensive whether performed in hardware or software.
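The following sketch shows where those costs arise, using a simple Lambertian diffuse term as a representative lighting calculation (the patent does not prescribe this particular equation). Normalizing the vectors requires the square root and divide operations the passage refers to, and they would be repeated for every pixel.

```python
import math

def normalize(v):
    # The square root and the divides here are the per-pixel costs
    # referred to above; they recur for every shaded pixel.
    length = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/length, v[1]/length, v[2]/length)

def diffuse_intensity(surface_point, surface_normal, light_pos):
    """Representative per-pixel lighting: Lambertian diffuse term."""
    n = normalize(surface_normal)
    l = normalize((light_pos[0] - surface_point[0],
                   light_pos[1] - surface_point[1],
                   light_pos[2] - surface_point[2]))
    n_dot_l = n[0]*l[0] + n[1]*l[1] + n[2]*l[2]
    return max(0.0, n_dot_l)  # clamp back-facing contributions to zero

print(diffuse_intensity((0, 0, 0), (0, 0, 1), (1, 1, 2)))
```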
One prior method for reducing such computation overhead is to evaluate the effects of lighting at just a few areas of a polygon, such as the vertices, and then interpolate the results across the entire polygon. Examples of these methods include methods that are commonly referred to as flat shading and smooth shading. Such methods usually reduce the number of calculations that are performed during scan conversion and thereby increase rendering speed. Unfortunately, such methods also usually fail to render shading features that are smaller than the areas of individual polygons.
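A minimal sketch of this interpolation idea follows, assuming lighting has already been evaluated at the two endpoints of a scanline span, in the style of smooth (Gouraud-type) shading; the intensity values are illustrative.

```python
def lerp(a, b, t):
    return a + (b - a) * t

def shade_span(intensity_left, intensity_right, num_pixels):
    # Lighting was evaluated only at the span endpoints; interior pixels
    # reuse interpolated results, avoiding per-pixel vector math.
    return [lerp(intensity_left, intensity_right, i / max(1, num_pixels - 1))
            for i in range(num_pixels)]

print(shade_span(0.2, 0.9, 5))  # [0.2, 0.375, 0.55, 0.725, 0.9]
```

Note that any shading feature narrower than the span, such as a small highlight between the endpoints, is simply averaged away, which is the limitation described above.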
One prior method for rendering features that are smaller than the area of a polygon is to employ what is commonly referred to as a texture map. A typical texture map is a table that contains a pattern of color values for a particular surface feature. For example, a wood grain surface feature may be rendered using a texture map that holds a color pattern for wood grain.
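A texture lookup can be sketched as a simple table access indexed by (u, v) coordinates. The 2x2 "wood" pattern and the nearest-texel wrapping policy below are illustrative assumptions.

```python
# Illustrative texture lookup: a texture map is a table of color values
# indexed by (u, v) coordinates in [0, 1). The pattern here is made up.
wood_texture = [[(133, 94, 66), (160, 120, 90)],
                [(160, 120, 90), (133, 94, 66)]]

def sample(texture, u, v):
    rows, cols = len(texture), len(texture[0])
    # Nearest-texel lookup; wrap coordinates so the pattern tiles.
    x = int(u * cols) % cols
    y = int(v * rows) % rows
    return texture[y][x]

print(sample(wood_texture, 0.1, 0.7))  # a color drawn from the pattern
```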
Unfortunately, texture mapping usually yields relatively flat surface features that do not change with the view direction or light source direction. The appearance of a real 3D object, on the other hand, commonly does change with the view direction and/or light source direction. These directional changes are commonly caused by 3D structures on the surface of a polygon. Such structures can cause localized shading or occlusions or changes in specular reflections from a light source. The effects can vary with view direction for a given light source direction and can vary with light source direction for a given view direction.
One prior method for handling the directional dependence of such structural effects in a polygon surface is to employ what is commonly referred to as a bump map. A typical bump map contains a height field from which a pattern of 3D normal vectors for a surface is extracted. The normal vectors are usually used to evaluate lighting equations at each pixel in the surface. Unfortunately, such evaluations typically involve a number of expensive and time-consuming 3D vector calculations, thereby decreasing rendering speed or increasing graphics system hardware and/or software costs.
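One common way to extract normal vectors from a bump map's height field is by finite differences, sketched below; the height values are illustrative, and the patent does not specify this particular derivation.

```python
# Sketch: deriving per-pixel normals from a bump map's height field
# via finite differences. The height values are made up.
import math

height = [[0.0, 0.1, 0.2],
          [0.0, 0.3, 0.2],
          [0.1, 0.1, 0.0]]

def bump_normal(h, x, y):
    # Slope of the height field in x and y (neighbors clamped at edges).
    dx = h[y][min(x + 1, len(h[0]) - 1)] - h[y][max(x - 1, 0)]
    dy = h[min(y + 1, len(h) - 1)][x] - h[max(y - 1, 0)][x]
    n = (-dx, -dy, 1.0)
    # This normalization repeats per pixel: the sqrt/divide cost noted above.
    length = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    return (n[0]/length, n[1]/length, n[2]/length)

print(bump_normal(height, 1, 1))
```

Each such normal then feeds a per-pixel lighting evaluation like the diffuse calculation sketched earlier, which is what makes the approach expensive.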
In applications such as 3D computer-generated animations, a scene is composed of multiple sequential frames of imagery. During the creation of a computer-generated graphics presentation, such as an animation or movie, a sequence of scenes depicting various environments and objects is created and assembled into a complete presentation, which is then displayed to a user on a display device scene by scene.
Each scene may be composed of a sequence of frames. A frame is typically a 2D static representation of a 3D or 2D object within a defined environment.
Each frame may present a 3D or 2D object or objects from a particular viewing (camera) angle or as illuminated from a particular lighting angle. From frame to frame of the scene, such things as the camera angle or lighting angle may change, giving the scene a dynamic feel through a sense of motion or change. For example, an object may be viewed head-on in one frame, while in the next frame the same object is viewed from the left side; when the two frames are viewed in sequence, the object appears to turn from facing straight ahead to facing toward its right-hand side. The process of creating a scene involves assembling a series of such images or frames, and during this process it is common for the creator/editor to preview the scene in order to assess the progress of the work done to that point.
Where 3D objects and environments are represented in the scene, each frame must be rendered to add realism and 3D qualities such as shadows and variations in color or shade. In a computer graphics system, this rendering process is computationally intensive and can take significant time to complete, depending on the level of 3D quality desired for the preview or scene display and on the power of the computer hardware used to carry out the rendering computations. As a result, it is common for creators/authors to opt for a lower level of 3D detail when previewing a scene. Examples of lower-quality previews include wire-frame presentations and low-resolution texture mapping. While this allows the creator/author to preview a general representation of the scene in less time, it falls far short of providing a true representation of the scene as it will appear in final form.
In order to provide a realistic appearance to the scene and the objects therein, texture mapping techniques are used to add shadows, highlights, and surface texture to object and scene surfaces during the rendering process.
Techniques have been proposed for generating an image-based texture map in which a scene and/or an object within a scene is photographed from multiple pre-defined camera positions using, for example, a digital or film-based still camera. A variation on this technique holds the camera position constant while a light/illumination source is moved so as to illuminate the object from different angles, casting different shadows and highlights on the object. While this technique is useful, generating the series of images needed for an image-based texture map is quite time consuming, and it of course requires the actual physical object or a model of it.
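The variation with a fixed camera and moving light can be sketched as a table of photographs indexed by light direction, with the closest capture chosen at render time. The directions, filenames, and nearest-direction selection policy below are hypothetical, intended only to illustrate the idea.

```python
# Sketch of the image-based approach: photographs of one object under
# several light directions, selected by nearest direction at render time.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Each entry: (unit light direction used for the capture, image file).
captured = [((0.0, 0.0, 1.0), "front_lit.png"),
            ((0.7, 0.0, 0.7), "side_lit.png"),
            ((0.0, 0.7, 0.7), "top_lit.png")]

def pick_image(light_dir):
    # Choose the photograph whose capture direction best matches light_dir
    # (largest dot product = smallest angle between directions).
    return max(captured, key=lambda entry: dot(entry[0], light_dir))[1]

print(pick_image((0.1, 0.8, 0.6)))  # -> "top_lit.png"
```

Every additional sampled direction requires another photograph, which is why building such a map is time consuming.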