Three dimensional rendering including motion sorting

Computer graphics processing and selective visual display system – Computer graphics processing – Adjusting level of detail

Reexamination Certificate

Details

U.S. classification: C345S473000
Type: Reexamination Certificate
Status: active
Patent number: 06806876

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is directed to visualization methods and, more particularly, to three dimensional (3D) rendering techniques.
2. Description of the Background
3D images are generated, or rendered, by 3D pipelines. A 3D pipeline may be represented as a series of steps such as those shown in FIG. 5. The steps are often implemented by an application program running on a computer, with or without specialized graphics acceleration hardware, in conjunction with memory devices. The memory devices store information about objects, lighting, view points, and other information needed to generate a 3D image. The goal of the rendering operation is to produce in a frame buffer a 2D image that is to be displayed on a monitor.
Scenes are defined by a data structure referred to as the scene database. The scene database contains models of objects in the scene as well as information relating the objects to one another. The viewpoint is important because it determines how the objects are seen in relation to one another. The viewpoint may be thought of as the position of an observer, and as the position of the observer changes, the relationships between the objects change. For example, as the viewer moves from the front to the right side of a first object, a second object that is behind the first object may come into view while a third object that is to the left of the first object may be blocked or occluded by the first object. Also, a light source that is behind the first object will interact with the first, second and third objects differently depending upon whether the light source is in front of the viewer or to the right of the viewer. Thus, it is necessary for the rendering pipeline to be able to manipulate objects based on the viewpoint. The manipulation of objects, light sources, and the like based on a viewpoint is referred to as transforming the data, because the individual components making up an image are all transformed to a common viewpoint, referred to as view space.
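As an illustration of that transform step, here is a minimal sketch, not taken from the patent, of bringing a world-space point into view space by multiplying it by a 4x4 view matrix; the same matrix would be applied to every vertex and light position so that the whole scene shares one viewpoint. The type names and row-major layout are assumptions made for the example.

```cpp
// Illustrative sketch: transform a point into view space with a 4x4 view matrix.
#include <array>

struct Vec4 { float x, y, z, w; };                 // homogeneous point (w = 1 for positions)
using Mat4 = std::array<std::array<float, 4>, 4>;  // row-major view matrix (assumed layout)

// Multiply a point by the view matrix; applying this to every vertex and
// light position expresses the whole scene relative to a single viewpoint.
Vec4 toViewSpace(const Mat4& view, const Vec4& p) {
    Vec4 r{};
    r.x = view[0][0]*p.x + view[0][1]*p.y + view[0][2]*p.z + view[0][3]*p.w;
    r.y = view[1][0]*p.x + view[1][1]*p.y + view[1][2]*p.z + view[1][3]*p.w;
    r.z = view[2][0]*p.x + view[2][1]*p.y + view[2][2]*p.z + view[2][3]*p.w;
    r.w = view[3][0]*p.x + view[3][1]*p.y + view[3][2]*p.z + view[3][3]*p.w;
    return r;
}
```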
In modern rendering pipelines, objects are represented by a series of triangles or other primitive shapes (primitives). Each triangle has three vertices in three dimensions, represented by x, y and z coordinates. Meshes of individual triangles can be built up from lists of vertices to represent objects. Once a common set of vertices is prepared, the next step is to convert the coordinates for the vertices from view space to screen space. That process is referred to as triangle setup.
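To make the mesh representation concrete, the following is an illustrative sketch, not the patent's data structure, of an indexed triangle mesh: vertices are stored once in a list and each triangle refers to three of them by index, so a vertex shared by several triangles is transformed only once.

```cpp
// Illustrative sketch of an indexed triangle mesh.
#include <cstdint>
#include <vector>

struct Vertex   { float x, y, z; };        // position (e.g. in view space)
struct Triangle { uint32_t v0, v1, v2; };  // indices into the vertex list

struct Mesh {
    std::vector<Vertex>   vertices;        // shared vertex list
    std::vector<Triangle> triangles;       // triangles built from the list
};
```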
Triangle setup requires that the 3D scene be changed so that it may be stored in a 2D frame buffer to enable the image to be displayed on a screen, which is made up of pixels. Triangle setup is performed triangle by triangle. However, some of the triangles of the 3D scene might be covered by other triangles that are in front of them, but at this stage the rendering pipeline does not know which triangles are covered or partly covered and which are not. As a result, the triangle setup step receives all three vertices for each triangle. Each of these vertices has an x, y and z coordinate which defines its place in the 3D scene. The triangle setup step fills each triangle with pixels. Each of the pixels in the triangle receives the x and y coordinate for the place it occupies on the screen, and a z coordinate which holds its depth information. The pixels for the triangle are then sent one by one to the rendering step.
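The following is an illustrative sketch of that fill step, not the patent's implementation: a screen-space triangle is walked pixel by pixel over its bounding box, and each covered pixel receives an interpolated z value to carry into the rendering step. The edge-function formulation and the EmitPixel callback are assumptions made for the example.

```cpp
// Illustrative sketch of triangle setup: fill a screen-space triangle with
// pixels, each carrying an interpolated depth (z) value.
#include <algorithm>
#include <cmath>

struct ScreenVertex { float x, y, z; };  // x, y in pixels; z is depth

// Signed edge function; its sign tells on which side of edge (a, b) a point lies.
inline float edge(const ScreenVertex& a, const ScreenVertex& b, float px, float py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

template <typename EmitPixel>  // EmitPixel(int x, int y, float z)
void rasterizeTriangle(const ScreenVertex& a, const ScreenVertex& b,
                       const ScreenVertex& c, EmitPixel emit) {
    float area = edge(a, b, c.x, c.y);
    if (area == 0.0f) return;  // degenerate triangle, nothing to draw

    // Only pixels inside the triangle's screen bounding box can be covered.
    int minX = (int)std::floor(std::min({a.x, b.x, c.x}));
    int maxX = (int)std::ceil (std::max({a.x, b.x, c.x}));
    int minY = (int)std::floor(std::min({a.y, b.y, c.y}));
    int maxY = (int)std::ceil (std::max({a.y, b.y, c.y}));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;        // pixel center
            float w0 = edge(b, c, px, py) / area;      // barycentric weights
            float w1 = edge(c, a, px, py) / area;
            float w2 = edge(a, b, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // pixel is outside the triangle
            float z = w0 * a.z + w1 * b.z + w2 * c.z;  // interpolated depth
            emit(x, y, z);                             // hand the pixel to the rendering step
        }
    }
}
```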
If the triangle setup step receives a triangle that is somewhere in the background of the scene, where it is partly or completely covered by triangles in front of it, it will still perform its normal function, which is to convert the triangle into pixels. After that, those pixels are sent to the rendering step. Here, in the rendering step, details such as texture, shading and lighting are addressed. During the rendering step, the z buffer (the memory holding depth information) is accessed and the z value already stored at the location where the new pixel is to be drawn is read. If the value in the z buffer is zero, which means that nothing has been drawn at this location yet, or if the information shows that the new pixel is in front of the value that was found in the z buffer, the pixel will be rendered and the z coordinate of the pixel just rendered will be stored in the z buffer. The problem, however, is that the rendering pipeline has wasted a clock cycle rendering the old pixel, which has now been replaced by the new pixel. Furthermore, even if the new pixel is rendered and stored, it is possible that a later triangle will happen to cover this pixel, again causing an overwrite. Thus, many pixels are rendered unnecessarily. The rendering pipeline wastes valuable rendering power drawing, or at least processing, pixels that will never be seen on the screen. Each of those uselessly rendered pixels takes away fill rate.
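An illustrative sketch of that per-pixel depth test follows; it is not the patent's implementation. It assumes the convention described in the text: the z buffer is cleared to zero, zero means nothing has been drawn at that location, and a larger z value means the pixel is nearer the viewer. Real pipelines differ in their depth conventions.

```cpp
// Illustrative sketch of a z-buffer depth test (read, compare, conditionally write).
#include <vector>

struct ZBuffer {
    int width, height;
    std::vector<float> depth;  // one depth value per screen pixel
    ZBuffer(int w, int h) : width(w), height(h), depth(w * h, 0.0f) {}  // cleared to 0
};

// Returns true if the incoming pixel should be rendered; updates the stored depth.
bool depthTestAndWrite(ZBuffer& zb, int x, int y, float z) {
    float& stored = zb.depth[y * zb.width + x];  // first z-buffer access: read
    if (stored == 0.0f || z > stored) {          // nothing drawn yet, or new pixel is nearer
        stored = z;                              // second z-buffer access: write
        return true;                             // render (may still be overdrawn later)
    }
    return false;                                // occluded by what is already there
}
```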
Another problem with rendering pixels that will not be seen in the final image involves the z buffer. The z buffer is accessed twice for each pixel of each triangle in the scene, which can amount to several times the screen resolution per frame. Such z buffer accesses consume an immense amount of memory bandwidth. As a result, the z buffer is the most accessed part of the local video memory associated with the 3D rendering pipeline.
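To put rough numbers on that claim, here is a back-of-the-envelope illustration; the resolution, overdraw, depth size and frame rate are assumptions, not figures from the patent.

```cpp
// Illustrative arithmetic: z-buffer traffic per frame and per second
// under assumed conditions (1024x768, average depth complexity 3,
// one read plus one write per covered pixel, 4-byte depth values, 60 fps).
#include <cstdio>

int main() {
    const double pixels    = 1024.0 * 768.0;  // screen resolution (assumed)
    const double overdraw  = 3.0;             // average triangles covering each pixel (assumed)
    const double accesses  = 2.0;             // read + write per covered pixel
    const double bytesPerZ = 4.0;             // 32-bit depth value (assumed)
    const double fps       = 60.0;            // target frame rate (assumed)

    double bytesPerFrame = pixels * overdraw * accesses * bytesPerZ;  // roughly 18.9 MB
    printf("z-buffer traffic: %.1f MB per frame, %.2f GB/s at %.0f fps\n",
           bytesPerFrame / 1e6, bytesPerFrame * fps / 1e9, fps);
    return 0;
}
```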
One technique for reducing the number of triangles that must be rendered is for the 3D application to determine when objects may be ignored. For example, if a viewpoint is looking through a doorway into a room, many of the objects will not be visible and may thus be ignored. Such a process is referred to as culling. Another process referred to as clipping involves the use of bounding boxes to determine if portions of objects are occluded. Culling and clipping may be used to reduce the number of triangles that must be rendered.
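As an illustration of bounding-box culling, not the patent's method, the sketch below tests an object's axis-aligned bounding box against the six view-frustum planes; if the box lies entirely outside any plane, none of the object's triangles need to be set up. The plane and box representations are assumptions.

```cpp
// Illustrative sketch: cull an object whose bounding box is outside the view frustum.
#include <array>

struct Plane { float a, b, c, d; };  // half-space test: a*x + b*y + c*z + d >= 0 means "inside"
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

bool mayBeVisible(const AABB& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        // Test the box corner farthest along the plane normal; if even that
        // corner is outside, the whole box is outside this plane.
        float x = (p.a >= 0.0f) ? box.maxX : box.minX;
        float y = (p.b >= 0.0f) ? box.maxY : box.minY;
        float z = (p.c >= 0.0f) ? box.maxZ : box.minZ;
        if (p.a * x + p.b * y + p.c * z + p.d < 0.0f)
            return false;            // entirely outside one plane: cull the object
    }
    return true;                     // possibly visible: send its triangles onward
}
```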
Even with culling and clipping, however, the number of triangles to be rendered in a highly detailed scene requires a tremendous amount of computing power and memory bandwidth. Consider a sophisticated video game or virtual tour in which the viewer is walking down the center of an exhibit hall where dozens of individual objects are within view, and the view is constantly changing as a result of the motion of the viewer. As a result, other techniques are needed to enable real time rendering.
One technique which has been developed is the multi-resolution mesh. With a multi-resolution mesh, models of an object are created at design time using different numbers of polygons, depending upon the degree of resolution required.
FIG. 6A represents an automobile modeled with 200 polygons; FIG. 6B represents the same automobile modeled with only 100 polygons, while FIG. 6C represents the same automobile modeled with only 75 polygons. When a determination is made, for example, that the object is in the background, a lower resolution model of the object is retrieved and used by the rendering pipeline. By reducing the number of polygons, the rendering operation is simplified.
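To illustrate how such a scheme is typically used, the sketch below picks one of several pre-built models based on the object's distance from the viewpoint. It is not taken from the patent; the Mesh type (from the earlier sketch), the level count and the distance thresholds are all assumptions.

```cpp
// Illustrative sketch: pick a pre-built level of detail by distance to the viewer.
#include <vector>

struct Mesh;  // indexed triangle mesh, as sketched earlier (illustrative)

struct MultiResModel {
    // Models authored at design time, ordered from most to least detailed,
    // e.g. 200-, 100- and 75-polygon versions of the same automobile.
    std::vector<const Mesh*> levels;
};

// Assumes at least three levels; the thresholds are arbitrary example values.
const Mesh* selectLevelOfDetail(const MultiResModel& model, float distanceToViewer) {
    if (distanceToViewer < 10.0f) return model.levels[0];  // near the viewer: full detail
    if (distanceToViewer < 50.0f) return model.levels[1];  // mid-range: fewer polygons
    return model.levels.back();                            // background: coarsest model
}
```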
Despite efforts to simplify the rendering process, consumer demands for more realism in real time 3D imaging continue to push hardware and software to their limits. The multi-resolution mesh approach, because the resolution is determined at design time rather than at run time, is not scalable and cannot adapt to platforms of varying rendering capabilities. Accordingly, the need exists for a technique that simplifies the rendering process at run time, thereby enabling real time 3D imaging at a level of detail acceptable to consumers.
SUMMARY OF THE PRESENT INVENTION
The present invention solves the problems of the prior art by providing a method of reducing, at run time, the number of primitives that need to be used to render an object. The method of the present invention determines that an object is moving within a scene. At run time, the number of primitives used to represent the moving object is reduced. The degree of reduction can be related to the amount of motion, i.e. the speed, of the moving object.
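A minimal sketch of that run-time decision follows, assuming a multi-resolution representation of the object is available; the speed thresholds and the level-index convention are illustrative assumptions, not details from the patent.

```cpp
// Illustrative sketch: choose a coarser level of detail the faster an object moves.
#include <cstddef>

// Level 0 is the most detailed model; higher indices use fewer primitives.
std::size_t levelForSpeed(float speedUnitsPerSecond, std::size_t levelCount) {
    if (speedUnitsPerSecond < 1.0f)  return 0;               // (nearly) static: full detail
    if (speedUnitsPerSecond < 10.0f) return levelCount / 2;  // moderate motion: medium detail
    return levelCount - 1;                                   // fast motion: coarsest model
}
```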
