Method and apparatus for dynamically reconfiguring the order...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

Classification codes: C345S421000, C345S502000, C345S506000, C345S532000, C345S536000, C345S582000
Status: active
Patent number: 06636214

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to computer graphics, and more particularly to interactive graphics systems such as home video game platforms. Still more particularly, this invention relates to reconfiguring a 3D graphics pipeline to move hidden surface removal to different locations within the pipeline depending on rendering mode (e.g., alpha thresholding).
BACKGROUND AND SUMMARY OF THE INVENTION
Many of us have seen films containing remarkably realistic dinosaurs, aliens, animated toys and other fanciful creatures. Such animations are made possible by computer graphics. Using such techniques, a computer graphics artist can specify how each object should look and how it should change in appearance over time, and a computer then models the objects and displays them on a display such as your television or a computer screen. The computer takes care of performing the many tasks required to make sure that each part of the displayed image is colored and shaped just right based on the position and orientation of each object in a scene, the direction in which light seems to strike each object, the surface texture of each object, and other factors.
Because computer graphics generation is complex, computer-generated three-dimensional graphics just a few years ago were mostly limited to expensive specialized flight simulators, high-end graphics workstations and supercomputers. The public saw some of the images generated by these computer systems in movies and expensive television advertisements, but most of us couldn't actually interact with the computers doing the graphics generation. All this has changed with the availability of relatively inexpensive 3D graphics platforms such as, for example, the Nintendo 64® and various 3D graphics cards now available for personal computers. It is now possible to interact with exciting 3D animations and simulations on relatively inexpensive computer graphics systems in your home or office.
A problem graphics system designers are constantly confronting is how to speed up the graphics processing. Reduced image processing time is especially important in real time graphics systems such as interactive home video games and personal computers. Real time systems generally are required to produce thirty new image frames each second.
To achieve higher speed, typical modern 3D graphics systems use a graphics pipeline to render the image. Information specifying an image goes into one end of the pipeline, and the rendered image comes out at the other end of the pipeline. The pipeline includes a number of different processing stages performing the various steps involved in rendering the image (e.g., transformation to different coordinate systems, rasterization, texturing, etc.) all at the same time. Just as you can save time laundering clothes by folding one load of laundry while another load is in the washing machine and still another load is in the dryer, a graphics pipeline saves overall processing time by simultaneously performing different stages of processing as pixels move down the pipeline.
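By way of illustration only, the short sketch below puts numbers to this overlap: it compares the cycle count for pushing a batch of primitives through a multi-stage pipeline one at a time versus with the stages working simultaneously. The stage count, workload size, and one-cycle-per-stage cost are assumed values chosen to make the comparison concrete; they are not taken from the patent.

```
// Illustrative only: assumed stage count, workload, and per-stage cost.
#include <cstdio>

int main() {
    const int numPrimitives = 1000;  // assumed number of primitives to render
    const int numStages     = 5;     // e.g., transform, setup, rasterize, texture, blend

    // Without overlap: each primitive passes through every stage before the next starts.
    long serialCycles = static_cast<long>(numPrimitives) * numStages;

    // With pipelining: once the pipeline is full, one primitive completes per cycle.
    long pipelinedCycles = numPrimitives + numStages - 1;

    std::printf("serial:    %ld cycles\n", serialCycles);
    std::printf("pipelined: %ld cycles (fill latency %d cycles)\n",
                pipelinedCycles, numStages - 1);
    return 0;
}
```

With these assumed numbers, overlapping the stages yields roughly a five-fold improvement in throughput, which is the benefit the laundry analogy describes.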
The amount of time it takes for information to get from one end of the pipeline to the other depends on the “length” of the pipeline—that is, the number of processing steps the pipeline performs to generate screen pixels for display. Shorter pipelines can process information faster, but image complexity is limited by the reduced number of image processing stages. The additional image processing stages of a longer pipeline can be used to produce more complicated and interesting images at the expense of increased processing time.
A common technique used in many modern graphics systems to increase speed performance allows the application programmer (e.g., a video game designer) to change the length of the pipeline by turning off graphics pipeline features and processing stages that are not currently being used. For example, the application programmer can selectively turn on and off optional processing operations (e.g., texturing, texture filtering, z buffering, etc.) that take a lot of time to perform. Permitting the application programmer to choose from a menu of processing operations provides great flexibility. An application programmer interested in the fastest possible rendering can select cheaper (in terms of processing time) pipeline operations and forgo the increased image complexity that more expensive options would provide. An application programmer interested in more complex images can activate more expensive functions on an a la carte, as-needed basis at the cost of increased processing time.
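As a sketch only, the following fragment models this kind of a la carte configuration as a hypothetical PipelineConfig structure whose flags enable or disable optional operations; the field names and relative cost figures are illustrative assumptions, not the patent's interface. (Real programming interfaces expose similar toggles; legacy OpenGL, for example, uses calls such as glEnable(GL_DEPTH_TEST) and glDisable(GL_TEXTURE_2D).)

```
// Hypothetical configuration sketch; names and cost weights are illustrative.
#include <cstdio>

struct PipelineConfig {
    bool textureMapping  = true;   // 2D texture lookup per pixel
    bool trilinearFilter = false;  // more expensive texture filtering
    bool zBuffering      = true;   // hidden surface removal via a depth buffer
    bool alphaBlending   = false;  // transparency blending
};

// Rough relative per-pixel cost, only to show the speed/complexity trade-off.
int estimatedCost(const PipelineConfig& cfg) {
    int cost = 1;                           // base rasterization work
    if (cfg.textureMapping)  cost += 2;
    if (cfg.trilinearFilter) cost += 3;
    if (cfg.zBuffering)      cost += 1;
    if (cfg.alphaBlending)   cost += 1;
    return cost;
}

int main() {
    PipelineConfig fast;                    // cheapest configuration: most features off
    fast.textureMapping = false;
    fast.zBuffering     = false;

    PipelineConfig rich;                    // richer configuration: more features on
    rich.trilinearFilter = true;
    rich.alphaBlending   = true;

    std::printf("fast configuration cost: %d\n", estimatedCost(fast));
    std::printf("rich configuration cost: %d\n", estimatedCost(rich));
    return 0;
}
```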
Hidden surface removal is an expensive but important operation performed by nearly all modern 3D graphics pipelines. To create the illusion of realism, it is important for the graphics pipeline to hide surfaces that would be hidden behind other, non-see-through surfaces. Letting the viewer see through solid opaque objects would not create a very realistic image. But in the real world, not every surface behind another surface is hidden from view. For example, you can see objects through transparent objects such as window panes. To provide realism, a 3D graphics pipeline should be able to model transparent objects as well as solid (opaque) objects, and perform hidden surface removal based upon whether or not an object in front of another object is transparent. Modern graphics systems model transparency using an additional channel called the “alpha channel” and perform “alpha thresholding” and alpha blending to achieve transparency and other effects.
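A minimal sketch of these two alpha operations, assuming 8-bit RGBA pixels and a source-alpha blend factor (the pixel format, threshold value, and names are illustrative, not taken from the patent):

```
// Illustrative only: assumed 8-bit RGBA pixel format and threshold value.
#include <cstdint>
#include <cstdio>

struct RGBA8 { uint8_t r, g, b, a; };

// Alpha thresholding ("alpha test"): discard a fragment whose alpha falls
// below a threshold, e.g., to punch fully transparent holes in a surface.
bool passesAlphaTest(const RGBA8& src, uint8_t threshold) {
    return src.a >= threshold;
}

// Alpha blending: mix the incoming (source) color with the color already in
// the frame buffer (destination), weighted by the source alpha.
RGBA8 blend(const RGBA8& src, const RGBA8& dst) {
    auto mix = [&](int s, int d) {
        return static_cast<uint8_t>((s * src.a + d * (255 - src.a)) / 255);
    };
    return { mix(src.r, dst.r), mix(src.g, dst.g), mix(src.b, dst.b), 255 };
}

int main() {
    RGBA8 glass = { 0, 0, 255, 128 };      // half-transparent blue "window pane"
    RGBA8 wall  = { 200, 200, 200, 255 };  // opaque surface already in the frame buffer

    if (passesAlphaTest(glass, 16)) {
        RGBA8 out = blend(glass, wall);
        std::printf("blended pixel: %d %d %d\n", out.r, out.g, out.b);
    }
    return 0;
}
```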
One common way to perform hidden surface removal is to use something called a depth buffer. The depth buffer is also called the “z buffer” because the z axis is the depth axis. The z buffer typically provides at least one storage location for each pixel (picture element) of the image. When the graphics pipeline writes a pixel on a surface into a color frame buffer that stores the image, it also writes the depth of the surface at that pixel location into a corresponding location in the z buffer. Later, when the graphics pipeline is asked to render another surface at the same image location, it compares the depth of what it has already rendered with the depth of the new surface, relative to the viewpoint. If the new surface is behind the already rendered surface, the graphics pipeline can discard the new surface information, since the new surface will be hidden from view. If the depth of the newly presented surface is closer to the viewer, then the graphics pipeline can replace the previously rendered pixel with a new pixel for the new surface, because the new surface will hide the previously rendered surface. If the new surface is transparent, then the graphics pipeline may blend the newly presented and previously rendered surfaces together to achieve a transparency effect.
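A minimal sketch of this depth compare and conditional write, assuming one floating point depth value per pixel and the convention that a smaller z value is closer to the viewer (the data layout and names are illustrative, not the patent's implementation):

```
// Illustrative only: assumed per-pixel float depth, with smaller z = closer.
#include <cstdint>
#include <cstdio>
#include <limits>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> color;   // packed RGBA per pixel (the color frame buffer)
    std::vector<float>    depth;   // z value per pixel (the z buffer)

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(w * h, 0),
          depth(w * h, std::numeric_limits<float>::max()) {}

    // Write a pixel only if the new surface is closer than what is stored;
    // otherwise the new surface is hidden and its pixel is discarded.
    void writePixel(int x, int y, float z, uint32_t rgba) {
        int i = y * width + x;
        if (z < depth[i]) {
            depth[i] = z;
            color[i] = rgba;
        }
    }
};

int main() {
    Framebuffer fb(4, 4);
    fb.writePixel(1, 1, 0.8f, 0xFF0000FFu);  // far red surface drawn first
    fb.writePixel(1, 1, 0.3f, 0x00FF00FFu);  // nearer green surface replaces it
    fb.writePixel(1, 1, 0.5f, 0x0000FFFFu);  // farther blue surface is discarded
    std::printf("pixel (1,1): color 0x%08X, depth %.2f\n",
                static_cast<unsigned>(fb.color[1 * 4 + 1]), fb.depth[1 * 4 + 1]);
    return 0;
}
```

A transparent new surface would instead take the blending path sketched earlier, mixing its color with the stored color rather than simply overwriting it.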
Since hidden surface removal tends to be a rather expensive operation in terms of speed performance, it is often possible to turn off hidden surface removal at certain times (e.g., while redrawing a background image or drawing certain kinds of geometry that do not require such processing). However, altogether eliminating hidden surface removal is usually not desirable because many or most 3D images require hidden surface removal to provide realism.
The texturing stage is another processing stage found in typical modern graphics pipelines. To provide an increase in image complexity without a corresponding increase in the number of polygons that the graphics pipeline must render, graphics system designers often include the ability to apply two-dimensional textures to polygon surfaces within an image. For example, when creating an image including a tree, it is possible to draw a rectangle or triangle and place a two-dimensional picture or other image of a tree onto that surface. Texturing avoids the need to model each leaf and branch of the tree with one or more polygons, and can therefore substantially reduce the amount of processing time required to generate images.
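A minimal sketch of a texture lookup, assuming a row-major array of texels addressed by normalized (u, v) coordinates with nearest-neighbor sampling; the representation and names are illustrative only:

```
// Illustrative only: assumed texel layout and nearest-neighbor sampling.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Texture {
    int width, height;
    std::vector<uint32_t> texels;  // packed RGBA, row-major

    // Map (u, v) in [0, 1] to the nearest texel, clamping at the edges.
    uint32_t sample(float u, float v) const {
        int x = std::clamp(static_cast<int>(u * width),  0, width  - 1);
        int y = std::clamp(static_cast<int>(v * height), 0, height - 1);
        return texels[y * width + x];
    }
};

int main() {
    // A 2x2 checkerboard standing in for a real two-dimensional tree image.
    Texture tree{2, 2, {0xFFFFFFFFu, 0x228B22FFu,
                        0x228B22FFu, 0xFFFFFFFFu}};
    std::printf("texel at (0.75, 0.25): 0x%08X\n",
                static_cast<unsigned>(tree.sample(0.75f, 0.25f)));
    return 0;
}
```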
