Dynamic view-dependent texture mapping

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

C345S421000

Reexamination Certificate

active

06525731

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to computer-generated graphics and interactive display systems in general and, in particular, to the generation of three-dimensional imagery in an efficient and compact form.
2. Description of the Related Art
The use of 3D imagery, whether displayed on a screen or in virtual-reality goggles, is becoming increasingly important to the next generation of technology. Besides CAD/CAM (Computer-Aided Design/Computer-Aided Manufacturing), 3D imagery is used in video games, simulations, and architectural walkthroughs. In the future, it may be used to represent filing systems, computer flowcharts, and other complex material. In many of these uses, it is not only the image of a 3D model that is required, but also the ability to interact with that image.
The process of creating and displaying three-dimensional (3D) objects in an interactive computer environment is a complicated matter. In a non-interactive environment, 3D objects require memory storage for the shape, or geometry, of the object depicted, as well as details concerning the surface, such as color and texture. However, in an interactive environment, a user has the ability to view the 3D object from many angles. This means that the system has to be able to generate the images of geometry and surface details of the 3D object from any possible viewpoint. Moreover, the system needs to generate images from these new viewpoints quickly, in order to maintain the sensation that the user is actually interacting with the 3D object.
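The two kinds of data described above can be made concrete with a small sketch. The class below (illustrative names and byte counts, not taken from the patent) holds an object's geometry as vertices and triangular faces, plus one color per face as a stand-in for surface detail; real systems would also carry normals, texture coordinates, and texture images:

```python
from dataclasses import dataclass

# Minimal sketch of the two kinds of data an interactive 3D system must
# store for every object: geometry (vertices and faces) and surface
# detail (here, one RGB color per face).  Names are hypothetical.
@dataclass
class Object3D:
    vertices: list      # [(x, y, z), ...] positions in object space
    faces: list         # [(i, j, k), ...] vertex indices per triangle
    face_colors: list   # [(r, g, b), ...] one color per face

    def memory_estimate(self):
        # Rough storage: 3 x 4-byte floats per vertex, 3 x 4-byte ints
        # per face, 3 bytes per face color.
        return (12 * len(self.vertices)
                + 12 * len(self.faces)
                + 3 * len(self.face_colors))

# A unit cube: 8 vertices, 12 triangular faces.
cube = Object3D(
    vertices=[(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)],
    faces=[(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5),
           (0, 4, 5), (0, 5, 1), (2, 3, 7), (2, 7, 6),
           (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)],
    face_colors=[(30, 30, 30)] * 12,   # e.g., dark roof tiles
)
print(cube.memory_estimate())
```

Even this toy object costs a few hundred bytes; an interactive system must regenerate a rendered image of such data from an arbitrary viewpoint many times per second.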
As an example, consider viewing a 3D model of a house on a computer system with a monitor and a joystick. Although you may be viewing it on the flat computer screen, you will see the shape of the house, the black tiles on the roof, the glass windows, the brick porch, etc. But if you indicate “walk to the side” by manipulating the joystick, the viewpoint on the screen shifts, and you begin to see details around the side corner of the house, such as the aluminum siding. The system needs to keep up with your manipulations of the joystick, your “virtual movement,” in order to provide a seamless, but continuously changing image of the 3D object, the house. So the system needs to redraw the same object from a different perspective, which requires a large amount of memory and a great deal of processing.
The problem of memory usage becomes even more acute when in a networked environment, where the image is being shown on a client system, while the original 3D object information is stored on a server. The client display system will have to receive the 3D object information over a communication link, thereby slowing down the process. In most cases, a compromise will have to be made between the interactive speed, how quickly the display will redraw based on user input, and the accuracy of the 3D object, how many details can be recalculated with each change in perspective. Although the following discussion will assume a networked environment, the problems addressed are also applicable to stand-alone computer systems, since an analogous situation exists between the hard drive (server) and the local memory (client) in a computer.
Before addressing the memory and processing problems, a brief discussion of 3D imaging is in order. As indicated above, the features of 3D objects can be separated into geometry, the shape and volume of the object, and surface details, such as texture, color, and shading. The first issue is how these two attributes, geometry and surface details, are stored and transmitted. The storage of geometry involves the description of the various edges and vertices of the object in 3D space. One way to reduce the time needed to transmit and display a 3D object over a network is to simplify the geometry, but this can make it hard to represent the original surface information on the reduced geometry, as the vast majority of simplification algorithms use only a geometric approximation and ignore the importance of surface details. A technique such as “texture mapping” [Blinn] maps an image of the rendered object onto the simplified geometry, but it is difficult to create and map images onto a surface of arbitrary topology. Furthermore, the corresponding texture coordinates would have to be sent along with the model, which would increase the transmission load.
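The texture-mapping idea attributed to [Blinn] above can be sketched briefly: each vertex of the simplified geometry carries a (u, v) coordinate into a 2D texture image, and rendering samples the image at those coordinates. The nearest-neighbour sampler and all names below are illustrative assumptions, not the patent's method:

```python
# Sketch of texture-coordinate lookup: (u, v) in [0, 1] x [0, 1] maps
# into a 2D texture image; here we use simple nearest-neighbour
# sampling (hypothetical, for illustration only).
def sample_texture(texture, u, v):
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)   # clamp so u = 1.0 stays in range
    y = min(int(v * h), h - 1)
    return texture[y][x]

# A 2x2 grayscale texture: light texels on the top row, dark below.
texture = [[200, 200],
           [ 50,  50]]

# One (u, v) pair per vertex -- the extra per-vertex payload that, as
# noted above, must be transmitted along with the model.
vertex_uvs = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
print([sample_texture(texture, u, v) for u, v in vertex_uvs])
```

The per-vertex (u, v) pairs are exactly the "corresponding texture coordinates" whose transmission increases the network load.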
Texture mapping is one of a class of techniques called “image-based rendering,” in which an image of the surface details is “rendered” for each new perspective on the 3D object. Every time the user moves around the 3D object, a new surface image must be rendered from the new perspective. The image of surface details is captured and stored in various ways. One approach, used in texture mapping, is to break the surface of the 3D object into polygons, render every surface polygon as a small image, and then assemble all the images into a large montage that acts as a single texture image for the entire surface [Cignoni98]. Each polygon then appears fully rendered, and without any occlusion, somewhere in the montage image, and the corresponding texture coordinates of the polygon corners are well-defined. (Occlusion occurs when some surface details are blocked from the user's perspective.) This approach also includes a method that packs the images efficiently into the montage to minimize wasted space in the texture image.
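The montage idea from [Cignoni98] described above can be illustrated with a toy packer. Each rendered polygon becomes a small rectangular tile, and all tiles are placed into one large texture; the simple left-to-right "shelf" strategy below is a hypothetical stand-in for the more efficient packing the passage mentions:

```python
# Illustrative "shelf" packing of per-polygon tiles into one montage
# texture.  Tiles fill a row left to right; when a tile no longer fits,
# a new row (shelf) starts below the tallest tile of the previous row.
def pack_into_montage(tile_sizes, montage_width):
    """Return an (x, y) placement per tile and the montage height used."""
    placements = []
    x = y = shelf_h = 0
    for w, h in tile_sizes:
        if x + w > montage_width:      # tile doesn't fit: open a new shelf
            y += shelf_h
            x = shelf_h = 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

# (width, height) of five rendered polygon images, in texels.
tiles = [(4, 4), (4, 2), (3, 4), (5, 3), (2, 2)]
placements, height = pack_into_montage(tiles, montage_width=8)
print(placements, height)
```

Each placement, together with the tile size, yields the well-defined texture coordinates of that polygon's corners inside the montage; a tighter packing would reduce the wasted space between shelves.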
There are other approaches to taking a surface detail image. A single texture image of an object can be created by doing a “flyover” of the 3D object. This is equivalent to passing a camera over every surface of the object and taking pictures either continuously, creating one large seamless image, or discretely, creating multiple images corresponding to coverage of the entire 3D object.
Some image-based rendering techniques use these surface detail images directly as hardware texture maps [Cignoni98, Rademacher, Foran98, Erdahl97, Jackson96]; in other words, they use one or more images to completely describe the texture of the 3D object from any perspective the user may take. Others rely on data structures and algorithms that are implemented in software [Harashima98, Oliveira98, Oliveira99], meaning that new perspectives are created or interpolated from image data by computer processing. Furthermore, some approaches are optimized for creating as realistic a view as possible from a finite set of images, while others seek a compromise between accuracy and interaction speed.
The above techniques raise many problems of memory usage, data transmission, and excessive processing. For instance, the image-based rendering techniques that rely extensively on software algorithms to perform elaborate operations during every change of viewpoint are inapplicable to CAD/CAM and related applications that demand both high performance and high accuracy. This is even more of a problem in client-server applications, in which the client may be much less powerful than the server machine. For the rest of this application, we will focus on texture mapping as the best technique for a client-server application.
When using texture mapping over a client-server networked system, there is a need to send additional information, such as texture coordinates, details, and color, along with the geometry. To reduce bandwidth requirements it is important to send as little data as possible while still allowing the client to reconstruct the scene. However, this is a problem with complex rendering processes that do not reduce to simple 2D renderings of a 3D object from a given viewpoint. Complex rendering requirements will more tightly couple the process that creates the viewable geometry with the process that renders the images, which is inappropriate, for example, when the creation process is performed on the server and the rendering process is performed on the client. Thus, in a client-server system, multiple 2D renderings or multiple texture maps of the 3D object are preferable.
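The bandwidth trade-off just described can be made concrete with a back-of-the-envelope comparison (all sizes below are assumed for illustration, not taken from the patent): resending a full 2D rendering for every new viewpoint versus sending the geometry, texture coordinates, and a texture image once and letting the client re-render locally:

```python
# Hypothetical payload sizes for the client-server bandwidth trade-off.
IMAGE_BYTES = 640 * 480 * 3          # one 24-bit 2D rendering per viewpoint
GEOMETRY_BYTES = 10_000 * (12 + 8)   # 10k vertices: xyz floats + (u, v) pair
TEXTURE_BYTES = 512 * 512 * 3        # one 24-bit texture image, sent once

def bytes_sent(viewpoints, per_view_images=True):
    """Total bytes over the link for a given number of viewpoint changes."""
    if per_view_images:
        return viewpoints * IMAGE_BYTES        # grows with every move
    return GEOMETRY_BYTES + TEXTURE_BYTES      # one-time cost, any viewpoint

for n in (1, 10, 100):
    print(n, bytes_sent(n), bytes_sent(n, per_view_images=False))
```

Under these assumed numbers, per-viewpoint images win only for a single view; after a handful of viewpoint changes, transmitting the model and texture once is far cheaper, which is why multiple texture maps sent up front are preferable in a client-server system.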
When using the polygon method of texture mapping,

Profile ID: LFUS-PAI-O-3139013

  Search
All data on this website is collected from public sources. Our data reflects the most accurate information available at the time of publication.