Rendering perspective views of a scene using a...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Patent number: 06232977

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to the field of image construction. More particularly, the present invention relates to the field of computerized rendering of perspective views of an environment map onto a display.
2. Art Background
Virtual reality encompasses a wide variety of sensual input and output techniques for immersing a human operator in a computer-synthesized virtual environment. A subset of virtual reality, termed virtual environment navigation, is related to the generation of visual images corresponding to what a virtual camera sees while moving in the virtual environment. To maintain the sensation of virtual reality, the generation of views of the virtual environment must be performed in real-time in response to operator input.
One method of virtual environment navigation is accomplished by constructing a virtual environment using three-dimensional (3D) objects, placing a virtual camera in the virtual environment under an operator's control, and then presenting views to a human operator via a real-time rendering process which is analogous to photography. This method of virtual environment navigation suffers from several disadvantages. First, constructing virtual environments from 3D geometric objects is a laborious process and is very difficult to automate. Also, rendering 3D geometric objects in real-time usually requires specialized 3D rendering hardware. Further, the rendering time varies with the number of geometric objects in the scene, making it difficult to maintain real-time performance in scenes having a large number of geometric objects.
Another technique for navigating virtual environments has been developed in which views seen by a virtual camera are generated by processing digitized panoramic images known as environment maps. An environment map may be created by sampling a three-dimensional (3D) scene in parametric increments along a surface of revolution defined by rotating a profile curve about an axis. Examples of surfaces of revolution include spheres, cylinders, cones, etc. Since points on the surface of revolution are used to define points in the scene which in turn are sampled to obtain intensity values ultimately stored in an environment map, environment maps are commonly referred to by the geometry of the surface of revolution. For example, a spherical environment map can be created by incrementing angular parameters (e.g., angles of latitude and longitude) defining points along the surface of a sphere. Points sampled from the scene are defined by vectors extending from the viewpoint of a virtual camera (typically at the center of the sphere) through the spherical surface points and into the scene. By sampling each such point in the scene to obtain a pixel value, a rectangular pixel map is obtained in which offsets along each axis correspond to respective angular sampling increments. Depending on the nature of the surface of revolution used to establish parametric sampling increments, the environment map may represent a complete or partial view of the 3D scene. Since an environment map represents a scene perceived from a particular viewpoint, perspective views of arbitrary orientation and field of view can be generated by retrieving the appropriate pixel data from the environment map and presenting it on a display. This is termed rendering the environment map.
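The latitude/longitude sampling described above can be sketched as a mapping from a unit direction vector to offsets in a rectangular pixel map. The following is a minimal illustration, not code from the patent; the function name, coordinate conventions, and the equirectangular layout are assumptions.

```python
import math

def direction_to_spherical_uv(x, y, z):
    """Map a unit direction vector (from the virtual camera at the
    sphere's center) to normalized (u, v) offsets in a rectangular
    environment map. u corresponds to longitude, v to latitude, so
    equal steps in u and v correspond to equal angular increments."""
    longitude = math.atan2(x, z)                   # range [-pi, pi]
    latitude = math.asin(max(-1.0, min(1.0, y)))   # range [-pi/2, pi/2]
    u = (longitude + math.pi) / (2.0 * math.pi)    # normalize to [0, 1]
    v = (latitude + math.pi / 2.0) / math.pi       # normalize to [0, 1]
    return u, v
```

Scaling u and v by the pixel dimensions of the map yields the offsets at which the sampled intensity value for that direction is stored.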
Rendering an environment map is complicated by the fact that straight lines in a scene may not be preserved (i.e., they may become curved) when the scene is sampled to produce the environment map. This distortion must be corrected before an accurate perspective view can be presented to a human operator. Because the correction required is dependent on the orientation of the perspective view, the correction must be performed on-the-fly when a user is interactively rotating or zooming the virtual camera.
One method for interactively rendering an environment map is disclosed by Miller et al. in U.S. Pat. No. 5,446,833 (Miller). In this approach, a two-level indexing scheme is used to provide two axes of rotation in the generation of perspective views of a spherical environment map. Miller employs a screen look-up table to permit rotation of the perspective view about a horizontal axis. The values read from the screen look-up table are themselves used to index a parametric look-up table. The parametric look-up table permits rotation of the perspective view about the polar axis of a sphere and contains indices to the spherical environment map. One disadvantage of this approach is that three different data stores must be read in order to generate a perspective view: the screen look-up table, the parametric look-up table, and finally the spherical environment map itself.
Another method for interactively rendering an environment map is disclosed by Chen et al. in U.S. Pat. No. 5,396,538 (Chen). In this approach, the rendered environment map is based on a cylindrical surface projection and is referred to as a cylindrical environment map. A cylindrical environment map provides a 360° field-of-view around the center axis of the cylinder and a more limited field-of-view vertically. Chen employs a two-step approach to render the environment map. In the first step, the axial scanlines on the cylindrical surface are mapped to a vertical plane having scanlines parallel to those on the cylindrical surface. This step is termed vertical scaling. In the second step, termed horizontal scaling, the vertical plane is mapped to the viewing plane. One disadvantage of this approach is that an intermediate image must be constructed in the vertical scaling step, requiring additional time and buffer space. Another disadvantage of this approach is that it is limited to use with a cylindrical environment map and therefore has a restricted vertical field of view.
As is apparent from the preceding discussion, environment map rendering can be quite complex and requires either large look-up tables or significant amounts of computational power. Further, other rendering functions, such as anti-aliasing, require additional computational power. In most personal computer systems, the effect of such resource demands is to limit the rate at which perspective views may be rendered, resulting in an undesirable delay between selection and rendering of a perspective view.
It would be desirable, therefore, to provide a technique for rendering an environment map based on a surface of revolution of generalized geometry in a manner which allows perspective views to be rapidly rendered without restricting the field of view and without requiring inordinate computational power or memory. This is achieved in the present invention.
SUMMARY
A method and apparatus for generating perspective views of a scene represented by an environment map is disclosed. With a viewing position at the center of a surface of revolution used to map the scene into an environment map, different views of the scene may be rendered by rotating the viewing direction either horizontally or vertically. A horizontal rotation will cause panning of the scene from “side to side”. A vertical rotation will cause panning of the scene “up or down”. Depending on the geometry of the surface of revolution, panning may be unrestricted or may be restricted in either the vertical or horizontal directions. A scanline-coherent look-up table is used to provide environment map indices according to the degree of vertical rotation of the view, while horizontal rotation is provided by offsetting the environment map indices obtained from the scanline-coherent look-up table. By translating the address of look-up table entries according to vertical rotation and translating the address of environment map entries according to horizontal rotation, perspective views of a scene may be generated quickly and accurately without the need to buffer an intermediate image.
The generation of a perspective view of a scene involves the computer-implemented steps of: providing an environment map containing pixel values representing the scene; ge
