System and method for generating and playback of...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 06429867

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to the field of computer graphics and, more particularly, to three-dimensional (3D) movies.
2. Description of the Related Art
Traditional motion pictures provide the audience with what may be called a “two-dimensional” experience. Most motion pictures are projected on flat screens with the audience observing the movie from afar. Aside from improvements in sound playback and larger film/screen formats, the movie-going experience has not changed significantly for many years.
Some films have been produced in visual stereo, e.g., with the audience wearing red-blue or polarized glasses. Similarly, head-mounted displays have also allowed viewers to see visual stereo images and films. But visual stereo has not seen widespread success relative to standard motion pictures. This may be because the stereo effect is typically limited to a few special effects in which a few objects appear to have dimension or “leap out” at the audience. It is likely that the effects that simple visual stereo provides do not outweigh the added cost of production and the inconvenience of wearing glasses or head-mounted displays to view the image.
However, recent advances in computer graphics and display technologies have hinted at the possibility of a more realistic experience for moviegoers. New sensing technologies such as head-tracking may increase realism from simple visual stereo to a truly three-dimensional (3D) viewing experience.
As used herein, the term “visual stereo” refers to the process of generating two images (i.e., one for the viewer's left eye and another for the viewer's right eye). These images may be referred to herein as stereo component images. In contrast to simple visual stereo, 3D images and movies go one step further by dynamically changing the viewpoint (ideally on a frame-by-frame basis) for each of the two stereo component images. In this way, an object viewed in 3D will not only appear to leap out of the screen, but objects behind it will become obscured or un-obscured as the viewer changes their viewpoint to different positions around the object. In other words, a traditional stereo image of a globe will appear to have depth, but a three-dimensional image of the globe will allow the viewer to move around the globe to see more than one side of it.
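The two stereo component viewpoints described above can be sketched as small offsets from a tracked head position along the viewer's interocular axis. This is a minimal illustration, not taken from the patent; the function name, vector layout, and the `half_ipd` default (half a typical interpupillary distance, in meters) are assumptions.

```python
def stereo_viewpoints(head_pos, right_dir, half_ipd=0.032):
    """Return (left_eye, right_eye) viewpoint positions as 3-tuples.

    head_pos  -- tracked head position (x, y, z)
    right_dir -- unit vector pointing toward the viewer's right
    half_ipd  -- half the interpupillary distance (assumed value)
    """
    left = tuple(h - half_ipd * r for h, r in zip(head_pos, right_dir))
    right = tuple(h + half_ipd * r for h, r in zip(head_pos, right_dir))
    return left, right
```

Each stereo component image would then be rendered from its own eye position, and both positions are recomputed whenever the head tracker reports movement.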
Thus, head tracking sensors allow graphics systems to recalculate the component images according to where the viewer is in relation to the display device. Head tracking may allow a coffee table display to display a 3D football game that viewers can view from almost any angle they desire (e.g., from a bird's eye view directly overhead or from the side of the fifty yard line). To change their viewpoint, viewers simply move around the coffee table. The head-tracking sensors will determine where the viewer is and display different component images accordingly. When used in connection with a head-mounted display, head tracking allows panoramic scenes that “wrap-around” viewers without the need for large projection screens that physically surround the viewer. Such head-tracked stereo systems produce a look-around holographic feel qualitatively different from older fixed image displays.
However, these new systems have yet to see widespread acceptance as movie playback devices. One reason is that these systems typically rely upon re-rendering each scene every time the viewer changes their viewpoint. For example, as the viewer's head changes position (or orientation, in some systems), the scene being displayed is re-rendered from scratch using the new viewpoint. To produce a realistic computer-generated image, a tremendous number of calculations must be performed. Even with current advances in microprocessors, most systems still fall significantly short of being able to render detailed realistic scenes in real-time. Given the high frame rates needed for smooth movie-like animation, it may be many years before processing power approaches the levels necessary to completely render complex images (e.g., like those in the movie Toy Story) at frame rates high enough to rival current motion pictures.
Thus, an efficient system and method for generating and playing back realistic 3D movies with viewer-changeable viewpoints in real-time is desired. In particular, a system and method capable of reducing the number of calculations to be performed for real-time playback of 3D movies is desired.
SUMMARY OF THE INVENTION
The problems outlined above may at least in part be solved by a system capable of partially rendering frames without relying upon exact viewpoint information. The partially rendered frames may be rendered to the extent possible without performing viewpoint-dependent processes, and then compressed and stored to a carrier medium. To reduce the amount of data to be stored, the viewer's possible viewpoints may be restricted (e.g., by defining a viewpoint-limiting volume or region).
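One simple form of the viewpoint restriction mentioned above is a viewpoint-limiting volume: an axis-aligned box into which the tracked viewpoint is clamped. The sketch below is illustrative only; the patent does not specify this particular representation.

```python
def clamp_viewpoint(viewpoint, box_min, box_max):
    """Clamp a tracked viewpoint into an axis-aligned viewpoint-limiting box.

    viewpoint -- (x, y, z) position reported by the head tracker
    box_min   -- minimum corner of the allowed region
    box_max   -- maximum corner of the allowed region
    """
    return tuple(min(max(v, lo), hi)
                 for v, lo, hi in zip(viewpoint, box_min, box_max))
```

Restricting viewpoints this way bounds which surfaces can ever become visible, which in turn bounds how much partially rendered geometry must be stored per frame.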
During playback, the compressed frames are read as a stream, decompressed, and then rasterized for display. As part of the rasterization process, the viewer's viewpoint may be determined and used to perform viewpoint-dependent effects such as some lighting and atmospheric effects (e.g., fogging, specular highlighting, and reflections).
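Specular highlighting is a good example of a calculation that must be deferred to playback, because it depends on where the viewer actually is. The sketch below uses the standard Blinn-Phong specular term as a stand-in; the patent does not name a specific lighting model, and all parameter names here are assumptions.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def specular_intensity(surface_pos, normal, light_dir, viewpoint, shininess=32):
    """Blinn-Phong specular term, computed at playback from the tracked viewpoint.

    surface_pos -- world-space position of the shaded point
    normal      -- unit surface normal
    light_dir   -- unit vector from the surface toward the light
    viewpoint   -- viewer position from the head tracker
    """
    view_dir = normalize(tuple(e - p for e, p in zip(viewpoint, surface_pos)))
    half_vec = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    n_dot_h = max(0.0, sum(n * h for n, h in zip(normal, half_vec)))
    return n_dot_h ** shininess
```

Viewpoint-independent terms (e.g., diffuse lighting) could be baked into the stored frames, leaving only terms like this one to be evaluated per displayed frame.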
As used herein the term “real-time” refers to performing an action, task, or process rapidly enough so that the user or viewer will not be substantially distracted by the amount of time taken to perform the task. As used herein, the term “including” means “including, but not limited to.”
Movies may be thought of as linear story telling. Thus, in 3D movies the viewer may be like Scrooge (i.e., dragged along by ghosts of Christmas past, present, and future). The viewer can see everything in partial or full three dimensions, but generally may not interact with anything (in some embodiments interaction may be allowed). Furthermore, the viewer can only go to the places where the ghost goes. Note that this covers not only pre-scripted linear story telling, but also most forms of remote viewing (e.g., live sporting events).
Looked at another way, current general-purpose computers take much longer to place and create non-trivial geometry than special-purpose rendering hardware takes to render it (albeit with simplistic hardwired surface shaders). This trend is likely to get worse rather than better for some time to come. But for pre-scripted linear storytelling, nearly all of the geometry creation and much of the surface shading can be pre-computed. Thus, the primary run-time task is the rendering of large numbers of colored micropolygons. Therefore, 3D movies may be produced and viewed by rendering (e.g., in real time or near real time) streams of compressed geometry (e.g., in a head-tracked stereo viewing environment).
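One common building block of compressed geometry streams is uniform quantization of vertex coordinates to a small fixed number of bits. The sketch below shows the idea under stated assumptions; it is not the patent's compression scheme, and the function names and 10-bit default are illustrative.

```python
def quantize_vertices(vertices, lo, hi, bits=10):
    """Uniformly quantize vertex coordinates in [lo, hi] to `bits`-bit integers."""
    scale = (2 ** bits - 1) / (hi - lo)
    return [tuple(round((c - lo) * scale) for c in v) for v in vertices]

def dequantize_vertices(qvertices, lo, hi, bits=10):
    """Map quantized integer coordinates back to floats in [lo, hi]."""
    scale = (hi - lo) / (2 ** bits - 1)
    return [tuple(q * scale + lo for q in v) for v in qvertices]
```

The reconstruction error is bounded by half a quantization step, which for densely tessellated micropolygons can be below the visible threshold while cutting storage substantially.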
Linear narratives may be pre-computed using arbitrarily complex animation, physics of motion, shadowing, texturing, and surface modeling techniques, but with the viewpoint-dependent image-rendering deferred until play-back time. The geometry may be produced by a geometry shader that outputs micropolygons in world-space via programmable shaders in a realistic rendering package (e.g., Pixar's RenderMan). The technology may be used for a number of different applications, including entertainment, scientific visualization, and live 3D television.
In one embodiment, a method for generating three-dimensional movies comprises receiving three-dimensional graphics data representing a scene. A viewpoint limitation (e.g., a volume or a two-dimensional region with or without orientation limitations) is specified for said scene. Next, one or more frames representing the scene are partially rendered. These partially rendered frames are then compressed and output to a carrier medium for storage or transmission.
“Partially rendering” may comprise performing one or more non-viewpoint-dependent (i.e., viewpoint-independent) calculations, such as some lighting calculations (e.g., viewpoint-independent reflections, viewpoint-independent …
