System and process for geometry replacement
Patent number: 06724386
Type: Reexamination Certificate
Status: Active
Filed: 2001-10-23
Issued: 2004-04-20
Examiner: Nguyen, Phu K. (Department: 2671)
Assignee: Sony Corporation
Attorney/Agent: Foley & Lardner
Classification: Computer graphics processing and selective visual display system > Computer graphics processing > Three-dimension
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to systems and processes for image editing and, more particularly, to a system and process for replacing unwanted geometry with new geometry utilized in a 3-dimensional (“3-D”) tracking process.
2. Description of Related Art
Media productions have benefited in recent years from technical advances in animation and in computer generated images. Increasingly, producers and directors of media productions are creating scenes comprising a combination of real and computer generated images that appear to be interacting with each other and co-existing within the same real or virtual space. These new techniques can create realistic special effects such as computer generated dinosaurs or mice interacting with real people, or the destruction of recognizable cities by computer generated asteroids.
These new techniques are used in media productions such as motion pictures, television shows, television commercials, videos, multimedia CD-ROMs, web productions for the Internet/intranet, and the like. The process involves at least three general phases:
1) scene recording or production
2) camera and environment 3-D solving
3) compositing and other 2-D image manipulation.
The first phase creates and captures the actual media images (i.e., live action footage, animation, computer graphics) used in the finished piece. Live action footage may be recorded, for example, in media formats such as film, videotape, and audiotape, or in the form of live media such as a broadcast video feed. The media information is captured through devices like cameras and microphones from the physical world of actual human actors, physical models and sets. Computer generated images (CGI), such as computer graphics, computer animation, and synthesized music and sounds, may be created by using computers and related electronic devices to synthetically model, generate and manipulate images and sounds, typically under the guidance and control of a human operator.
The next phase of 3-D solving re-creates the scene recorded in the first phase within the computer as a virtual 3-D environment with associated camera motion(s) and attributes. The identical recreation of the original scene within the computer allows computer graphic images to be generated (rendered) as if they were real objects shot by the original production camera. These rendered images can include computer graphic characters such as dinosaurs, a computer graphics mouse, a computer graphics building, or whatever is required for the scene. The rendering process can also create additional elements and aspects of the basic computer graphics image, such as shadows, reflections, and any other conceivable artifact that would have been present if the computer graphics character were actually a real object photographed in the original, real production environment.
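To make the relationship between a solved camera and a rendered image concrete, the following minimal sketch (Python with numpy; the function and parameter names are illustrative assumptions, not taken from the patent) projects a 3-D world point through a simple pinhole camera model into 2-D screen coordinates, which is, in simplified form, what the renderer does when it draws a computer graphics object as if it had been photographed by the production camera.

```python
import numpy as np

def project_point(point_3d, camera_pos, camera_rot, focal_px, image_size):
    """Project a 3-D world point into 2-D pixel coordinates.

    camera_rot is a 3x3 world-to-camera rotation matrix; focal_px is the
    focal length expressed in pixels. All names are illustrative only.
    """
    # Transform the point from world space into the camera's coordinate frame.
    p_cam = camera_rot @ (np.asarray(point_3d, float) - np.asarray(camera_pos, float))
    if p_cam[2] <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide, then shift the origin to the image centre.
    u = focal_px * p_cam[0] / p_cam[2] + image_size[0] / 2.0
    v = focal_px * p_cam[1] / p_cam[2] + image_size[1] / 2.0
    return u, v

# Example: a camera at the world origin looking down +Z.
print(project_point([0.5, 0.2, 4.0], [0.0, 0.0, 0.0], np.eye(3), 1000.0, (1920, 1080)))
# -> (1085.0, 590.0)
```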
The third phase combines, integrates, and assembles these real (production) and computer generated (rendered) images, which may have been produced out of sequence and through various methods, into a coherent finished product, using operations such as editing, compositing, and mixing. This process should result in real and computer generated images that appear to co-exist and interact as if they were captured or created at the same time, in the same space, and from the same viewpoint. In post-production, the images are combined (or “composited”) to generate believable results by adjusting the visual characteristics of the rendered object, as well as its associated rendered artifacts (such as shadows), to match the original production scene. These adjustments may include manipulating colors, brightness, gamma, size, the appearance of film grain (to match the original image if it was shot on film), and many other visual attributes as appropriate for that particular scene and its medium. The compositing phase may also fix some or all errors, objectionable image aspects, and/or other visual problems introduced during independent and often very separate production steps.
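As a rough illustration of this compositing step, the sketch below (assuming premultiplied-alpha RGBA images stored as floating-point numpy arrays; it does not describe any particular compositing package) applies a gamma adjustment to a rendered element and then places it over the background plate with the standard "over" operation.

```python
import numpy as np

def over(foreground, background):
    """Composite a premultiplied-alpha RGBA foreground over a background.

    Both images are float arrays in [0, 1] with shape (height, width, 4).
    """
    alpha = foreground[..., 3:4]
    return foreground + background * (1.0 - alpha)

def adjust_gamma(image, gamma):
    """Apply a simple gamma curve to the colour channels only."""
    out = image.copy()
    out[..., :3] = np.clip(out[..., :3], 0.0, 1.0) ** (1.0 / gamma)
    return out

# Toy example: brighten a rendered element slightly, then place it over the plate.
h, w = 4, 4
plate = np.zeros((h, w, 4)); plate[..., :3] = 0.5; plate[..., 3] = 1.0
element = np.zeros((h, w, 4)); element[..., :3] = 0.2; element[..., 3] = 0.4
element = adjust_gamma(element, 1.2)       # match the plate's apparent response
element[..., :3] *= element[..., 3:4]      # premultiply before the over operation
composite = over(element, plate)
```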
Recording the original production scene sometimes requires special shooting conditions such as bluescreen photography, and the scene may require special or unusual sets, equipment, and foreign objects to be present in the original recording. Some or all of this equipment, these objects, and these conditions are not intended to appear in the final product, yet they may be necessary in order to produce the scene, for example to meet specialized lighting requirements, to address safety concerns, to ensure proper execution of the final special effects, or for other similar considerations.
One of the difficulties of combining and integrating computer graphics images into live action scenes occurs during the second step of 3-D solving. This phase attempts to solve and match all of the necessary three dimensional characteristics (such as positioning and movement) of the live action scenes. Even though the physical set of a live/recorded production is inherently 3-D, the recorded result is a 2-dimensional (“2-D”) image from the camera's perspective. It is therefore very difficult to recreate and match the 3-D positioning and movement of the CGI to the recorded live action scene. Human visual acuity is sufficiently precise to detect anomalies in the relative scale, positioning, and dimensional relationship of the CGI to the live action scene. These relative characteristics must be accurately matched to obtain realism and present the viewer with a seamless view of the composite image.
Thus, it is advantageous to have a 3-D model of the live action scene to assist in integrating the CGI into the live action scene. One method for generating such a 3-D model is commonly referred to as 3-D tracking (also referred to in the present disclosure as “3-D matchmove” or “3-D solve”).
To assist in the 3-D tracking process, several tracking markers or objects may be placed within the scene that is to be recorded, even though they are considered foreign objects relative to the expected final image. These markers assist in determining the 3-D coordinates of the camera motion and other camera-related parameters, as well as in creating a 3-D recreation of the objects in the scene and their 3-D spatial relationship to each other. Tracking markers can be sticker dots, tennis balls, painted lines, or other markers or objects that will be discernible in the recorded image of the scene. The tracking markers are usually placed on features within the scene and are specifically placed and designed to stand out in the recorded image and assist in the 2-D tracking and 3-D solving process. In general, the more tracking markers there are within the scene, the more accurate the 3-D solve will be.
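As an illustration of how a discernible marker can be followed from frame to frame in 2-D screen space, the sketch below implements a brute-force normalized cross-correlation search in Python with numpy. It is a simplified stand-in for the 2-D point tracking step, not the patent's method or that of any named product.

```python
import numpy as np

def track_marker(frame, template, prev_xy, search_radius=20):
    """Locate a marker template in a grayscale frame near its previous position.

    Brute-force normalized cross-correlation over a small search window;
    returns the (x, y) of the best-matching template position.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    px, py = prev_xy
    best_score, best_xy = -np.inf, prev_xy
    for y in range(max(0, py - search_radius), min(frame.shape[0] - th, py + search_radius)):
        for x in range(max(0, px - search_radius), min(frame.shape[1] - tw, px + search_radius)):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy

# Toy example: an 8x8 bright marker moves by (+5, +3) pixels between two frames.
frame0 = np.zeros((480, 640)); frame0[100:108, 200:208] = 1.0
frame1 = np.zeros((480, 640)); frame1[103:111, 205:213] = 1.0
template = frame0[96:112, 196:212]          # 16x16 patch containing the marker
print(track_marker(frame1, template, prev_xy=(196, 96)))   # -> (201, 99)
```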
The scene is then recorded. The recorded scene is then scanned or similarly imported into a computer graphics program used for 2-D point or 2-D feature tracking (such as Combustion or Composer) or into a 3-D graphics scene recreation and solving application which includes 2-D point or 2-D feature tracking (such as 3D Equalizer or Matchmover). The tracking markers within the scene may then be tracked in 2-D screen space. A 3-D graphics application (such as 3D Equalizer or Matchmover) may then use tracking algorithms known in the art to mathematically convert (i.e., solve) the 2-D tracking information of the recorded scene into 3-D coordinates of the scene (a “3-D map”).
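The internal algorithms of products such as 3D Equalizer or Matchmover are not described here; purely as an illustration of how 2-D tracks can yield 3-D coordinates, the sketch below triangulates one tracked feature seen by two already-solved cameras using a standard linear (DLT-style) formulation in numpy. The camera matrices and pixel coordinates are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT-style) triangulation of one tracked feature from two views.

    P1 and P2 are 3x4 projection matrices of two solved cameras; uv1 and uv2
    are the feature's 2-D pixel coordinates in each view. Returns the
    estimated 3-D position in world space.
    """
    (u1, v1), (u2, v2) = uv1, uv2
    # Each view contributes two linear constraints on the homogeneous 3-D point.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras one unit apart along X, both looking down +Z (f = 1000 px).
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 540.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (1085.0, 590.0), (835.0, 590.0)))   # -> approx. [0.5, 0.2, 4.0]
```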
Alternatively, human operators can attempt to recreate the camera and the environment manually, relying on their own knowledge of the scene. Whether the 3-D map is produced through a dedicated application and solving algorithm or through manual means, this 3-D map of the original production scene in the computer's virtual space is used to assist in rendering the desired computer graphics images.
Once the 2-D screen-space motions have been converted, or solved, into a 3-D map, the 3-D map may be exported from the 3-D graphics solving application.
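The excerpt does not specify an interchange format, and real packages each define their own; purely as a hypothetical sketch, a solved map of cameras and points might be serialized along these lines (all field names are invented for illustration).

```python
import json

# A toy solved scene: per-frame camera transforms plus triangulated 3-D points.
# The structure and field names are invented for illustration, not a real format.
solved_map = {
    "cameras": [
        {"frame": 1, "position": [0.0, 0.0, 0.0], "rotation_deg": [0.0, 0.0, 0.0], "focal_px": 1000.0},
        {"frame": 2, "position": [0.02, 0.0, 0.1], "rotation_deg": [0.0, 0.5, 0.0], "focal_px": 1000.0},
    ],
    "points": [
        {"id": 0, "position": [0.5, 0.2, 4.0]},
        {"id": 1, "position": [-0.3, 0.1, 3.6]},
    ],
}

with open("solved_scene.json", "w") as f:
    json.dump(solved_map, f, indent=2)
```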