Title: Image processing system
Patent number: 06400364
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
U.S. classes: C345S420000, C345S215000
Type: Reexamination Certificate
Status: active
Filed: 1998-05-27
Issued: 2002-06-04
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to image processing in a virtual reality space.
2. Related Background Art
The following methods are known for realizing simulation of a virtual world in virtual reality, that is, techniques of supplying the human sensory organs with computer-generated information to allow pseudo-experiences of human activities in an imaginary world or in a remote space.
For example, a three-dimensional (3D) position/direction detector (e.g., FASTRAK of 3SPACE Corporation, which measures a 3D position and Euler angles in real space by magnetic techniques) attached to the head of a player who experiences the virtual reality detects geometrical data. In accordance with this data, a computer calculates an image of a previously input model (3D configuration data of an object), taking its spatial and geometrical position into account. The calculated image is displayed on a head-mounted display, e.g., i-glasses of Virtual-io Corporation, to let the player experience the virtual world simulation.
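In outline, such a system repeats a simple cycle: read the head pose from the detector, render the model from that pose, and show the result on the head-mounted display. The following sketch is purely illustrative; all three helper functions are hypothetical stand-ins for the tracker, the renderer, and the display hardware.

# Illustrative sketch of the head-tracked simulation cycle described above.
# The three helpers are hypothetical stand-ins, not real device APIs.

def read_head_pose():
    # Stand-in for the magnetic 3D position/direction detector: returns the
    # head position and Euler angles measured in the real space.
    return (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)

def render_model(model, position, euler_angles):
    # Stand-in for the renderer: computes the image of the previously input
    # model as seen from the given head pose.
    return f"image of {model} from {position} / {euler_angles}"

def display_on_hmd(image):
    # Stand-in for the head-mounted display.
    print(image)

def simulation_frame(model):
    position, angles = read_head_pose()            # detect geometrical data
    image = render_model(model, position, angles)  # consider the spatial position
    display_on_hmd(image)                          # player views the image

simulation_frame("cube model")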
In such a system realizing a virtual reality, the image viewed by the player is generally generated by three-dimensional computer graphics (3D CG), described hereinunder.
In 3D CG, two main operations, “modeling” and “rendering”, are generally performed to form an image representing a 3D object.
Modeling is the operation of supplying a computer with data such as the shape, color, surface properties and the like of the object to be displayed as an image. For example, if a human image is to be formed, data describing the surface shape of the figure, the color of each area of the face, the light reflectivity, and so on is generated and stored in a format usable by the subsequent rendering operation. Such a collection of data is called an object model.
For example, in modeling a cubic shape such as that shown in FIG. 17, a modeling coordinate system is first formed which has, as its origin, one vertex of the cube. Coordinate data of the eight vertexes and surface loop data of the cube are then determined in this coordinate system, for example, as shown in FIGS. 18A and 18B. The collection of coordinate data and surface loop data is used as the model data of the object.
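As a concrete illustration, such a cube model can be held as a list of vertex coordinates plus surface loops. The sketch below uses a unit cube with the modeling coordinate origin at one vertex; the particular index ordering is an illustrative assumption, since FIGS. 18A and 18B give the actual tables.

# A minimal sketch of the cube model data described above: vertex
# coordinates in a modeling coordinate system whose origin is one vertex
# of a unit cube, plus surface (face) loops listing vertex indices.

vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom four vertexes
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top four vertexes
]

# Each surface loop lists the vertex indices bounding one face of the cube.
surface_loops = [
    (0, 1, 2, 3),  # bottom
    (4, 5, 6, 7),  # top
    (0, 1, 5, 4),  # front
    (1, 2, 6, 5),  # right
    (2, 3, 7, 6),  # back
    (3, 0, 4, 7),  # left
]

model = {"vertices": vertices, "loops": surface_loops}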
Rendering is the operation of generating an image of the object as viewed from a certain position, after the model has been formed. To perform rendering, therefore, conditions of viewpoint and illumination must be considered in addition to the model. The rendering operation is divided into four steps: “projection conversion”, “shielded surface erasing”, “shading” and “devising for reality”.
With “projection conversion”, the position on the screen of each coordinate value representing the model as viewed from the viewpoint is calculated, converting it into a coordinate value on the screen.
FIG. 19 shows the four coordinate systems used for the projection conversion. The shape data of an object defined in the modeling coordinate system is first converted into shape data in a world coordinate system (the coordinate system in which the models of the objects are arranged). Thereafter, a viewing conversion (visual field conversion) is performed to direct a selected camera in one of various directions and take the image of the object; the data of the object represented in the world coordinate system is converted into data in a viewpoint coordinate system. For this conversion, a screen (visual field window) is defined in the world coordinate system. This screen is the final projection or picture plane of the object, and the coordinate system defining it is called a UVN coordinate system (screen coordinate system).

If all objects in front of the viewpoint were drawn, the calculation time could become unnecessarily long, so it is sometimes necessary to restrict the working area. This area is called the viewing volume (visual field space), and the restriction process is called clipping. Within the viewing volume, the plane nearest the camera is called the near (front) clipping plane, and the plane remotest from the camera is called the far (rear) clipping plane. The visual field conversion is performed by moving the screen in one of various directions. After the visual field conversion, the cross point on the picture plane (screen) of a line extending between the viewpoint and each point of the 3D shape of the object is calculated to obtain the image of the object projected upon the screen, as shown in FIG. 20. In this case, the image is formed by central projection, which has a finite distance between the viewpoint and the picture plane. With this projection conversion, the data in the viewpoint coordinate system is converted into data in the UVN coordinate system.
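As a sketch, the central projection can be written in a few lines. The snippet below assumes a viewpoint coordinate system with the viewpoint at the origin, the camera looking along the +z axis, and the picture plane at distance d in front of the viewpoint; these conventions, defaults, and names are illustrative assumptions, not the patent's definitions.

# Minimal sketch of central projection with near/far clipping, assuming
# the viewpoint is at the origin of the viewpoint coordinate system and
# the camera looks along +z, with the picture plane at distance d.

def project(point, d=1.0, near=0.1, far=100.0):
    x, y, z = point
    # Clipping against the near and far planes of the viewing volume.
    if not (near <= z <= far):
        return None
    # Intersection of the line from the viewpoint through the point with
    # the picture plane: similar triangles give u = d*x/z, v = d*y/z.
    return (d * x / z, d * y / z)

print(project((2.0, 1.0, 4.0)))   # -> (0.5, 0.25)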
Next, “shielded surface erasing” is performed to judge which areas of the model can and cannot be viewed from the present viewpoint. Typical approaches to the shielded surface erasing algorithm are the Z buffer method and the scan line method. After the visible areas have been determined by the shielded surface erasing, illumination is taken into consideration to judge in what color and at what brightness each area is viewed, and the determined color is drawn at the corresponding screen pixels. This process is the shading step.
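The Z buffer method mentioned above can be sketched as follows: each pixel stores the depth of the nearest surface drawn so far, and a new fragment replaces the stored color only when it is nearer. This is a generic illustration of the algorithm, not the implementation of the system described here.

# Minimal sketch of the Z buffer method for shielded surface erasing:
# a new fragment is written only if it is nearer than what is stored.

WIDTH, HEIGHT = 4, 3
z_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
frame    = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]   # black background

def write_fragment(x, y, depth, color):
    # The fragment is visible only if no nearer surface covers this pixel.
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        frame[y][x] = color   # shading result for this pixel

write_fragment(1, 1, depth=5.0, color=(255, 0, 0))  # far red surface
write_fragment(1, 1, depth=2.0, color=(0, 255, 0))  # nearer green surface wins
print(frame[1][1])   # -> (0, 255, 0)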
The “devising for reality” step is generally performed at the end of rendering. It is needed because an image formed by “projection conversion”, “shielded surface erasing” and “shading” alone looks very different from a real object and holds no interest for the player. The reason is that these processes assume that the surface of an object is an ideal flat plane or a perfectly smooth curved surface representable by formulas, and that the color of each surface is uniform over its whole area. One typical method of avoiding this and making the image more realistic is texture mapping. With texture mapping, a prepared two-dimensional pattern is pasted (mathematically speaking, an image of the pattern is mapped) onto the surface of an object model in the 3D space. This process aims at making an object constituted of monotonous surfaces appear to have complicated surfaces; with it, a simple cubic model can be made to look like a metal or stone object.
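As a rough illustration of texture mapping, the sketch below pastes a tiny two-dimensional checkerboard pattern onto a surface through texture coordinates (u, v); the pattern, the nearest-neighbor lookup, and the names used are all illustrative assumptions.

# Minimal sketch of texture mapping: a prepared 2D pattern (here a 2x2
# checkerboard) is "pasted" on a surface by giving each surface point
# texture coordinates (u, v) in [0, 1] and looking the pattern up there.

texture = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]   # checkerboard pattern

def sample(u, v):
    h, w = len(texture), len(texture[0])
    # Map (u, v) in [0,1] x [0,1] to a texel of the pattern
    # (nearest-neighbor sampling keeps the example short).
    tx = min(int(u * w), w - 1)
    ty = min(int(v * h), h - 1)
    return texture[ty][tx]

# The color drawn at a surface point now comes from the pattern, so a
# flat, monotonous surface appears to have a complicated surface.
print(sample(0.25, 0.25))   # -> (255, 255, 255)
print(sample(0.75, 0.25))   # -> (0, 0, 0)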
After the “projection conversion”, “shielded surface erasing”, “shading” and “devising for reality” steps, the image of the object in the UVN coordinate system is finally converted into an image in a device coordinate system, which is then displayed on the display device. One rendering pass is completed in the above manner.
FIG. 21 shows an image (with its background drawn entirely in black) obtained by converting the image projected on the screen shown in FIG. 20 into the device coordinate system and displaying it on the display screen. The device coordinate system is used when the pixels and dots of an image are displayed, and is assumed to be the same coordinate system as that of the display screen (a and b in FIG. 21 represent the numbers of pixels of the display screen).
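The conversion into the device coordinate system can be sketched as a simple viewport transform. The snippet assumes screen coordinates (u, v) in [-1, 1] with v pointing up, and a device coordinate system counting pixels from the top-left corner of an a-by-b display; these ranges and the default pixel counts are illustrative assumptions, not values from the patent.

# Minimal sketch of the final UVN (screen) to device coordinate
# conversion, assuming (u, v) in [-1, 1] and pixels counted from the
# top-left corner of an a x b display (a and b as in FIG. 21).

def uvn_to_device(u, v, a=640, b=480):
    px = int((u + 1) / 2 * (a - 1))          # left -> right
    py = int((1 - (v + 1) / 2) * (b - 1))    # device y runs downward
    return px, py

print(uvn_to_device(0.0, 0.0))   # screen center -> (319, 239)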
In forming CG animation by giving motion to an image (CG image) formed by the method described above, two methods are mainly used.
With the first method, an object model is placed in a 3D space, and rendering is carried out each time the illumination conditions, the viewpoint conditions (position, direction and angle of view of the viewpoint), the model shape and color, and the like are changed slightly. After a series of animation images has been formed, or after each image is rendered, the images are recorded frame by frame (frame-recorded) on a video tape recorder or the like. After all images have been recorded, they are reproduced by a reproducing apparatus. With this method, the time required for rendering each image may be prolonged within an allowable range (depending on the time required for rendering one image and on the time required for forming all the animation images). It is therefore possible to form a high quality image.
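As a sketch, this first method amounts to the following offline loop, where render_scene is a hypothetical stand-in for the whole rendering pipeline described above, and the frame count and per-frame change are arbitrary illustrative values.

# Minimal sketch of the first animation method: conditions are changed
# slightly between frames, each frame is rendered offline (taking as long
# as needed), and the frames are recorded one by one for later playback.

def render_scene(viewpoint_angle):
    # Stand-in for projection conversion, shielded surface erasing,
    # shading and texture mapping at this viewpoint.
    return f"frame rendered at angle {viewpoint_angle:.1f}"

recorded_frames = []
for frame_no in range(120):              # e.g. 4 seconds at 30 frames/s
    angle = frame_no * 0.5               # slight change per frame
    image = render_scene(angle)          # rendering may be slow: offline
    recorded_frames.append(image)        # "frame-recorded", as on a VTR

# Playback: the recorded frames are reproduced in sequence.
print(recorded_frames[0], "...", recorded_frames[-1])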
Inventors: Akisada Hirokazu; Tamai Shunichi
Assignee: Canon Kabushiki Kaisha
Law firm: Fitzpatrick, Cella, Harper & Scinto
Examiners: Nguyen Kimbinh T.; No, Cliff N. (Department: 2671)