Space rendering method, virtual space rendering apparatus,...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

Status: active
Patent number: 06825837

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a space rendering method for presenting a virtual space to the user while downloading and decoding large-size compressed image data such as ray space data, a virtual space rendering apparatus, a cache device, and a storage medium.
BACKGROUND OF THE INVENTION
Many schemes that describe and express a virtual space on the basis of actually sensed images in place of a description based on three-dimensional geometric models have been proposed. Such schemes are called Image Based Rendering (to be abbreviated as IBR hereinafter), and are characterized in that they can express a virtual space with high reality, which cannot be obtained by a scheme based on three-dimensional geometric models. Attempts to describe a virtual space on the basis of a ray space theory as one of IBR schemes have been proposed. See, for example, “Implementation of Virtual Environment by Mixing CG model and Ray Space Data”, IEICE Journal D-11, Vol. J80-D-11 No. 11, pp. 3048-3057, November 1997, or “Mutual Conversion between Hologram and Ray Space Aiming at 3D Integrated Image Communication”, 3D Image Conference, and the like.
The ray space theory will be explained below. As shown in FIG. 1, a coordinate system 0-X-Y-Z is defined in the real space. A light ray that passes through a reference plane P (Z=z) perpendicular to the Z-axis is defined by the position (x, y) where the light ray crosses P, and by variables θ and φ that indicate the direction of the light ray. More specifically, a single light ray is uniquely defined by five variables (x, y, z, θ, φ). If a function that represents the light intensity of this light ray is defined as f, light ray group data in this space can be expressed by f(x, y, z, θ, φ). This five-dimensional space is called a "ray space". In general, a time variable t is sometimes also used, but it is omitted here. If the reference plane P is set at z=0, and the disparity information of a light ray in the vertical direction, i.e., the degree of freedom in the φ direction, is omitted, the degree of freedom of the light ray can be degenerated to two dimensions (x, θ). This x-θ two-dimensional space is a partial space of the ray space. As shown in FIG. 3, if u = tanθ, a light ray (FIG. 2) which passes through a point (X, Z) in the real space is mapped onto a line in the x-u space, said line given by:

X = x + u·Z   (1)
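As a concrete illustration of equation (1), the following Python sketch (the function name is a hypothetical helper, not from the specification) maps light rays that pass through a fixed real-space point (X, Z) into the x-u plane and checks that they all fall on the single line X = x + u·Z:

```python
import math

# Illustrative sketch of the x-u mapping of equation (1).
# A ray through the real-space point (X, Z) with direction theta has
# u = tan(theta) and crosses the reference plane z = 0 at
# x = X - u * Z, so it lies on the line X = x + u * Z.

def ray_to_xu(X, Z, theta):
    """Map one light ray through (X, Z) with direction theta to its
    (x, u) coordinates in the two-dimensional ray space."""
    u = math.tan(theta)
    x = X - u * Z            # rearranged form of X = x + u * Z
    return x, u

# Every ray through the same point maps onto one line in x-u space:
for theta in (-0.4, -0.2, 0.0, 0.2, 0.4):
    x, u = ray_to_xu(2.0, 3.0, theta)
    assert abs((x + u * 3.0) - 2.0) < 1e-9
```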
Image sensing by a camera reduces to receiving the light rays that pass through the lens focal point of the camera on an image sensing surface, and converting their brightness levels and colors into an image. In other words, a light ray group which passes through one point, i.e., the focal point position, in the real space is captured as an image in correspondence with the number of pixels. Since the degree of freedom in the φ direction is omitted here, and the behavior of a light ray is examined only in the X-Z plane, only the pixels on a line segment that intersects the plane perpendicular to the Y-axis need be considered. In this manner, by sensing an image, light rays that pass through one point can be collected, and data on a single line segment in the x-u space can be captured by a single image sensing operation.
When this image sensing is done a large number of times by changing the viewpoint position (in this specification, the viewpoint position includes both the position of the viewpoint and the line-of-sight direction unless otherwise specified), light ray groups which pass through a large number of points can be captured. When the real space is sensed using N cameras, as shown in FIG. 4, data on a line given by:

Xn = x + u·Zn   (2)

can be input in correspondence with the focal point position (Xn, Zn) of the n-th camera (n=1, 2, . . . , N), as shown in FIG. 5. In this way, when images are sensed from a sufficiently large number of viewpoints, the x-u space can be densely filled with data.
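The multi-camera sampling described by equation (2) can be sketched as follows (the grid resolution, value ranges, camera layout, and function name are all illustrative assumptions): each of the N cameras contributes samples along its own line Xn = x + u·Zn, and together the cameras populate a discretized x-u grid.

```python
import math

# Illustrative sketch: each camera at focal point (Xn, Zn) captures a
# fan of rays; each ray is binned at (x, u) = (Xn - u*Zn, tan(theta)),
# i.e. on that camera's line Xn = x + u*Zn of equation (2).

def fill_xu_grid(cameras, thetas, x_range=(-5.0, 5.0), u_range=(-1.0, 1.0), res=64):
    grid = [[0] * res for _ in range(res)]   # grid[ui][xi] = sample count
    for Xn, Zn in cameras:
        for theta in thetas:
            u = math.tan(theta)
            x = Xn - u * Zn
            if x_range[0] <= x < x_range[1] and u_range[0] <= u < u_range[1]:
                xi = int((x - x_range[0]) / (x_range[1] - x_range[0]) * res)
                ui = int((u - u_range[0]) / (u_range[1] - u_range[0]) * res)
                grid[ui][xi] += 1
    return grid

# 16 cameras on an arc, each sampling 100 viewing directions:
cameras = [(4.0 * math.cos(a), 5.0 + 4.0 * math.sin(a))
           for a in [i * math.pi / 15 for i in range(16)]]
thetas = [-0.7 + i * 1.4 / 99 for i in range(100)]
grid = fill_xu_grid(cameras, thetas)
```

With enough cameras and directions, most cells of the grid receive at least one sample, which is the "densely filled" condition described above.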
Conversely, an observation image from an arbitrary new viewpoint position can be generated (FIG. 7) from the data of the x-u space (FIG. 6). As shown in FIG. 7, an observation image from a new viewpoint position E(X, Z), indicated by an eye mark, can be generated by reading out the data on the line given by equation (1) from the x-u space.
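Reading out such a line can be sketched as follows (a hypothetical sketch; the stored-intensity layout, resolution, and helper name are assumptions, not from the specification): for each viewing direction θ of the virtual camera at E(X, Z), the pixel value is looked up at (x, u) = (X − tanθ·Z, tanθ), i.e. along the line of equation (1).

```python
import math

# Hypothetical sketch of novel-view generation: f_xu is a 2-D list of
# ray intensities indexed as f_xu[ui][xi]. For a virtual viewpoint
# E(X, Z), each viewing direction theta reads the stored value at
# (x, u) = (X - tan(theta)*Z, tan(theta)) -- the line of equation (1).

def render_view(f_xu, X, Z, thetas, x_range=(-5.0, 5.0), u_range=(-1.0, 1.0)):
    res_u, res_x = len(f_xu), len(f_xu[0])
    pixels = []
    for theta in thetas:
        u = math.tan(theta)
        x = X - u * Z
        xi = int((x - x_range[0]) / (x_range[1] - x_range[0]) * res_x)
        ui = int((u - u_range[0]) / (u_range[1] - u_range[0]) * res_u)
        if 0 <= xi < res_x and 0 <= ui < res_u:
            pixels.append(f_xu[ui][xi])
        else:
            pixels.append(0.0)          # no captured ray for this direction
    return pixels
```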
Actually sensed image data such as the aforementioned ray space data is compressed and stored in an external storage device or the like for each unit (e.g., for each object). Therefore, in order to render such space data in a virtual space, the data must be downloaded onto a main storage device, decoded, and rendered there. On the other hand, the user can recognize a given virtual space only after virtual images of all the virtual objects to be rendered in that virtual space are displayed. Therefore, when there are a plurality of objects to be rendered, the user cannot recognize the virtual objects until the space data of all these objects have been downloaded, decoded, and rendered. That is, when the user wants to walk through such a virtual space, the excessive processing requirement results in a rendering apparatus with poor response.
This is the first problem of the prior art upon handling space data such as ray space data.
The second problem of the prior art results from the fact that actually sensed image data such as ray space data contain a large volume of data. It is common practice to store such data in the form of a database at a location separate from the image processing apparatus. For this reason, when the image processing apparatus maps a virtual image into a virtual space, a large volume of space data must be downloaded into the image processing apparatus in advance. Owing to the huge size of actually sensed image data, the turnaround time from when space data is requested until that space data is ready to be rendered in the image processing apparatus is not short, although communication speeds have improved recently. Under these circumstances, in a system that presents a virtual space, the user must be prevented from becoming bored during the wait until the actually sensed image data is ready for use. That is, during this wait time a billboard image (a single image) with a short download time is displayed instead, although a scene from an arbitrary viewpoint position cannot then be obtained.
The third problem of the prior art occurs when a walk-through system which allows the user to freely walk through a virtual space using actually sensed image data such as ray space data has a limited memory size. In order to combat the aforementioned first problem, a technique for segmenting a virtual space into a plurality of subspaces (e.g., in the case of a virtual art museum, each exhibition room forms one subspace) can be proposed.
More specifically, when it is detected that the user is about to approach a given exhibition room, only the space data of that exhibition room is pre-fetched, to shorten the transfer time prior to rendering. Furthermore, when the user is about to leave that exhibition room (subspace A), the space data for the next subspace (e.g., exhibition room B) must be overwritten onto the memory area that stored the space data of the previous exhibition room (subspace A). In this manner, the virtual subspaces of the exhibition rooms can be reproduced in turn in nearly real time even with a relatively small memory size.
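The pre-fetch-and-overwrite scheme described above might look like the following in outline (the class, the distance threshold, and the loader callable are all illustrative assumptions, not part of the specification):

```python
import math

# Illustrative sketch: a single memory area holds one subspace's data.
# When the viewpoint comes within prefetch_radius of the entrance to
# the next subspace, its data is fetched and overwrites the buffer
# that held the previous subspace.

class SubspaceCache:
    def __init__(self, loader, prefetch_radius=2.0):
        self.loader = loader          # callable: subspace name -> data
        self.prefetch_radius = prefetch_radius
        self.resident = None          # name of the subspace in memory
        self.data = None

    def update(self, viewpoint, next_subspace, entrance):
        """Pre-fetch next_subspace once the viewpoint (x, z) is within
        prefetch_radius of its entrance point."""
        dist = math.hypot(viewpoint[0] - entrance[0],
                          viewpoint[1] - entrance[1])
        if dist <= self.prefetch_radius and self.resident != next_subspace:
            self.data = self.loader(next_subspace)   # overwrite in place
            self.resident = next_subspace

# Usage: approaching exhibition room B triggers exactly one fetch.
loads = []
cache = SubspaceCache(lambda name: loads.append(name) or ("data:" + name))
cache.update((10.0, 10.0), "B", (0.0, 0.0))   # still far away: no fetch
cache.update((1.0, 1.0), "B", (0.0, 0.0))     # inside zone: fetch B
cache.update((1.0, 1.0), "B", (0.0, 0.0))     # already resident: no re-fetch
assert loads == ["B"]
```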
This pre-fetch start timing is determined depending on whether or not the viewpoint position of the user approaches a target subspace. However, since the user's viewpoint position is moved using a mouse or the like without any high-precision route guide, the user may often be guided along a wrong route. That is, when it is erroneously detected that the user's viewpoint position, which has not actually reached a pre-fetch start zone, has reached that zone, the system starts the pre-fetch. Such an operation error readily occurs especially when the user's viewpoint position moves near the pre-fetch start zone. For example, as
