Image storage method, image rendering method, image storage...

Computer graphics processing and selective visual display system – Computer graphics processing – Adjusting level of detail

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 06774898

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a method of storing a virtual image from space data generated based on actually captured data, a method of rendering a virtual space on the basis of stored space data, and apparatuses therefor and, more particularly, to progressive storage and rendering of the space data.
The present invention also relates to a virtual image rendering method that downloads large-size image data such as ray space data (to be abbreviated as RS data hereinafter), and presents the downloaded data to the user, and a rendering apparatus.
BACKGROUND OF THE INVENTION
Many schemes that describe and express a virtual space on the basis of actually captured images in place of a description based on three-dimensional geometric models have been proposed. Such schemes are called Image Based Rendering (to be abbreviated as IBR hereinafter), and are characterized in that they can express a virtual space with high reality, which cannot be obtained by a scheme based on three-dimensional geometric models, since these schemes are based on actually captured images.
Attempts to describe a virtual space on the basis of a ray space theory as one of the IBR schemes have been proposed. See, for example, “Implementation of Virtual Environment by Mixing CG model and Ray Space Data”, IEICE Journal D-II, Vol. J80-D-II, No. 11, pp. 3048-3057, November 1997, or “Mutual Conversion between Hologram and Ray Space Aiming at 3D Integrated Image Communication”, 3D Image Conference, and the like.
The ray space theory will be explained below.
As shown in FIG. 1, a coordinate system O-X-Y-Z is defined in a real space. A light ray that passes through a reference plane P (Z=z) perpendicular to the Z-axis is defined by a position (x, y) where the light ray crosses P, and variables θ and φ that indicate the direction of the light ray. More specifically, a single light ray is uniquely defined by five variables (x, y, z, θ, φ). If a function that represents the light intensity of this light ray is defined as f, light ray group data in this space can be expressed by f(x, y, z, θ, φ). This five-dimensional space is called a “ray space”. Generally, a time variation t is also used in some cases, but is omitted here.
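
As a concrete (purely illustrative) picture of this five-variable function, the following minimal Python sketch stores f(x, y, z, θ, φ) as a lookup table over quantized ray parameters; the quantization step and the dictionary representation are assumptions made for illustration, not the storage format of the patent.

# Minimal sketch of a discretized ray-space function f(x, y, z, theta, phi).
# The quantization step and the dictionary layout are illustrative assumptions.
STEP = 0.01  # assumed quantization step shared by all five ray parameters

def _bin(x, y, z, theta, phi):
    # Quantize the five parameters so that nearby rays fall into one bin.
    return tuple(round(v / STEP) for v in (x, y, z, theta, phi))

ray_space = {}  # quantized (x, y, z, theta, phi) -> light intensity

def store_ray(x, y, z, theta, phi, intensity):
    ray_space[_bin(x, y, z, theta, phi)] = intensity

def lookup_ray(x, y, z, theta, phi, default=0.0):
    return ray_space.get(_bin(x, y, z, theta, phi), default)
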
If the reference plane P is set at z=0, and the disparity information of a light ray in the vertical direction, i.e., the degree of freedom in the φ direction, is omitted, the degree of freedom of the light ray reduces to two dimensions (x, θ). This x-θ two-dimensional space is a partial space of the ray space. As shown in FIG. 3, if u = tan θ, a light ray (FIG. 2) which passes through a point (X, Z) in the real space is mapped onto a line in the x-u space, which is given by:
X = x + u·Z   (1)
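
As an illustration of equation (1) (not taken from the patent; the point and the angles below are arbitrary), the rays passing through a fixed real-space point (X, Z) all satisfy x = X − u·Z, so sampling a few directions traces out the corresponding line in the x-u plane:

import math

def xu_line_through_point(X, Z, thetas):
    """Return the (x, u) coordinates of rays through (X, Z) at the given angles."""
    points = []
    for theta in thetas:
        u = math.tan(theta)
        x = X - u * Z          # rearranged from equation (1): X = x + u*Z
        points.append((x, u))
    return points

# Example: three rays through the point (X, Z) = (2.0, 5.0)
print(xu_line_through_point(2.0, 5.0, [-0.3, 0.0, 0.3]))
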
Image sensing by a camera reduces to receiving the light rays that pass through the lens focal point of the camera on an image sensing surface, and converting their brightness levels and colors into an image. In other words, the light ray group which passes through one point, i.e., the focal point position, in the real space is captured as an image in correspondence with the number of pixels. Since the degree of freedom in the φ direction is omitted and the behavior of a light ray is examined only in the X-Z plane, only the pixels on a line segment that intersects a plane perpendicular to the Y-axis need be considered. In this manner, by sensing an image, the light rays that pass through one point can be collected, and the data on a single line segment in the x-u space can be captured by a single image sensing operation.
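
The paragraph above can be sketched in code as follows (an assumption-laden illustration: the pinhole model, the 60-degree field of view, and the function name are not from the patent). One horizontal scanline captured by a camera whose focal point is at (Xc, Zc) yields samples that all lie on the single line x = Xc − Zc·u in the x-u space:

import math

def scanline_to_xu_samples(scanline, Xc, Zc, fov=math.radians(60)):
    """Convert one scanline of pixel intensities into (x, u, intensity) samples."""
    n = len(scanline)
    samples = []
    for i, intensity in enumerate(scanline):
        # Spread the pixel directions evenly across the assumed field of view.
        theta = -fov / 2 + fov * (i + 0.5) / n
        u = math.tan(theta)
        # All samples lie on one line: equation (1) with the focal point
        # (Xc, Zc) as the fixed point, i.e. Xc = x + u*Zc.
        x = Xc - Zc * u
        samples.append((x, u, intensity))
    return samples
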
When this image sensing is done a large number of times by changing the viewpoint position (in this specification, the viewpoint position includes both the position of the viewpoint and the line-of-sight direction unless otherwise specified), light ray groups which pass through a large number of points can be captured. When the real space is captured using N cameras, as shown in FIG. 4, data on a line given by:
x + Zn·u = Xn   (2)
can be input in correspondence with the focal point position (Xn, Zn) of the n-th camera (n = 1, 2, . . . , N), as shown in FIG. 5. In this way, when an image is captured from a sufficiently large number of viewpoints, the x-u space can be densely filled with data.
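
Continuing the sketch above (and reusing the hypothetical scanline_to_xu_samples helper), data from N cameras can be rasterized into a discretized x-u buffer; each camera contributes one line of samples, and many viewpoints fill the plane densely. The bounds and resolution below are assumed values, not figures from the patent:

X_MIN, X_MAX = -10.0, 10.0     # assumed extent of the x axis
U_MIN, U_MAX = -2.0, 2.0       # assumed extent of the u axis
NX, NU = 512, 256              # assumed buffer resolution

def fill_xu_buffer(cameras):
    """cameras: iterable of (scanline, Xc, Zc); returns a 2D list indexed [iu][ix]."""
    buf = [[None] * NX for _ in range(NU)]
    for scanline, Xc, Zc in cameras:
        # Each camera contributes samples along one line of equation (2).
        for x, u, intensity in scanline_to_xu_samples(scanline, Xc, Zc):
            if X_MIN <= x < X_MAX and U_MIN <= u < U_MAX:
                ix = int((x - X_MIN) / (X_MAX - X_MIN) * NX)
                iu = int((u - U_MIN) / (U_MAX - U_MIN) * NU)
                buf[iu][ix] = intensity
    return buf
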
Conversely, an observation image from a new arbitrary viewpoint position can be generated (FIG. 7) from the data of the x-u space (FIG. 6). As shown in FIG. 7, an observation image from a new viewpoint position E(X, Z), indicated by an eye mark, can be generated by reading out the data on the line given by equation (1) from the x-u space.
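
The readout step can be sketched in the same illustrative vein (the image width, field of view, and default bounds are arbitrary assumptions, and the buffer is the one produced by the previous sketch): for a new viewpoint E(X, Z), each output pixel direction u is looked up at x = X − u·Z.

import math

def render_scanline(buf, X, Z, width=320, fov=math.radians(60),
                    x_range=(-10.0, 10.0), u_range=(-2.0, 2.0)):
    """Read the x-u buffer along the line of equation (1) for viewpoint E(X, Z)."""
    nu, nx = len(buf), len(buf[0])
    out = []
    for i in range(width):
        theta = -fov / 2 + fov * (i + 0.5) / width
        u = math.tan(theta)
        x = X - u * Z          # equation (1): X = x + u*Z
        if x_range[0] <= x < x_range[1] and u_range[0] <= u < u_range[1]:
            ix = int((x - x_range[0]) / (x_range[1] - x_range[0]) * nx)
            iu = int((u - u_range[0]) / (u_range[1] - u_range[0]) * nu)
            out.append(buf[iu][ix])   # None where no camera sampled this bin
        else:
            out.append(None)
    # A practical renderer would interpolate neighbouring samples rather than
    # return None for unfilled bins; this sketch only shows the line readout.
    return out
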
However, the above prior art performs arithmetic operations to convert every pixel of every actually captured image into light rays in the ray space. That is, if there are E actually captured images each having m×n pixels, the pixels are converted into light ray groups via E×m×n computations, resulting in a very large computation volume. In particular, when the light ray groups are mapped into the ray space at a density that maintains the resolution of the input images, and the RS data is then quantized, the quantized data size also becomes huge.
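
To put rough numbers on this (the figures are illustrative assumptions, not values from the patent): with E = 500 input images of 640 × 480 pixels, E×m×n already amounts to roughly 1.5 × 10^8 pixel-to-ray conversions, and even at one byte per quantized sample the raw RS data approaches 150 MB before compression:

# Illustrative cost estimate only; E, m, n and the sample size are assumptions.
E, m, n = 500, 640, 480            # number of images and per-image resolution
conversions = E * m * n            # pixel-to-ray conversions required
bytes_per_sample = 1               # assumed quantization to one byte per sample
print(conversions)                                  # 153,600,000 conversions
print(conversions * bytes_per_sample / 1e6, "MB")   # about 153.6 MB of raw RS data
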
It is an object of the present invention to provide an RS data storage method and apparatus which allow progressive display of RS data of a target object.
It is another object of the present invention to provide a method and apparatus for displaying progressively stored space data.
Actually captured image data such as the aforementioned RS data is compressed and stored in an external storage device or the like for each unit (e.g., for each object). Therefore, in order to render such space data in a virtual space, the data must be downloaded onto a main storage device, decoded, and then rendered from the main storage device. On the other hand, the user can recognize a given virtual space only after virtual images of all the virtual objects to be rendered in that space are displayed. Therefore, when there are a plurality of objects to be rendered, the user cannot recognize any of them until the space data of all these objects has been downloaded, decoded, and rendered. That is, when the user wants to walk through such a virtual space, the rendering apparatus exhibits poor response.
This is the second problem of the prior art in handling space data such as RS data.
The third problem of the prior art results from the fact that actually captured image data such as RS data contains a large volume of data. It is common practice to store such data in the form of a database at a location separate from the image processing apparatus. For this reason, when the image processing apparatus maps a virtual image into a virtual space, a large volume of space data must be downloaded into the image processing apparatus in advance. Owing to the huge size of actually captured image data, the turnaround time from when space data is requested until that space data is ready to be rendered in the image processing apparatus is not short, even though communication speeds have been improving recently. Under these circumstances, the user must be prevented from becoming bored during the wait until the actually captured image data is ready for use in a system that presents a virtual space to him or her. That is, during this wait time, a billboard image (a single image) with a short download time is displayed instead, although a scene from an arbitrary viewpoint position cannot then be obtained.
The fourth problem of the prior art occurs when a walk-through system, which allows the user to freely walk through a virtual space using actually captured image data such as RS data, has only a limited memory size. That is, in order to combat the aforementioned second problem, a technique for segmenting the virtual space into a plurality of subspaces (e.g., in the case of a virtual art museum, each exhibition room forms one subspace) can be proposed.
More specifically, when it is detected that the user is about to approach a given exhibition room, space data of that
