Internet system for virtual telepresence

Computer graphics processing and selective visual display system – Display driving control circuitry – Controlling the condition of display elements


Details

Classification: C345S419000, C345S420000, C345S424000
Type: Reexamination Certificate
Status: active
Patent number: 06573912


BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to the efficient representation and communication of synthesized perspective views of three-dimensional objects, and more specifically to paring down the number of Internet packets that must be sent in real time at a network client's request to support interactive video sessions.
2. Description of the Prior Art
The limited bandwidth of Internet connections severely constrains the interactive real-time communication of graphical images, especially three-dimensional images. Ordinarily, dozens of video cameras would be trained on a single three-dimensional subject, each from a different perspective. A user could then pick one of the perspectives to view, or pick one that can be interpolated from several of the nearby perspectives. But sending all this information in parallel and computing all the interpolations that many users could request can overtax the server and its Internet pipe.
It is possible to represent solid objects with so-called “voxels”. These are the three-dimensional equivalents of the pixels used to paint two-dimensional pictures. Each voxel has an x,y,z address in space, and a value that indicates whether the point is inside or outside the solid. The voxel map can be computed from the video images provided by a sufficient number of perspectives. The surface appearance of the solid can also be captured by each such camera. Interpolated intermediate images can be obtained by warping or morphing.
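As a concrete illustration, such a voxel map can be held in a simple three-dimensional array. The following Python sketch is illustrative only; the grid size, the sphere used to fill it, and all names are assumptions, not taken from the patent.

    # A voxel map: a 3-D grid in which each cell records whether the point
    # at that (x, y, z) address lies inside or outside the solid.
    # Illustrative sketch; the 64^3 size and sphere content are assumptions.
    import numpy as np

    occupancy = np.zeros((64, 64, 64), dtype=bool)  # True = inside the solid

    # Fill a sphere of radius 20 voxels centered in the grid.
    x, y, z = np.indices(occupancy.shape)
    occupancy[(x - 32)**2 + (y - 32)**2 + (z - 32)**2 <= 20**2] = True

    def is_inside(vx, vy, vz):
        """Return True if the voxel at address (vx, vy, vz) is inside."""
        return bool(occupancy[vx, vy, vz])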
U.S. Pat. No. 5,613,048, issued to Chen and Williams, describes a first approach for interpolating solid structures. An offset map between two neighboring images is developed from correspondence maps. Such patent is incorporated herein by reference.
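The flavor of such offset-map interpolation can be sketched as a forward warp scaled by an interpolation parameter. This Python sketch is a loose illustration, not the method claimed in the cited patent; the naive splatting and the names are assumptions.

    # Forward-warp an image a fraction t of the way toward a neighboring
    # view, using a per-pixel offset (correspondence) map.
    import numpy as np

    def interpolate_view(image, offset, t):
        """image: (H, W, 3); offset: (H, W, 2) pixel displacements to the
        neighboring view; t in [0, 1] selects an in-between viewpoint."""
        h, w = image.shape[:2]
        out = np.zeros_like(image)
        ys, xs = np.indices((h, w))
        xt = np.clip(np.rint(xs + t * offset[..., 0]).astype(int), 0, w - 1)
        yt = np.clip(np.rint(ys + t * offset[..., 1]).astype(int), 0, h - 1)
        out[yt, xt] = image[ys, xs]  # naive splat; holes are left unfilled
        return out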
A second approach uses structural information about an object. Voxel information is derived from the video images provided by several cameras. A depth map can be calculated for each camera's viewpoint from correspondences between surface points, e.g., by triangulation. Another technique intersects the silhouettes seen by the cameras, as sketched below. Once the voxels for a solid are determined, intermediate (virtual) views can be obtained from neighboring (real) views.
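A minimal sketch of the silhouette-intersection idea follows, assuming a placeholder project() routine that maps a world point to pixel coordinates for a given camera; every name here is an illustrative assumption.

    # Silhouette intersection: a voxel survives only if it projects inside
    # every camera's silhouette mask. `project` is a hypothetical placeholder.
    import numpy as np

    def carve(occupancy, cameras, silhouettes, project):
        """cameras: per-camera parameters; silhouettes: matching (H, W)
        boolean masks. The occupancy grid is modified in place."""
        for vx, vy, vz in np.argwhere(occupancy):
            for cam, sil in zip(cameras, silhouettes):
                u, v = project(cam, (vx, vy, vz))
                h, w = sil.shape
                inside = 0 <= int(v) < h and 0 <= int(u) < w and sil[int(v), int(u)]
                if not inside:
                    occupancy[vx, vy, vz] = False  # outside some silhouette
                    break
        return occupancy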
Prior art methods for the three-dimensional reconstruction of remote environments consume enormous computational and communication resources, and require far too many sensors to be economically feasible. Real-time applications are therefore practically impossible with conventional techniques for modeling and rendering object appearance.
Recent advances at the “Virtualized Reality” laboratory at Carnegie Mellon University (CMU) demonstrate that real-time three-dimensional shape reconstruction is possible. Video-based view-generation algorithms can produce high-quality results, albeit with small geometric errors.
Research in the three-dimensional reconstruction of remote environments has shown that it is possible to recover both object appearance and sounds in remote environments. The methods for modeling object appearance, however, consume enormous computational and communication resources, and require far too many sensors to be economically feasible. These traits make real-time applications nearly impossible without fundamental algorithmic improvements. We therefore focus our attention on techniques for modeling and rendering object appearance, which can loosely be divided into three groups: direct three-dimensional, image-space, and video-based.
Direct methods of three-dimensional reconstruction measure the time-of-flight or phase variations in active illumination reflected from the scene. These measurements are converted directly into three-dimensional distances. Because of their reliance on active illumination, multiple sensors cannot coexist in the same environment. As a result, they are inappropriate for real-time three-dimensional reconstruction of complete environments.
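The conversion itself is direct, as a one-function Python sketch shows (illustrative only): the one-way distance is half the round-trip time multiplied by the speed of light.

    # Time-of-flight: light travels to the surface and back.
    C = 299_792_458.0  # speed of light, m/s

    def tof_distance(round_trip_seconds):
        return C * round_trip_seconds / 2.0

    # Example: a 10 ns round trip corresponds to roughly 1.5 m.
    print(tof_distance(10e-9))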
Image-space methods create a database of all possible rays emanating from every object in all directions. To generate a new image, all the rays that pass through the desired viewpoint are projected onto a plane. See A. Katayama, K. Tanaka, T. Oshino, and H. Tamura, “A Viewpoint Dependent Stereoscopic Display Using Interpolation of Multi-viewpoint Images”, SPIE Proc. Vol. 2409: Stereoscopic Displays and Virtual Reality Systems II, pp. 11-20, 1995; M. Levoy and P. Hanrahan, “Light Field Rendering”, SIGGRAPH '96, August 1996; and S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The Lumigraph”, SIGGRAPH '96, 1996. Such references are all examples of image-space methods, and all can produce high-quality images. However, these techniques require thousands of viewpoints, making them impractical for real-time event capture.
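As a rough illustration of the image-space idea, the two-plane parameterization used in light-field rendering reduces view synthesis to table lookups. The grid resolutions and names below are assumptions, not values from the cited papers.

    # Light-field lookup sketch: each ray is indexed by its intersections
    # (s, t) and (u, v) with two parallel planes; rendering a new view
    # fetches the nearest stored ray for each output pixel.
    import numpy as np

    rays = np.zeros((16, 16, 64, 64, 3), dtype=np.uint8)  # hypothetical RGB ray database

    def sample_ray(s, t, u, v):
        """s, t, u, v in [0, 1): plane coordinates of the desired ray."""
        si = int(s * 16); ti = int(t * 16)
        ui = int(u * 64); vi = int(v * 64)
        return rays[si, ti, ui, vi]  # nearest neighbor; real systems interpolate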
Video-based modeling and rendering methods explicitly create three-dimensional model structures and use real video images as models of scene appearance. A three-dimensional model structure is extracted from a set of video images. New views are generated by projecting the original video images onto the three-dimensional model, which can then be projected into the desired viewpoint.
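A coarse Python sketch of the projection step, assuming a standard 3x4 pinhole camera matrix; the helper names are illustrative, not drawn from any cited work.

    # Color a 3-D surface point by projecting it into a source video image;
    # the colored model can then be rendered from the desired viewpoint.
    import numpy as np

    def project(P, X):
        """P: 3x4 camera matrix; X: 3-vector world point -> (u, v) pixel."""
        x = P @ np.append(X, 1.0)
        return x[0] / x[2], x[1] / x[2]

    def color_point(X, source_image, P_source):
        u, v = project(P_source, X)
        h, w = source_image.shape[:2]
        if 0 <= int(v) < h and 0 <= int(u) < w:
            return source_image[int(v), int(u)]
        return None  # point not visible in this source view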
Images from two viewpoints can be used to estimate three-dimensional structure in image-based stereo reconstruction. Given the positions, orientations, and focal lengths of the cameras, correspondences are used to triangulate the three-dimensional position of each point of the observed surface. The output is called a depth image or range image; each pixel is described with a distance rather than a color. A survey of stereo algorithms is given by U. R. Dhond and J. K. Aggarwal in “Structure From Stereo—A Review”, IEEE Trans. on Pattern Analysis and Machine Intelligence, pp. 1489-1510, 1989. While stereo methods can provide three-dimensional structure estimates, they are so far unable to produce high-quality, high-accuracy results on a consistent basis across a reasonable variation in scene content.
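For a rectified camera pair, the triangulation reduces to the textbook relation Z = f·B/d: depth equals focal length times baseline divided by disparity. A small Python sketch with illustrative numbers:

    # Depth from stereo disparity for a rectified camera pair.
    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        if disparity_px <= 0:
            return float('inf')  # zero disparity: point at infinity (or a failed match)
        return focal_px * baseline_m / disparity_px

    # Example: 1000 px focal length, 0.10 m baseline, 25 px disparity -> 4.0 m.
    print(depth_from_disparity(25.0, 1000.0, 0.10))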
The recovery of complete three-dimensional models of a scene requires multiple range images, because a single range image includes three-dimensional structure only for the visible surfaces. At the “Virtualized Reality” laboratory at Carnegie Mellon University, the present inventor, Takeo Kanade, has shown that formulating this problem as a volumetric reconstruction process yields high-quality, robust solutions even in the presence of the errors made in the stereo processes. See P. W. Rander, P. J. Narayanan, and T. Kanade, “Recovery of Dynamic Scene Structure from Multiple Image Sequences”, Int'l Conf. on Multisensor Fusion and Integration for Intelligent Systems, 1996.
The volume containing the objects can be decomposed into small samples, e.g., voxels. Each voxel is then evaluated to determine whether it lies inside or outside the object. When neighboring voxels have differing status (i.e., one inside and one outside), the object surface must pass between them. This property is used to extract the object surface, usually as a triangle-mesh model, once all voxels have been evaluated. The technique is similar to the integration techniques used with direct three-dimensional measurement, with some modifications to improve its robustness to errors in the stereo-computed range images. See B. Curless and M. Levoy, “A Volumetric Method for Building Complex Models from Range Images”, SIGGRAPH '96, 1996; A. Hilton, A. J. Stoddart, J. Illingworth, and T. Windeatt, “Reliable Surface Reconstruction From Multiple Range Images”, Proceedings of ECCV '96, pp. 117-126, April 1996; and M. Wheeler, “Automatic Modeling and Localization for Object Recognition”, Ph.D. thesis, Carnegie Mellon University, 1996.
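The neighbor test that locates the surface can be written directly against the occupancy grid. This Python sketch is a simplification of a full triangle-mesh extractor such as marching cubes; it only marks the voxel faces the surface must cross.

    # Wherever an inside voxel abuts an outside voxel, the surface passes
    # between them. Compare each voxel with its successor along each axis.
    import numpy as np

    def surface_faces(occupancy):
        """Return, per axis, a boolean array marking adjacent voxel pairs
        with differing inside/outside status."""
        faces = []
        for axis in range(3):
            lo = [slice(None)] * 3
            hi = [slice(None)] * 3
            lo[axis] = slice(None, -1)   # voxels 0 .. n-2 along this axis
            hi[axis] = slice(1, None)    # voxels 1 .. n-1 along this axis
            faces.append(occupancy[tuple(lo)] != occupancy[tuple(hi)])
        return faces  # np.argwhere(faces[k]) lists crossing addresses on axis k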
A principal limitation of these methods is processing speed. For example, CMU clustered seventeen Intel Pentium II-based PCs interconnected by a 10BASE-T Ethernet network, and still needed more than 1000 seconds to process each second of video input.
Once a three-dimensional structure is available, two methods can be used to generate arbitrary views.
