Method and system for stereo videoconferencing

Television – Two-way video and voice communication – Conferencing

Reexamination Certificate


Details

U.S. classification: C348S014080, C348S014160

Type: Reexamination Certificate

Status: active

Patent number: 06583808

ABSTRACT:

TECHNICAL FIELD
The present invention relates to the fields of virtual reality and teleconferencing, and in particular to three-dimensional videoconferencing.
BACKGROUND OF THE INVENTION
Teleconferencing permits people in different geographical locations to communicate without the time, effort and expense of travelling to a meeting place. Most current videoconferencing systems use a single camera and a single monitor at each location. If more than two locations participate in the videoconference, the video display is generally divided into windows, and each window displays the video image from one of the locations.
Other more recent technologies include immersive video, in which a three-dimensional model of an environment is created and a viewer can move around within this virtual environment. Computer generated images are created to provide the viewer with a perspective view from a virtual spatial location within the virtual environment.
One such system is disclosed in U.S. Pat. No. 5,850,352, which issued Dec. 15, 1998 to Moezzi et al. The patent describes an immersive video system that synthesizes images of a real-world scene. The synthesized images are linked to a particular perspective on the scene or an object in the scene, and a user can specify various views, including panoramic or stereoscopic. The system uses computerized video processing (called “hypermosaicing”) of multiple video perspectives on the scene: multiple video cameras, each at a different spatial location, produce multiple two-dimensional (2D) video images of the scene; a video data analyzer detects and tracks scene objects and their locations; an environmental model builder combines the multiple scene images into a three-dimensional (3D) dynamic model recording scene objects and their instantaneous spatial locations; and a visualizer generates one or more selectively synthesized 2D video images of the scene using the 3D model and a viewing criterion.
Moezzi et al. require building a 3D dynamic model of the environment, and of the people within it, from which stereo pairs are synthesized. Building 3D dynamic models of moving people and then synthesizing views of those models is computationally intensive and, with currently available technology, can be prohibitively slow and expensive.
Another patent illustrating the state of the art is U.S. Pat. No. 5,999,208, which issued Dec. 7, 1999 to McNerney et al. The patent describes a virtual reality mixed media meeting room which provides a user with a visually familiar conference format. The various aspects of the virtual reality conference are presented in a rendering that emulates the physical appearance and presence of the physical participants and communication devices that would be present in a traditional conference room. Each conference participant is represented on a display by his/her image in a selected chair. Conference participants can share applications and jointly participate in the modification of presentations and displays, but the participants are not realistically represented and cannot move around within the meeting room.
There therefore exists a need for a method and system for stereo videoconferencing that can provide an immersive three-dimensional experience to permit meeting participants to interact with each other in a realistic way, while avoiding the computationally intensive process of computing participants' images using three-dimensional models.
SUMMARY OF THE INVENTION
It is therefore an object of the invention to provide a method for stereo videoconferencing that provides a realistic immersive three-dimensional environment for participants.
It is a further object of the invention to provide a system for stereo videoconferencing that efficiently uses bandwidth to support real-time seamless, immersive, three-dimensional videoconferencing.
The virtual meeting room system of the present invention is designed to create the illusion of immersion in a real meeting by recreating stereoscopic views of a virtual meeting from the viewpoint of a participant. Instead of creating dynamic 3D models of participants, the system only transmits stereo pairs of video images of each participant to each of the other participants.
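Because each participant transmits a viewpoint-specific stereo pair to every other participant rather than a shared 3D model, the number of directed pair streams grows as n(n-1). A minimal illustration of this count (the function name is ours, not the patent's):

```python
def stereo_pair_streams(n_participants: int) -> int:
    """Directed stereo-pair video streams when every participant sends a
    viewpoint-specific pair to each of the other participants."""
    return n_participants * (n_participants - 1)

# A three-way conference carries 3 * 2 = 6 pair streams.
```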
In accordance with another aspect of the present invention, the system further comprises means for determining the position of a participant's head and hands to permit interaction with objects in the virtual environment.
In accordance with an aspect of the invention, there is provided a stereo videoconferencing system for at least two participants in at least two separate locations, comprising: means in each location for providing a reference point; means for sensing a position of each participant with respect to the reference point; means for capturing at least two video images of each participant, each video image being from a different perspective; means for computing a stereo pair of video images of each participant for each of the other participants using at least two video images and the respective position of each of the other participants; means for communicating the respective stereo pairs of video images of each participant to each of the other participants; and means for assembling a stereo video display image for each of the participants, using the position data and the stereo pairs of video images.
In accordance with another aspect of the present invention, there is provided a method for stereo videoconferencing for at least two participants in at least two separate locations, comprising steps of: providing a reference point at each location; sensing a position of each participant with respect to the reference point; capturing at least two video images of each participant, each video image being from a different perspective; computing a stereo pair of video images of each participant for each of the other participants; communicating the respective stereo pairs of video images of each participant to each of the other participants; and assembling a stereo video display image for each of the participants, using the position data and the stereo pairs of video images.
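The claimed steps can be sketched as a per-participant loop. This is a toy, self-contained illustration only: the patent claims "means" without specifying an algorithm, and a real system would synthesize the pair by view morphing the captured camera images (cf. the cited Seitz et al. paper). Here an "image" is just a list of intensities, "morphing" is linear interpolation between two camera views, and all names (StereoPair, compute_stereo_pair, conference_step) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StereoPair:
    left: list   # left-eye image (toy: a list of pixel intensities)
    right: list  # right-eye image

def interpolate(view_a, view_b, t):
    """Blend two captured views; t in [0, 1] selects a virtual viewpoint
    between the two camera positions (a crude stand-in for view morphing)."""
    return [(1 - t) * a + t * b for a, b in zip(view_a, view_b)]

def compute_stereo_pair(views, viewer_x, baseline=0.06):
    """Synthesize a stereo pair for a viewer at horizontal offset viewer_x
    (0..1 across the camera baseline), offsetting each eye by half an
    assumed interocular baseline."""
    a, b = views
    left = interpolate(a, b, max(0.0, viewer_x - baseline / 2))
    right = interpolate(a, b, min(1.0, viewer_x + baseline / 2))
    return StereoPair(left, right)

def conference_step(captured_views, remote_positions):
    """One iteration of the claimed method for the local participant:
    compute a stereo pair tailored to each remote participant's sensed
    position, ready to be transmitted to that participant."""
    return {pid: compute_stereo_pair(captured_views, x)
            for pid, x in remote_positions.items()}

# Two camera views of the same (one-dimensional) scene, captured from
# different perspectives, and two remote participants' sensed positions.
views = ([10, 20, 30], [30, 40, 50])
pairs = conference_step(views, {"alice": 0.5, "bob": 0.0})
```

Each remote participant receives only the pair computed for their own viewpoint, which is what lets the system avoid transmitting or rendering a full 3D model.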


REFERENCES:
patent: 5495576 (1996-02-01), Ritchey
patent: 5850352 (1998-12-01), Moezzi et al.
patent: 5999208 (1999-12-01), McNerney et al.
patent: 6198484 (2001-03-01), Kameyama
patent: 6263100 (2001-07-01), Oshino et al.
patent: 406113336 (1994-04-01), None
patent: 406351013 (1994-12-01), None
patent: 10084539 (1998-03-01), None
Seitz, Steve et al., “View Morphing”, Online publication downloaded on Jun. 28, 2001.
“Real-Time Tracking of the Human Body”, Online publication downloaded on Jun. 28, 2001.
Selected pages downloaded from the website of Dresden 3D on Jun. 29, 2001.
Selected page downloaded from the website of Fastgraph on Jul. 5, 2001.
Selected pages downloaded from the website of LinCom on Jul. 12, 2001.
Selected page downloaded from the website of Stereoscopy on Jul. 12, 2001.
Selected pages downloaded from the website of Studio 3D on Jul. 12, 2001.
Selected pages downloaded from the website of Ascension Technology Corporation on Jul. 13, 2000.
