Multi-user real-time augmented reality system and method

Computer graphics processing and selective visual display system – Computer graphics processing – Graph generating

Details

Type: Reexamination Certificate
Status: active
U.S. Classification: 348/143
Patent number: 6317127

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to interactive image viewing systems, and more specifically to an image viewing system that broadcasts images, augmented with point-of-interest overlays, over a video bus to multiple local users, who simultaneously extract and display different viewpoints of the images in real time with uniform angular resolution over all viewer orientations.
2. Description of the Related Art
Known interactive image viewing systems allow a single person to view his surroundings through an external camera when those surroundings are either remotely located or otherwise occluded from direct view. Typically, the person wears a head-mounted display (HMD) fitted with a tracking device that tracks the movement of the person's head. The HMD displays the portion of the surroundings that corresponds to the person's current viewpoint, i.e., head position.
Fakespace, Inc. produces a teleoperated motion platform, the MOLLY™, which is used for telepresence and remote sensing applications. The user wears an HMD to view his surroundings. An external stereo camera setup is slaved to the user's head movement and provides the desired view of the surroundings. The user's field-of-view (FOV) is approximately 350°, of which the user can view approximately 80° at any one time using Fakespace's BOOM head-coupled display. By slaving camera movement to head position, the displayed image is optimal in the sense that it is the same as the user would see by turning and looking in that direction absent any occlusion. As a result, the angular resolution of the displayed image is the same at any viewing orientation. However, because the external camera is slaved to a particular user's head movement, the system is limited to a single user and does not support multiple views simultaneously.
In a more general application of the slaved camera, if the camera is remotely located from the user, the time delay between head movement and image presentation is distracting. If the time delay exceeds a certain limit, the user will be unable to experience realistic sensations of interacting with the imagery, may lose his sense of orientation, and, in the worst case, may even experience motion sickness.
To address the time delay problem, Hirose et al. developed a "Virtual Dome" system, described in "Transmission of Realistic Sensation: Development of a Virtual Dome," IEEE VRAIS, pp. 125-131, January 1993. In the Virtual Dome system, a single external camera pans and tilts to capture a complete image of the surrounding area. The component images are transmitted to a graphics workstation that texture maps them onto a set of polygons forming a spherical dome, which in turn provides a virtual screen. Using an HMD, the user can experience remote synthetic sensations by looking around inside the virtual screen.
Due to the computational complexity of the texture mapping process, the user requires a graphics workstation to construct the virtual screen, the image resolution is quite low, and the response to head movements is not real-time. To improve response time, once the system has generated an entire image, the camera is slaved to the user's head movement. As a result, when the user changes his head orientation, he initially sees previously obtained images for a few seconds until the texture mapping process can catch up. However, by slaving the camera motion to the user's head movement, the system is again limited to a single user.
A related technology is embodied in "Virtual Reality" systems, as described in "Virtual Reality: Scientific and Technological Challenges," National Research Council, Nathaniel Durlach and Anne Mavor, Eds., National Academy Press, 1995, pp. 17-23. In Virtual Reality systems, a single user interacts with computer-generated imagery. The user's actions change not only his view of the imagery but the imagery itself.
SUMMARY OF THE INVENTION
The present invention provides a high-resolution, real-time, multi-user augmented reality system in which the angular resolution of the displayed video signal is constant over the range of possible viewing orientations.
This is accomplished by assuming that each user's position is fixed at the center of a virtual sphere, that the image sources' orientations are known relative to that center, and that the users are looking at a portion of the inner surface of the sphere. As a result, the "flat" images generated by the image sources and the "flat" images viewed by the users can be efficiently mapped onto the virtual sphere and represented as index pairs (d, n). Each user then extracts those pixels on the virtual sphere corresponding to the user's current field-of-view (FOV) and remaps them to a flat display. The video signals can be augmented with synthetic point-of-interest data, such as visual overlays or audio messages, that are registered to the video. The mapping directly provides the desired uniform angular resolution as well as distortion correction, which would otherwise have to be computed independently. In addition, the simplicity of the mapping itself and the resulting representation on the virtual sphere support high-resolution video, real-time response, and multiple users.
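The patent does not spell out how the index pair (d, n) discretizes the virtual sphere. Below is a minimal Python sketch of one plausible equal-angle scheme, assuming d indexes elevation bands and n indexes azimuth steps at a fixed angular resolution; ALPHA, direction_to_dn, and dn_to_direction are illustrative names rather than terms from the patent.

```python
import numpy as np

# Assumed fixed angular resolution (the patent's "alpha"); the value here
# is arbitrary and purely illustrative.
ALPHA = np.radians(0.05)

def direction_to_dn(v):
    """Quantize a 3-D viewing direction to an index pair (d, n)."""
    x, y, z = v / np.linalg.norm(v)
    elevation = np.arcsin(z)        # in [-pi/2, +pi/2]
    azimuth = np.arctan2(y, x)      # in [-pi, +pi)
    d = int(round((elevation + np.pi / 2) / ALPHA))
    n = int(round((azimuth + np.pi) / ALPHA))
    return d, n

def dn_to_direction(d, n):
    """Inverse mapping: unit direction at the center of cell (d, n)."""
    elevation = d * ALPHA - np.pi / 2
    azimuth = n * ALPHA - np.pi
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])
```

Because the quantization step is the same for every direction, a scheme of this kind yields the uniform angular resolution over all viewing orientations that the summary claims.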
In a preferred embodiment, an airplane is fitted with a wide FOV sensor system that provides a plurality of video signals at known positions and orientations in a reference coordinate system (x⃗, y⃗, z⃗), a global positioning system (GPS) that provides the airplane's position and heading, and a point-of-interest database. The video signals are each mapped onto a virtual sphere in the reference coordinate system such that each sensor pixel (s, x_s, y_s, i), where s identifies the sensor, (x_s, y_s) are the pixel coordinates, and i is the pixel intensity, is represented as a triplet (d, n, i), where (d, n) is an index pair that specifies the spherical coordinates on the virtual sphere at a fixed angular resolution α. The triplets are continuously broadcast over a video bus to the passengers on the airplane. Similarly, the visual and audio overlay data in the system's wide FOV is extracted from the point-of-interest database, mapped into (d, n) index pairs, and broadcast over an overlay bus.
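Continuing the sketch above, the sensor-side mapping from a pixel (s, x_s, y_s, i) to a triplet (d, n, i) can be modeled with an assumed pinhole camera per sensor; the rotations R, per-sensor focal lengths f, and principal point (cx, cy) are hypothetical stand-ins for the sensors' known positions and orientations, which the patent does not parameterize.

```python
import numpy as np

def sensor_pixel_to_triplet(s, x_s, y_s, i, R, f, cx, cy):
    """Map one sensor pixel (s, x_s, y_s, i) to a triplet (d, n, i).

    R[s] is an assumed 3x3 rotation from sensor s's frame into the
    reference frame, and f[s] its focal length in pixels. Reuses
    direction_to_dn from the previous sketch.
    """
    ray_cam = np.array([x_s - cx, y_s - cy, f[s]])  # ray in the sensor frame
    ray_ref = R[s] @ ray_cam                        # rotate into the reference frame
    d, n = direction_to_dn(ray_ref)                 # quantize on the virtual sphere
    return d, n, i
```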
Each passenger has an HMD that tracks the movement of the passenger's head and displays a "flat" subimage of the broadcast video signal. A local processor responds to the head movement by mapping the subimage pixels to the corresponding portion of the virtual sphere to identify the range of (d, n) values encompassed in the passenger's local FOV. Thereafter, the local processor downloads those triplets (d, n, i) in the local FOV, remaps them to the passenger's image-plane pixel coordinates (x_u, y_u, i), and transmits them to the HMD for display. Alternatively, the (s, x_s, y_s, i) pixels can be broadcast over the video bus with the local processor performing both spherical mappings. This increases the number of computations but allows the angular resolution of the displayed subimage to adapt to each passenger's focal length. Furthermore, because the overlay data has low information content relative to the video data, each passenger could interact directly with the point-of-interest database to retrieve only the desired overlays in the current local FOV.
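The passenger-side extraction just described can be sketched the same way: for each display pixel (x_u, y_u), the local processor forms a ray from the HMD's tracked orientation, quantizes it to (d, n), and copies the broadcast intensity into the flat subimage. The dictionary sphere and the names R_head and f_u are illustrative assumptions; direction_to_dn comes from the first sketch.

```python
import numpy as np

def render_local_fov(R_head, sphere, width, height, f_u):
    """Remap broadcast (d, n, i) triplets into a flat subimage.

    R_head: assumed 3x3 rotation for the HMD's current orientation.
    sphere: assumed dict mapping (d, n) -> latest broadcast intensity i.
    f_u:    the passenger's display focal length in pixels.
    """
    image = np.zeros((height, width), dtype=np.uint8)
    cx, cy = width / 2.0, height / 2.0
    for y_u in range(height):
        for x_u in range(width):
            ray = R_head @ np.array([x_u - cx, y_u - cy, f_u])
            d, n = direction_to_dn(ray)      # from the first sketch
            image[y_u, x_u] = sphere.get((d, n), 0)
    return image
```

Letting each passenger choose f_u independently mirrors the alternative above, in which both spherical mappings run locally so that the displayed angular resolution can adapt to each passenger's focal length.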
For a better understanding of the invention, and to show how the same may be carried into effect, reference will now be made, by way of example, to the accompanying drawings.


REFERENCES:
patent: US 5384588 (1995-01-01), Martin et al.
patent: US 5422653 (1995-06-01), Maguire, Jr.
patent: US 5488675 (1996-01-01), Hanna
patent: US 5748194 (1998-05-01), Chen
patent: US 5841439 (1998-11-01), Pose et al.
patent: US 5905499 (1999-05-01), McDowall et al.
patent: EP 0 592 652 A1 (1994-04-01)
patent: EP 0 696 018 A2 (1996-02-01)
patent: GB 2 289 820 (1995-11-01)
patent: WO 96/21321 (1996-07-01)
