Apparatus for presenting mixed reality shared among operators

Computer graphics processing and selective visual display system – Image superposition by optical means – Operator body-mounted heads-up display

Reexamination Certificate


Details

C345S007000, C273S309000, C463S002000, C463S032000, C463S039000, C463S047500, C463S049000

Reexamination Certificate

active

06522312

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to a mixed reality presentation apparatus for presenting to a user or operator mixed reality which couples a virtual image generated by computer graphics to the real space. The present invention also relates to an improvement in the precise detection of, e.g., the head position and/or posture of an operator to whom mixed reality is presented.
In recent years, extensive studies have been made on mixed reality (to be abbreviated as “MR” hereinafter), directed to the seamless coupling of a real space and a virtual space. MR has earned widespread appeal as a technique for enhancing virtual reality (to be abbreviated as “VR” hereinafter), aiming at coexistence of the real space and the VR world, which conventionally could be experienced only in a situation isolated from the real space.
Applications of MR are expected in new fields qualitatively different from those of VR used so far, such as a medical assistance use, which presents the state of a patient's body to a doctor as if it were seen through, a work assistance use, which displays the assembly steps of a product on actual parts in a factory, and the like.
These applications commonly require a technique of removing “deviations” between the real space and the virtual space. The “deviations” can be classified into positional deviation, time deviation, and qualitative deviation. Many attempts have been made to remove the positional deviation (i.e., to attain alignment), the most fundamental requirement among the above deviations.
In the case of video-see-through MR, which superposes a virtual object on an image sensed by a video camera, the alignment problem reduces to accurate determination of the three-dimensional position of that video camera.
The alignment problem in the case of optical-see-through MR using a transparent HMD (Head Mounted Display) amounts to determination of the three-dimensional position of the user's viewpoint. As a method of measuring such a position, a three-dimensional position-azimuth sensor such as a magnetic sensor, an ultrasonic sensor, a gyro, or the like is normally used. However, the precision of such sensors is not sufficient, and their errors produce positional deviations.
On the other hand, in the video-see-through system, a method of direct alignment on the image, on the basis of image information and without using such sensors, may be used. With this method, since the positional deviation can be processed directly, alignment can be attained precisely. However, this method suffers from other problems, namely non-real-time processing and poor reliability.
In recent years, attempts to realize precise alignment by using a position-azimuth sensor and image information together, since they compensate for each other's shortcomings, have been reported.
As one attempt, “Dynamic Registration Correction in Video-Based Augmented Reality Systems” (Michael Bajura and Ulrich Neumann, IEEE Computer Graphics and Applications 15(5), pp. 52-60, 1995) (to be referred to as the first reference hereinafter) has proposed a method of correcting the positional deviation arising from magnetic sensor errors by using image information in video-see-through MR.
Also, “Superior Augmented Reality Registration by Integrating Landmark Tracking and Magnetic Tracking” (Andrei State et al., Proc. of SIGGRAPH 96, pp. 429-438, 1996) (to be referred to as the second reference hereinafter) has proposed a method which further develops the above method and compensates for the ambiguity of position estimation based on image information. The second reference sets a landmark, the three-dimensional position of which is known, in the real space so as to remove the positional deviation on the image caused by sensor errors, which arises when a video-see-through MR presentation system is built using only a position-azimuth sensor. This landmark serves as a yardstick for detecting the positional deviation from image information.
If the output from the position-azimuth sensor included no errors, the coordinate point (denoted as Q_I) of the landmark actually observed on the image would agree with the predicted observation coordinate point (denoted as P_I) of the landmark, which is calculated from the camera position obtained based on the sensor output and from the three-dimensional position of the landmark.
However, in practice, since the camera position obtained based on the sensor output is not accurate, Q_I and P_I do not agree with each other. The deviation between the observed landmark coordinate Q_I and the predicted observation coordinate P_I represents the positional deviation between the landmark positions in the virtual and real spaces and, hence, the direction and magnitude of the deviation can be calculated by extracting the landmark position from the image.
In this way, by quantitatively measuring the positional deviation on the image, the camera position can be corrected so as to remove the positional deviation.
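For illustration only (this sketch is not part of the patent text), the deviation measurement described above can be expressed in a few lines of Python, assuming a simple pinhole camera model with known intrinsic matrix K; all function and variable names here are hypothetical:

    import numpy as np

    def predicted_observation(K, R, t, landmark_world):
        # Landmark position P_C on the camera coordinate system, computed from
        # the (error-prone) camera pose (R, t) given by the position-azimuth sensor.
        p_c = R @ landmark_world + t
        # Predicted observation coordinate P_I on the image (pinhole projection).
        p = K @ p_c
        return p[:2] / p[2]

    def image_deviation(q_i, p_i):
        # Direction and magnitude of the positional deviation between the observed
        # landmark coordinate Q_I and the predicted coordinate P_I.
        d = q_i - p_i
        return d, np.linalg.norm(d)
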
The simplest alignment method using both a position-azimuth sensor and an image is the correction of sensor errors using a single landmark point; the first reference proposed a method of translating or rotating the camera position in accordance with the positional deviation of the landmark on the image.
FIG. 1 shows the basic concept of positional deviation correction using a single landmark point. In the following description, assume that the internal parameters of the camera are known, and that the image is sensed by an ideal image sensing system free from any influences of distortion and the like.
Let C be the view point position of the camera, Q_I be the observation coordinate position of a landmark on the image, and Q_C be the landmark position in the real space. Then, the point Q_I is present on a line l_Q that connects the points C and Q_C. On the other hand, from the camera position given by the position-azimuth sensor, a landmark position P_C on the camera coordinate system and its observation coordinate position P_I on the image can be estimated. In the following description, v_1 and v_2 respectively represent the three-dimensional vectors from the point C to the points Q_I and P_I. In this method, the positional deviation is corrected by modifying the relative positional information between the camera and the object so that a corrected predicted observation coordinate position P′_I of the landmark agrees with Q_I (i.e., so that a corrected predicted landmark position P′_C on the camera coordinate system is present on the line l_Q).
A case will be examined below wherein the positional deviation of the landmark is corrected by rotating the camera position. This correction can be realized by modifying the position information of the camera so that the camera rotates through the angle θ that the two vectors v_1 and v_2 make with each other. In actual calculations, the vectors v_1n and v_2n obtained by normalizing the above vectors v_1 and v_2 are used: their outer product v_1n × v_2n is used as the rotation axis, the angle given by their inner product (θ = arccos(v_1n · v_2n)) is used as the rotation angle, and the camera is rotated about the point C.
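As an illustrative sketch only (hypothetical names, not the patent's implementation), this rotation correction can be assembled with Rodrigues' formula; whether the resulting rotation is applied to the camera or, inversely, to the virtual object depends on the system's conventions:

    import numpy as np

    def rotation_correction(v1, v2):
        # v1, v2: three-dimensional vectors from the view point C to Q_I and P_I.
        v1n = v1 / np.linalg.norm(v1)
        v2n = v2 / np.linalg.norm(v2)
        axis = np.cross(v1n, v2n)              # rotation axis: v_1n x v_2n
        s = np.linalg.norm(axis)
        if s < 1e-12:                          # vectors already aligned; no correction
            return np.eye(3)
        axis /= s
        theta = np.arccos(np.clip(np.dot(v1n, v2n), -1.0, 1.0))  # rotation angle
        # Rodrigues' formula: rotation by theta about `axis` (about the point C);
        # this matrix rotates v_1n onto v_2n.
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
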
A case will be examined below wherein the positional deviation of the landmark is corrected by relatively translating the camera position. This correction can be realized by translating the object position in the virtual world by v = n(v_1 − v_2). Note that n is a scale factor defined by:

    n = |CP_C| / |CP_I|   (1)

Note that |AB| is a symbol representing the distance between points A and B. Likewise, correction can be attained by modifying the position information of the camera so that the camera translates by −v. This is because this manipulation is equivalent to relative movement of a virtual object by v.
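A minimal sketch of this translation correction under the same assumptions as above (illustrative only, with hypothetical names):

    import numpy as np

    def translation_correction(c, p_c, p_i, v1, v2):
        # c, p_c, p_i: 3-D positions of the points C, P_C, and P_I;
        # v1, v2: vectors from C to Q_I and P_I, as above.
        n = np.linalg.norm(p_c - c) / np.linalg.norm(p_i - c)  # scale factor, eq. (1)
        v = n * (v1 - v2)
        return v  # translate the virtual object by v (or the camera by -v)
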
The above-mentioned two methods two-dimensionally adjust the positional deviation at the landmark on the image, but cannot correct the camera position to the three-dimensionally correct position. However, when sensor errors are small, sufficient effects can be expected of these methods, and the calculation cost required for the correction is very small.
