Camera-based interface to a virtual reality application

Patent number: 06407762
Type: Reexamination Certificate
Status: Active
Filed: 1997-03-31
Issued: 2002-06-18
Examiner: Jankus, Almis R. (Department: 2671)
Classification: Computer graphics processing and selective visual display system – Display driving control circuitry – Controlling the condition of display elements
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a virtual reality interface. More specifically, the present invention pertains to a method and apparatus for interacting with a virtual reality application.
2. Description of Related Art
Virtual reality has come to have many different definitions. One useful definition is “virtual reality is the delivery to a human of the most convincing illusion possible that they are in another reality.” D. Harrison & M. Jaques, Experiments in Virtual Reality, p. 2 (Butterworth-Heinemann, 1996). This virtual reality is located in digital electronic form in the memory of a computer. Thus, virtual reality is another way for humans to interact with a computer, for example, visually and/or by manipulating an object in the virtual space defined by the virtual reality.
Several methods currently exist that allow one to visualize, hear, navigate and/or manipulate objects in a virtual world or space. A virtual reality user has three main experiences in a virtual reality world: manipulation, navigation and immersion. Manipulation is defined as the ability to reach out, touch and move objects in the virtual world. Navigation is defined as the ability to move about and explore the virtual world. Id. at 8. Immersion is the complete enclosure of the user so that the user perceives that he/she is actually in the virtual world.
Immersion is usually accomplished with the use of a head-mounted display (HMD) that provides visual signals to the user, as well as audio and tactile signals. HMDs suffer from several disadvantages. First, HMDs are cumbersome to use. Second, the HMD user can become motion sick.
Projected reality is an alternative to immersion. In projected (virtual) reality, the user sees him/herself projected into the action appearing on the screen. Projected reality uses several methods to interface between the user and the computer. For example, data gloves may be used for immersion as well as for projected reality. When the user wears the data glove, the user's hand movements are communicated to the computer so that the user may, for example, move his/her hand into the graphic representation of a virtual object and manipulate it.
Unfortunately, data gloves suffer from several disadvantages. First, there is often a delay between the user moving the data glove and seeing the corresponding virtual hand movement on the display. Second, to use the gloves successfully, the electromechanical sensors on the data gloves often require constant recalibration. Third, affordable data gloves that accurately translate the user's hand movements into virtual hand movements in the virtual space are not currently available. Finally, data gloves and HMDs may be bothersome for a user to wear and to use.
A mouse is another interface that has been used to interact with a three-dimensional (3-D) display. Clicking on the mouse controls icons or graphical user interfaces that then control the movement of a virtual object. This is illustrated in FIG. 1A, in which a prior art World Wide Web browser 100 is shown. A three-dimensional virtual object 103 is displayed on a three-dimensional plane 105. Three graphical user interfaces, 109, 111 and 113, are used to control the movements of the virtual 3-D object 103 and the virtual 3-D circular plane 105. The virtual object 103 and virtual plane 105, however, move as a single unit. The user uses a mouse to click on graphical user interface 109 to move the virtual object 103 and the virtual plane 105 toward the user and/or away from the user. If the user wants to move virtual object 103 and virtual plane 105 up or down, then the user clicks on and moves graphical user interface 111 accordingly. The user clicks on graphical user interface 113 to rotate the virtual object 103 and the virtual plane 105. The user has difficulty simultaneously translating and rotating object 103. Moreover, it is difficult for the user to translate the movements of the mouse into control of graphical user interfaces 109, 111 and 113. Thus, there is no direct linear correlation between the information the user supplies via the mouse, the resulting motion of graphical user interfaces 109, 111 and 113, and the ultimate movement of virtual object 103 and virtual plane 105.
FIG. 1B illustrates the situation where the user has clicked on graphical user interface 113 to slightly rotate virtual object 103 and virtual plane 105. Instead, virtual object 103 and virtual plane 105 are over-rotated so that they are partially off the display of the Web browser 100. Thus, the user has difficulty accurately predicting and controlling the movement of 3-D virtual objects. In addition, the user has difficulty simultaneously rotating object 103 and moving it up or down, or toward or away from the user. Thus, the user has difficulty fully controlling any particular virtual object using the currently available input/output devices. Furthermore, the user has difficulty simultaneously combining more than two of the six possible degrees of freedom.
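To make this limitation concrete, the following Python sketch (hypothetical; the patent does not describe how the browser of FIGS. 1A and 1B is implemented) models each graphical user interface 109, 111 and 113 as a handler that updates exactly one component of a transform shared by object 103 and plane 105, so only one degree of freedom can change per mouse interaction, and an over-rotation like that of FIG. 1B is easy to produce:

    # Hypothetical model of the widget-based prior-art interface of
    # FIGS. 1A-1B; the widget numbers follow the figure description.
    class SceneTransform:
        def __init__(self):
            self.z = 0.0      # toward/away from the user (widget 109)
            self.y = 0.0      # up/down (widget 111)
            self.angle = 0.0  # rotation in degrees (widget 113)

    def handle_widget_drag(transform, widget_id, mouse_delta):
        """Apply one mouse drag: only the degree of freedom owned by
        the dragged widget changes, so translation and rotation can
        never be combined in a single interaction."""
        if widget_id == 109:
            transform.z += mouse_delta
        elif widget_id == 111:
            transform.y += mouse_delta
        elif widget_id == 113:
            # A coarse gain between mouse motion and rotation is one way
            # a "slight" drag can over-rotate the scene, as in FIG. 1B.
            transform.angle += mouse_delta * 5.0

    scene = SceneTransform()
    handle_widget_drag(scene, 113, 2.0)  # small drag on the rotation widget
    print(scene.angle)                   # 10.0: already over-rotated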
Three translations and three rotations are the six different degrees of freedom in which an object may move. An object may move forward or backward (X axis), up or down (Y axis) and left or right (Z axis). These three movements are collectively known as translations. In addition, objects may rotate about any of these principal axes. These three rotations are called roll (rotation about the X axis), yaw (rotation about the Y axis) and pitch (rotation about the Z axis).
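As a worked illustration (a minimal sketch, not taken from the patent), a six-degree-of-freedom pose can be written as three translation components plus the three rotation angles, using the axis conventions above (roll about X, yaw about Y, pitch about Z):

    import math

    # Minimal 6-DOF sketch using the document's axis conventions:
    # X forward/backward, Y up/down, Z left/right.
    def rotation_matrix(roll, yaw, pitch):
        """Compose roll (about X), yaw (about Y) and pitch (about Z)
        into one 3x3 matrix; this composition order is one common choice."""
        cr, sr = math.cos(roll), math.sin(roll)
        cy, sy = math.cos(yaw), math.sin(yaw)
        cp, sp = math.cos(pitch), math.sin(pitch)
        rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]   # roll
        ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]   # yaw
        rz = [[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]]   # pitch
        def matmul(a, b):
            return [[sum(a[i][k] * b[k][j] for k in range(3))
                     for j in range(3)] for i in range(3)]
        return matmul(rz, matmul(ry, rx))

    def apply_pose(point, tx, ty, tz, roll, yaw, pitch):
        """Move a point with all six degrees of freedom at once:
        rotate first, then translate along the three axes."""
        r = rotation_matrix(roll, yaw, pitch)
        rotated = [sum(r[i][j] * point[j] for j in range(3))
                   for i in range(3)]
        return [rotated[0] + tx, rotated[1] + ty, rotated[2] + tz]

    # Yaw a point 90 degrees about the Y axis while raising it 2 units.
    print(apply_pose([1.0, 0.0, 0.0], 0.0, 2.0, 0.0, 0.0, math.pi / 2, 0.0))

An interface that maps user movement onto all six parameters at once would permit the simultaneous translation and rotation that the widget-based interface of FIGS. 1A and 1B cannot provide.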
Currently, a keyboard and a mouse are the most commonly available input devices for interacting with certain 3-D virtual applications, such as three-dimensional Web browsers. The keyboard and mouse usually allow only horizontal and vertical movements. A keyboard and a mouse do not allow a user to navigate through a three-dimensional virtual space utilizing the six degrees of freedom. In addition, a keyboard and a mouse do not allow accurate manipulation of a virtual object. Thus, no input/output device exists for accurately mapping a user's six degrees of freedom of movement into a 3-D virtual reality application.
Therefore, it is desirable to have an affordable non-invasive interface between a user and a virtual space that allows the user to manipulate objects and to navigate through the virtual space with six degrees of freedom in a nonsequential manner.
SUMMARY
A computer-implemented method for operating in a virtual space is described. The method comprises the following steps. A visual detection device is used to determine whether a predetermined set of data signals exists in user movement data signals. It is then determined whether the predetermined set of data signals has changed. The changed predetermined set of data signals is provided to a virtual reality application program that is generating the virtual space. The predetermined set of data signals is used by the virtual reality application program to perform an action in the virtual space.
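The summary does not specify an implementation; the following Python sketch is one hypothetical reading of the claimed steps, in which camera frames (from the visual detection device) are scanned for a predetermined set of data signals and only changes in that set are forwarded to the virtual reality application. All of the names here (capture_frame, detect_gesture_signals, perform_action) are invented for illustration:

    # Hypothetical sketch of the claimed steps; none of these helpers
    # come from the patent, they only name the operations in the summary.
    def detect_gesture_signals(frame):
        """Visual detection step: extract the predetermined set of data
        signals (e.g. a tracked hand position) from a camera frame,
        or return None when the set is not present."""
        return frame.get("hand_position")  # assumes dict-like frames

    def run_camera_interface(camera, vr_application):
        """Provide *changed* predetermined sets of data signals to the
        virtual reality application that is generating the virtual space."""
        previous_signals = None
        while vr_application.is_running():
            frame = camera.capture_frame()          # visual detection device
            signals = detect_gesture_signals(frame)
            if signals is not None and signals != previous_signals:
                vr_application.perform_action(signals)  # act in the space
                previous_signals = signals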
REFERENCES:
H. Rheingold, Virtual Reality, pp. 112-128 (1991).*
D. Harrison & M. Jaques, Experiments in Virtual Reality, p. 2 (Butterworth-Heinemann, 1996).
J. Segen and S. Pingali, “A Camera-Based System for Tracking People in Real Time,” pp. 63-67, 13th International Conference on Pattern Recognition (Aug. 25-29, 1996).
J. L. Crowley et al., “Vision for Man Machine Interaction,” European Computer Vision Network (1993).
A. Samal and P. A. Iyengar, “Automatic Recognition and Analysis of Human Faces and Facial Expressions: A Survey,” Pattern Recognition, vol. 25, no. 1, pp. 65-77 (1992).
R. Foltyniewicz, “Automatic Face Recognition via Wavelets and Mathematical Morphology,” pp. 13-17, 13th International Conference on Pattern Recognition (Aug. 25-29, 1996).
G. W. Fitzmaurice et al., “Bricks: Laying the Foundations for Graspable User Interfaces,” CHI '95 Proceedings Papers.
D. Rubine, “Specifying Gestures by Example,” Information Technology Center, Carnegie Mellon University.
Attorney: Blakely, Sokoloff, Taylor & Zafman LLP
Assignee: Intel Corporation