User interface apparatus and operation range presenting method

Computer graphics processing and selective visual display system – Display driving control circuitry – Controlling the condition of display elements

Details

Patent number: 06266061
Type: Reexamination Certificate
Status: Active
U.S. classification: C345S156000, C382S190000

BACKGROUND OF THE INVENTION
This application is based on Japanese Patent Applications No. 9-9496 filed on Jan. 22, 1997 and No. 9-9773 filed on Jan. 22, 1997, the contents of which are incorporated herein by reference.
The present invention relates to a user interface apparatus and an input method of performing input by image processing.
The mouse is by far the most widely used computer input device. However, the operations that can be performed with a mouse, e.g., cursor movement and menu selection, make it merely a two-dimensional pointing device. Because the information a mouse can handle is two-dimensional, it is difficult to select an object that has depth, e.g., an object in a three-dimensional space. Likewise, when creating animation, it is difficult to give characters natural motions with an input device such as a mouse.
To compensate for these difficulties of pointing in a three-dimensional space, several apparatuses have been developed. Examples are an apparatus for inputting information in six axial directions by pushing and rolling a ball in a desired direction, and apparatuses fitted on the hand or body, such as the data glove, data suit, and cyber glove. Unfortunately, these apparatuses have proven less popular than initially expected because of their poor operability.
On the other hand, a direct-indication type of input apparatus has recently been developed with which a user can input the intended information by gesture without handling any special equipment.
For example, light is emitted, the light reflected from the user's hand is received, and the received light is formed into an image on which feature extraction or shape recognition processing is performed. Control is then executed in accordance with the shape of the hand, a cursor is moved in accordance with the amount of hand movement, or the viewpoint in a three-dimensional model is changed.
Alternatively, the motion of the hand of a user is videotaped, and processes similar to those described above are performed by analyzing the video image.
By the use of these apparatuses, a user can easily perform input by gesture without attaching any special equipment.
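
As a rough illustration of the reflected-light pipeline described above, the following sketch binarizes a reflected-light intensity frame, extracts the bright hand region, and maps the motion of its centroid to a cursor displacement. This is a minimal Python/NumPy sketch; the threshold value, the gain, and the function names are illustrative assumptions, not processing specified by the patent.

    import numpy as np

    def extract_hand_region(frame: np.ndarray, threshold: int = 60) -> np.ndarray:
        """Binarize an 8-bit reflected-light image; a nearby hand reflects more light."""
        return frame > threshold

    def centroid(mask: np.ndarray):
        """Return the (row, col) centroid of the extracted region, or None if empty."""
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None
        return ys.mean(), xs.mean()

    def cursor_delta(prev_frame: np.ndarray, curr_frame: np.ndarray, gain: float = 2.0):
        """Map the frame-to-frame motion of the hand centroid to a cursor displacement."""
        prev_c = centroid(extract_hand_region(prev_frame))
        curr_c = centroid(extract_hand_region(curr_frame))
        if prev_c is None or curr_c is None:
            return 0.0, 0.0
        return gain * (curr_c[1] - prev_c[1]), gain * (curr_c[0] - prev_c[0])  # (dx, dy)
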
In these apparatuses, however, various modes such as a cursor move mode, a select mode, and a double click mode are used in a fixed manner. To change the mode, therefore, the user must perform an explicit mode-switching operation, which places an additional operation load on the user.
Also, in these apparatuses, the light-receiving device that detects the object is fixed in place, which limits the range within which a user's hand or the like can be correctly detected. Depending on the position of the hand, its shape or motion therefore cannot always be detected accurately, and the control the user intends cannot be realized. Additionally, it is difficult for the user to recognize this detectable range in three-dimensional space at a glance, so the user must learn from experience how to operate within it. This, too, places an additional operation load on the user.
BRIEF SUMMARY OF THE INVENTION
It is an object of the present invention to provide a user interface apparatus that performs input by image processing, reduces the operation load on the user, and is easier to use, as well as an instruction input method.
It is another object of the present invention to provide a user interface apparatus that performs an input operation by image processing, reduces the operation load on the user, and is easier to use, as well as an operation range presenting method.
To achieve the above objects, according to the first aspect of the present invention, a user interface apparatus comprises: means for cutting out an image to be processed from an input image and performing image processing; and means for switching between a mode for performing pointing and other modes on the basis of a result of the image processing of the input image.
According to the second aspect of the present invention, a user interface apparatus comprises: means for cutting out an image to be processed from an input image and performing image processing; and means for switching at least a cursor move mode, a select mode, and a double click mode on the basis of a result of the image processing of the input image.
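
To make the mode-switching idea concrete, here is a minimal Python sketch of how a recognized hand shape could be mapped directly to one of the three modes, so that no explicit mode-change operation is needed. The gesture labels and the mapping are hypothetical; the invention only specifies that the mode is chosen from the result of the image processing.

    from enum import Enum, auto

    class Mode(Enum):
        CURSOR_MOVE = auto()
        SELECT = auto()
        DOUBLE_CLICK = auto()

    # Hypothetical mapping from a recognized hand shape/motion label to a mode.
    GESTURE_TO_MODE = {
        "open_hand_moving": Mode.CURSOR_MOVE,
        "finger_point_hold": Mode.SELECT,
        "finger_double_bend": Mode.DOUBLE_CLICK,
    }

    def switch_mode(recognized_gesture: str, current: Mode) -> Mode:
        """Switch the mode implicitly from the recognition result; keep the
        current mode when the gesture does not name a mode-changing shape."""
        return GESTURE_TO_MODE.get(recognized_gesture, current)
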
Preferably, the apparatus further comprises means for designating a recognition method (recognition engine) that limits the contents of the image processing for each object selectable in the select mode, wherein the image processing of the input image is performed, for a selected object, in accordance with the recognition method designated for that object.
Preferably, the apparatus further comprises means for designating a recognition method (recognition engine) that limits the contents of the image processing for each object selectable in the select mode, and means for presenting, near a displayed object indicated by a cursor, information indicating the recognition method designated for that object.
Preferably, the apparatus further comprises means for presenting the result of the image processing of the input image in a predetermined shape on a cursor.
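
The per-object recognition methods described in the preceding paragraphs could be organized as a small registry that associates each selectable object with a restricted recognition engine and a short label to present near the object under the cursor. The sketch below assumes exactly that; the example engine (tracking only horizontal motion, e.g., for a slider) and all names are illustrative, not taken from the patent.

    from dataclasses import dataclass
    from typing import Callable, Dict, Optional
    import numpy as np

    # A "recognition engine" is modeled here as a callable that performs only
    # the image processing a particular object needs.
    RecognitionEngine = Callable[[np.ndarray], dict]

    @dataclass
    class SelectableObject:
        name: str
        engine: RecognitionEngine   # processing limited to this object's needs
        engine_label: str           # hint presented near the object under the cursor

    def horizontal_motion_engine(frame: np.ndarray) -> dict:
        """Illustrative engine: report only the horizontal position of the
        brightest column, e.g., for dragging a slider."""
        col = int(np.argmax(frame.sum(axis=0)))
        return {"x": col}

    REGISTRY: Dict[str, SelectableObject] = {}

    def register(obj: SelectableObject) -> None:
        REGISTRY[obj.name] = obj

    def process_for_selected(name: str, frame: np.ndarray) -> dict:
        """Run only the recognition method designated for the selected object."""
        return REGISTRY[name].engine(frame)

    def hint_for_cursor_target(name: str) -> Optional[str]:
        """Label shown near the displayed object indicated by the cursor."""
        obj = REGISTRY.get(name)
        return obj.engine_label if obj else None

    # Example with a hypothetical object:
    # register(SelectableObject("volume_slider", horizontal_motion_engine, "drag left/right"))
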
According to still another aspect of the present invention, a user interface apparatus comprises a first device for inputting a reflected image and a second device for performing input by image processing of an input image, wherein the second device comprises means for designating, to the first device, a recognition method (recognition engine) that limits the contents of the image processing of an input image, and the first device comprises means for performing predetermined image processing on the basis of the designated recognition method and means for sending back the input image and a result of the image processing to the second device.
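
A minimal sketch of this two-device arrangement follows, assuming a direct in-process call in place of whatever link the devices actually use; the class and method names are illustrative, not taken from the patent.

    from typing import Callable, Dict, Optional, Tuple
    import numpy as np

    RecognitionEngine = Callable[[np.ndarray], dict]

    class CaptureDevice:
        """First device: inputs the reflected image and runs the designated engine."""
        def __init__(self, engines: Dict[str, RecognitionEngine]):
            self.engines = engines
            self.designated: Optional[str] = None

        def designate(self, engine_name: str) -> None:
            """Accept the recognition method designated by the second device."""
            self.designated = engine_name

        def capture_and_process(self, frame: np.ndarray) -> Tuple[np.ndarray, dict]:
            """Perform the designated processing, then send back both the input
            image and the processing result."""
            assert self.designated is not None, "no recognition method designated yet"
            result = self.engines[self.designated](frame)
            return frame, result

    class HostDevice:
        """Second device: designates the recognition method and uses the result."""
        def __init__(self, capture: CaptureDevice):
            self.capture = capture

        def request(self, engine_name: str, frame: np.ndarray) -> Tuple[np.ndarray, dict]:
            self.capture.designate(engine_name)
            return self.capture.capture_and_process(frame)
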
Preferably, the first device may further comprise means for requesting the second device to transfer information necessary for image processing suited to a necessary recognition method, if the first device does not have image processing means (recognition engine) suited to the recognition method, and the second device may further comprise means for transferring the requested information to the first device.
Preferably, each of the first and second devices may further comprise means for requesting the other device to deactivate identical information when information necessary for image processing suited to a predetermined recognition method is activated first in that device, and means for deactivating such information when requested to do so by the other device.
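
The engine-transfer and mutual-deactivation behavior of the two preceding paragraphs might be coordinated along the following lines. This is only a sketch of the bookkeeping, with hypothetical names; it says nothing about how engine information would actually be serialized between real devices.

    from typing import Dict, Optional, Set

    class Peer:
        """Either device: holds recognition-engine information that can be
        transferred to the counterpart on request and that is kept active
        on only one side at a time."""

        def __init__(self, name: str, engines: Optional[Dict[str, object]] = None):
            self.name = name
            self.engines: Dict[str, object] = dict(engines or {})
            self.active: Set[str] = set()
            self.other: Optional["Peer"] = None   # the counterpart device

        def link(self, other: "Peer") -> None:
            self.other, other.other = other, self

        def ensure_engine(self, engine_name: str) -> None:
            """If this device lacks the engine, request a transfer from the other."""
            if engine_name not in self.engines and self.other is not None:
                self.engines[engine_name] = self.other.transfer(engine_name)

        def transfer(self, engine_name: str) -> object:
            return self.engines[engine_name]

        def activate(self, engine_name: str) -> None:
            """Activate locally first, then ask the counterpart to deactivate
            its identical information so only one copy stays active."""
            self.ensure_engine(engine_name)
            self.active.add(engine_name)
            if self.other is not None:
                self.other.deactivate(engine_name)

        def deactivate(self, engine_name: str) -> None:
            self.active.discard(engine_name)
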
According to still another aspect of the present invention, an instruction input method comprises the steps of performing image processing on an input image of an object, and switching between a mode for performing pointing and other modes on the basis of a result of the image processing.
According to still another aspect of the present invention, an instruction input method using a user interface apparatus that includes a first device for inputting a reflected image and a second device for performing input by image processing of an input image comprises the steps of allowing the second device to designate, to the first device, a recognition method (recognition engine) that limits the contents of the image processing of an input image, and allowing the first device to perform predetermined image processing on the basis of the designated recognition method and send back the input image and a result of the image processing to the second device.
The present invention obviates the need for an explicit operation performed by a user to switch modes such as a cursor move mode, a select mode, and a double click mode.
Also, the present invention eliminates the need for calibration performed by the user, because the point designated by the user is read by recognition processing and reflected in, e.g., cursor movement on the screen.
Furthermore, the input accuracy and the user operability
