Technique for implementing an on-demand display widget...

Computer graphics processing and selective visual display system – Display driving control circuitry – Controlling the condition of display elements

Reexamination Certificate


Details

US Classes: 345/156.000, 345/215.000, 345/639.000, 345/592.000
Type: Reexamination Certificate
Status: active
Patent number: 06333753

ABSTRACT:

BACKGROUND OF THE DISCLOSURE
1. Field of the Invention
The invention relates to a technique, specifically apparatus and accompanying methods, for implementing an on-demand “Tool Glass” based desktop user interface. In particular, by sensing whether a user is explicitly touching an input pointing device, a Tool Glass sheet is automatically either displayed or dismissed preferably through a controlled fade in/fade out operation. This technique is particularly, though not exclusively, suited for use in conjunction with such an interface that accepts two-handed user input. Furthermore and advantageously, through the present invention, touch sensing can readily be used to provide “on-demand” display and dismissal, again with preferably controlled fading, of substantially any display widget, e.g., a toolbar, based on sensed contact between a hand of a user and a corresponding touch sensitive input device.
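To make the behaviour concrete, the following is a minimal sketch, in Python, of the on-demand fade described above: a widget's opacity ramps up while a touch sensor on the pointing device reports hand contact and ramps back down once contact ceases. All names and timing constants here (ToolGlassFader, FADE_IN_SECONDS, touch_sensor) are illustrative assumptions rather than details taken from the patent, and the linear alpha ramp simply stands in for whatever controlled fade an actual implementation would use.

```python
import time

# Illustrative sketch only: class and constant names are assumptions, not
# taken from the patent. The fade is modelled as a simple linear alpha ramp.

FADE_IN_SECONDS = 0.3    # assumed duration of the fade-in
FADE_OUT_SECONDS = 0.7   # assumed (slower) duration of the fade-out

class ToolGlassFader:
    """Tracks the opacity of an on-demand widget such as a Tool Glass sheet."""

    def __init__(self) -> None:
        self.alpha = 0.0               # 0.0 = fully dismissed, 1.0 = fully displayed
        self._last = time.monotonic()

    def update(self, hand_touching_device: bool) -> float:
        """Advance the fade toward the state implied by the touch sensor."""
        now = time.monotonic()
        dt, self._last = now - self._last, now
        if hand_touching_device:
            # the user's hand is on the pointing device: fade the widget in
            self.alpha = min(1.0, self.alpha + dt / FADE_IN_SECONDS)
        else:
            # the user has let go: fade the widget out (dismiss it on demand)
            self.alpha = max(0.0, self.alpha - dt / FADE_OUT_SECONDS)
        return self.alpha

# A rendering loop would call something like
#     alpha = fader.update(touch_sensor.is_touched())
# each frame and composite the Tool Glass sheet at that opacity;
# touch_sensor here stands for whatever hardware reports hand contact.
```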
2. Description of the Prior Art
A continuing challenge in the field of computation is to develop an interface that, to the extent possible, facilitates and simplifies interaction between a user and a computer and thus enhances an experience which that user then has with his(her) computer, i.e., a so-called “user experience”.
Over the past several years, a computer mouse and a keyboard collectively operating with a graphical user interface have become a rather ubiquitous user interface (UI). Through such a UI, a user viewing a graphical display, such as an operating system desktop or a window of an application, positions a cursor on a display screen by directly moving the mouse in two dimensions across a suitable surface. Movement of the cursor simply mimics the movement of the mouse. Various buttons are located on the top of the mouse to enable the user to cause a mouse “click” (button depression) whenever (s)he appropriately positions the cursor at a desired location on the display, such as, e.g.: over a desired icon in a toolbar; over a selection in a pull-down menu; or, within an application window itself, over a selected position in a document. Appropriate software, in, e.g., an application, interprets the “click”, in a contextual setting governed by the cursor location and the then-current state of the application, as a particular command and then suitably performs that command.
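As a purely illustrative aside, the contextual interpretation of a “click” just described can be sketched as a hit test of the cursor position against the on-screen widgets, followed by a state-dependent default for clicks that land in the application area. Every name below (Widget, interpret_click, the state strings) is a hypothetical placeholder, not something drawn from the patent or from any particular toolkit.

```python
from dataclasses import dataclass

# Illustrative only: Widget, interpret_click and the state strings are
# hypothetical names, not drawn from the patent or any specific toolkit.

@dataclass
class Widget:
    name: str
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def interpret_click(widgets, cursor_x, cursor_y, app_state):
    """Map a click to a command using cursor location and application state."""
    for widget in widgets:
        if widget.contains(cursor_x, cursor_y):
            # a click on a peripheral widget (e.g., a toolbar icon) maps
            # directly to that widget's command
            return f"execute:{widget.name}"
    # a click in the application area means different things in different states
    return "place-caret" if app_state == "editing" else "select-object"

toolbar = [Widget("bold", 0, 0, 24, 24), Widget("italic", 24, 0, 24, 24)]
print(interpret_click(toolbar, 30, 10, "editing"))    # -> execute:italic
print(interpret_click(toolbar, 400, 300, "editing"))  # -> place-caret
```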
With this interface, a preferred (dominant) hand of a user, typically a right hand for right-handed individuals, manipulates the mouse, while a non-preferred (non-dominant) hand, a left hand for these individuals, may either manipulate the keyboard or not. Commands can be entered either through the mouse or the keyboard, with the particular use of either device being governed by the current state of, e.g., the application. Through such an interface, the use of the keyboard and mouse is staggered in time. The user manipulates one device, often with one hand, and then manipulates the other, often with the same or a different hand, but does not manipulate both devices at the same time. Hence, such a conventional interface is commonly referred to as being “one-handed”.
Unfortunately, the interaction afforded by a conventional one-handed (keyboard-mouse) UI has simply not kept pace with the tasks which many computer users seek to perform through that interface. In essence, a practical limit has been reached as to the complexity of tasks which a user can readily accomplish through such an interface.
Specifically, as users seek to perform increasingly sophisticated tasks through an application program, they are becoming increasingly frustrated owing to the practical limitations inherent in a conventional one-handed UI. In that regard, a significant number of mouse clicks and/or other mouse and keyboard manipulations are often required to accomplish various tasks through that conventional UI. This, in turn, can impose a cognitive burden on a user, which, for repetitive operations, can be appreciable and rather fatiguing. In particular, conventional graphical user interfaces often position command menus, icons and other user-actuable (“clickable”) visual objects along one or more edges of a display screen, peripheral to a centrally displayed application area. Often, these icons and objects are organized into one or more toolbars and/or other visual groupings (the visual objects, icons, toolbars and other groupings are all commonly referred to as UI “widgets”). Frequently, the program permits the user to appropriately set a software switch(es), through, e.g., a dialog box of “option” settings, that explicitly displays or dismisses any or all of the toolbars and other groupings in an attempt to reduce screen clutter. By dismissing such widgets, added display space can be allocated to displaying application information in lieu of widgets.
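The kind of software switch mentioned above might be modelled roughly as follows; this is a hypothetical sketch in which each toolbar has an explicit visibility setting written by an options dialog, and each dismissed toolbar returns its strip of screen space to the document area (the class name, toolbar names and 28-pixel toolbar height are all assumptions):

```python
# Hypothetical sketch of a per-toolbar show/hide "option" switch; the class
# name, toolbar names and the 28-pixel toolbar height are assumptions.

class ToolbarOptions:
    def __init__(self) -> None:
        self._visible = {"standard": True, "formatting": True, "drawing": False}

    def set_visible(self, toolbar: str, shown: bool) -> None:
        # an "Options" dialog checkbox would write its state here
        self._visible[toolbar] = shown

    def visible_toolbars(self):
        return [name for name, shown in self._visible.items() if shown]

    def document_area_height(self, screen_height: int, toolbar_height: int = 28) -> int:
        # every dismissed toolbar hands its strip of pixels back to the document
        return screen_height - toolbar_height * len(self.visible_toolbars())

opts = ToolbarOptions()
opts.set_visible("formatting", False)      # user unchecks "Formatting"
print(opts.visible_toolbars())             # -> ['standard']
print(opts.document_area_height(768))      # -> 740
```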
However, given the peripheral location of the widgets on the display screen, to invoke a desired operation the user must generally shift his(her) focus of attention back and forth between the application area on the screen and the peripheral “widget” area, and correspondingly move the mouse between the two to separately select command(s) and operand(s). Disadvantageously, this constant shift of attention mentally tires the user, as (s)he is forced to repeatedly “re-acquire” his(her) current context, and these mouse movements increase user task time; collectively, these effects degrade the “user experience”. These drawbacks are exacerbated as display size increases for a constant display resolution.
In an effort to circumvent these drawbacks inherent in a conventional one-handed UI, the art teaches the use of a two-handed UI, particularly one that manipulates a so-called “Tool Glass” widget (which, for simplicity, will simply be referred to as a “Tool Glass”).
First, human beings cooperatively utilize both of their hands to accomplish a wide variety of manual tasks, often with little or no accompanying cognitive effort. Doing so simply expedites those tasks, such as typing (where fingers of both hands are used in tandem to depress different keys on a keyboard) or manually writing (where one hand positions the paper and the other simultaneously manipulates a pen) or even tapping a nail into a piece of wood (where one hand holds the nail in place while the other lightly swings a hammer to hit a head of the nail), over what would otherwise be required to accomplish those tasks through use of a single hand. Alternatively, other tasks (such as mounting a spare tire on an automobile or handling another bulky object) could not be readily performed at all but for the use of two hands (or at least a suitable physical substitute for one hand).
As early as the mid-1980s, the art of computer interfaces teaches that, in accomplishing a compound task, a one-handed computer interface is generally inferior to the use of a two-handed interface, and particularly such a two-handed interface which splits the task into sub-tasks that, in turn, could be performed by a user through parallel and coordinated movement of both of his(her) hands. In that regard, see W. Buxton et al., “A Study in Two-Handed Input”, Proceedings of CHI '86, Boston, Mass., April 13-17, 1986, pages 321-326 (hereinafter the “Buxton” paper). The Buxton paper teaches the use of an experimental two-handed user interface in which a preferred (dominant) hand (e.g., a right hand for a right-handed person) manipulates, in an absolute positioning mode, one input device, here a moveable digitizer (commonly referred to as a “puck”), across a graphics tablet, while a non-preferred (non-dominant) hand (e.g., a left hand for the same person) simultaneously manipulates a second input device, here a so-called “slider”. The slider provides one-dimensional input, with the input amount being proportional to the distance through which the user moves a track on the slider up or down. Once various test subjects were trained to use these devices, the author of the Buxton paper observed that, through coordinated movement of both devices in parallel, users were able to markedly reduce the time needed to perform various compound user i
