Quasi-three-dimensional method and apparatus to detect and...

Type: Reexamination Certificate
Filed: 2001-09-07
Issued: 2004-03-23
Examiner: Saras, Steven (Department: 2675)
Classification: Computer graphics processing and selective visual display system – Display peripheral interface input device – Including keyboard
U.S. Classes: C345S156000, C345S158000, C345S169000, C345S173000
Status: active
Patent Number: 06710770
FIELD OF THE INVENTION
The invention relates generally to sensing the proximity of a stylus or user finger relative to a device used to input or transfer commands and/or data to a system, and more particularly to such sensing relative to a virtual device used to input or transfer commands, data, and/or other information to a system.
BACKGROUND OF THE INVENTION
It is often desirable to use virtual input devices to input commands and/or data, or to transfer other information, to electronic systems, for example a computer system, a musical instrument, or even a telephone. For example, although computers can now be implemented in almost pocket-sized form, inputting data or commands on a mini-keyboard can be time consuming and error prone. While many cellular telephones can today handle e-mail communication, actually inputting messages using the small telephone touch pad can be difficult.
For example, a PDA has much of the functionality of a computer but suffers from a tiny or non-existent keyboard. If a system could determine when a user's fingers or stylus contacted a virtual keyboard, and which fingers contacted which virtual keys thereon, the output of that system could perhaps be input to the PDA in lieu of keyboard information. (The terms "finger" or "fingers" and "stylus" are used interchangeably herein.) In this example a virtual keyboard might be a piece of paper, perhaps one that unfolds to the size of a keyboard, with keys printed thereon to guide the user's hands. It is understood that the virtual keyboard or other input device is simply a work surface and has no sensors or mechanical or electronic components. The paper and keys would not themselves input information; rather, the interaction between the user's fingers and the portions of the paper (or, absent paper, portions of a work surface) where keys would exist could be used to input information to the PDA. A similar virtual device and system might be useful to input e-mail to a cellular telephone. A virtual piano-type keyboard might be used to play a real musical instrument. The challenge is how to detect or sense where the user's fingers or a stylus are relative to the virtual device.
U.S. Pat. No. 5,767,848 to Korth (1998) entitled “Method and Device For Optical Input of Commands or Data” attempts to implement virtual devices using a two-dimensional TV video camera. Such optical systems rely upon luminance data and require a stable source of ambient light, but unfortunately luminance data can confuse an imaging system. For example, a user's finger in the image foreground may be indistinguishable from regions of the background. Further, shadows and other image-blocking phenomena resulting from a user's hands obstructing the virtual device would seem to make implementing a Korth system somewhat imprecise in operation. Korth would also require examination of the contour of a user's fingers, finger position relative to the virtual device, and a determination of finger movement.
U.S. Pat. No. ______ to Bamji et al. (2001) entitled "CMOS-Compatible Three-Dimensional Image Sensor IC", application Ser. No. 09/406,059, filed Sep. 22, 1999, discloses a sophisticated three-dimensional imaging system usable with virtual devices to input commands and data to electronic systems. That patent disclosed various range-finding systems that could be used to determine the interface between a user's fingertip and a virtual input device, e.g., a keyboard. Imaging was performed in three dimensions using time-of-flight measurements. A light source emitted optical energy toward a target object, e.g., a virtual device, and energy reflected by portions of the object within the imaging path was detected by an array of photodiodes. Using various sophisticated techniques, the actual time of flight between emission of the optical energy and its detection by the photodiode array was determined. This measurement permitted calculating the vector distance to the point on the target object in three dimensions, e.g., (x, y, z). The described system examined reflected emitted energy and could function without ambient light. If, for example, the target object were a layout of a computer keyboard, perhaps a piece of paper with keys printed thereon, the system could determine which user finger touched what portion of the target, e.g., which virtual key, in what order. Of course the piece of paper would be optional and would merely guide the user's fingers.
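By way of illustration only (this sketch is not taken from the Bamji patent, whose on-chip timing techniques are far more sophisticated), the basic time-of-flight relationship is simple: the round-trip travel time of the emitted optical energy gives the range to the reflecting point, and the pixel's viewing direction then gives the (x, y, z) coordinates:

    # Hypothetical illustration of time-of-flight ranging.
    C = 299_792_458.0  # speed of light, m/s

    def range_from_tof(round_trip_time_s: float) -> float:
        """Range to the reflecting point: light travels out and back."""
        return C * round_trip_time_s / 2.0

    def point_from_pixel(direction, round_trip_time_s):
        """(x, y, z) of the reflecting point, given the pixel's unit view direction."""
        r = range_from_tof(round_trip_time_s)
        return (direction[0] * r, direction[1] * r, direction[2] * r)

    # Example: a pulse returning after ~3.34 ns corresponds to ~0.5 m of range.
    print(range_from_tof(3.336e-9))                      # ~0.5 m
    print(point_from_pixel((0.0, 0.0, 1.0), 3.336e-9))   # point ~0.5 m straight ahead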
Three-dimensional data obtained with the Bamji invention could be software-processed to localize user fingers as they come into contact with a touch surface, e.g., a virtual input device. The software could identify finger contact with a location on the surface as a request to input a keyboard event to an application executed by an associated electronic device or system (e.g., a computer, PDA, cell phone, kiosk device, point-of-sale device, etc.). While the Bamji system worked and could be used to input commands and/or data to a computer system using three-dimensional imaging to analyze the interface of a user's fingers and a virtual input device, a less complex and perhaps less sophisticated system is desirable. Like the Bamji system, such a new system should be relatively inexpensive to mass-produce and should consume relatively little operating power such that battery operation is feasible.
The present invention provides such a system.
SUMMARY OF THE PRESENT INVENTION
The present invention localizes interaction between a user finger or stylus and a passive touch surface (e.g., virtual input device), defined above a work surface, using planar quasi-three-dimensional sensing. Quasi-three-dimensional sensing implies that determination of an interaction point can be made essentially in three dimensions, using as a reference a two-dimensional surface that is arbitrarily oriented in three-dimensional space. Once a touch has been detected, the invention localizes the touch region to determine where on a virtual input device the touching occurred, and what data or command keystroke, corresponding to the localized region that was touched, is to be generated in response to the touch. Alternatively, the virtual input device might include a virtual mouse or trackball. In such an embodiment, the present invention would detect and report coordinates of the point of contact with the virtual input device, which coordinates would be coupled to an application, perhaps to move a cursor on a display (in a virtual mouse or trackball implementation) and/or to lay so-called digital ink for a drawing or writing application (virtual pen or stylus implementation). In the various embodiments, triangulation analysis methods preferably are used to determine where user-object “contact” with the virtual input device occurs.
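As a rough, hypothetical illustration of the localization step described above (the key layout, dimensions, and function names below are invented for this sketch and do not come from the patent), once a contact point on the virtual input device has been determined, mapping it to a keystroke can be as simple as a grid lookup:

    # Hypothetical key-localization sketch: map a contact point (x, z) on the
    # work surface, in millimetres, to a virtual key. Layout values are invented.
    KEY_WIDTH_MM = 19.0
    KEY_HEIGHT_MM = 19.0
    ROWS = [
        list("1234567890"),
        list("QWERTYUIOP"),
        list("ASDFGHJKL;"),
        list("ZXCVBNM,./"),
    ]

    def key_at(x_mm: float, z_mm: float):
        """Return the key under the contact point, or None if outside the layout."""
        col = int(x_mm // KEY_WIDTH_MM)
        row = int(z_mm // KEY_HEIGHT_MM)
        if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
            return ROWS[row][col]
        return None

    print(key_at(25.0, 30.0))   # -> 'W' in this invented layout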
In a so-called structured-light embodiment, the invention includes a first optical system (OS1) that generates a plane of optical energy defining a fan-beam of beam angle φ parallel to and a small stand-off distance ΔY above the work surface whereon the virtual input device may be defined. In this embodiment, the plane of interest is the plane of light produced by OS1, typically a laser or LED light generator. The two parallel planes may typically be horizontal, but they may be disposed vertically or at any other angle that may be convenient. The invention further includes a second optical system (OS2) that is responsive to optical energy of the same wavelength as emitted by OS1. Preferably OS2 is disposed above OS1 and angled, with offset θ relative to the fan-beam plane, toward the region where the virtual input device is defined. OS2 is responsive to energy emitted by OS1, but the wavelength of the optical energy need not be visible to humans.
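A generic way to picture the triangulation in this structured-light arrangement (a sketch only, not the patent's actual implementation) is as a plane-ray intersection: when OS2 images light from the OS1 fan-beam reflected by a fingertip, intersecting the camera ray for that pixel with the known light plane yields the three-dimensional touch point:

    # Generic plane-ray triangulation sketch (not the patent's exact method).
    import numpy as np

    def intersect_ray_with_plane(ray_origin, ray_dir, plane_point, plane_normal):
        """Intersection of an OS2 camera ray with the OS1 light plane (same frame)."""
        ray_origin = np.asarray(ray_origin, dtype=float)
        ray_dir = np.asarray(ray_dir, dtype=float)
        plane_point = np.asarray(plane_point, dtype=float)
        plane_normal = np.asarray(plane_normal, dtype=float)
        denom = ray_dir.dot(plane_normal)
        if abs(denom) < 1e-9:
            return None  # ray is (nearly) parallel to the light plane
        t = (plane_point - ray_origin).dot(plane_normal) / denom
        return ray_origin + t * ray_dir

    # Example: OS2 placed 100 mm above a light plane lying at y = ΔY = 5 mm,
    # looking down and outward toward the virtual keyboard (values are invented).
    touch = intersect_ray_with_plane(
        ray_origin=(0.0, 100.0, 0.0),
        ray_dir=(0.1, -0.6, 0.8),
        plane_point=(0.0, 5.0, 0.0),
        plane_normal=(0.0, 1.0, 0.0),
    )
    print(touch)  # 3D point where the fingertip pierces the light plane (y = 5)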
The invention may also be implemented using non-structured-light configurations that may be active or passive. In a passive triangulation embodiment, OS1 is a camera rather than an active source of optical energy, and OS2 is a camera responsive to the same optical energy...
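In such a passive two-camera arrangement, depth can be recovered by classical stereo triangulation; a minimal sketch using the standard disparity relation Z = f·B/d (the focal length, baseline, and disparity values below are illustrative, not from the patent):

    # Classical stereo-triangulation sketch for the passive two-camera case
    # (illustrative only; parameter values are invented).
    def depth_from_disparity(focal_length_px: float, baseline_mm: float,
                             disparity_px: float):
        """Depth Z = f * B / d for rectified cameras; None when disparity is zero."""
        if disparity_px <= 0:
            return None
        return focal_length_px * baseline_mm / disparity_px

    # A fingertip imaged 60 pixels apart by two cameras 60 mm apart (f = 500 px)
    print(depth_from_disparity(500.0, 60.0, 60.0))  # -> 500.0 mm from the cameras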
Inventors: Rafii, Abbas; Tomasi, Carlo
Assignee: Canesta, Inc.
Kaufman, Michael A.
Nelson, Alecia D.