Enabling voice control of voice-controlled apparatus using a...

Data processing: speech signal processing – linguistics – language – application

Reexamination Certificate


Details

U.S. Classifications: C704S275000, C382S190000, C382S103000

Type: Reexamination Certificate

Status: active

Patent number: 06970824

ABSTRACT:
Voice-controlled apparatus is provided which minimises the risk of activating more than one such apparatus at a time where multiple voice-controlled apparatus exist in close proximity. To start voice control of the apparatus, a user needs to be looking at the apparatus when speaking. Preferably, after the user stops looking at the apparatus, continuing voice control can only be effected whilst the user continues speaking without breaks longer than a predetermined duration. Detection of whether the user is looking at the apparatus can be effected in a number of ways including by the use of camera systems, by a head-mounted directional transmitter, and by detecting the location and direction of facing of the user.
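The gaze-gated activation scheme the abstract describes can be sketched as a small state machine: voice control starts only when the user speaks while looking at the apparatus, and once gaze is lost, control persists only while breaks in speech stay below a predetermined duration. The class name, method signature, and the 1.5-second pause threshold below are illustrative assumptions, not details taken from the patent.

```python
class GazeGatedVoiceControl:
    """Minimal sketch of the gaze-gated voice control described in the
    abstract. All names and the default threshold are illustrative."""

    def __init__(self, max_pause_s=1.5):
        self.max_pause_s = max_pause_s   # predetermined maximum speech break
        self.active = False              # is voice control currently engaged?
        self.last_speech_time = None     # timestamp of most recent speech

    def update(self, now, user_is_looking, user_is_speaking):
        """Return True if speech at time `now` should be treated as a command."""
        # Without gaze, control lapses once a break in speech exceeds the limit.
        if self.active and not user_is_looking:
            if (self.last_speech_time is None
                    or now - self.last_speech_time > self.max_pause_s):
                self.active = False
        if user_is_speaking:
            if user_is_looking:
                # Looking at the apparatus while speaking starts voice control.
                self.active = True
            self.last_speech_time = now
        return self.active and user_is_speaking
```

Under this sketch, speech without gaze never activates the apparatus, which is what keeps nearby voice-controlled devices from being triggered at once; a later utterance while looking at the apparatus re-engages control after a lapse.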

REFERENCES:
patent: 5086385 (1992-02-01), Launey et al.
patent: 5637849 (1997-06-01), Wang et al.
patent: 5677834 (1997-10-01), Mooneyham
patent: 5682030 (1997-10-01), Kubon
patent: 5867817 (1999-02-01), Catallo et al.
patent: 5991726 (1999-11-01), Immarco et al.
patent: 6012102 (2000-01-01), Shachar
patent: 6111580 (2000-08-01), Kazama et al.
patent: 6230138 (2001-05-01), Everhart
patent: 6243683 (2001-06-01), Peters
patent: 6629642 (2003-10-01), Swartz et al.
patent: 6754373 (2004-06-01), de Cuetos et al.
patent: 6795806 (2004-09-01), Lewis et al.
patent: 0 591 044 (1994-04-01), None
patent: 0 718 823 (1996-06-01), None
patent: 1 056 036 (2000-11-01), None
patent: 2 237 410 (1991-05-01), None
patent: 2 326 490 (1998-12-01), None
patent: 00/41065 (2000-07-01), None
Rowley, H.A., et al., “Human Face Detection in Visual Scenes,” Carnegie Mellon Computer Science Technical Report CMU-CS-95-158R, 26 pages (Nov. 1995).
Heinzmann, J. and A. Zelinsky, “3-D Facial Pose and Gaze Point Estimation using a Robust Real-Time Tracking Paradigm,” IEEE International Conference on Automatic Face & Gesture Recognition, Nara, Japan, 6 pages (Apr. 1998).
Pappu, R. and P.A. Beardsley, “A Qualitative Approach to Classifying Gaze Direction,” IEEE International Conference on Automatic Face & Gesture Recognition, Nara, Japan, 6 pages (Apr. 1998).
Newman, R., et al., “Real-Time Stereo Tracking for Head Pose and Gaze Estimation,” IEEE International Conference on Automatic Face & Gesture Recognition, Grenoble, France, pp. 122-128 (Mar. 2000).
Moghaddam, B., et al., “Beyond Eigenfaces: Probabilistic Matching for Face Recognition,” IEEE International Conference on Automatic Face & Gesture Recognition, Nara, Japan, 6 pages (Apr. 1998).
Moghaddam, B. and A. Pentland, “Probabilistic Visual Learning for Object Representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 7, pp. 696-710 (Jul. 1997).
Moghaddam, B., et al., “A Bayesian Similarity Measure for Direct Image Matching,” IEEE International Conference on Pattern Recognition, Vienna, Austria, pp. 350-358 (Aug. 1996).
Moghaddam, B., et al., “Bayesian Face Recognition using Deformable Intensity Surfaces,” IEEE Conference on Computer Vision & Pattern Recognition, San Francisco, CA, 7 pages (Jun. 1996).
Darrell, T., et al., “Active Face Tracking and Pose Estimation in an Interactive Room,” IEEE Conference on Computer Vision & Pattern Recognition, San Francisco, CA, 16 pages (Jun. 1996).
Nastar, C., et al., “Generalized Image Matching: Statistical Learning of Physically-Based Deformations,” Fourth European Conference on Computer Vision, Cambridge, UK, 6 pages (Apr. 1996).
Moghaddam, B. and A. Pentland, “Probabilistic Visual Learning for Object Detection,” Fifth International Conference on Computer Vision, Cambridge, MA, 8 pages (Jun. 1995).
Moghaddam, B. and A. Pentland, “A Subspace Method for Maximum Likelihood Target Detection,” IEEE International Conference on Image Processing, Washington, D.C., 4 pages (Oct. 1995).
Moghaddam, B. and A. Pentland, “An Automatic System for Model-Based Coding of Faces,” IEEE Data Compression Conference, Snowbird, Utah, 5 pages (Mar. 1995).
Pentland, A., et al., “View-Based and Modular Eigenspaces for Face Recognition,” IEEE Conference on Computer Vision & Pattern Recognition, Seattle, WA, 7 pages (Jul. 1994).
Okubo, M. and T. Watanabe, “Lip Motion Capture and Its Application to 3-D Molding,” IEEE International Conference on Automatic Face and Gesture Recognition, Nara, Japan, pp. 187-192 (Apr. 1998).
Luthon, F. and M. Lievin, “Lip Motion Automatic Detection,” Tenth Scandinavian Conference on Image Analysis, Lappeenranta, Finland, 3 pages (Jun. 1997).
Grant, K.W. and P.-F. Seitz, “The use of visible speech cues for improving auditory detection of spoken sentences,” Journal of the Acoustical Society of America, vol. 108, No. 3, pp. 1197-1208 (Sep. 2000).
Merchant, S. and T. Schnell, “Applying Eye Tracking as an Alternative Approach for Activation of Controls and Functions in Aircraft,” IEEE 19th Digital Avionics Systems Conference, vol. 2, pp. 5.A.5/1-9 (Oct. 2000).
Myers, D.R., et al., “Robust Video Object Recognition and Pose Determination Using Passive Target Labels,” Cooperative Intelligent Robotics in Space III, SPIE vol. 1829, pp. 2-12 (1992).
