Systems and methods for extracting meaning from multimodal...

Data processing: speech signal processing – linguistics – language – Speech signal processing – Recognition

Reexamination Certificate


Details

U.S. Classes: C704S270000, C704S275000, C704S231000, C704S251000, C704S256000, C382S187000, C382S190000, C382S198000, C382S228000
Type: Reexamination Certificate
Status: active
Patent number: 07069215

ABSTRACT:
Finite-state systems and methods allow multiple input streams to be parsed and integrated by a single finite-state device. These systems and methods not only address multimodal recognition, but are also able to encode semantics and syntax into a single finite-state device. The finite-state device provides models for recognizing multimodal inputs, such as speech and gesture, and composes the meaning content from the various input streams into a single semantic representation. Compared to conventional multimodal recognition systems, finite-state systems and methods allow for compensation among the various input streams. Finite-state systems and methods allow one input stream to dynamically alter a recognition model used for another input stream, and can reduce the computational complexity of multidimensional multimodal parsing. Finite-state devices provide a well-understood probabilistic framework for combining the probability distributions associated with the various input streams and for selecting among competing multimodal interpretations.
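The abstract's core idea — a single finite-state device whose transitions read a speech symbol and a gesture symbol together and emit a meaning symbol — can be illustrated with a small sketch. This is not the patented implementation; the states, vocabulary, and the "email this person" example are illustrative assumptions loosely modeled on the related Bangalore/Johnston papers cited below.

```python
# Minimal sketch of a 3-tape finite-state device: each transition jointly
# consumes a speech symbol and a gesture symbol and emits a meaning symbol.
# All symbols and states here are hypothetical, for illustration only.

EPS = "eps"  # epsilon: the transition consumes/emits nothing on that tape

# Transitions: (state, speech_in, gesture_in, meaning_out, next_state)
TRANSITIONS = [
    (0, "email", EPS, "EMAIL(", 1),
    (1, "this", EPS, EPS, 2),
    # The gesture stream constrains which speech reading is accepted:
    (2, "person", "point_person", "person_id", 3),
    (2, "department", "point_dept", "dept_id", 3),
    (3, EPS, EPS, ")", 4),
]
FINAL = {4}

def parse(speech, gesture):
    """Jointly consume a speech stream and a gesture stream, emitting meaning."""
    # Each agenda item: (state, speech index, gesture index, meaning so far)
    agenda = [(0, 0, 0, [])]
    while agenda:
        state, si, gi, meaning = agenda.pop()
        if state in FINAL and si == len(speech) and gi == len(gesture):
            return [m for m in meaning if m != EPS]
        for (s, sp, g, m, nxt) in TRANSITIONS:
            if s != state:
                continue
            nsi, ngi = si, gi
            if sp != EPS:
                if si >= len(speech) or speech[si] != sp:
                    continue
                nsi = si + 1
            if g != EPS:
                if gi >= len(gesture) or gesture[gi] != g:
                    continue
                ngi = gi + 1
            agenda.append((nxt, nsi, ngi, meaning + [m]))
    return None  # no joint interpretation of the two streams

print(parse(["email", "this", "person"], ["point_person"]))
# A pointing gesture at a department licenses a different reading:
print(parse(["email", "this", "department"], ["point_dept"]))
```

Note how a single device realizes the compensation described in the abstract: the gesture symbol on a transition restricts which speech readings survive, and mismatched streams (e.g. saying "person" while pointing at a department) yield no interpretation at all.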

REFERENCES:
patent: 5502774 (1996-03-01), Bellegarda et al.
patent: 5600765 (1997-02-01), Ando et al.
patent: 5677993 (1997-10-01), Ohga et al.
patent: 5748974 (1998-05-01), Johnson
patent: 5884249 (1999-03-01), Namba et al.
patent: 6529863 (2003-03-01), Ball et al.
patent: 6665640 (2003-12-01), Bennett et al.
patent: 6735566 (2004-05-01), Brand
Sharma et al., "Toward Multimodal Human-Computer Interface", Proceedings of the IEEE, vol. 86, no. 5, May 1998, pp. 853-869.
Chen et al., "Gesture-Speech Based HMI for a Rehabilitation Robot", Proceedings of Southeastcon '96, "Bringing Together Education, Science and Technology", Apr. 11-14, 1996, pp. 29-36.
Roy et al., "Word Learning in a Multimodal Environment", Proceedings of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '98), May 12-15, 1998, vol. 6, pp. 3761-3764.
Salem et al., "Current Trends in Multimodal Input Recognition", IEE Colloquium on Virtual Reality Personal Mobile and Practical Applications (98/454), Oct. 28, 1998, pp. 3/1-3/6.
Kettebekov et al., "Toward Multimodal Interpretation in a Natural Speech/Gesture Interface", Proceedings of the 1999 International Conference on Information and Intelligence Systems, pp. 328-335.
Bangalore et al., "Finite-State Multimodal Parsing and Understanding", Jul. 31-Aug. 4, 2000, Proceedings of the International Conference on Computational Linguistics.
Bangalore et al., "Integrating Multimodal Language Processing with Speech Recognition", Oct. 2000, Proceedings of the International Conference on Spoken Language Processing.
Johnston et al., "Unification-based Multimodal Integration", 1997, Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics.
Johnston, M., "Unification-based Multimodal Parsing", 1998, Proceedings of the 17th Int'l Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics, pp. 624-630.
Johnston, M., "Deixis and Conjunction in Multimodal Systems", 2000, Proceedings of COLING-2000.
