Reexamination Certificate
2007-11-13
Šmits, Talivaldis Ivars (Department: 2626)
Data processing: speech signal processing, linguistics, language
Speech signal processing
Recognition
C704S251000, C382S187000, C382S228000
active
10970215
ABSTRACT:
Multimodal utterances combine a number of different input modes, which can include speech, gestures, pen, haptic, and gaze inputs, and the like. This invention uses recognition results from one or more of these modes to compensate the recognition process of one or more other modes. In various exemplary embodiments, a multimodal recognition system inputs one or more recognition lattices from one or more of these modes and generates one or more models to be used by one or more mode recognizers to recognize the one or more other modes. In one exemplary embodiment, a gesture recognizer inputs a gesture input and outputs a gesture recognition lattice to a multimodal parser. The multimodal parser generates a language model and outputs it to an automatic speech recognition system, which uses the received language model to recognize the speech input that corresponds to the recognized gesture input.
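The embodiment described above can be illustrated with a minimal sketch: a gesture recognition lattice is converted into a simple language model, which then rescores competing speech hypotheses. All class names, the gesture-to-word mapping, and the scoring scheme below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the pipeline in the abstract: gesture lattice ->
# multimodal parser -> language model -> speech recognizer. Names and data
# are illustrative only.
from collections import defaultdict

def gesture_lattice_to_language_model(gesture_lattice):
    """Derive word boost weights from weighted gesture hypotheses.

    gesture_lattice: list of (gesture_label, probability) pairs.
    Returns a dict mapping words to additive boost weights.
    """
    # Assumed mapping from gesture labels to words they make likely.
    gesture_vocab = {
        "circle": ["this", "area", "zone"],
        "point": ["here", "that"],
    }
    weights = defaultdict(float)
    for label, prob in gesture_lattice:
        for word in gesture_vocab.get(label, []):
            weights[word] += prob
    return dict(weights)

def recognize_speech(speech_hypotheses, language_model):
    """Rescore acoustic hypotheses with the gesture-derived language model."""
    def score(hyp):
        words, acoustic_score = hyp
        return acoustic_score + sum(language_model.get(w, 0.0) for w in words)
    return max(speech_hypotheses, key=score)

# Example: the user circles a map region while speaking.
lattice = [("circle", 0.8), ("point", 0.2)]
lm = gesture_lattice_to_language_model(lattice)

# Competing speech hypotheses as (word list, acoustic score) pairs.
hypotheses = [
    (["show", "me", "this", "area"], -4.0),
    (["show", "me", "the", "sofa"], -3.8),
]
best = recognize_speech(hypotheses, lm)
```

Here the gesture evidence overcomes a slightly better acoustic score for the wrong hypothesis, which is the compensation effect the abstract describes.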
REFERENCES:
patent: 5502774 (1996-03-01), Bellegarda et al.
patent: 5600765 (1997-02-01), Ando et al.
patent: 5677993 (1997-10-01), Ohga et al.
patent: 5748974 (1998-05-01), Johnson
patent: 5781663 (1998-07-01), Sakaguchi et al.
patent: 5855000 (1998-12-01), Waibel et al.
patent: 5884249 (1999-03-01), Namba et al.
patent: 6167376 (2000-12-01), Ditzik
patent: 6438523 (2002-08-01), Oberteuffer et al.
patent: 6529863 (2003-03-01), Ball et al.
patent: 6665640 (2003-12-01), Bennett et al.
patent: 6735566 (2004-05-01), Brand
patent: 6823308 (2004-11-01), Keiller et al.
patent: 7069215 (2006-06-01), Bangalore et al.
Sharma, et al., “Toward Multimodal Human-Computer Interface”, Proc. of the IEEE, vol. 86, Issue 5, May 1998, pp. 853-869.
Chen, et al., "Gesture-Speech Based HMI for a Rehabilitation Robot," Proc. of Southeastcon '96, "Bringing Together Edu., Science & Tech.", Apr. 11-14, 1996, pp. 29-36.
Roy, et al., "Word Learning in a Multimodal Environment", Proc. of the '98 IEEE Conf. on Acoustics, Speech & Signal Processing, ICASSP '98, May 12-15, 1998, vol. 6, pp. 3761-3764.
Salem, et al., "Current Trends in Multimodal Input Recognition", IEE Colloquium on Virtual Reality Personal Mobile & Practical Applications, 98/454, Oct. 28, 1998, pp. 3/1-3/6.
Kettebekov, et al., “Toward Multimodal Interpretation in a Natural Speech/Gesture Interface”, Proc. 1999 Int'l Conf. on Information and Intelligence Systems, pp. 328-335.
Bangalore Srinivas
Johnston Michael J.
Systems and methods for extracting meaning from multimodal...