Sequential multimodal input

Telecommunications – Radiotelephone system – Special service

Details

U.S. Classifications: C455S414400, C455S563000, C704S270100, C704S235000, C704S275000, C709S217000, C709S218000, C709S219000

Type: Reexamination Certificate

Status: Active

Application Number: 10705155

ABSTRACT:
A method of interacting with a client/server architecture using a 2G mobile phone is provided. The 2G phone includes a data channel for transmitting data and a voice channel for transmitting speech. The method includes receiving a web page from a web server, pursuant to an application, through the data channel and rendering the web page on the 2G phone. Speech corresponding to at least one data field on the web page is received from the user. A call is established from the 2G phone to a telephony server over the voice channel; the telephony server is remote from the 2G phone and is adapted to process speech. The telephony server obtains from the web server a speech-enabled web page corresponding to the web page provided to the 2G phone. Speech is transmitted from the 2G phone to the telephony server and is processed in accordance with the speech-enabled web page to obtain textual data, which is transmitted to the web server. The 2G phone then obtains a new web page through the data channel and renders the new web page with the textual data filled in.
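The abstract describes a sequential multimodal exchange: the visual page travels over the data channel, the speech travels over the voice channel to a remote telephony server, and the web server ties the two together via a speech-enabled companion page. The Python sketch below simulates that message flow with in-process stand-ins, assuming nothing beyond the abstract; all names here (WebServer, TelephonyServer, TwoGPhone, handle_call, the toy "city" grammar) are hypothetical illustrations, not the patent's actual implementation.

```python
# Minimal sketch of the sequential multimodal flow from the abstract.
# All class/method names and the toy grammar are hypothetical stand-ins.

class WebServer:
    """Serves the visual page over the data channel, the matching
    speech-enabled page to the telephony server, and accepts the
    recognized text used to build the next page."""

    def __init__(self) -> None:
        self.sessions: dict[str, dict[str, str]] = {}

    def get_page(self, session: str) -> dict:
        # Visual markup for the phone's data channel, with any field
        # values submitted so far filled in.
        fields = {"city": None} | self.sessions.get(session, {})
        return {"session": session, "fields": fields}

    def get_speech_enabled_page(self, session: str) -> dict:
        # Companion page carrying a recognition grammar for the same
        # data field as the visual page.
        return {"session": session, "grammar": {"city": ["seattle", "boston"]}}

    def submit_text(self, session: str, field: str, text: str) -> None:
        # Textual data produced by the telephony server's recognizer.
        self.sessions.setdefault(session, {})[field] = text


class TelephonyServer:
    """Remote speech processor reached over the voice channel."""

    def __init__(self, web_server: WebServer) -> None:
        self.web_server = web_server

    def handle_call(self, session: str, field: str, audio: str) -> None:
        page = self.web_server.get_speech_enabled_page(session)
        # Stand-in for real speech recognition: match the "audio"
        # against the grammar from the speech-enabled page.
        text = next((w for w in page["grammar"][field] if w in audio.lower()), "")
        self.web_server.submit_text(session, field, text)


class TwoGPhone:
    """2G handset with separate data and voice channels; it runs no
    speech recognizer of its own."""

    def __init__(self, web_server: WebServer, telephony_server: TelephonyServer) -> None:
        self.web_server = web_server
        self.telephony_server = telephony_server

    def fill_field_by_voice(self, session: str, field: str, audio: str) -> dict:
        print("rendered:", self.web_server.get_page(session))     # data channel
        self.telephony_server.handle_call(session, field, audio)  # voice channel
        new_page = self.web_server.get_page(session)               # data channel
        print("rendered:", new_page)
        return new_page


if __name__ == "__main__":
    web = WebServer()
    phone = TwoGPhone(web, TelephonyServer(web))
    phone.fill_field_by_voice("demo", "city", "uh, Seattle please")
```

One consequence of this split, under the assumptions above: the handset needs no local speech engine, and the speech-enabled page is what keeps the phone's rendered form and the telephony server's grammar referring to the same data field.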

REFERENCES:
patent: 6654722 (2003-11-01), Aldous et al.
patent: 7072328 (2006-07-01), Shen et al.
patent: 2003/0167172 (2003-09-01), Johnson et al.
patent: 2003/0224760 (2003-12-01), Day
patent: 2004/0172254 (2004-09-01), Sharma et al.
patent: 2004/0214555 (2004-10-01), Kumar et al.
patent: 2004/0220810 (2004-11-01), Leask et al.
patent: 2005/0021826 (2005-01-01), Kumar
patent: 2005/0026636 (2005-02-01), Yoon
patent: 2005/0204030 (2005-09-01), Koch et al.
patent: 2006/0041433 (2006-02-01), Slemmer et al.
patent: 2006/0106935 (2006-05-01), Balasuriya
patent: 2006/0168095 (2006-07-01), Sharma et al.
SandCherry, Multimodal White Paper, 2003.
Stéphane H. Maes and Chummun Ferial, Multi-Modal Browser Architecture: Overview on the support of multi-modal browsers in 3GPP, 2002.
Multimodality: The Next Wave of Mobile Interaction, White Paper, Aug. 2003, pp. 1-8.
W3C Multimodal Interaction Requirements, W3C Note, Jan. 8, 2003, pp. 1-45.
Georg Niklfeld, Wiener Telekom-Tag '01: Speech and Language Processing for Telecom Applications, Nov. 15, 2001.
W3C Multimodal Interaction Framework, W3C Note, May 6, 2003, pp. 1-24.
Sunil Kumar, Multimodality on Thin Clients: A Closer Look at Current Mobile Devices and the Multimodal Experience Possible Today, White Paper, 2003.
Multimodal Speech Technology: Realizing the Full Potential of Your People and Services, Microsoft, 2003, pp. 1-12.
Niklfeld, G., Finan, R., Pucher, M., Architecture for adaptive multimodal dialog systems based on VoiceXML, Eurospeech, 2001.
Component-based multimodal dialog interfaces for mobile knowledge creation, Proceedings of the Workshop on Human Language Technology and Knowledge Management, Annual Meeting of the ACL, Toulouse, France, 2001.
M. Baum et al., Speech and Multimodal Dialogue Systems for Telephony Applications Based on a Speech Database of Austrian German, OGAI Journal, vol. 20, no. 1, pp. 29-34, Jan. 2001.
G. Niklfeld et al., "Multimodal Interface Architecture for Mobile Data Services", Proceedings of TCMC2001 Workshop on Wearable Computing, Graz, 2001.
